Serverless TradingBot

Matthew Leung
3 min read · Jan 4, 2021

Nowadays, building an automated trading robot is quite easy with cloud technology and trading APIs. For example, I used Python to code the trading strategy in my previous post. Based on the predicted value of my ML model, I can place an order using the Alpaca trading API. Here is the Python code:

import alpaca_trade_api as tradeapi

api = tradeapi.REST(key['APCA_API_KEY_ID'], key['APCA_API_SECRET_KEY'],
                    key['APCA_API_BASE_URL'], 'v2')
q = api.get_last_quote(ticker)
ticker_price = q.bidprice
print("place order for ", ticker, ticker_price)

# Buy a position and attach a stop-loss and a take-profit (a bracket order)
try:
    r = api.submit_order(
        symbol=ticker,
        qty=1,
        side='buy',
        type='market',
        time_in_force='gtc',
        order_class='bracket',
        stop_loss={'stop_price': ticker_price * (1 - spread),
                   'limit_price': ticker_price * (1 - spread) * 0.95},
        take_profit={'limit_price': ticker_price * (1 + spread)}
    )
    print("place order returned ", r)
except Exception as e:
    print(e)
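The snippet above assumes key, ticker, and spread are already defined. A minimal sketch of how they might be wired up, matching the --key-id/--secret flags passed to the training job later in this post (the base URL and the ticker/spread values are illustrative):

import argparse

# Parse the Alpaca credentials passed on the command line
# (matching "-- --key-id=$key_id --secret=$secret" in the gcloud command below).
parser = argparse.ArgumentParser()
parser.add_argument('--key-id', required=True)
parser.add_argument('--secret', required=True)
args = parser.parse_args()

key = {
    'APCA_API_KEY_ID': args.key_id,
    'APCA_API_SECRET_KEY': args.secret,
    'APCA_API_BASE_URL': 'https://paper-api.alpaca.markets',  # paper-trading endpoint
}
ticker = 'SPY'   # in practice this comes from the ML model's output
spread = 0.02    # 2% take-profit/stop-loss band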

One common way to run the model prediction and trading code is through Docker. By building a Docker image of the Python code, we can easily submit it to run on a Kubernetes cluster. All major cloud providers offer a managed Kubernetes service, such as Google's GKE, Amazon's EKS, and Azure's AKS.

Here is the Dockerfile to build the image:

FROM pytorch/pytorch
WORKDIR /app
RUN mkdir -p /app/trainer
ADD trainer/lstm_stock.py /app/trainer/lstm_stock.py
RUN conda install -c conda-forge pytorch-lightning
RUN pip install "ray[tune]"
RUN pip install yfinance
RUN pip install scikit-learn
RUN pip install matplotlib
RUN pip install --upgrade google-cloud-pubsub
RUN pip install --upgrade google-cloud-secret-manager
RUN pip install --upgrade google-cloud-storage
RUN pip install --upgrade google-cloud-bigquery
RUN pip install alpaca-trade-api
ENTRYPOINT [ "python", "trainer/lstm_stock.py" ]
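Before wiring up any automation, the image can be built and pushed by hand for a quick test (a sketch; the gcr.io path matches the Cloud Build config later in the post, and $PROJ is your GCP project ID):

docker build -t gcr.io/$PROJ/lstmstock:latest .
docker push gcr.io/$PROJ/lstmstock:latest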

Here is the Kubernetes CronJob YAML file to run the above Docker image every weekday:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: lstm
spec:
  schedule: "25 9 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: lstm
            image: 'testing01.azurecr.io/lstm_build:latest'
          restartPolicy: Never
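Deploying and inspecting the CronJob is the usual kubectl workflow (assuming the YAML above is saved as cronjob.yaml):

kubectl apply -f cronjob.yaml
kubectl get cronjobs          # confirm the schedule is registered
kubectl get jobs --watch      # watch the jobs it spawns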

However, if we only need to run the job at the market open to place orders and at the market close to close out all positions, we waste a lot of computing resources (and cost) while no job is running on the Kubernetes cluster in between.
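As an aside, the market-close job can liquidate everything with a single call on the same Alpaca client (a sketch; this post doesn't show that job's code, and key is assumed to be defined as above):

import alpaca_trade_api as tradeapi

api = tradeapi.REST(key['APCA_API_KEY_ID'], key['APCA_API_SECRET_KEY'],
                    key['APCA_API_BASE_URL'], 'v2')
# Liquidate all open positions at the market close
api.close_all_positions()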

To save on CPU cost, GCP provides a serverless way to run an ML training job from a custom Docker image on its AI Platform. Here is the command:

gcloud ai-platform jobs submit training $job_id \
--region "us-central1" \
--master-image-uri=gcr.io/$PROJ/lstmstock:latest \
--service-account=$service_ac \
--job-dir "gs://$bucket" \
-- --key-id=$key_id --secret=$secret

Google only charges for the CPU time used while the job is running.
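Once submitted, the job can be checked from the command line with the standard AI Platform subcommands:

gcloud ai-platform jobs describe $job_id      # current state of the job
gcloud ai-platform jobs stream-logs $job_id   # follow the job's logs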

To make the whole automation pipeline complete, I set a daily schedule in GCP Cloud Scheduler, which invokes the following Cloud Build trigger at 9:30 every working day.

URL : https://cloudbuild.googleapis.com/v1/projects/$PROJ/triggers/${trigger-id}:run

The above URL is the Cloud Build trigger REST API endpoint. It takes an HTTP POST request that runs the trigger against the branch specified in the POST body, as follows:

{
  "branchName": "main"
}
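Cloud Scheduler can be pointed at that endpoint with a command along these lines (a sketch; the job name, $TRIGGER_ID, and service account are illustrative, and the service account needs permission to run the trigger):

gcloud scheduler jobs create http lstm-daily \
  --schedule="30 9 * * 1-5" \
  --http-method=POST \
  --uri="https://cloudbuild.googleapis.com/v1/projects/$PROJ/triggers/$TRIGGER_ID:run" \
  --message-body='{"branchName": "main"}' \
  --oauth-service-account-email=$service_ac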

Cloud Build will download the GitHub repository, locate the cloudbuild.yaml, and run the build job. Here is how I define the cloudbuild.yaml to invoke the GCP AI Platform job.

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$proj/lstmstock', '.' ]
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'push', 'gcr.io/$proj/lstmstock' ]
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-eEuo'
  - 'pipefail'
  - '-c'
  - |-
    ts=`date +%Y%m%d%H%M`
    job_id="mkt_idx_lstm_stock_training_$ts"
    gcloud ai-platform jobs submit training $job_id \
      --region "us-central1" \
      --master-image-uri=gcr.io/$proj/lstmstock:latest \
      --service-account=$service_ac \
      --job-dir "gs://$bucket" \
      -- --key-id=$key_id --secret=$secret

The Cloud Build job has three steps:

  1. Build the Docker image.
  2. Push the Docker image to the GCP container registry.
  3. Submit the job to GCP AI Platform.

The job runs the ML model training/prediction, and then calls the Alpaca API to place the order.

Pretty easy! Happy coding!
