This one is the most important: before starting anything, you should learn about programming. Coding will make you assimilate a certain logic that's close to mathematical formulas and can help you formalize your trading process. It's essential to understand everything that's "under the hood": what if your strategy starts to slow down after a few months and you're not able to improve it yourself?
You won't learn programming in a day; take your time to learn and understand the process. Fortunately, there are multiple free resources for learning Python, on websites like edX, Coursera, and Udacity.
#2 Backtesting and training on the same period
Let's say you found the perfect strategy that makes +300% over 2014. You should backtest it on a different period: the strategy may work in that specific window but lose a lot in another one. This beginner mistake has a name: overfitting. Ideally you want to split your data set into at least two parts, train and test. If you want rock-solid performance, you can try K-fold cross-validation: it splits your data set into K parts, trains on K-1 of them and tests on the remaining one, rotating so every part serves as the test set once.
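The split described above can be sketched with scikit-learn. Note that plain K-fold shuffles rows, which would leak future data into training for a time series, so `TimeSeriesSplit` is the usual adaptation; the `prices` array here is just a stand-in for your dataset:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

prices = np.arange(1000)  # stand-in for your historical data

# Basic chronological split: train on the first 80%, test on the rest
split = int(len(prices) * 0.8)
train, test = prices[:split], prices[split:]

# K-fold-style validation adapted to time series: each fold trains
# only on data that precedes its test window
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(prices):
    print("train rows %d-%d, test rows %d-%d"
          % (train_idx[0], train_idx[-1], test_idx[0], test_idx[-1]))
```

Inside each fold you would fit your strategy parameters on the training rows and backtest on the test rows only.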
#3 Not backtesting enough
Backtest, backtest, and backtest again. Use different time periods and adjust the trading size: the strategy could work buying $100 worth of stock at a time, but what if you want to scale it? You should also introduce slippage and, of course, broker fees.
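A minimal sketch of what introducing slippage and fees can look like; the 5-basis-point slippage and the $1 flat fee are made-up values to adjust to your broker and market:

```python
SLIPPAGE = 0.0005   # assume 5 basis points of adverse price movement
FEE = 1.0           # assume a flat $1 fee per order

def fill_price(price, side):
    """Execution price once slippage has moved against you."""
    return price * (1 + SLIPPAGE) if side == "buy" else price * (1 - SLIPPAGE)

def buy_cost(price, quantity):
    """Cash spent to buy, slippage and fee included."""
    return fill_price(price, "buy") * quantity + FEE

def sell_proceeds(price, quantity):
    """Cash received from a sale, slippage and fee included."""
    return fill_price(price, "sell") * quantity - FEE

print(buy_cost(50.0, 100))       # a bit more than the mid-price 5000
print(sell_proceeds(50.0, 100))  # a bit less than 5000
```

Using these adjusted prices in the backtest instead of the raw quotes gives a much more realistic picture of profitability.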
Backtesting is good, but paper trading is better: run the strategy in real time but without any broker connection, so you can simulate how it behaves in current market conditions.
#4 Not having a risk management strategy
Risk management makes the difference during bear markets or high-volatility periods. You can cap the maximum exposure and ignore any buy signal once you hit the limit, or automatically close any position older than a few days. These are only suggestions; what matters is making sure you won't get stuck with a loss that grows over time.
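A sketch of those two checks; the limits and the position structure here are illustrative assumptions, not a full risk engine:

```python
from datetime import datetime, timedelta

MAX_EXPOSURE = 50_000          # assumed cap on total open exposure, in dollars
MAX_AGE = timedelta(days=5)    # assumed limit before force-closing a position

def allow_buy(positions, order_value):
    """Ignore buy signals once the exposure cap would be exceeded."""
    exposure = sum(p["value"] for p in positions)
    return exposure + order_value <= MAX_EXPOSURE

def stale_positions(positions, now):
    """Positions that should be force-closed because of their age."""
    return [p for p in positions if now - p["opened"] > MAX_AGE]

positions = [{"value": 30_000, "opened": datetime(2018, 1, 2)}]
print(allow_buy(positions, 25_000))                      # over the cap: False
print(stale_positions(positions, datetime(2018, 1, 10))) # 8 days old: flagged
```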
#5 Having unreliable data
Your strategy will be based on financial data, whether real-time, minute, or daily, and a single bad data point can destroy your profits. Make sure it comes from a reliable source and not some random website; a good source is Quandl, and some of their datasets are free.
Finding trading signals is one of the core problems of algorithmic trading: without good signals your strategy is useless. It's a very abstract process, since you can't intuitively guess which signals will make your strategy profitable. That's why I'm going to show how to visualize the signals, so you can check whether they make sense before introducing them into your algorithm.
We're going to use matplotlib to graph the asset price and overlay buy/sell signals on the same graph; this way you can see whether the signals are generated at the right moments: buy low, sell high.
Data preparation
For this tutorial I picked a very simple strategy: a moving average crossover. The idea is to buy when the "short" moving average (say, 5-day) crosses above the "long" moving average (say, 20-day), and to sell when they cross the other way.
First of all, we need to install matplotlib via the usual pip:
pip install matplotlib
This example requires pandas and matplotlib:
import pandas as pd
import matplotlib.pyplot as plt
Loading data and computing the moving averages is pretty trivial thanks to Pandas:
data = pd.read_csv('EMini.csv', sep=',', index_col=0, parse_dates=True)
# Generate moving averages
data = data.reindex(index=data.index[::-1]) # Reverse for the moving average computation
data['Mavg5'] = data['Settle'].rolling(window=5).mean()
data['Mavg20'] = data['Settle'].rolling(window=20).mean()
Now the actual signal generation part is a bit trickier:
# Save moving averages for the day before
prev_short_mavg = data['Mavg5'].shift(1)
prev_long_mavg = data['Mavg20'].shift(1)
# Select buy and sell signals: where the moving averages cross
# (buy when the short average crosses above the long one)
buys = data.loc[(data['Mavg5'] >= data['Mavg20']) & (prev_short_mavg <= prev_long_mavg)]
sells = data.loc[(data['Mavg5'] <= data['Mavg20']) & (prev_short_mavg >= prev_long_mavg)]
buys and sells now contain all the dates where we have a signal.
Plotting the signals
The interesting part is graphing all of this; the syntax is simple:
plt.plot(X, Y)
Displaying the E-Mini price and the moving averages is pretty simple. We use data.index because the dates in the DataFrame are the index:
# The label parameter is useful for the legend
plt.plot(data.index, data['Settle'], label='E-Mini future price')
plt.plot(data.index, data['Mavg5'], label='5-day moving average')
plt.plot(data.index, data['Mavg20'], label='20-day moving average')
For the signals, we want to put each marker at its specific date (which is in the index) and at the E-Mini price level, so that the plot isn't visually confusing:
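A self-contained sketch of that marker plotting, using the same convention as later in this post (green triangles for buys, red for sells); the toy frames here stand in for the data, buys, and sells built above:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy stand-ins for the data/buys/sells frames built above
idx = pd.date_range('2018-01-01', periods=7)
data = pd.DataFrame({'Settle': [3.0, 2.0, 1.0, 2.0, 3.0, 2.5, 1.5]}, index=idx)
buys = data.iloc[[2]]    # signal at the dip
sells = data.iloc[[4]]   # signal at the local maximum

plt.plot(data.index, data['Settle'], label='E-Mini future price')
# Each marker goes at the signal's date (the index) and at that
# day's settle price, so it sits right on the price curve
plt.plot(buys.index, data.loc[buys.index]['Settle'], '^', markersize=10, color='g', label='Buy')
plt.plot(sells.index, data.loc[sells.index]['Settle'], 'v', markersize=10, color='r', label='Sell')
plt.legend(loc=0)
plt.show()
```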
In conclusion, you can interpret this by noticing that most buy signals land at dips in the curve and sell signals at local maximums, so our signal generation looks promising. Without a real backtest we can't be sure the strategy will be profitable, but at least we can validate (or reject) a signal. The main advantage of this method is that we instantly see whether the signals are "right": you can play with the short and long moving averages, try 10-day versus 30-day, etc., and in the end pick the right parameters for this signal.
To show you the full process of creating a trading strategy, I'm going to work on a super simple strategy based on the VIX and its futures. I'm skipping the data download from Quandl; I'm using the VIX index and the VIX futures, only the VX1 and VX2 continuous contract datasets.
Data loading
First we load all the necessary imports; the backtest import will be used later:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from backtest import backtest
from datetime import datetime
For the sake of simplicity, I'm going to put all the values in one DataFrame, in different columns: the VIX index, VX1, and VX2. This gives us the following code:
VIX = "VIX.csv"
VIX1 = "VX1.csv"
VIX2 = "VX2.csv"
fileList = [VIX, VIX1, VIX2]
# Create the base DataFrame
data = pd.DataFrame()
# Iterate through all files
for file in fileList:
    # Only keep the Close column
    tmp = pd.DataFrame(pd.read_csv(file, sep=',', index_col=0, parse_dates=True)['Close'])
    # Rename the Close column to the correct index/future name
    tmp.rename(columns={'Close': file.replace(".csv", "")}, inplace=True)
    # Merge with the data already loaded
    # It's like a SQL join on the dates
    data = data.join(tmp, how='right')
# Resort by the dates, in case the join messed up the order
data = data.sort_index()
And here’s the result:
Date          VIX      VX1      VX2
02/01/2008    23.17    23.83    24.42
03/01/2008    22.49    23.30    24.60
04/01/2008    23.94    24.65    25.37
07/01/2008    23.79    24.07    24.79
08/01/2008    25.43    25.53    26.10
Signals
For this tutorial I'm going to use a very basic signal. The structure stays the same, and you can replace the logic with whatever strategy you want, from very complex machine learning algos to simple moving average crossovers.
The VIX is a mean-reverting asset, at least in theory: it goes up and down, but in the end its value moves around an average. Our strategy will be to go short when it's way higher than its mean value and to go long when it's very low, using absolute thresholds to keep it simple.
high = 65
low = 12
# By default, set everything to 0
data['Signal'] = 0
# For each day where the VIX is higher than 65, we set the signal to -1 which means: go short
data.loc[data['VIX'] > high, 'Signal'] = -1
# Go long when the VIX is lower than 12
data.loc[data['VIX'] < low, 'Signal'] = 1
# We store only days where we go long/short, so that we can display them on the graph
buys = data.loc[data['Signal'] == 1]
sells = data.loc[data['Signal'] == -1]
Now we’d like to visualize the signal to check if, at least, the strategy looks profitable:
# Plot the VX1, not the VIX since we're going to trade the future and not the index directly
plt.plot(data.index, data['VX1'], label='VX1')
# Plot the buy and sell signals on the same plot
plt.plot(sells.index, data.loc[sells.index]['VX1'], 'v', markersize=10, color='r')
plt.plot(buys.index, data.loc[buys.index]['VX1'], '^', markersize=10, color='g')
plt.ylabel('Price')
plt.xlabel('Date')
plt.legend(loc=0)
# Display everything
plt.show()
The result is quite good, even though there's no trade between 2009 and 2013; we could improve that later:
Backtesting
Let's check whether the strategy is profitable and get some metrics. We're going to compare our strategy's returns with a "Buy and Hold" strategy, meaning we just buy the VX1 future and wait (rolling it at each expiry); this way we can see whether our strategy beats a passive one.
I put the backtest method in a separate file to make the main code less heavy, but you can keep the method in the same file:
import numpy as np
import pandas as pd
# data = prices + dates at least
def backtest(data):
    cash = 100000
    position = 0
    data['Total'] = 100000
    data['BuyHold'] = 100000
    # To compute the Buy and Hold value, invest all of the cash
    # in the VX1 on the first day of the backtest
    positionBeginning = int(100000 / float(data.iloc[0]['VX1']))
    increment = 1000
    for row in data.iterrows():
        price = float(row[1]['VX1'])
        signal = float(row[1]['Signal'])
        if signal > 0 and cash - increment * price > 0:
            # Buy
            cash = cash - increment * price
            position = position + increment
            print(row[0].strftime('%d %b %Y') + " Position = " + str(position) + " Cash = " + str(cash) + " // Total = {:,}".format(int(position * price + cash)))
        elif signal < 0 and abs(position * price) < cash:
            # Sell
            cash = cash + increment * price
            position = position - increment
            print(row[0].strftime('%d %b %Y') + " Position = " + str(position) + " Cash = " + str(cash) + " // Total = {:,}".format(int(position * price + cash)))
        data.loc[data.index == row[0], 'Total'] = float(position * price + cash)
        data.loc[data.index == row[0], 'BuyHold'] = price * positionBeginning
    return position * price + cash
In the main code I’m going to use the backtest method like this:
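The original snippet isn't shown here; below is a minimal sketch of the two metrics discussed next, with helper names of my own, assuming a 0% risk-free rate and 252 trading days per year:

```python
import numpy as np
import pandas as pd

def annualized_return(total, days):
    """Geometric annualization of the portfolio value series."""
    return (total.iloc[-1] / total.iloc[0]) ** (365.25 / days) - 1

def sharpe_ratio(total):
    """Annualized Sharpe ratio from daily values, 0% risk-free rate."""
    returns = total.pct_change().dropna()
    return np.sqrt(252) * returns.mean() / returns.std()

# Hypothetical usage with the backtest function above:
# final = backtest(data)
# days = (data.index[-1] - data.index[0]).days
# print("Annualized return: {:.2%}".format(annualized_return(data['Total'], days)))
# print("Sharpe ratio: {:.2f}".format(sharpe_ratio(data['Total'])))
```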
It's important to display the annualized return: a 20% return over 10 years is very different from 20% over 2 months, so we annualize everything to compare strategies easily. The Sharpe ratio is another useful metric; it tells us whether the return is worth the risk (in this example I just assumed a 0% risk-free rate). If the ratio is > 1, the risk-adjusted return is interesting; if it's > 10, it's very interesting: basically, high return for low volatility.
In our example we get a Sharpe ratio of 4.6, which is quite good:
The strategy performed very well until 2010, but from 2013 the PnL starts to stagnate:
Conclusion
I showed you a basic structure for creating a strategy; you can adapt it to your needs. For example, you can implement your strategy using zipline instead of a custom backtesting module: with zipline you'll have many more metrics, and you'll easily be able to run your strategy on different assets, since market data is managed by zipline.
I didn't mention transaction fees or the bid-ask spread in this post; the backtest doesn't take any of this into account, so if we included them the strategy might even lose money!
For this tutorial, we're going to assume the same basic structure as in the previous article about Random Forests. The idea is to do some feature engineering to generate a bunch of features; some of them may be useless and hurt the machine learning algorithm's prediction score, and that's where feature selection comes into action.
Feature engineering
This isn't an attempt at perfect feature engineering; we just want to generate a good number of features and pick the most relevant ones afterwards. Depending on your dataset, you can create more interesting features like the day, the hour, whether it's the weekend, etc. Let's assume we only have one column, 'Mid', which is the mid price between the bid and the ask. We can generate moving averages for various windows, 5 to 50 for example; the code is quite simple using pandas:
for i in range(5, 50, 5):
    data["mavgMid"+str(i)] = data["Mid"].rolling(i, min_periods=1).mean()
This way we get new columns: mavgMid5, mavgMid10, and so on. We can do the same for the moving standard deviation, which can be useful for a machine learning algorithm; it's almost the same code as above:
for i in range(5, 50, 5):
    data["stdMid"+str(i)] = data["Mid"].rolling(i, min_periods=1).std()
We can continue with various rolling indicators; see the pandas documentation for the full list. I personally like rolling correlations (`.rolling().corr()`) because in the crypto-currency world correlation is very volatile and contains a lot of information, especially for inter-exchange arbitrage opportunities. In that case you need to add another column with prices from another source.
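For example, a rolling correlation between the mid price and a second exchange's price could look like this; the MidOther column is a made-up stand-in for real data from another source:

```python
import numpy as np
import pandas as pd

# Toy data: a random-walk mid price plus a noisy copy of it,
# standing in for quotes from a second exchange
rng = np.random.default_rng(0)
data = pd.DataFrame({"Mid": 100 + np.cumsum(rng.normal(size=200))})
data["MidOther"] = data["Mid"] + rng.normal(scale=0.1, size=200)

# Rolling 50-period correlation between the two price series
data["corrMid50"] = data["Mid"].rolling(50).corr(data["MidOther"])
```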
Here is an example of a full function:
def featureEngineering(data):
    # Moving averages
    for i in range(5, 50, 5):
        data["mavgMid"+str(i)] = data["Mid"].rolling(i, min_periods=1).mean()
    # Rolling standard deviations
    for i in range(5, 50, 5):
        data["stdMid"+str(i)] = data["Mid"].rolling(i, min_periods=1).std()
    # Remove the first 50 rows since 50 is our max window
    data = data.drop(data.head(50).index)
    return data
Feature selection
After the feature engineering step we should have 20 features (+1 Signal column). I ran the algorithm with the same parameters as in the previous article, but on XMR-BTC minute data over a week from the CryptoCompare API (tutorial to come soon), and I got a score of 0.53.
That's a decent score, but maybe our 20 features are hurting the Random Forest's ability to predict.
We're going to use the SelectKBest algorithm from scikit-learn, which is quite efficient for a simple strategy. We need to add an import first:
from sklearn.feature_selection import SelectKBest, f_classif
SelectKBest() takes at least two parameters: a scoring function, here f_classif since we're doing classification with a Random Forest, and the number of features you want to keep:
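The selection snippet isn't shown in the original; a minimal self-contained sketch, where the placeholder arrays stand in for the train/test splits from the previous article:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Placeholder arrays standing in for the real train/test splits
# (20 engineered features, binary Signal target)
rng = np.random.default_rng(0)
data_X_train = rng.normal(size=(500, 20))
data_Y_train = (data_X_train[:, 0] > 0).astype(int)
data_X_test = rng.normal(size=(100, 20))

# Keep the 10 best features, scored with f_classif on the training
# set only; the test set then gets the exact same column selection
selector = SelectKBest(f_classif, k=10)
data_X_train = selector.fit_transform(data_X_train, data_Y_train)
data_X_test = selector.transform(data_X_test)
print(data_X_train.shape, data_X_test.shape)  # → (500, 10) (100, 10)
```

Fitting the selector on the training set only matters: scoring features against the test labels would leak information into the model.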
Now data_X_train and data_X_test contain 10 features each, selected using the f_classif scoring function.
Finally, the score I got with my XMR-BTC dataset is 0.60; going from 0.53 to 0.60 is a pretty nice improvement for such basic feature selection. I picked 10 features to keep somewhat arbitrarily; you can loop through different values to determine the best number of features, but be careful of overfitting!