FX: multivariate stochastic volatility – part 2

In part 2 our mean-variance optimal FX portfolio is allowed to choose from multiple models each week based on a measure of goodness (average MSSE). The risk-adjusted return improves as a result, with the annualized Sharpe Ratio rising to 0.86 from 0.49.

In part 1 we estimated a sequential multivariate stochastic volatility model on a portfolio of five FX pairs (EURAUD, EURCAD, EURGBP, EURJPY, and EURUSD) and constructed a mean-variance optimized portfolio. Using non-optimized settings the portfolio produced an annualized return of 5.4% compared with 4.1% for the benchmark, but this outperformance was due to higher volatility, and the annualized Sharpe Ratio (SR) of 0.49 failed to match the benchmark’s SR of 0.68.

Our non-optimized model in part 1 did not include autoregressive lags (that is, d = 0) and used discount factors close to one (beta = delta = 0.99), resulting in slow evolution of the intercepts and covariance matrix. Today we will adjust these three parameters in an attempt to improve the (risk-adjusted) return. Concretely, we let beta = {0.80, 0.85, 0.90, 0.95, 0.99}, delta = {0.95, 0.96, …, 0.99}, and d = {0, 1, 3, 5}, resulting in 100 model combinations. This is where sequential estimation shines, as we do not need to re-estimate 100 models in batch for every time period in our evaluation period (574 weeks). Our use of weekly data makes batch estimation (much) less of a problem in terms of speed than if we had used higher-frequency data, but imagine if we used 1-minute prices instead (>7,000 observations per week).

plot4

Out of the 100 model combinations, the best in terms of total return delivers 81% (see chart above). This model, which uses d = 0, beta = 0.99 and delta = 0.98, is very close to our original in terms of both total return and the settings used, suggesting that from a total return point of view the parameters we used in part 1 were close to optimal.

However, if we sort by the Sharpe Ratio instead (see chart below), the best model is the one with beta = 0.80, delta = 0.98, and d = 3, which delivers a SR of 0.80, higher than the benchmark’s SR of 0.68. This model includes three autoregressive terms for each FX pair and allows for a very fast evolution of the covariance matrix (i.e. beta = 0.80), resulting in just 4 degrees of freedom. This model therefore allows for much fatter tails than a normal distribution – a reasonable feature, we would expect a priori, given that we are dealing with price return data.
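The mapping from beta to the degrees of freedom is worth spelling out, as it governs this trade-off between a fast-moving covariance matrix and fat tails. A quick illustration using the same relation as in the TVVARSV function further down, where the degrees of freedom are stored as q * beta with q = 1/(1 - beta):

# Degrees of freedom implied by the covariance discount factor beta
beta <- c(0.80, 0.85, 0.90, 0.95, 0.99)
data.frame(beta = beta, df = beta/(1 - beta))
# beta = 0.80 gives 4 degrees of freedom (fat tails); beta = 0.99 gives 99 (close to normal)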

Interestingly, the chart also reveals that the models with autoregressive terms generally perform worse than those without (at least with respect to SR). The first (left-most) 25 combinations, which generally have annualized Sharpe Ratios close to 0.6, are those where we simply fit a multivariate stochastic volatility model (i.e. d = 0) for combinations of beta and delta. However, some combinations with autoregressive terms do achieve higher SRs, including the ‘best model’ mentioned in the previous paragraph.

plot5

The performance of the 100 combinations is of course only known in hindsight and hence would not have been of much help at the start of any given week (n = 1, 2, …, 574), where we would have had to decide which model – and hence which expected return vector and predictive covariance matrix – to feed into our mean-variance optimizer.

We could calculate a performance statistic such as annualized SR or total return on a rolling basis and choose our model each week accordingly. Instead, each week we calculate the mean squared standardized 1-step forecast errors (MSSE) over a lookback window of 52 weeks (see Triantafyllopoulos [2008], page 7, for details) – similar to the lookback window used in the calculations for the benchmark portfolio in part 1. We then average the MSSE across the five FX pairs to arrive at a measure of goodness. If the model performs well across both the lookback window and the FX pairs then the measure should be close to one. If it is below one the model generally predicts variances that are too large; if it is above one it generally predicts variances that are too small.
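In code the weekly model choice is then a one-liner: we pick the combination whose average MSSE is closest to one. A minimal sketch (the full program at the bottom of the post does the same thing on the stats array, lagged one week):

# avg.msse: vector of MSSE values averaged across the five FX pairs, one per combination
pick_model <- function(avg.msse) which.min((1 - avg.msse)^2)
pick_model(c(0.70, 1.05, 1.40)) # hypothetical values; returns 2 (the value closest to one)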

plot6

The portfolio delivers a total return of 43.7%, which translates into an annualized return of around 3.3%. This is well below the annualized return of 5.4% achieved with the ‘default settings’, but that figure was produced via high leverage and high volatility. The drawdowns – as should be clear from the chart above – are much smaller in this portfolio, with the largest drawdown at 4.4%, and the annualized SR of 0.86 is even higher than that of the ‘best SR model’ found above.

Because our portfolio is allowed to switch between combinations according to the goodness measure (average MSSE) it can switch between models with high or low predicted covariances and between models with and without autoregressive lags depending on the best average MSSE each week.

The selected model includes five autoregressive lags per FX pair in 287 of the 574 weeks (50%), three lags in 131 weeks, one lag in 101 weeks and zero lags (i.e. a multivariate stochastic volatility model) in 55 weeks. The VAR parameters are allowed to evolve at a moderate pace (delta = 0.95) in 271 weeks while the covariance matrix mostly evolves at a moderate-to-slow pace with 522 weeks spent in combinations where beta >= 0.95. However, in 52 weeks the selected model has a beta of less than or equal to 0.9.

plot0

To demonstrate the robustness of the portfolio’s performance to specific model combinations we randomly exclude 10% of the combinations and re-run the optimization. We do this 1,000 times and find that the portfolio continues to perform well with 90% of the annualized Sharpe Ratios in the interval from 0.76 to 0.97 and 95% in the interval from 0.73 to 1.00. We need to exclude well north of 50% of the combinations before we see a material effect on the SR.

plot9

The portfolio changes its leverage based on the predicted covariance matrix, the predicted return vector and the required return (10% at an annualized rate). No fewer than 448 weeks (78%) are spent with a leverage of less than 1, while the portfolio is leveraged at least two times (gross exposure of at least 200%) in 73 weeks. This helps explain the relatively smooth cumulative return chart above.
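Gross and net exposure are computed directly from the unconstrained mean-variance weights, exactly as in the main program below; the weights in this small example are purely hypothetical:

weights <- c(EURAUD = 0.60, EURCAD = -0.40, EURGBP = 0.30, EURJPY = -0.20, EURUSD = 0.10)
sum(abs(weights)) # leverage (gross exposure): 1.6
sum(weights)      # net exposure to EUR: 0.4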

plot10

Leverage reached a high of 4.38 on August 23, 2013 and net exposure (to EUR) climbed to 1.49, meaning that the portfolio had nearly 150% of net long exposure to the Euro during the following week. Similarly, net exposure reached a low of -1.33 on June 21, 2013, indicating that the portfolio was heavily biased against the Euro during the following week.

These figures compare with an average leverage of 0.71 and an average net exposure of 0.00. The fact that net exposure is (very close to) zero is positive given that we have chosen an unconstrained mean-variance optimization. Had we constrained the portfolio weights to sum to one as is typically done, we would in effect have placed a long bet on the Euro relative to the five other currencies (AUD, CAD, JPY, GBP, and USD), thereby assuming – or at least betting on – a positive drift in the five FX pairs during our test period (2005-2015).

In this part 2 we have shown how to (estimate and) select a model sequentially based on a measure of goodness (average MSSE). Based on this measure we can choose the best model each week and use that model’s output (predicted return vector and predictive covariance matrix) to determine the mean-variance optimal weights. One can test many more combinations than the 100 used above and also use another (or multiple) measure(s) of goodness. Furthermore, one can add complexity to the model by incorporating an autoregressive process for the volatility precision covariance matrix (see Triantafyllopoulos, 2013) or move over to particle filters (and perhaps include regime switching, see e.g. Bao et al., 2012).

#####################################################################
### Time-varying Vector Autoregression with Stochastic Volatility ###
#####################################################################

TVVARSV <- function(x, AR = 0, beta = 0.99, delta = 0.99, lookback = ceiling(NROW(x)/10)) {

  x <- as.matrix(x)
  N <- NROW(x) # The number of observations per time series
  J <- NCOL(x) # The number of time series

  # Constants
  q <- 1/(1 - beta)
  k <- (beta * (1 - J) + J)/(beta * (2 - J) + J - 1)

  # Prior specification
  m <- matrix(0, J * AR + 1, J)
  P <- diag(0.01, J * AR + 1, J * AR + 1)
  S <- diag(0.02, J, J)

  dinv <- diag(1/sqrt(delta), J * AR + 1, J * AR + 1)

  jnam <- paste("J", 1:J, sep = "")
  cnam <- "Int"
  if (AR > 0) for (a in 1:AR) cnam <- c(cnam, paste("AR", a, "_", jnam, sep = ""))

  # Storage
  out <- list(means = array(NA, dim = c(N, J, 4), dimnames = list(NULL, jnam, c("actual", "pred", "error", "scaled"))),
              cov = array(NA, dim = c(N, J, J), dimnames = list(NULL, jnam, jnam)),
              stats = array(NA, dim = c(N, J, 4), dimnames = list(NULL, jnam, c("ME", "MAE", "RMSE", "MSSE"))),
              df = q * beta, beta = beta, delta = delta, AR = AR)

  for (n in (AR + 1):N) {

    xn <- matrix(x[n, ], J, 1)
    fn <- matrix(1)
    if (AR > 0) fn <- rbind(fn, matrix(x[(n - 1):(n - AR), ], ncol = 1, byrow = TRUE))

    # Prediction
    R <- dinv %*% P %*% dinv
    Q <- as.numeric(t(fn) %*% R %*% fn) + 1

    ycov <- Q * (1 - beta)/(3 * beta * k - 2 * k) * S

    pred <- t(m) %*% fn

    # If xn contains missing values then use the predictions
    xn <- ifelse(is.na(xn), pred, xn)

    err <- xn - pred
    zrr <- solve(t(chol(ycov))) %*% err

    out$means[n, , "actual"] <- as.numeric(xn)
    out$means[n, , "pred"] <- as.numeric(pred)
    out$means[n, , "error"] <- as.numeric(err)
    out$means[n, , "scaled"] <- as.numeric(zrr)

    out$cov[n, , ] <- ycov

    # Update
    K <- R %*% fn / Q
    m <- m + K %*% t(err)
    P <- R - tcrossprod(K) * Q
    S <- S/k + tcrossprod(err)/Q

  }

  out$stats[, , "ME"] <- apply(out$means[, , "error", drop = F], 2, TTR::runMean, n = lookback)
  out$stats[, , "MAE"] <- apply(abs(out$means[, , "error", drop = F]), 2, TTR::runMean, n = lookback)
  out$stats[, , "RMSE"] <- apply(out$means[, , "error", drop = F]^2, 2, TTR::runMean, n = lookback)
  out$stats[, , "MSSE"] <- apply(out$means[, , "scaled", drop = F]^2, 2, TTR::runMean, n = lookback)

  return(out)

}

########################
### Unconstrained MV ###
########################

SeqMeanVar <- function(exp.returns, req.return = 0, covar) {

  # Unconstrained mean-variance weights: w = req.return * covar^(-1) mu / (mu' covar^(-1) mu)
  icovar <- solve(covar)
  weights <- req.return * as.numeric(icovar %*% exp.returns) / as.numeric(t(exp.returns) %*% icovar %*% exp.returns)

  return(weights)

}

####################
### MAIN PROGRAM ###
####################

#### Input: A matrix of FX prices (N x J) with dates as rownames

freq <- 52 # weekly data
lookback <- freq * 1 # 52-week lookback for the MSSE and the benchmark
start.date <- "2004-12-31" # first week of the evaluation period
return.req <- 1.1^(1/52) - 1 # 10% annualized return requirement expressed per week

N <- NROW(prices)
J <- NCOL(prices)

start.obs <- which(start.date == rownames(prices))

returns.log <- log(prices[-1, ]/prices[-N, ])
returns.ari <- prices[-1, ]/prices[-N, ] - 1

N <- N - 1

dates <- rownames(returns.log)

# Run models
comb <- expand.grid(beta = c(seq(0.80, 0.95, 0.05), 0.99),
                    delta = seq(0.95, 0.99, 0.01),
                    AR = c(0, 1, 3, 5))

# No. of combinations
cnum <- NROW(comb)

# Combination names
comb.names <- c(paste("Comb_", 1:cnum, sep = ""), "BM", "Best")

# Store all combinations (plus the best and the benchmark)
stats.arr <- array(NA, dim = c(N, NROW(comb) + 2, 6),
                   dimnames = list(dates, comb.names, c("Ret", "Exc", "AnnVol", "Lev", "Net", "Stat")))

# A list of models
model.list <- vector("list", cnum)

# Loop across combinations (cnum + 1 is the benchmark)
for (i in 1:(cnum + 1)) {

  cat("Combination", i, "of", cnum + 1, "\n")

  if (i <= cnum) {

    model <- TVVARSV(x = returns.log, beta = comb[i, "beta"], delta = comb[i, "delta"],
                     lookback = lookback, AR = comb[i, "AR"])

    # Average MSSE across the five FX pairs (the measure of goodness)
    stats.arr[, i, "Stat"] <- apply(model$stats[, , "MSSE", drop = F], 1, mean, na.rm = T)

    model.list[[i]] <- model

  }

  for (n in (start.obs + 1):N) {

    if (i == cnum + 1) {

      # Historical mean and covariance (benchmark)
      model.mean <- exp(apply(log(1 + returns.ari[(n - lookback):(n - 1), ]), 2, mean)) - 1
      model.cov <- exp(cov(log(1 + returns.ari[(n - lookback):(n - 1), ]))) - 1

    } else {

      # Model mean and covariance
      model.mean <- exp(model$means[n, , "pred"]) - 1
      model.cov <- exp(model$cov[n, , ]) - 1

    }

    # Mean-variance optimization
    weights.MV <- SeqMeanVar(exp.returns = model.mean, req.return = return.req, covar = model.cov)

    # Predicted portfolio volatility at annual frequency
    stats.arr[n, i, "AnnVol"] <- log(sqrt(t(weights.MV) %*% model.cov %*% weights.MV) + 1) * sqrt(freq)

    # Gross and net exposure
    stats.arr[n, i, "Lev"] <- sum(abs(weights.MV))
    stats.arr[n, i, "Net"] <- sum(weights.MV)

    # Return
    stats.arr[n, i, "Ret"] <- sum(weights.MV * returns.ari[n, ])

  }

}

# Excess return
stats.arr[, 1:(cnum + 1), "Exc"] <- stats.arr[, 1:(cnum + 1), "Ret"] - matrix(stats.arr[, "BM", "Ret"], N, cnum + 1)

for (n in start.obs:N) {

  # Find the best model using MSSE (lagged one week)
  best <- which.min((1 - stats.arr[n - 1, 1:cnum, "Stat"])^2)

  stats.arr[n, "Best", ] <- stats.arr[n, best, ]

}

# Log returns
log.ret <- matrix(log(1 + stats.arr[start.obs:N, , "Ret"]), ncol = cnum + 2)

# Annualized returns
ann.ret <- exp(apply(log.ret, 2, mean) * freq) - 1

# Annualized (log) Sharpe ratios (risk-free rate = 0%)
ann.lSR <- apply(log.ret, 2, mean)/apply(log.ret, 2, sd) * sqrt(freq)

# Cumulative returns
cum.ret <- rbind(1, apply(1 + stats.arr[start.obs:N, , "Ret", drop = F], 2, cumprod))
tot.ret <- tail(cum.ret, 1)

# Drawdowns
drawdowns <- cum.ret/apply(cum.ret, 2, cummax) - 1

FX: multivariate stochastic volatility – part 1

We apply a (sequential) multivariate stochastic volatility model to five FX pairs. Using non-optimized settings our model beats a benchmark portfolio in terms of total return, but fails when you account for risk [note: we have edited the blog post to show results using mean-variance optimization rather than minimum-variance optimization].

Following an equity-centric start to our blog’s life with random portfolios (part 1 and part 2) and a monthly factor model, we now move on to foreign exchange (FX). We will investigate how a (sequential) multivariate stochastic volatility model can be used to potentially improve an FX portfolio by modelling not only the variances of the FX pairs but also the covariances. These can then, for example, be used to find mean-variance or minimum-variance portfolios. In this blog post we stick to a simple mean-variance approach, but we aim to add complexity later in an attempt to improve on the results.

The generalized autoregressive conditional heteroscedasticity (GARCH) model and its many evolutions (EGARCH, NGARCH, GARCH-M, multivariate GARCH etc.) remain a staple of time series modelling, but we will dive into the stochastic volatility camp; more specifically, we will look at a sequential time-varying vector autoregressive model with stochastic volatility, following closely the approach in Triantafyllopoulos (2008). This allows us to cast the model in state-space form, providing us with sequential covariance predictions rather than having to constantly re-estimate in batch using a rolling or expanding window of observations. Furthermore, this model provides both mean and covariance predictions, and it is fast (though speed is not important in this work as we use weekly data).

FX portfolio

We will investigate the performance of a portfolio of five FX pairs (EURAUD, EURCAD, EURGBP, EURJPY, and EURUSD) on a weekly basis during the eleven-year period 2005-2015, and to keep things simple we assume d = 0, beta = 0.99, and delta = 0.99 in the following (see Triantafyllopoulos for details). This implies that we are estimating a multivariate stochastic volatility model with time-varying intercepts, but without any autoregressive terms (since d = 0), where the parameters are updated only gradually as the discount factors beta and delta are both close to one. The fact that beta is close to one also implies that we are effectively modelling the log-returns as normally distributed, since the implied degrees of freedom are 99.
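With the TVVARSV function listed in the part 2 code these settings amount to a single call; a minimal sketch, assuming returns.log holds the (574 x 5) matrix of weekly log returns:

model <- TVVARSV(x = returns.log, AR = 0, beta = 0.99, delta = 0.99, lookback = 52)
model$df # implied degrees of freedom: 99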

Every trading week the model is updated based on that week’s errors, and a set of predictions for the means and the covariance matrix is produced. Based on these predictions we update the weights of our mean-variance portfolio, where we require an annualized return of 10% (the same requirement applies to the benchmark). The data set used for portfolio evaluation is a (575 x 5) matrix of weekly FX close prices (carry-adjusted) for the five FX pairs. As a benchmark we run mean-variance optimization on historical data using a lookback window of 52 weeks. We take this approach so that the model’s ability to capture portfolio volatility can be compared directly with the benchmark’s.
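A single weekly rebalancing step then looks roughly as follows (a sketch reusing the SeqMeanVar function from the part 2 code; the week index n is arbitrary, and the 10% annualized requirement is first converted to a weekly figure):

req.week <- 1.1^(1/52) - 1 # 10% per year expressed per week
n <- 300 # hypothetical week
mu <- exp(model$means[n, , "pred"]) - 1 # predicted (arithmetic) returns
Sigma <- exp(model$cov[n, , ]) - 1      # predicted covariance matrix
weights <- SeqMeanVar(exp.returns = mu, req.return = req.week, covar = Sigma)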

A closer look at EURUSD

Before we get to the performance evaluation we will take a look at the predictions for one of the five FX pairs, EURUSD. Despite the discount factor delta being close to one, the estimated mean return shows quite large deviations from zero when momentum is strong in either direction (see the chart below). During the autumn of 2008 EURUSD plunged from roughly 1.60 to 1.25, and then again in mid-2014 from close to 1.40 to less than 1.05. The annualized expected return (i.e. the annualized mean prediction for EURUSD) declined to as low as -10.8% to reflect this weakness.

plotA

Moving on to the expected volatility, we unsurprisingly see a large spike in late 2008 that persists well into the following year. Annualized expected volatility is above 10% in 416 of 574 weeks, and it spends 284 consecutive weeks above this level starting in October 2008. The highest predicted volatility of 13.3% is reached in May 2009.

plotB

Performance

Our portfolio of five FX pairs returns 79% during the period from 2005 to 2015, compared to 55.2% for the benchmark portfolio, while our portfolio has 322 winning weeks (56.1%) compared with the benchmark’s 321 winning weeks (55.9%). However, portfolio volatility is so much higher than benchmark volatility that the risk-adjusted return is lower, with annualized Sharpe Ratios (assuming a zero risk-free rate) of 0.49 and 0.68, respectively.

In other words, the model delivers an excess return of 23.8% over eleven years, but this is a result of higher leverage and higher volatility (the annualized Information Ratio is 0.20). As a result, the largest drawdown – which unsurprisingly takes place in the midst of the global financial crisis – is 20.2% for our portfolio compared with 10.5% for the benchmark.

plot1

Turning to the expected annualized portfolio volatility, it agrees reasonably well with volatility measured on a rolling 52-week basis, except during the financial crisis, when it undershoots by a wide margin. While actual volatility peaks at 23.3%, predicted volatility only reaches 19.6%. The median volatilities are 8.5% and 7.9%, respectively, and predicted volatility is above actual volatility just 37.1% of the time (194 out of 574 – 51 = 523 weeks).
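For reference, a rolling 52-week realized volatility of this kind can be computed along these lines (a sketch, assuming port.ret holds the weekly portfolio log returns; the exact calculation behind the chart may differ slightly):

library(TTR)
realized.vol <- runSD(port.ret, n = 52) * sqrt(52) # annualized rolling 52-week volatility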

plot3

While the model does produce a higher annualized return (5.4% vs. 4.1%), the annualized volatility of our model portfolio during the eleven years 2005-2015 is 8.8% compared with the benchmark portfolio’s 4.5%. Furthermore, portfolio volatility is above 10% in 130 weeks compared with just three for the benchmark.

plot2

We have demonstrated a method to both estimate a multivariate stochastic volatility model and construct a mean-variance optimized portfolio sequentially rather than in batch mode. However, our non-optimized model fails to beat the benchmark on a risk-adjusted basis as leverage and volatility are both much higher in our model portfolio.

In part 2 we will investigate whether we can improve the results, in particular the model’s ability to accurately predict the covariance matrix of the five FX pairs, by tweaking the settings. One remedy could be to make changes to the discount factors (beta and delta) while allowing autoregressive terms could be another. We will explore this in the next blog post, so stay tuned.

Note: the R code will be available in part 2.

Random portfolios: correlation clustering

We investigate whether two clustering techniques, k-means clustering and hierarchical clustering, can improve the risk-adjusted return of a random equity portfolio. We find that both techniques yield significantly higher Sharpe ratios than plain random portfolios, with hierarchical clustering coming out on top.

Our debut blog post Towards a better benchmark: random portfolios resulted in a lot of feedback (thank you) and also triggered some thoughts about the diversification aspect of random portfolios. Inspired by the approach in our four-factor equity model we want to investigate whether it is possible to lower the variance of the portfolios by taking account of the correlation clustering present in the data. On the other hand, we do not want to stray too far from the random portfolios idea due to its simplicity.

We will investigate two well-known clustering techniques today, k-means clustering and hierarchical clustering (HCA), and see how they stack up against the results from random portfolios. We proceed by calculating the sample correlation matrix over five years (60 monthly observations) across the S&P 500 index’s constituents every month in the period 2000-2015, i.e. 192 monthly observations. We then generate 50 clusters using either k-means (100 restarts) or HCA on the correlation matrix and assign all equities to a cluster. From each of these 50 clusters we randomly pick one stock and include it in our portfolio for the following month. This implies that our portfolios will have 50 holdings every month. (None of the parameters used have been optimized.)
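A sketch of the monthly cluster-then-sample step (the full code at the bottom of the post samples forward returns directly and adds the surrounding bookkeeping, but the idea is the same; cor.mat is the sample correlation matrix over the 60-month lookback, with tickers as row and column names):

km <- kmeans(x = scale(cor.mat), centers = 50, iter.max = 100, nstart = 100)
hc <- cutree(hclust(as.dist(1 - cor.mat), method = "average"), k = 50)
# pick one constituent at random from each of the 50 clusters
port.km <- sapply(1:50, function(q) sample(names(which(km$cluster == q)), 1))
port.hca <- sapply(1:50, function(q) sample(names(which(hc == q)), 1))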

g1

The vertical dark-grey line represents the annualized return of a buy-and-hold strategy in the S&P 500 index.

The mean random portfolio returns 9.1% on an annualized basis compared to the benchmark S&P 500 index, which returns 4.1%. The k-means algorithm improves a tad on random portfolios with 9.2% while HCA delivers 9.5%. Not a single annualized return from the 1,000 portfolios is below the buy-and-hold return no matter whether we look at random portfolios, k-means or HCA.

We have added a long-only minimum variance portfolio (MinVar) to the mix. The 1,000 MinVar portfolios use the exact same draws as the random portfolios, but instead of equal weights they assign the weights that minimize each portfolio’s variance, using that draw’s sample covariance matrix.

Unsurprisingly, MinVar delivers a mean return well below the others at 8.1%. The weights in the MinVar portfolios are only constrained to 0-100%, implying that in any given MinVar portfolio a few equities may dominate in terms of weights while the rest have weights very close to 0%. As a result, the distribution of the annualized mean returns of the MinVar portfolios is much wider, and 1.2% of the portfolios (12) are below the S&P 500 index’s annualized mean.

g2

The vertical dark-grey line represents the annualized Sharpe ratio of a buy-and-hold strategy in the S&P 500 index.

Turning to the annualized Sharpe ratio, using for simplicity a risk-free rate of 0%, we find that a buy-and-hold strategy yields a Sharpe ratio of 0.25. Random portfolios, meanwhile, yield 0.46, while k-means and HCA yield 0.49 and 0.58, respectively.

The k-means and HCA algorithms result in lower volatility compared to standard random portfolios (see table below). While the random portfolios have a mean annualized standard deviation of 19.8%, k-means and HCA have mean standard deviations of 18.7% and 16.3%, respectively (in addition to their slightly higher mean annualized returns). Put differently, it seems from the table below and the charts above that k-means and in particular HCA add value relative to random portfolios, but do they do so significantly?

US_g3

Using a t-test on the 1,000 portfolios’ annualized means we find p-values of 0.7% and ~0%, respectively, supporting the view that significant improvements can be made by using k-means and HCA. The p-values fluctuate a bit for the k-means algorithm with every run of the code, but the HCA’s outperformance (in terms of annualized mean return) is always highly significant. Testing the Sharpe ratios rather than the mean returns yields highly significant p-values for both k-means and HCA.

We have looked at two ways to potentially improve random portfolios by diversifying across correlation clusters. The choice of k-means and HCA as clustering techniques was made to keep things as simple as possible, but this choice does come with some assumptions. Variance Explained – for example – details why using k-means clustering is not a free lunch. In addition, the choice of distance function for the HCA was not considered in this post; we simply opted for one minus the correlation matrix. We leave it to the reader to test other, more advanced clustering methods, and/or change the settings used in this blog post.

Portfolio size

So far we have only looked at a portfolio size of 50, but in the original post on random portfolios we also included portfolios of size 10 and 20. For completeness we provide the results below, noting simply that the pattern is the same across portfolio sizes. In other words, HCA is the best, followed by k-means and random portfolios – though the outperformance decreases with the portfolio size.

US_g4

Excess returns

The approach above uses the historical correlation matrix of the returns, but there is a case to be made for using excess returns instead. We have therefore replicated the analysis above using excess returns instead of returns, where excess returns are defined as the residuals from regressing the available constituent equity returns on the market returns (i.e. S&P 500 index returns) using the same lookback window of five years (60 observations).

Using the matrix of residuals we construct a correlation matrix and repeat the analysis above; we follow the same steps and use the same parameter settings. Disappointingly, the results do not improve. In fact, the results deteriorate to such a degree that neither k-means nor HCA outperform regardless of whether we look at the mean return or Sharpe ratio.
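For completeness, constructing the residuals only requires one extra step compared with the baseline (a sketch, assuming bwd.returns is the (60 x J) matrix of constituent log returns and bwd.returns.index the matching vector of index returns, as in the code below):

# residuals from regressing each equity's returns on the market return
resid.mat <- apply(bwd.returns, 2, function(r) residuals(lm(r ~ bwd.returns.index)))
cor.mat <- cor(resid.mat) # excess-return correlation matrix, fed to the clustering step as before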

Postscript: while we use k-means clustering on the correlation matrix, we have seen it used in a variety of ways in the finance blogosphere: Intelligent Trading Tech and Robot Wealth look at candlestick patterns, Turing Finance focuses on countries’ GDP growth, and MKTSTK predicts trading volume, to mention a few.

###########################################################
# #
# INPUT: #
# prices: (months x equities) matrix of close prices #
# prices.index: (months x 1) matrix of index close prices #
# tickers: (months x equities) binary matrix indicating #
# whether a stock was present in the index in that month #
# #
###########################################################

library(TTR)
library(PerformanceAnalytics)
library(quadprog)
library(xtable)

# Settings (draws, size.port and lookback follow the text; start.port and cost.port are assumptions)
draws <- 1000              # number of random portfolios
size.port <- 50            # number of clusters / portfolio holdings
lookback <- 60             # five years of monthly observations
start.port <- "1999-12-31" # first portfolio formation date (must match a rowname of prices)
cost.port <- 0             # round-trip trading cost in percent (0 = no costs)

# Function for finding the minimum-variance portfolio
min_var <- function(cov.mat, short = FALSE) {

  if (short) {

    aMat <- matrix(1, size.port, 1)
    res <- solve.QP(cov.mat, rep(0, size.port), aMat, bvec = 1, meq = 1)

  } else {

    aMat <- cbind(matrix(1, size.port, 1), diag(size.port))
    b0 <- as.matrix(c(1, rep(0, size.port)))
    res <- solve.QP(cov.mat, rep(0, size.port), aMat, bvec = b0, meq = 1)

  }

  return(res)

}

N <- NROW(prices) # Number of months
J <- NCOL(prices) # Number of constituents in the S&P 500 index

prices <- as.matrix(prices)
prices.index <- as.matrix(prices.index)

port.names <- c("Random", "K-means", "HCA", "MinVar")

# Array that stores performance statistics
perf <- array(NA, dim = c(3, draws, length(port.names)),
              dimnames = list(c("Ret (ann)", "SD (ann)", "SR (ann)"), NULL, port.names))

# Storage array
ARR <- array(NA, dim = c(N, draws, size.port, length(port.names)),
             dimnames = list(rownames(prices), NULL, NULL, port.names))

rows <- which(start.port == rownames(prices)):N

# Loop across time (months)
for (n in rows) {

  cat(rownames(prices)[n], "\n")

  # Which equities are available?
  cols <- which(tickers[n, ] == 1)

  # Return over month n and log returns over the 60-month lookback window
  fwd.returns <- prices[n, cols]/prices[n - 1, cols] - 1
  bwd.returns <- log(prices[(n - lookback):(n - 1), cols]/
                     prices[(n - lookback - 1):(n - 2), cols])

  # Keep only equities with complete forward and backward returns
  cols <- which(is.na(fwd.returns) == FALSE & apply(is.na(bwd.returns) == FALSE, 2, all))

  # Returns for available equities
  fwd.returns <- fwd.returns[cols]
  bwd.returns <- bwd.returns[, cols]

  bwd.returns.index <- log(prices.index[(n - lookback):(n - 1), 1]/
                           prices.index[(n - lookback - 1):(n - 2), 1])

  # Covariance and correlation matrices
  cov.mat <- cov(bwd.returns)
  cor.mat <- cor(bwd.returns)

  # K-means on the (scaled) correlation matrix; HCA with 1 - correlation as the distance
  km <- kmeans(x = scale(cor.mat), centers = size.port, iter.max = 100, nstart = 100)
  hc <- hclust(d = as.dist(1 - cor.mat), method = "average")

  for (d in 1:draws) {

    samp <- sample(x = 1:length(cols), size = size.port, replace = FALSE)
    opt <- min_var(cov.mat[samp, samp], short = FALSE)

    ARR[n, d, , "Random"] <- fwd.returns[samp]
    ARR[n, d, , "MinVar"] <- opt$solution * fwd.returns[samp] * size.port

  }

  hc.cut <- cutree(hc, k = size.port)

  for (q in 1:size.port) {

    # Sample one constituent per cluster for each draw (indexing explicitly avoids
    # sample()'s special-casing of length-one numeric vectors)
    km.q <- fwd.returns[which(km$cluster == q)]
    hc.q <- fwd.returns[which(hc.cut == q)]
    ARR[n, , q, "K-means"] <- km.q[sample.int(length(km.q), size = draws, replace = TRUE)]
    ARR[n, , q, "HCA"] <- hc.q[sample.int(length(hc.q), size = draws, replace = TRUE)]

  }

}

ARR <- ARR[rows, , , ]

# Performance calculations
returns.mean <- apply(ARR, c(1, 2, 4), mean) - cost.port/100 ; N2 <- NROW(ARR)
returns.cum <- apply(returns.mean + 1, c(2, 3), cumprod)
returns.ann <- returns.cum[N2, , ]^(12/N2) - 1

std.ann <- exp(apply(log(1 + returns.mean), c(2, 3), StdDev.annualized, scale = 12)) - 1
sr.ann <- returns.ann / std.ann

returns.index <- c(0, prices.index[-1, 1]/prices.index[-N, 1] - 1)
returns.index <- returns.index[rows]
returns.cum.index <- c(1, cumprod(returns.index + 1))
returns.ann.index <- tail(returns.cum.index, 1)^(12/N2) - 1

std.ann.index <- exp(StdDev.annualized(log(1 + returns.index[-1]), scale = 12)) - 1
sr.ann.index <- returns.ann.index / std.ann.index

perf["Ret (ann)", , ] <- returns.ann * 100
perf["SD (ann)", , ] <- std.ann * 100
perf["SR (ann)", , ] <- sr.ann

results.mean <- as.data.frame(apply(perf, c(1, 3), mean))
results.mean$Benchmark <- c(returns.ann.index * 100, std.ann.index * 100, sr.ann.index)

print(round(results.mean, 2))

t.test(returns.ann[,2], returns.ann[, 1], alternative = "greater")
t.test(returns.ann[,3], returns.ann[, 1], alternative = "greater")

Towards a better equity benchmark: random portfolios

Random portfolios deliver alpha relative to a buy-and-hold position in the S&P 500 index – even after allowing for trading costs. Random portfolios will serve as our benchmark for our future quantitative equity models.

The evaluation of quantitative equity portfolios typically involves a comparison with a relevant benchmark, routinely a broad index such as the S&P 500 index. This is an easy and straightforward approach, but it also – we believe – sets the bar too low, resulting in too many favorable research outcomes. We want to hold ourselves to a higher standard, and as a bare minimum that must include being able to beat a random portfolio – the classic dart-throwing monkey portfolio.

A market capitalization-weighted index, such as the S&P 500 index, is inherently a size-based portfolio where those equities which have done well in the long run are given the highest weight. It is, in other words, dominated by equities with long-run momentum. Given that an index is nothing more than a size-based trading strategy, we should ask ourselves whether there exist other simple strategies that perform better. The answer to this is in the affirmative and one such strategy is to generate a portfolio purely from chance. Such a strategy will choose, say, 20 equities among the constituents of an index each month and invest an equal amount in each position. After a month the process is repeated, 20 new equities are randomly picked, and so forth.

The fact that random portfolios outperform their corresponding indices is nothing new. David Winton Harding, the founder of the hedge fund Winton Capital, even went on CNBC last year to explain the concept, but he is far from alone in highlighting the outperformance of random portfolios (e.g. here and here).

To demonstrate the idea we have carried out the research ourselves. We use the S&P 500 index – without survivorship bias and accounting for corporate actions such as dividends – and focus on the period from January 2000 to November 2015. We limit ourselves in this post to this time span as it mostly covers the digitization period, but results from January 1990 onwards are similar. Starting on 31 December 1999 we randomly select X equities from the list of S&P 500 constituents that particular month, assign equal weights, and hold this portfolio in January 2000. We then repeat the process on the final day of January and hold in February, and so on for every subsequent month until November 2015.
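The monthly step is as simple as it sounds; a sketch, assuming tickers.now holds that month's constituent tickers and fwd_returns their (named) returns over the following month (the full code at the bottom of the post adds the loops across draws, portfolio sizes and trading costs):

picks <- sample(tickers.now, size = 20, replace = FALSE) # 20 random constituents, equally weighted
port.ret <- mean(fwd_returns[picks]) # the portfolio's return over the following month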

g1

In the chart above the annualized returns for 1000 portfolios containing 10, 20, and 50 equities are depicted (the orange line represents the S&P 500 index). A few things are readily apparent:

  • The mean of the annualized returns is practically unchanged across portfolio sizes
  • The standard deviation of the annualized returns narrows as the portfolio size increases
  • All three portfolio sizes beat the S&P 500 index
  • 6.5% of the 1000 random portfolios of size 10 have an annualized return which is lower than that of the S&P 500 index (4.2%). For the portfolios with sizes 20 and 50 the percentages are 0.9% and 0%, respectively. In other words, not a single one of the 1000 random portfolios of size 50 delivers an annualized return below 4.2%.

So far we have talked about one or many random portfolios without being too specific, but for random portfolios to work we need a large number of samples (i.e. portfolios) so that performance statistics, such as the annualized return, tend toward stable values. The chart below shows the cumulative mean over the number of random portfolios (size = 10), which stabilizes as the number of random portfolios increases (the orange line represents the mean across all 1000 portfolios).
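The running mean in the chart is simply the cumulative average of the 1000 annualized returns; a one-liner, assuming returns_ann_10 (a hypothetical name) holds the annualized returns of the size-10 portfolios:

cum_mean <- cumsum(returns_ann_10)/seq_along(returns_ann_10) # mean after 1, 2, ..., 1000 portfolios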

g2

All three portfolios beat the S&P 500 index in terms of annualized return, but we must keep in mind that these portfolios’ turnover is high and hence we need to allow for trading costs. The analysis is thus repeated below for 1000 random portfolios with 50 positions and trading costs of 0%, 0.1% and 0.2% (round trip).

g3

Unsurprisingly, the mean annualized return declines as trading costs increase. Whereas not a single one of the 1000 random portfolios of size 50 delivered an annualized return below 4.2% without trading costs, 2 and 40 portfolios have lower returns when trading costs of 0.1% and 0.2%, respectively, are added. Put differently, even with trading costs of 0.2% (round trip) every month, a portfolio of 50 random stocks outperformed the S&P 500 index in terms of annualized return in 960 of 1000 instances.

g4

Weighting by size is simple, and we like simple. But when it comes to equity portfolios we demand more. Put differently, if our upcoming quantitative equity portfolios cannot beat a randomly-generated portfolio, what is the point? Therefore, going forward, we will refrain from using the S&P 500 index or any other appropriate index and instead compare our equity models to the results presented above. We want to beat not only the index, we want to beat random. We want to beat the dart-throwing monkey!

ADDENDUM: A couple of comments noted that we must use an equal-weight index to be consistent with the random portfolio approach. This can, for example, be achieved by investing in the Guggenheim S&P 500 Equal Weight ETF, which has yielded 9.6% annualized since inception in April 2003 (with an expense ratio of 0.4%). The ETF has delivered a higher annualized return than that of the random portfolios (mean) when trading costs are added.

In other words, if you want to invest equally in the S&P 500 constituents, there is an easy way to do it. We will continue to use random portfolios as a benchmark as it is a simple approach which our models must beat, and we can choose the starting date as we please.

###########################################################
# #
# INPUT: #
# prices: (months x equities) matrix of close prices #
# prices_index: (months x 1) matrix of index close prices #
# prices_ETF: (months x 1) matrix of ETF close prices #
# tickers: (months x equities) binary matrix indicating #
# whether a stock was present in the index in that month #
# #
###########################################################

library(PerformanceAnalytics) # for StdDev.annualized

draws <- 1000
start_time <- "1999-12-31"
freq_cal <- "MONTHLY"
percent_exponent <- 12 # periods per year (monthly data), used for annualization below

N <- NROW(prices) # Number of months
J <- NCOL(prices) # Number of constituents in the S&P 500 index

prices <- as.matrix(prices)
prices_index <- as.matrix(prices_index)
prices_ETF <- as.matrix(prices_ETF)

# Narrow the window
#prices <- prices[-1:-40, , drop = FALSE]
#prices_index <- prices_index[-1:-40, , drop = FALSE]
#prices_ETF <- prices_ETF[-1:-40, , drop = FALSE]

#N <- NROW(prices)

# Combinations
sizes <- c(10, 20, 50) ; nsizes <- length(sizes) # Portfolio sizes
costs <- c(0, 0.1, 0.2) ; ncosts <- length(costs) # Trading costs (round trip)

# Array that stores performance statistics
perf <- array(NA, dim = c(nsizes, ncosts, 3, draws),
              dimnames = list(paste("Size", sizes, sep = " "), paste("Cost", costs, sep = " "),
                              c("Ret(ann)", "SD(ann)", "SR(ann)"), NULL))

# Loop across portfolio sizes
for(m in 1:nsizes) {

  # Storage array
  ARR <- array(NA, dim = c(N, sizes[m], draws))

  # Loop across time (months)
  for(n in 1:(N - 1)) {

    # Which equities are available?
    cols <- which(tickers[n, ] == 1)

    # Forward return for available equities
    fwd_returns <- prices[n + 1, cols]/prices[n, cols] - 1

    # Keep only equities with a non-missing forward return (i.e. also available at n + 1)
    cols <- which(is.na(fwd_returns) == FALSE)

    # Forward return for available equities
    fwd_returns <- fwd_returns[cols]

    # Loop across portfolios
    for(i in 1:draws) {

      # Sample a portfolio of size 'sizes[m]' (positions refer to the filtered fwd_returns)
      samp <- sample(x = 1:length(fwd_returns), size = sizes[m], replace = F)

      # Store a vector of forward returns in ARR
      ARR[n, , i] <- fwd_returns[samp]

    } # End of i loop

  } # End of n loop

  ARR[is.na(ARR)] <- 0

  # Loop across trading costs
  for(m2 in 1:ncosts) {

    # Performance calculations
    returns_mean <- apply(ARR, c(1, 3), mean) - costs[m2]/100
    returns_cum <- apply(returns_mean + 1, 2, cumprod)
    returns_ann <- tail(returns_cum, 1)^(percent_exponent/N) - 1

    std_ann <- exp(apply(log(1 + returns_mean), 2, StdDev.annualized, scale = percent_exponent)) - 1
    sr_ann <- returns_ann / std_ann

    perf[m, m2, "Ret(ann)", ] <- returns_ann * 100
    perf[m, m2, "SD(ann)", ] <- std_ann * 100
    perf[m, m2, "SR(ann)", ] <- sr_ann

  } # End of m2 loop

} # End of m loop

# Index and ETF returns
returns_index <- prices_index[-1, 1]/prices_index[-N, 1] - 1
returns_ava_index <- sum(!is.na(returns_index))
returns_index[is.na(returns_index)] <- 0
returns_cum_index <- c(1, cumprod(1 + returns_index))
returns_ann_index <- tail(returns_cum_index, 1)^(percent_exponent/returns_ava_index) - 1

returns_ETF <- prices_ETF[-1, 1]/prices_ETF[-N, 1] - 1
returns_ava_ETF <- sum(!is.na(returns_ETF))
returns_ETF[is.na(returns_ETF)] <- 0
returns_cum_ETF <- c(1, cumprod(1 + returns_ETF))
returns_ann_ETF <- tail(returns_cum_ETF, 1)^(percent_exponent/returns_ava_ETF) - 1

# Print medians to screen
STAT_MED <- apply(perf, c(1, 2, 3), median, na.rm = TRUE)
rownames(STAT_MED) <- paste("Size ", sizes, sep = "")
colnames(STAT_MED) <- paste("Cost ", costs, sep = "")
print(round(STAT_MED, 2))