Global Asset Allocation and Stock Selection

Approaches to Asset Allocation

This section details the various quantitative approaches to tactical global asset management.

The Common Ingredients

Research team

The first step is to put together a dedicated research team. It is critical that the project leader have some knowledge of regression econometrics. It is also critical that a research protocol be developed. Too often, members of the research team get sidetracked into searching for the best R2. It is important to realize that the success of the Global TAA effort will not be measured by the R2. Success will be judged relative to a benchmark return -- out of sample.

Data

The research team must have easy access to a variety of data. It is often appropriate to designate one member of the research team to data management. The collection and maintenance of the database is very important. Tactical decisions need to be made quickly after new data arrives. It is best to invest in a database system that takes the new data and automatically runs the quantitative programs.

Computing

Computing is not really a major issue. While workstations are desirable, most calculations can be performed quickly on PCs - given that the focus is on monthly or quarterly returns. Database systems are also desirable. While most top-down data management exercises can be handled within Excel, the bottom-up projects are not feasible within a spreadsheet. The bottom-up projects may include up to 10,000 securities along with vectors of attributes for each security.

Top-Down X-Opt

I call this "Top-Down X-Opt" because the predicted country returns are not used in any type of optimizer, such as a mean-variance optimizer, which chooses portfolios that have the highest expected returns per unit of variance. The steps are as follows:

This approach does not tell us how many countries to invest in, nor does it tell us how much to invest in each country. Some possibilities are:

"Hedge" strategies are also possible. This involves taking long positions in the highest expected returns countries and short positions in the lowest expected returns countries. A number of caveats are in order for this style of strategy.

Let me elaborate on this last point. Usually, a hedge strategy is measured with respect to some benchmark. For example, consider a U.S. equity hedge strategy. High expected returns securities are purchased and low expected returns securities are sold. The "beta" or sensitivity of the high expected returns portfolio is calculated with respect to a benchmark, like the S&P 500 stock index return. The "hedge" portfolio is constructed by selling the portfolio of low expected returns securities that has the same beta as the high expected returns portfolio. This produces a zero beta or hedged portfolio. The expected return on the hedge portfolio is the "alpha".

Importantly, this portfolio does not have zero volatility. The beta measures the average movement in the portfolio given a movement in the benchmark. The higher the R2 in the beta regression, the more closely the portfolio and the benchmark move. In the U.S. hedge strategy, the volatility would be much lower than holding either side of the hedge. Nevertheless, volatility could be on the order of one half the S&P 500 volatility.
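The beta-matched hedge described above can be sketched in a few lines. This is a minimal illustration on simulated monthly returns (the data and function names are hypothetical, not from the text): estimate each side's beta against the benchmark, then scale the short position so the combined portfolio has zero benchmark beta.

```python
import numpy as np

def beta(portfolio, benchmark):
    """OLS beta of portfolio returns on benchmark returns."""
    cov = np.cov(portfolio, benchmark)
    return cov[0, 1] / cov[1, 1]

def hedge_ratio(high_ret, low_ret, benchmark):
    """Units of the low-E[r] portfolio to short per unit long in the
    high-E[r] portfolio so that the combined benchmark beta is zero."""
    return beta(high_ret, benchmark) / beta(low_ret, benchmark)

# toy monthly returns (hypothetical, for illustration only)
rng = np.random.default_rng(0)
bench = rng.normal(0.01, 0.04, 120)
high = 0.005 + 1.2 * bench + rng.normal(0, 0.02, 120)   # high expected return side
low = -0.002 + 0.8 * bench + rng.normal(0, 0.02, 120)   # low expected return side

h = hedge_ratio(high, low, bench)
hedged = high - h * low
# hedged has (in-sample) zero beta, but it is NOT zero volatility
print(beta(hedged, bench), hedged.std())
```

Note that the residual volatility printed in the last line is exactly the point made next: zero beta does not mean zero risk.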

With the international portfolio, it is more difficult to achieve this hedge. First, a benchmark needs to be designated. This depends on your performance criteria. A global manager may choose the Morgan Stanley Capital International (MSCI) world index. An international manager may choose the MSCI EAFE. A U.S. manager may choose the S&P 500.

Three issues arise (which we will detail in the section on International Risk Management). First, the betas of some of the countries could be zero. This is especially the case with some of the emerging stock markets. Harvey (1995) shows that many of the emerging markets have zero betas and some have negative betas. The problem arises in the following way. If you are shorting India, which has a negative beta, you are increasing the risk of the portfolio.

The second issue has to do with the low R2 in the beta regression. Harvey (1991) shows that the average correlation among developed markets is 41% (which implies an R2 of only 16%). The correlations among emerging markets are lower. Hence, the hedge portfolio, even if it has zero beta risk, will have substantial volatility.

The third issue has to do with the stability of the betas. As we will see, the betas could be very unstable. The usual method of obtaining betas relies on a regression analysis which essentially averages the historical comovements between the country and the benchmark. There is no guarantee that the future comovements will look like the past. We will tackle this problem later by proposing dynamic risk models which explicitly forecast the future comovements between the country and the benchmark.

Top-Down Opt

This approach follows the initial steps of Top-Down X-Opt. Country forecasting models are built, validated and out-of-sample forecasts are formed. The difference is that the information in both the volatilities and correlations is used in determining optimal portfolio weights. Top-Down Opt is usually performed within the context of optimal portfolio control techniques. These techniques minimize variance for target levels of expected returns and maximize expected returns for target levels of variance. That is, Opt almost always refers to a portfolio strategy conducted within the mean-variance paradigm.
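The core mean-variance calculation can be sketched directly. Below is a minimal version, assuming hypothetical country-level expected returns and a covariance matrix: it minimizes portfolio variance subject to hitting a target expected return with fully invested weights (shorting allowed), by solving the first-order conditions as a linear system.

```python
import numpy as np

def min_variance_weights(mu, sigma, target):
    """Weights minimizing w' sigma w subject to w'mu = target and
    sum(w) = 1, via the Lagrangian first-order conditions."""
    n = len(mu)
    ones = np.ones(n)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = 2 * sigma          # gradient of the variance term
    A[:n, n] = mu                  # multiplier on the return constraint
    A[:n, n + 1] = ones            # multiplier on the budget constraint
    A[n, :n] = mu                  # w'mu = target
    A[n + 1, :n] = ones            # sum(w) = 1
    b = np.zeros(n + 2)
    b[n] = target
    b[n + 1] = 1.0
    return np.linalg.solve(A, b)[:n]

# hypothetical annualized country forecasts and covariances
mu = np.array([0.08, 0.10, 0.12])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w = min_variance_weights(mu, sigma, target=0.10)
print(w, w @ mu, w.sum())
```

Tracing out the weights over a grid of target returns traces out the mean-variance frontier.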

So the extra steps involved are:

There are a number of different possibilities which are detailed in the section on volatility. Here are some options:

Average conditional variance, covariance and correlation

Let me elaborate on the last two possibilities (the first two are simple). The average conditional variance is defined as:

          ACV(ri) = AVERAGE{(ri - E[ri|Z])^2}
Note the difference between this and the usual variance:
          V(ri) = AVERAGE{(ri - E[ri])^2}
In the usual estimator, we square the returns minus the average return and take the average value. In the average conditional variance estimator, we square the returns minus their predicted values.

The intuition behind this estimator is that it measures the average dispersion of the realized returns from their predicted values.

The same type of formula can be applied to covariance:

          ACC(ri, rj) = AVERAGE{(ri - E[ri|Z])(rj - E[rj|Z])}
Note the difference between this and the usual covariance:
          C(ri, rj) = AVERAGE{(ri - E[ri])(rj - E[rj])}
The intuition is identical to the variance estimator. The average conditional covariance tells us how two securities move relative to their predicted values. For example, a positive average conditional covariance tells us that when one security is above (below) its predicted value the other security is, on average, above (below) its predicted value.

Importantly, the average conditional variance and covariance estimators are easy to implement. No special econometric techniques are required. Once the forecasting models are built for the returns, it is simply some averages that need to be calculated. Indeed, another way of expressing the average conditional variance is just:

   ACV(r) = AVERAGE(residual^2)
where residuals are the regression residuals from the forecasting model for the returns. Indeed, some regression programs report this statistic as the Mean Squared Error. Note that the residual is just defined as:
   residual = r - E[r|Z]

Similarly, the average conditional covariance is just

  ACC(ri, rj) = AVERAGE(residuali x residualj)
Also, note that the mean value of the residuals is always zero (by construction in regression). Hence, one can just feed the residuals into any statistical routine and calculate the variance, covariance and correlation. The output will give the average conditional variance, average conditional covariance and the average conditional correlation.
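To make the "just feed the residuals into any statistical routine" point concrete, here is a minimal sketch on simulated residuals (the data are hypothetical). The averages of the squared residuals and residual cross-products match what a standard covariance routine reports when the residuals have zero mean.

```python
import numpy as np

# hypothetical residuals from two country forecasting models
rng = np.random.default_rng(1)
resid_i = rng.normal(0, 0.05, 60)
resid_j = 0.5 * resid_i + rng.normal(0, 0.04, 60)
resid_i -= resid_i.mean()   # regression residuals average zero by construction
resid_j -= resid_j.mean()

acv_i = np.mean(resid_i ** 2)          # average conditional variance (the MSE)
acv_j = np.mean(resid_j ** 2)
acc_ij = np.mean(resid_i * resid_j)    # average conditional covariance
accorr = acc_ij / np.sqrt(acv_i * acv_j)  # average conditional correlation

# equivalently, any covariance routine on the residuals gives the same numbers
cov = np.cov(np.vstack([resid_i, resid_j]), ddof=0)
print(acv_i, acc_ij, accorr)
```

The equivalence holds because, with zero-mean residuals, the covariance routine is computing exactly these averages of products.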

However, note that an average of past observations is not necessarily the best forecast of the future. Indeed, this average places equal weights on all historical observations. One possibility is to use exponentially weighted moving averages of the squared residuals and the residual cross-products. This would give more weight to recent observations. But even with this modification, it is not clear that this is the best method to forecast future volatility and correlation. However, it could be a significant improvement over naive historical averages of returns from their mean values (the unconditional measures).
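The exponentially weighted alternative is a one-line recursion. The sketch below assumes a decay factor of 0.94, a common choice for this type of smoothing, though the text does not prescribe a value; the same recursion applied to residual cross-products gives an exponentially weighted covariance.

```python
import numpy as np

def ewma_variance(residuals, lam=0.94):
    """Exponentially weighted moving average of squared residuals.
    Higher lam means slower decay, i.e. more weight on older observations.
    Initialized with the first squared residual (an assumption)."""
    var = residuals[0] ** 2
    for r in residuals[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var

# sanity check: constant residuals of 0.02 give a variance of 0.02^2 = 0.0004
print(ewma_variance(np.full(60, 0.02)))
```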

Conditional variance, covariance, and correlation

For these measures, see the section on volatility models. The intuition for the conditional variance is that we want to provide the best forecast of the squared deviation from the predicted return. That is, we are predicting the dispersion from what we predicted - we are not predicting whether the dispersion will be above or below the predicted value. Indeed, we can't predict the residual itself. If we can, then our forecasting model is missing something (i.e. if you can predict the model mistakes, then the model needs to be rebuilt).

Investment horizon is important here. If your rebalancing period is one quarter, you should look at the quarterly dispersion and comovement measures. You may not care about some of the intra-quarter movements that you might be measuring with the conditional variance fit on monthly data.

Bottom-Up X-Opt

I will spend a considerable amount of time discussing this style of asset management in a later lecture. For now, the idea is to select individual securities. From a variety of methods, forecasted winners are purchased and forecasted losers are sold. Briefly, the two most popular methods are portfolio attribute classification and cross-sectional regression.

Portfolio attribute classification

In this method, portfolios are formed based on particular attributes. For example, at the end of the year, suppose we have 2000 securities and the price to book (PB) ratio for each security. Sort by PB and form quintile portfolios. Track the return over the next year. Re-form the quintiles at the end of the year. Depending on the results, we can implement a simple "value"-based strategy. One can, say, purchase the low PB portfolio and sell the high PB portfolio. Usually, the portfolios are value weighted.
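The quintile sort can be sketched as follows, on hypothetical data for 2000 securities. For simplicity the sketch equal-weights within each quintile (value weighting, as the text notes is usual, would additionally require market capitalizations).

```python
import numpy as np

def quintile_portfolios(pb, next_year_ret):
    """Sort securities into price-to-book quintiles and report the
    equal-weighted return of each quintile over the next year.
    Quintile 0 holds the lowest-PB ("value") names."""
    order = np.argsort(pb)
    groups = np.array_split(order, 5)
    return [next_year_ret[g].mean() for g in groups]

# toy data: 2000 securities (hypothetical, for illustration only)
rng = np.random.default_rng(2)
pb = rng.lognormal(0.5, 0.6, 2000)
ret = rng.normal(0.10, 0.30, 2000)

q = quintile_portfolios(pb, ret)
spread = q[0] - q[4]   # value strategy: long low PB, short high PB
print(q, spread)
```

Re-forming the quintiles each year and tracking `spread` over time is the backtest of the simple value strategy described above.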

The portfolios can be sorted again. Suppose we sort each PB quintile based on firm market capitalization (SIZE). Then there are 25 portfolios. Again, the performance of these portfolios can be tracked and a strategy can be implemented. For example, we may choose to buy low PB and low SIZE portfolios and sell high PB and high SIZE portfolios.

An important limitation of this analysis is that we can't sort on too many attributes at the same time. With three attributes, we would have 125 portfolios. With 2000 securities, there would be times when very few securities fell into a particular portfolio.

Overall, the appeal of this method is its simplicity. However, the method has severe limitations in practice. In addition, note that no optimization has been performed.

Cross-sectional regression

With this method, returns in quarter t are regressed on attributes that are available at t-1. Using the estimated coefficients, an out-of-sample forecast of the returns in quarter t+1 is formed by multiplying the regression coefficients times the attributes that are available today, at time t.
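A minimal sketch of this two-step procedure, on simulated data (the attributes and coefficients are hypothetical): fit the cross-sectional regression on lagged attributes, then apply the fitted coefficients to today's attributes to forecast next quarter's returns.

```python
import numpy as np

def cross_sectional_forecast(ret_t, attr_lag, attr_now):
    """Regress quarter-t returns on the t-1 attributes, then apply the
    estimated coefficients to today's attributes to forecast t+1 returns."""
    X = np.column_stack([np.ones(len(ret_t)), attr_lag])
    coef, *_ = np.linalg.lstsq(X, ret_t, rcond=None)
    X_now = np.column_stack([np.ones(len(attr_now)), attr_now])
    return X_now @ coef

# toy cross-section: 2000 securities, two attributes (think PB and SIZE)
rng = np.random.default_rng(3)
attr_lag = rng.normal(size=(2000, 2))      # attributes known at t-1
ret_t = attr_lag @ np.array([-0.02, 0.01]) + rng.normal(0, 0.10, 2000)
attr_now = rng.normal(size=(2000, 2))      # attributes known today, at t

forecast = cross_sectional_forecast(ret_t, attr_lag, attr_now)
print(forecast[:5])
```

Ranking securities by `forecast` and going long the top-ranked and short the bottom-ranked names is the portfolio strategy described next.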

Forecasts are obtained and some portfolio strategy is implemented, such as purchasing the high expected return securities and selling the low expected return securities. It is even possible with this method to implement a Bottom-Up Opt. However, some special problems arise. We cannot go ahead and use the standard mean-variance tools. It is not feasible to put 2,000 securities into a mean-variance optimizer. However, there are some ways around this which will be discussed later.