The theme is to detail the various quantitative approaches to tactical global asset management.

- model building
- general specification tests
- overfitting and cross validation
- residual diagnostics, including heteroskedasticity and serial correlation
- nonlinear regression
- model choice techniques

The research team must have easy access to a variety of data. It is often appropriate to designate one member of the research team to manage the data. The collection and maintenance of the database is very important: tactical decisions need to be made quickly after new data arrive. It is best to invest in a database system that takes in the new data and automatically runs the quantitative programs.

This is not really a major issue. While workstations are desirable, most calculations can be performed quickly on PCs, given that the focus is on monthly or quarterly returns. Database systems are also desirable. While most top-down data management exercises can be handled within Excel, the bottom-up projects are not feasible within a spreadsheet. The bottom-up projects may include up to 10,000 securities, along with vectors of attributes for each security.

I call this "Top Down X-Opt" because the predicted stock returns are not used in any type of optimizer, such as a mean-variance optimizer, which chooses portfolios with the highest expected return per unit of variance. The steps are as follows:

- Build country-by-country forecasting models based on benchmark returns from MSCI or the IFC.
- After validating the models, forecast out-of-sample returns.
- Sort the forecasted country returns.
- Invest in a portfolio of the highest expected return countries.
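The sorting and selection steps above can be sketched as follows. The country names and forecast numbers here are purely illustrative, not from the text:

```python
# Hypothetical out-of-sample forecasts of next-period benchmark returns
# (countries and numbers are made up for illustration).
forecasts = {
    "US": 0.021, "Japan": -0.004, "UK": 0.013,
    "Germany": 0.008, "Brazil": 0.035, "India": 0.017,
}

def top_countries(forecasts, n):
    """Rank countries by forecasted return and keep the top n."""
    ranked = sorted(forecasts, key=forecasts.get, reverse=True)
    return ranked[:n]

picks = top_countries(forecasts, 3)  # highest expected return countries
```

How many countries to hold (the `n` above) is exactly the question the method leaves open, as discussed next.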

This approach does not tell us how many countries to invest in, nor does it tell us how much to invest in each country. Some possibilities are:

- Equal weight countries selected by judgement
- Value weight countries selected by judgement
- Use judgement to both select and weight the highest expected return countries.

"Hedge" strategies are also possible. This involves taking long positions in the highest expected returns countries and short positions in the lowest expected returns countries. A number of caveats are in order for this style of strategy.

- Some countries may not be shortable.
- Some countries, while shortable, may be prohibitively expensive to short.
- Short positions almost always involve positive investments, hence the hedge strategy is not a zero net investment strategy.
- The "hedge" strategy may not really be a hedge. That is, the long and the shorts are not necessarily offseting.

Let me elaborate on this last point. Usually, a hedge strategy is measured with respect to some benchmark. For example, consider a U.S. equity hedge strategy. High expected return securities are purchased and low expected return securities are sold. The "beta" or sensitivity of the high expected return portfolio is calculated with respect to a benchmark, like the S&P 500 stock index return. The "hedge" portfolio is constructed by selling the portfolio of low expected return securities that has the same beta as the high expected return portfolio. This produces a zero beta or hedged portfolio. The expected return on the hedge portfolio is the "alpha".
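This construction can be sketched with simulated returns. Everything below (the betas, the volatilities, the sample size) is illustrative, not an estimate from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly data: a benchmark and two portfolios (high and low
# expected return) with known betas of roughly 1.2 and 0.8.
bench = rng.normal(0.01, 0.04, 240)
high = 0.005 + 1.2 * bench + rng.normal(0, 0.02, 240)
low = -0.002 + 0.8 * bench + rng.normal(0, 0.02, 240)

def beta(portfolio, benchmark):
    """Slope of a regression of portfolio returns on benchmark returns."""
    return np.cov(portfolio, benchmark)[0, 1] / np.var(benchmark, ddof=1)

b_high, b_low = beta(high, bench), beta(low, bench)

# Scale the short side so the combined position has zero beta:
# beta(high - h*low) = b_high - h*b_low = 0 when h = b_high / b_low.
h = b_high / b_low
hedge = high - h * low

alpha = hedge.mean()  # the expected return on the zero-beta portfolio
```

Note that the zero beta follows by the linearity of covariance, so it holds exactly in the sample used to estimate the betas; out of sample it holds only if the betas are stable, which is the third issue raised below.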

Importantly, this portfolio does not have zero volatility. The beta measures the average movement in
the portfolio given a movement in the benchmark. The higher the R^{2} in the beta regression, the
more closely the portfolio and the benchmark move. In the U.S. hedge strategy, the volatility would
be much lower than holding either side of the hedge. Nevertheless, volatility could be on the order
of one half the S&P 500 volatility.

With the international portfolio, it is more difficult to achieve this hedge. First, a benchmark needs to be designated. This depends on your performance criteria. A global manager may choose the Morgan Stanley Capital International (MSCI) world index. An international manager may choose the MSCI EAFE. A U.S. manager may choose the S&P 500.

Three issues arise (which we will detail in the section on International Risk Management). First, the betas of some of the countries could be zero. This is especially the case with some of the emerging stock markets. Harvey (1995) shows that many of the emerging markets have zero betas and some have negative betas. The problem arises in the following way: if you are shorting India, which has a negative beta, you are *increasing* the risk of the portfolio.
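A toy arithmetic check makes the point. The betas here are made up for illustration:

```python
# Illustrative betas (not actual estimates): a long position with beta 1.0
# and a short position in a market with a negative beta of -0.3.
beta_long = 1.0
beta_short = -0.3

# The beta of a long-short position is the long beta minus the shorted beta.
net_beta = beta_long - beta_short  # 1.0 - (-0.3) = 1.3

# Shorting the negative-beta market *raises* benchmark exposure above the
# long position's own beta, instead of hedging it.
assert net_beta > beta_long
```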

The second issue has to do with the low R^{2} in the beta regression. Harvey (1991)
shows that the average correlation among developed markets is 41% (which implies an R^{2} of
only 16%). The correlations among emerging markets are lower. Hence, the hedge portfolio, even
if it has zero beta risk, will have substantial volatility.
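The arithmetic behind the parenthetical is worth spelling out: in a univariate beta regression, the R^{2} is the squared correlation, and the residual (unhedgeable) volatility scales with the square root of 1 - R^{2}:

```python
import math

# Harvey (1991): average correlation among developed markets of 41%.
corr = 0.41
r_squared = corr ** 2  # 0.1681, i.e. roughly 16%

# Fraction of a market's volatility that survives in a zero-beta hedge
# (the residual volatility of the beta regression).
residual_vol_fraction = math.sqrt(1 - r_squared)  # roughly 0.91
```

So even a perfectly beta-hedged position retains about 90% of the single-market volatility when the correlation with the benchmark is this low.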

The third issue has to do with the stability of the betas. As we will see, the betas could be very unstable. The usual method of obtaining betas relies on a regression analysis which essentially averages the historical comovements between the country and the benchmark. There is no guarantee that the future comovements will look like the past. We will tackle this problem later by proposing dynamic risk models which explicitly forecast the future comovements between the country and the benchmark.

This approach follows the initial steps of Top Down X-Opt. Country forecasting models are built, validated and out-of-sample forecasts are formed. The difference is that the information in both the volatilities and correlations is used in determining optimal portfolio weights. Top Down Opt is usually performed within the context of optimal portfolio control techniques. These techniques minimize variance for target levels of expected returns and maximize expected returns for target levels of variance. That is, Opt almost always refers to a portfolio strategy conducted within the mean-variance paradigm.

So the extra steps involved are:

- Forecast variance out of sample
- Forecast covariance (or correlation) out of sample.

There are a number of different possibilities which are detailed in the section on volatility. Here are some options:

- Use historical estimates of volatility and correlation (perhaps based on the last five years of data). This is essentially an equally weighted moving average. These are called *unconditional* variances and covariances.
- Use a modified historical estimate that places more weight on recent information. An example of this is an exponentially weighted moving average (EWMA). This is what J.P. Morgan calls RiskMetrics.
- Use average conditional variance and covariance.
- Use conditional variance and covariance.
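The first two options can be sketched with simulated monthly returns. The return parameters and the decay factor are illustrative (RiskMetrics uses decay factors in the neighborhood of 0.94-0.97):

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.008, 0.05, 120)  # simulated monthly returns

# Unconditional estimate: equally weighted average of squared deviations
# of returns from their mean.
hist_var = np.mean((returns - returns.mean()) ** 2)

def ewma_var(returns, lam=0.94):
    """RiskMetrics-style EWMA variance:
    var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2, on demeaned returns."""
    r = returns - returns.mean()
    var = r[0] ** 2  # seed the recursion with the first squared deviation
    for x in r[1:]:
        var = lam * var + (1 - lam) * x ** 2
    return var

ewma = ewma_var(returns)  # weights recent observations more heavily
```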

Let me elaborate on the last two possibilities (the first two are simple). The average conditional variance is defined as:

ACV(r_{i}) = AVERAGE{(r_{i} - E[r_{i}|Z])^{2}}

Note the difference between this and the usual variance:

V(r_{i}) = AVERAGE{(r_{i} - E[r_{i}])^{2}}

In the usual estimator, we square the returns minus the average return and take the average value. In the average conditional variance estimator, we square the returns minus their predicted values.

The intuition behind this estimator is that it measures the average dispersion of the realized returns from their predicted values.

The same type of formula can be applied to covariance:

ACC(r_{i}, r_{j}) = AVERAGE{(r_{i} - E[r_{i}|Z])(r_{j} - E[r_{j}|Z])}

Note the difference between this and the usual covariance:

C(r_{i}, r_{j}) = AVERAGE{(r_{i} - E[r_{i}])(r_{j} - E[r_{j}])}

The intuition is identical to the variance estimator. The average conditional covariance tells us how two securities move relative to their predicted values. For example, a positive average conditional covariance tells us that when one security is above (below) its predicted value, the other security is, on average, above (below) its predicted value.

Importantly, the average conditional variance and covariance estimators are easy to implement. No special econometric techniques are required. Once the forecasting models are built for the returns, only some averages need to be calculated. Indeed, another way of expressing the average conditional variance is just:

ACV(r) = AVERAGE(residual^{2})

where

residual = r - E[r|Z]

Similarly, the average conditional covariance is just:

ACC(r_{i}, r_{j}) = AVERAGE(residual_{i} × residual_{j})

Also, note that the mean value of the residuals is always zero (by construction in regression). Hence, one can just feed the residuals into any statistical routine and calculate the variance, covariance and correlation. The output will give the average conditional variance, average conditional covariance and the average conditional correlation.
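This residual-based shortcut can be sketched with simulated data and a one-instrument OLS forecast. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200

# Illustrative setup: two return series partly driven by a common
# instrument Z (the conditioning information).
Z = rng.normal(0, 1, T)
r_i = 0.010 + 0.02 * Z + rng.normal(0, 0.03, T)
r_j = 0.005 + 0.01 * Z + rng.normal(0, 0.04, T)

def residuals(r, Z):
    """Residuals from an OLS forecasting model E[r|Z] = a + b*Z."""
    b = np.cov(r, Z)[0, 1] / np.var(Z, ddof=1)
    a = r.mean() - b * Z.mean()
    return r - (a + b * Z)

e_i, e_j = residuals(r_i, Z), residuals(r_j, Z)

# Average conditional variance and covariance are just moments of residuals.
acv_i = np.mean(e_i ** 2)
acc_ij = np.mean(e_i * e_j)
```

Because the residuals have zero mean by construction, `acv_i` coincides with the variance of the residuals, so any off-the-shelf variance/covariance routine applied to the residuals gives the same answers.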

However, note that an average of past observations is not necessarily the best forecast of the future. Indeed, this average places equal weights on all historical observations. One possibility is to use an exponentially weighted moving average of the squared residuals and the residual cross-products. This would give more weight to recent observations. But even with this modification, it is not clear that this is the best method to forecast future volatility and correlation. However, it could be a significant improvement over naive historical averages of deviations of returns from their mean values (the unconditional measures).

For these measures, see the section on volatility models. The intuition for the conditional variance is that we want to provide the best forecast of the squared deviation from the predicted return. That is, we are predicting the dispersion from what we predicted - we are not predicting whether the dispersion will be above or below the predicted value. Indeed, we can't predict the residual itself. If we can, then our forecasting model is missing something (i.e. if you can predict the model mistakes, then the model needs to be rebuilt).

Investment horizon is important here. If your rebalancing period is one quarter, you should look at the quarterly dispersion and comovement measures. You may not care about some of the intra-quarter movements that you might be measuring with the conditional variance fit on monthly data.

I will spend a considerable amount of time discussing this style of asset management in a later lecture. For now, the idea is to select individual securities. From a variety of methods, forecasted winners are purchased and forecasted losers are sold. Briefly, the two most popular methods are portfolio attribute classification and cross-sectional regression.

In this method, portfolios are formed based on particular attributes. For example, at the end of the year, suppose we have 2000 securities and the price-to-book (PB) ratio for each security. Sort by PB and form quintile portfolios. Track the return over the next year. Re-form the quintiles at the end of the year. Depending on the results, we can implement a simple "value"-based strategy. One can, say, purchase the low PB portfolio and sell the high PB portfolio. Usually, the portfolios are value weighted.
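The quintile sort can be sketched as follows. The data are simulated with a value effect built in by construction, so the output proves nothing about real markets; the portfolios are equal-weighted here for simplicity, while in practice they are usually value weighted:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Illustrative cross-section: PB ratios and next-year returns, with a
# hypothetical value effect (low PB earns more on average, by design).
pb = rng.lognormal(0.5, 0.6, n)
next_ret = 0.10 - 0.02 * np.log(pb) + rng.normal(0, 0.25, n)

# Sort securities by PB and cut the ranking into five quintile portfolios.
order = np.argsort(pb)
quintiles = np.array_split(order, 5)  # quintiles[0] = lowest-PB fifth
quintile_rets = [next_ret[q].mean() for q in quintiles]

# A simple value strategy: long the low-PB quintile, short the high-PB one.
value_spread = quintile_rets[0] - quintile_rets[4]
```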

The portfolios can be sorted again. Suppose we sort each PB quintile based on firm market capitalization (SIZE). Then there are 25 portfolios. Again, the performance of these portfolios can be tracked and a strategy can be implemented. For example, we may choose to buy low PB and low SIZE portfolios and sell high PB and high SIZE portfolios.

An important limitation of this analysis is that we can't sort on too many attributes at the same time. With three attributes, we would have 125 portfolios. With 2000 securities, there would be times when very few securities fell into a particular portfolio.

Overall, the appeal of this method is its simplicity. However, the method has severe limitations in practice. In addition, note that no optimization has been performed.

With this method, returns in quarter *t* are regressed on attributes that are available at *t-1*. Using the estimated coefficients, an out-of-sample forecast of the returns in the next quarter is formed.

Forecasts are obtained and some portfolio strategy is implemented, such as purchasing the high expected return securities and selling the low expected return securities. It is even possible with this method to implement a Bottom-Up Opt. However, some special problems arise. We cannot go ahead and use the standard mean-variance tools. It is not feasible to put 2,000 securities into a mean-variance optimizer. However, there are some ways around this which will be discussed later.
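The cross-sectional regression step might be sketched as follows, with simulated data and hypothetical attribute names standing in for a real cross-section:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Illustrative quarter t-1 attributes (e.g. PB, SIZE, momentum -- names
# hypothetical) and quarter t returns generated from known coefficients.
attrs_tm1 = rng.normal(0, 1, (n, 3))
coefs_true = np.array([-0.010, -0.005, 0.020])
rets_t = 0.02 + attrs_tm1 @ coefs_true + rng.normal(0, 0.10, n)

# Cross-sectional OLS: regress quarter-t returns on t-1 attributes.
X = np.column_stack([np.ones(n), attrs_tm1])
coefs, *_ = np.linalg.lstsq(X, rets_t, rcond=None)

# Out-of-sample forecast: apply the estimated coefficients to the
# attributes observed at the end of quarter t.
attrs_t = rng.normal(0, 1, (n, 3))
forecast = np.column_stack([np.ones(n), attrs_t]) @ coefs

# Rank securities: buy high-forecast names, sell low-forecast names.
ranking = np.argsort(forecast)
buy, sell = ranking[-200:], ranking[:200]
```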