Value at Risk (VAR) is defined as the maximum amount of money you would expect to lose over a defined period of time at a given confidence level. Given the leveraged positions most hedge funds employ, and the daily marks to market that must be reconciled with their futures positions, a daily VAR calculation makes the most sense for measuring the risk of a hedge fund.
There are three different measures of Value at Risk: Parametric (Variance/Covariance) VAR, Historical VAR, and Monte Carlo VAR.
The parametric VAR (also known as Variance/Covariance VAR) calculation is the most common form used in practice by hedge fund managers. This method is popular because the only variables you need for the calculation are the mean and standard deviation of the portfolio. The biggest assumption managers using Parametric VAR make is that the returns from their portfolios are normally distributed. This allows the manager to use the calculated standard deviation to compute a standard normal z score and read his/her risk position, at a given degree of confidence, right off a standard normal table. This is an important assumption because it allows the manager to use the normal distribution as a proxy for what expected returns might look like. In addition, the returns are assumed to be serially independent, in that no prior return should influence the current return. In practice, this assumption of return normality has proven to be extremely risky. Indeed, this was the biggest mistake LTCM made in gravely underestimating its portfolio risks.
An example of a parametric VAR calculation is as follows:
Standard Deviation ($ terms): $50,000
Mean ($ terms): $35,000
Z Score for 95% confidence: 1.65
Calculated VAR for the period with 95% confidence is:
$35,000 - ($50,000 x 1.65) = -$47,500
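The calculation above can be sketched in a few lines of Python, using the figures from the example (the 1.65 z score is the one-tailed cutoff for 95% confidence):

```python
# Parametric (variance/covariance) VAR, using the example figures above.
# Assumes the portfolio's returns are normally distributed.
mean = 35_000.0        # expected P&L in dollar terms
std_dev = 50_000.0     # portfolio standard deviation in dollar terms
z_95 = 1.65            # one-tailed z score for 95% confidence

var_95 = mean - z_95 * std_dev
print(f"95% parametric VAR: {var_95:,.0f}")  # -47,500
```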
The strengths of this method, as demonstrated above, are the simplicity of the calculations and the fact that the data for the inputs is very easy to obtain. The biggest weakness of this method is the assumption of normality. Without actually plotting your data on a histogram to verify that assumption, you are exposing yourself to an enormous underestimate of possible standard deviation moves away from your historical mean. Another problem with this method is the stability of the standard deviation through time, as well as the stability of the variance/covariance matrix of your portfolio. It is easy to show how correlations have changed over time, particularly in emerging markets and through contagion in times of financial crisis. Without appropriately adjusting the VAR calculation for these extreme events, you are in fact corrupting the confidence intervals through which you are defining your risk exposure.
Historical VAR is a better methodology to use if you cannot determine the distribution of your return series. This calculation is even easier than the Parametric VAR calculation: all you are doing is literally ranking all of your past historical returns from lowest to highest and reading off, at a predetermined confidence level, your worst historical return. This means that if you had 100 past returns and you wanted to know with 95% confidence the worst you could do, you would go to the 5th data point on your ranked series and know that 95% of the time you will do no worse than this amount.
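The ranking procedure can be sketched as follows. The `historical_var` helper and its return series are hypothetical, and the indexing follows the 5th-of-100 convention described above:

```python
# Historical VAR: rank past returns and read off the cutoff point.
def historical_var(returns, confidence=0.95):
    ranked = sorted(returns)                      # worst return first
    cutoff = int(len(ranked) * (1 - confidence))  # e.g. 5 for 100 points at 95%
    return ranked[cutoff - 1]                     # the 5th-worst of 100

# 100 hypothetical past returns: -100, -99, ..., -1
past_returns = list(range(-100, 0))
print(historical_var(past_returns))  # -96: 95% of the time you did no worse
```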
Historical VAR seems far too simplistic, and in fact that is the biggest criticism of the methodology. Without a distribution to help determine future returns, you are assuming that the past will exactly replicate the future, which is very unlikely in itself. The strengths of the method are that all past data is fully incorporated in the risk calculation without the forced assumption of a normal distribution, and that no variance/covariance matrix is needed to calculate the portfolio standard deviation. This avoids the risk of a changing matrix over time, as described among the weaknesses of Parametric VAR above. Unfortunately, a historical VAR calculation is only as strong as the number of data points you have available, and collecting this data back in time may prove cumbersome or even impossible. In theory this method would be better than Parametric VAR if you had enough data to fully represent all of the crisis events and changing business cycles that occurred: you would then know exactly how the portfolio performed and how much was at risk at any period in time. However, as mentioned above, even with all this history on your side there is no guarantee the past will ever fully repeat itself without a known distribution.
Monte Carlo VAR is a much more complex analytical tool in which you try to map out every possible return scenario for your portfolio with a computer-generated model. After the model is run, you look at all the resulting return paths and then determine how much you could lose at a certain probability. While Monte Carlo VAR allows for an infinite number of possible scenarios, you are exposing yourself to huge model risk in determining the likelihood of any given path. In addition, as you add more and more variables that could alter your return paths, model complexity and model risk also increase in scale. Like Historical VAR, however, this methodology removes any assumption of normality and thus, if modeled accurately (not an easy task), would probably give the most accurate measure of the portfolio's true Value at Risk.
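A minimal Monte Carlo sketch might look like the following. Note that the choice of distribution for the simulated draws (a normal here, purely for brevity) is itself a modeling decision, and it is exactly where the model risk described above lives; in practice the draws could come from any fitted or empirical distribution:

```python
import random

# Monte Carlo VAR: simulate many outcomes, rank them, and read the loss
# at the chosen percentile. All parameters here are illustrative.
def monte_carlo_var(mean, std_dev, confidence=0.95, trials=100_000, seed=1):
    rng = random.Random(seed)
    outcomes = sorted(rng.gauss(mean, std_dev) for _ in range(trials))
    cutoff = int(trials * (1 - confidence))
    return outcomes[cutoff]

# With the parametric example's figures, the simulated VAR lands near -47,500.
print(round(monte_carlo_var(35_000, 50_000)))
```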
Last Update: March 03, 2002