
Value at risk, or VaR, is a risk metric widely used in the financial industry to quantify the capital loss a portfolio should expect to exceed on roughly one day in 20 (in the 95% VaR case) or one day in 100 (in the 99% VaR case).  From a statistician's perspective, VaR estimation amounts to estimating low quantiles of a portfolio's simple return distribution.  However, VaR statistics are often improperly estimated in practice when risk analysts fail to take into account the accuracy and precision errors inherent in common estimation procedures.  Our aim below is to outline a method, developed from ideas in order statistics, for calculating the sample size required to estimate VaR to a desired level of accuracy.

We first review results in order statistics that will enable us to understand the error associated with the most common VaR/quantile estimation procedure.  We will then use these results to develop a rule-of-thumb formula that determines the number of data points needed to ensure that the variance of the VaR estimator is sufficiently small for practical purposes.  We then demonstrate these ideas with an exponential distribution model selected to mirror the daily return distribution of the S&P 500.  Next, we construct a Monte Carlo simulation to develop an understanding of when the rule of thumb performs well and when it breaks down.  Finally, we conclude and provide references.

All data gathering, wrangling, modeling, and plotting were done in Python, and the code is available in this Jupyter notebook.

Order Statistics Review

Order statistics is the study of the properties of sorted samples from probability distributions.  When the sample size is sufficiently large, order statistics may be used to estimate quantiles of the distribution they are drawn from.  For example, suppose we have 10,000 draws from a standard uniform random variable.  If we sort the samples in increasing order, X_{(1)}\leq\cdots\leq X_{(10,000)}, then X_{(1,000)} may be used to estimate the q = 0.1 quantile of this distribution.  In fact, nearly all the major statistical packages in Python, R, Stata, etc. use linear combinations of consecutive order statistics for quantile estimation.
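As a quick sketch of this idea (the seed is purely illustrative), we can check that the 1,000th order statistic of 10,000 uniform draws sits near 0.1:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

# Draw 10,000 standard uniform samples and sort them.
x = np.sort(rng.uniform(size=10_000))

# The 1,000th order statistic approximates the q = 0.1 quantile (true value: 0.1).
print(x[999])
# Library estimators blend consecutive order statistics in a similar way.
print(np.quantile(x, 0.1))
```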

The probability density function of X_{(p)} can be written explicitly in terms of the probability density and cumulative distribution functions from which the sample was drawn.  We are interested in two properties of this distribution.  First, we summarize a result in order statistics showing that its mean converges to the corresponding quantile of the underlying distribution in the large sample size limit.  Second, we review an asymptotic result that gives a simple approximation for its variance, again in the large sample size limit.  This will enable us to develop a rule of thumb for the number of sample points needed to estimate a fixed quantile to a specified accuracy.  For example, given a probability distribution f(x), we will be able to answer the question: how many sample points n are needed to estimate the q=0.05 quantile of a stock's daily return distribution to within \alpha=0.001, i.e. 0.1\%?

More formally, let f(x) denote a continuous probability density function and F(x) its associated cumulative distribution function. Suppose the X_i for i=1,\ldots,n are independent samples from this distribution. If these samples are ordered from least to greatest, X_{(1)} \leq X_{(2)}\leq\cdots\leq X_{(n)}, then we refer to X_{(p)} as the p-th order statistic of the sample.

The distribution of the p-th order statistic X_{(p)} can be shown to be

\displaystyle f_{X_{(p)}}(x)=\frac{n!}{(p-1)!(n-p)!}F^{p-1}(x)[1-F(x)]^{n-p}f(x).
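As a sanity check of this formula (not part of the original analysis), the following sketch compares the simulated distribution of the 3rd order statistic of 10 standard normal draws against the density above, with f and F the standard normal pdf and cdf:

```python
import numpy as np
from math import factorial
from scipy import stats

rng = np.random.default_rng(1)
n, p, trials = 10, 3, 100_000

# Empirical draws of the p-th order statistic: sort each row, keep column p-1.
samples = np.sort(rng.standard_normal((trials, n)), axis=1)[:, p - 1]

def order_stat_pdf(x):
    # The density above with f, F the standard normal pdf and cdf.
    c = factorial(n) / (factorial(p - 1) * factorial(n - p))
    F = stats.norm.cdf(x)
    return c * F ** (p - 1) * (1.0 - F) ** (n - p) * stats.norm.pdf(x)

# The normalized histogram of `samples` should track order_stat_pdf closely.
hist, edges = np.histogram(samples, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - order_stat_pdf(centers))))  # small discrepancy
```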

In [1], Mosteller derived an asymptotic result for the variance of this distribution in the large n limit.  Specifically, if q\in(0,1) is a quantile of f(x), and we assume that q\approx p/n and f(F^{-1}(q))\in(0,\infty), then X_{(p)} converges in distribution to a normal random variable:

(1)\quad \displaystyle X_{(p)}\sim\mathcal{N}\left(F^{-1}(q),\frac{q(1-q)}{n f(F^{-1}(q))^2}\right).

From this, we can derive a rule of thumb for the number of data points required so that the standard deviation of the quantile estimator X_{(p)} equals a fixed value \alpha by solving for n:

\displaystyle n(q,\alpha,f) = \frac{q(1-q)}{\alpha^2 f(F^{-1}(q))^2}.
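In code, the rule of thumb is a one-liner; below is a minimal sketch in which pdf and quantile stand in for f and F^{-1} (the standard normal example is purely illustrative):

```python
import numpy as np
from scipy import stats

def required_sample_size(q, alpha, pdf, quantile):
    """Rule-of-thumb n(q, alpha, f): sample size so that the estimator of the
    q-quantile has standard deviation approximately alpha."""
    return q * (1.0 - q) / (alpha ** 2 * pdf(quantile(q)) ** 2)

# Illustrative example: points needed to pin down the q = 0.05 quantile of a
# standard normal to within alpha = 0.001.
n = required_sample_size(0.05, 0.001, stats.norm.pdf, stats.norm.ppf)
print(int(np.ceil(n)))
```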

We plot this function for our example exponential distribution below in order to visualize sample size requirements as a function of the quantile and error threshold \alpha.

We now turn to constructing an example dataset and model to demonstrate these ideas.

Constructing a Dataset and Selecting a Model

In order to demonstrate the error inherent in VaR estimation, we would like to select a model that resembles an actual equity return distribution as closely as possible.  With this in mind, we use the Python pandas-datareader package to download end-of-day prices for the SPY ETF, which tracks the S&P 500 equity index, from 1/4/2011 to 5/1/2017.  We plot the price time series of this ETF in the first chart below and its associated daily simple return histogram in the lower subplot.
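A sketch of the download and return calculation follows; the 'yahoo' data source and the 'Adj Close' column are assumptions tied to the pandas-datareader API at the time of writing and may no longer work, but any end-of-day price source would do:

```python
import pandas_datareader.data as web

# End-of-day SPY prices over the sample period; the 'yahoo' source is an
# assumption and may require substitution with a currently supported source.
px = web.DataReader('SPY', 'yahoo', '2011-01-04', '2017-05-01')['Adj Close']

# Daily simple returns.
returns = px.pct_change().dropna()
```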

[Figure: SPY closing price time series (top) and daily simple return histogram (bottom)]

Note the heavy-tailed nature of this return distribution: the left and right tails decay slowly compared to those of a moment-matched normal distribution, which implies that extreme positive or negative returns are more likely than in the normal case.

We would like a model that captures this heavy-tailed feature of the return distribution but also has relatively simple probability density and cumulative distribution functions for demonstration purposes.  The shifted exponential distribution has both properties.  Its probability density function is given by

\displaystyle f(x)=\frac{1}{\lambda}\exp\left(-\frac{x-x_0}{\lambda}\right), \quad x > x_0,

where x_0 is a shift parameter and \lambda controls the speed at which the tail decays.  We will only consider this function for positive x values.  Since we are interested in losses, i.e. negative returns, we remove all positive returns from our daily return dataset and multiply the remaining negative returns by -1.  Because roughly half of the daily returns are negative, the lowest 10\% of all returns corresponds to the largest 20\% of losses, so estimating the q=0.1 quantile of the original dataset is equivalent to estimating the q=0.8 quantile of this transformed dataset.
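Continuing the sketch above (with the hypothetical returns series from the download step), the transformation is a two-line pandas operation:

```python
# Keep only the negative returns and flip their sign to obtain losses.
losses = -returns[returns < 0]

# Sanity check of the quantile mapping: with roughly half of returns negative,
# the q = 0.1 quantile of returns and the q = 0.8 quantile of losses should
# agree up to sign.
print(-returns.quantile(0.1), losses.quantile(0.8))
```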

We also note that the cumulative distribution function associated with f(x) is given by

\displaystyle F(x)=\int_{x_0}^x f(y)\,dy=1-\exp\left(-\frac{x-x_0}{\lambda}\right),\quad x > x_0.

We next determine the quantile function for this distribution by fixing a quantile q \in (0,1), and solving q = F(x) to find:

\displaystyle F^{-1}(q)=x_0 - \lambda\ln(1-q),\quad q\in(0,1).
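For reference, these three functions translate directly to code; equivalently, scipy.stats.expon implements the same family with loc = x_0 and scale = \lambda:

```python
import numpy as np

def pdf(x, x0, lam):
    # Shifted exponential density, zero for x <= x0.
    return np.where(x > x0, np.exp(-(x - x0) / lam) / lam, 0.0)

def cdf(x, x0, lam):
    return np.where(x > x0, 1.0 - np.exp(-(x - x0) / lam), 0.0)

def quantile(q, x0, lam):
    # Inverse of the cdf on (0, 1).
    return x0 - lam * np.log(1.0 - q)
```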

We can now fit this distribution to our loss return data through maximum likelihood estimation, determining the two parameters x_0 and \lambda of the exponential distribution under which the data is most likely.  In the figure below, we plot this best-fit distribution in red, overlaid on the histogram of the SPY loss return data.  Note how it captures the relatively slow decay of the histogram.

[Figure: MLE shifted exponential fit (red) overlaid on the SPY loss return histogram]
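The fit above can be reproduced with a minimal sketch like the following, assuming the hypothetical losses series from earlier; scipy parameterizes the shifted exponential with loc = x_0 and scale = \lambda:

```python
from scipy import stats

# Maximum likelihood fit of the shifted exponential to the loss data.
x0_hat, lam_hat = stats.expon.fit(losses)
print(x0_hat, lam_hat)
```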

We will use the MLE parameters in all our subsequent experiments.  First, we plot n(q,\alpha,f) as a function of q for fixed values of \alpha=0.0005, 0.001, and 0.005 in blue, green, and red, respectively, using a logarithmic scale for the vertical axis.  In the first graph below, we plot over the full (0,1) quantile domain, and in the second, we restrict to [0.9,1) in order to better visualize the number of points needed to estimate extreme quantiles.
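A sketch of the plotting step follows; it uses the fact that f(F^{-1}(q)) = (1-q)/\lambda for the shifted exponential, which simplifies the rule of thumb, and assumes the hypothetical fitted lam_hat from the sketch above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Since f(F^{-1}(q)) = (1 - q) / lambda here, the rule of thumb reduces to
# n(q, alpha) = q * lambda^2 / (alpha^2 * (1 - q)).
qs = np.linspace(0.01, 0.999, 1000)
for alpha, color in [(0.0005, 'b'), (0.001, 'g'), (0.005, 'r')]:
    plt.semilogy(qs, qs * lam_hat ** 2 / (alpha ** 2 * (1.0 - qs)),
                 color, label=f'alpha = {alpha:g}')
plt.xlabel('quantile q')
plt.ylabel('required sample size n')
plt.legend()
plt.show()
```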

[Figure: required sample size n(q, \alpha, f) versus quantile q for \alpha = 0.0005, 0.001, 0.005, over the full (0,1) domain (top) and restricted to [0.9,1) (bottom)]

For example, if we want to estimate the 99\% VaR value (equivalently, the q=0.01 quantile) of the original return distribution, then we can see that the equivalent q=0.98 quantile of the loss distribution requires around 100 sample points for a one standard deviation error of 0.5\%, approximately 2,500 points for an error of 0.1\%, and approximately 10,000 points for an error of 0.05\%.

Next, we turn to a Monte Carlo experiment to see how well simulated results align with this approximation.

A Monte Carlo Example

We now would like to confirm that these approximate results agree with corresponding Monte Carlo simulations in the large sample size limit, as well as to understand, in a concrete example, how the rule-of-thumb formula breaks down for small sample sizes.

The Monte Carlo simulation consists of sampling the MLE exponential distribution n times, estimating the q1=0.9, q2=0.98, and q3=0.998 quantiles, and storing the results.  This is repeated 100,000 times, and we finally take the mean and standard deviation of the quantile estimates.

We note that most empirical quantile estimators, including the one used in this simulation, are constructed from convex combinations of consecutive order statistics, e.g. of the form \lambda X_{(p)} + (1-\lambda) X_{(p+1)} for some \lambda\in(0,1).  Although it may be possible to compute the density and variance of such a combination exactly in the exponential case, for sufficiently large n we approximate its density by that of X_{(p+1)}.  We use the (1/3,1/3) quantile estimator in scipy, based on the recommendation in [2], where the authors compare different quantile estimators.
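A sketch of one cell of the simulation follows (sample size 252 shown; x0_hat and lam_hat are the hypothetical fitted parameters from earlier, and scipy's mquantiles with alphap = betap = 1/3 is the (1/3, 1/3) estimator):

```python
import numpy as np
from scipy.stats.mstats import mquantiles

rng = np.random.default_rng(2)
probs = [0.9, 0.98, 0.998]
n, trials = 252, 100_000

estimates = np.empty((trials, len(probs)))
for i in range(trials):
    # Sample the fitted shifted exponential: x0 + Exp(scale = lambda).
    sample = x0_hat + rng.exponential(scale=lam_hat, size=n)
    # Hyndman-Fan (1/3, 1/3) quantile estimator.
    estimates[i] = mquantiles(sample, prob=probs, alphap=1/3, betap=1/3)

print(estimates.mean(axis=0))  # mean quantile estimates
print(estimates.std(axis=0))   # standard deviation across trials
```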

In the table below, we display the mean quantile estimates from the Monte Carlo simulation.  The sample sizes correspond to roughly 3 months, 6 months, 1 year, 5 years, 10 years, and 100 years of business days.

Samp. Size    q1 MC Mean    q2 MC Mean    q3 MC Mean
61            1.5808%       2.9559%       3.1142%
126           1.5546%       2.7533%       3.5956%
252           1.5426%       2.6629%       4.0480%
1260          1.5325%       2.6099%       4.2823%
2520          1.5316%       2.6030%       4.1922%
25200         1.5304%       2.5974%       4.1287%

For comparison, the true values are 1.5303%, 2.5967%, and 4.1224%.  In the following table, we report the standard deviation of the 100,000 quantile estimates for each sample size.

Samp. Size    q1 MC Std Err    q2 MC Std Err    q3 MC Std Err
61            0.2606%          0.7250%          0.8421%
126           0.1779%          0.4384%          0.8456%
252           0.1259%          0.3057%          0.8459%
1260          0.0562%          0.1310%          0.4458%
2520          0.0396%          0.0926%          0.3113%
25200         0.0125%          0.0293%          0.0933%

Finally, we compute the corresponding standard deviation values using equation (1) in the table below.  Note how the simulation and approximation results converge as the sample size increases.  In addition, relative errors are approximately 10% or lower for sample sizes greater than 252, which gives us confidence that this approximation may be used in practice when one considers at least a year of data for quantile estimation.

Samp. Size    q1 Apx Std Err    q2 Apx Std Err    q3 Apx Std Err
61            0.2545%           0.5939%           1.8952%
126           0.1771%           0.4132%           1.3186%
252           0.1252%           0.2922%           0.9324%
1260          0.0560%           0.1307%           0.4170%
2520          0.0396%           0.0924%           0.2949%
25200         0.0125%           0.0292%           0.0932%
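For completeness, here is a sketch of how the approximate column is computed from equation (1), again using the simplification f(F^{-1}(q)) = (1-q)/\lambda and the hypothetical fitted lam_hat:

```python
import numpy as np

def approx_std(q, n, lam):
    # Equation (1) standard deviation: sqrt(q(1-q)/n) / f(F^{-1}(q)), which
    # reduces to lam * sqrt(q / ((1 - q) * n)) for the shifted exponential.
    return lam * np.sqrt(q / ((1.0 - q) * n))

for n in [61, 126, 252, 1260, 2520, 25200]:
    print(n, [f'{approx_std(q, n, lam_hat):.4%}' for q in (0.9, 0.98, 0.998)])
```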

Conclusions and References

In summary, equation (1) may be used to derive a formula for the sample size required to estimate VaR values to a desired accuracy.  In practice, it is important to check whether one has a sufficient amount of data to estimate VaR, especially when considering very low quantiles, e.g. q=0.001, 0.005; otherwise the error bars on common quantile estimators are so large that one cannot make a meaningful inference.  We hope that the above technique will be useful in practice to ensure an adequate sample size for the value at risk estimation problem being considered.

This is a link to a Jupyter notebook that contains Python code to produce the above plots and run the Monte Carlo simulation.

[1] Mosteller, F. "On Some Useful 'Inefficient' Statistics." The Annals of Mathematical Statistics, Vol. 17, No. 4 (1946).

[2] Hyndman, R. J. and Fan, Y. "Sample Quantiles in Statistical Packages." The American Statistician, Vol. 50, No. 4 (1996).