
Value at Risk

In imprecise language, the Value at Risk – abbreviated as VaR – of a particular asset is simply an amount, for example $1M, that represents the worst possible future loss sustained by that asset.

In order to compute the VaR, the phrase worst possible future loss deserves a precise, technical definition.
First, the word future is construed to refer to some fixed time instant that has not yet occurred. This time instant is commonly called the VaR horizon. It is conventionally set to either one day or two weeks from today, with the respective VaR being the one-day or two-week VaR.

Second, the two words worst possible imply the existence of several candidate losses.

This is indeed so, since there exist infinitely many possible values Vi that the underlying asset may assume at horizon. If a value Vi were realized at horizon, it would lead to a respective realized asset loss Li = V0 – Vi, where V0 is the current asset value. If we agree to ignore extreme losses Li that are deemed too unlikely to occur, the phrase worst possible future loss becomes tenable.

Therefore, everything boils down to establishing the probability distribution – or equivalently the probability density – function of the asset values Vi that may be realized at horizon. The exact shape of that function depends on both the type of the involved asset and our assumptions about the stochastic evolution of the particular market prices that affect the asset value.

For illustration purposes, a possible probability density function for the asset value at horizon could look like this:

[Image: a possible probability density function of the asset value at horizon]

Since we have so far described VaR as an amount, a single number must somehow be associated with the density curve. This requirement may seem unnecessarily restrictive, given that a density curve conveys much more information about the riskiness of the referenced asset than any single number could. Its justification lies in the fact that human brains and automated systems cope better with single numbers than with the infinity of numbers implied by a curve.

Of the many possible ways to extract a single number from a density curve, the one chosen in the context of VaR is the following:

For a given probability C, referred to as the confidence level (typically C equals .95 or 95%), find the number Vc (a point on the horizontal axis) such that the probability that the realized asset value at horizon is greater than Vc equals C.

This defines the single number Vc as the worst possible asset value at horizon, in the loose sense that Vc is so bad and at the same time so unlikely that we focus on it as if no still worse outcome existed.

Another, more accurate, interpretation of Vc is that we feel C confident – for example, 95% confident if C = 95% – that the asset value at horizon will not drop below Vc.

VaR Definition

This brings us to the formal definition of VaR as the difference V0 – Vc.
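When the candidate values at horizon come from a sample (as in the simulation method described later), this definition reduces to a quantile calculation. Here is a minimal Python sketch under that assumption; the current value V0 and the simulated values are purely hypothetical:

```python
import numpy as np

# Hypothetical inputs: current asset value and simulated values at horizon.
V0 = 10.0
Vi = np.random.default_rng(42).normal(loc=10.0, scale=1.5, size=10_000)

C = 0.95                     # confidence level
Vc = np.quantile(Vi, 1 - C)  # the point with P(V > Vc) = C
VaR = V0 - Vc                # formal definition: VaR = V0 - Vc

print(f"Vc = {Vc:.4f}, VaR = {VaR:.4f}")
```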

The following chart illustrates this definition:

[Image: chart illustrating the definition of VaR as the difference V0 – Vc]

VaR Calculation Methods

The key part of the VaR calculation is the construction of the probability density function of the asset value at horizon. This can be achieved by one of the following three methods, ordered by increasing complexity:

  1. The parametric method, also known as the variance-covariance method, assumes the asset value spread at horizon ΔV = V – V0 is normally distributed with parameters (mean and standard deviation) implied by the recorded historical values. This method is theoretically wrong, as it is highly unlikely that the asset value would ever follow such a nice, smooth distribution. But in the vicinity of the current value V0, the distribution of ΔV is indeed approximately normal under the simplifying assumptions that changes in market prices are normally distributed and that the asset value change ΔV depends linearly on these changes. Overall, this method has the advantage that it only requires the determination of two numbers, the mean and the standard deviation.
  2. The historical method is similar to the parametric method in that it requires knowledge of the asset values realized in the past. When you follow this method, rather than trying to fit a normal distribution to the historical data, you create a histogram out of them – in effect, a probability distribution that by construction reproduces the observed frequencies of the historical asset values.
  3. The simulation method makes no use of historical data. According to this method, you construct the probability distribution of the asset value at horizon in four steps (a minimal Python sketch of these steps follows Step 4 below):

Step 1: You find out the elementary market data that determine the value of the referenced asset and decide on the K risk factors that presumably drive those market data. For example, if the referenced asset is a stock option, K equals 3 and the risk factors are: the stock price, the implied volatility and the interest rate corresponding to the option’s maturity.

Step 2: Assuming a certain stochastic evolution of the chosen risk factors, you run a Monte Carlo simulation in order to produce N scenarios, where N is a large number. The typical value used in production is N = 10,000, although N = 1,000 is also regarded as quite accurate. Each scenario generates a sample, i.e. an array of K numbers that represents the values realized by the K risk factors during the corresponding simulation run.

Step 3: For each i from 1 to N, you associate with the ith sample the corresponding asset value Vi. This step is the most difficult, as it requires that the value of the asset can be inferred from the simulated risk factors. For example, if the asset is a stock option, a model implementation must be in place – such as the Black-Scholes model – that is capable of calculating the option price for any given set of simulated risk factors, i.e. the stock price, implied volatility and interest rate.

Step 4: You finally assemble into a histogram all the asset values Vi calculated in the previous step, creating in effect a discretized probability density function of the asset value at horizon.
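The four steps can be condensed into a short Python sketch. This is only an illustration of the method with made-up parameters, not Deriscope’s implementation: the three risk factors of the stock option example are evolved through independent lognormal shocks, and each scenario is repriced with the Black-Scholes formula for a European call.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call option.
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
N = 10_000          # number of scenarios
h = 1 / 252         # one-day horizon in years

# Step 1: current values of the K = 3 risk factors (hypothetical).
S0, vol0, r0 = 100.0, 0.20, 0.03
strike, T = 100.0, 1.0  # option strike and maturity

# Step 2: evolve the risk factors to the horizon. For simplicity the shocks
# are independent lognormal; a real setup would use calibrated, correlated
# processes.
Z = rng.standard_normal((N, 3))
S   = S0   * np.exp(0.25 * np.sqrt(h) * Z[:, 0])
vol = vol0 * np.exp(0.50 * np.sqrt(h) * Z[:, 1])
r   = r0   * np.exp(0.30 * np.sqrt(h) * Z[:, 2])

# Step 3: reprice the option in every scenario.
Vi = bs_call(S, strike, T - h, r, vol)

# Step 4: the assembled Vi form a discretized distribution; the VaR is
# read off its quantile.
V0 = bs_call(S0, strike, T, r0, vol0)
VaR = V0 - np.quantile(Vi, 0.05)  # 95% confidence level
print(f"V0 = {V0:.4f}, VaR = {VaR:.4f}")
```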

VaR Calculation in Excel

A step-by-step calculation of VaR in Excel is described in my article How to compute the VaR: Step-by-Step Excel Guide.

Deriscope uses the simulation method to calculate the VaR of any single asset or portfolio of assets, provided that the price of each asset can be independently computed. It follows that the VaR output – just like the pricing output – is model-dependent.

More specifically, for any given object of type Tradable, there is a Function called VaR that produces the Value at Risk of that object under certain user-specified assumptions. The wizard discovers the relevant risk factors by determining which market data are involved in the pricing of this object.

As an example, assume you have created an object of type Stock Option, as shown below, where cell A1 is selected and the blue arrows show the dependencies:

[Image: a Stock Option object in the spreadsheet, with cell A1 selected and blue arrows showing the dependencies]

Then you can go to the wizard and use the Function Selector to choose the VaR Function. The screen below shows the input parameters of the VaR Function as displayed inside the wizard’s Browse Area:

[Image: the input parameters of the VaR Function inside the wizard’s Browse Area]

The Models= and Markets= inputs are the same as those used in the Price Function.

The VaR Spec= entry contains the user-defined VaR specifications. Clicking on $VaRSpec takes you to the following screen:

[Image: the contents of the $VaRSpec object]

As usual, all entries in blue are editable.

The top three rows are only needed if you want to change either the Tradable object whose VaR you want to compute, or the reference asset used for denominating the VaR output.

The last three rows are in black and only provide information on various quantities implied by the entries above them.

All keys are accompanied by a detailed description that appears at the bottom of the wizard as soon as they are selected. They are:

Buckets Treatment

This input appears only in the wizard and never in the spreadsheet. It instructs the wizard to build the appropriate default Simulated Market table $Set#4 two rows below it. The shown default setting of Parallel is the simplest and most common choice because it associates a single risk factor with every market curve input. For example, if the market data in $Mkt of the previous screen included a yield curve built out of swap rates, then during simulation all swap rates would move in tandem through a flat parallel shift driven by a single common risk factor. Other more complex – but more accurate – choices are Bipolar, Tripolar and All Buckets. The Bipolar choice lets every market curve be driven by two risk factors and the Tripolar by three, whereas the All Buckets choice treats each bucket – such as the 2y swap rate – as a distinct risk factor. The difference between the two extremes is illustrated by the sketch below.
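Here is a small Python sketch of that difference; the toy swap curve and the shift sizes are made up:

```python
import numpy as np

# Hypothetical swap curve: maturities in years and quoted rates.
maturities = np.array([1, 2, 5, 10, 30])
rates      = np.array([0.020, 0.022, 0.025, 0.027, 0.030])

rng = np.random.default_rng(1)

# Parallel: one risk factor drives the whole curve, so in a given
# scenario every bucket receives the same flat shift.
rates_parallel = rates + 0.0010 * rng.standard_normal()

# All Buckets: each bucket is a distinct risk factor with its own shift.
rates_all_buckets = rates + 0.0010 * rng.standard_normal(len(maturities))

print(rates_parallel.round(4))
print(rates_all_buckets.round(4))
```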

Include Div Risk

Decides whether the dividend yield (discrete or continuous) should be considered as a risk factor or not.

Simulated Market

This is the table that contains the risk factors against which the VaR is calculated. Its contents, shown below, are displayed after you click on the little pen-on-a-pad sign to the left of the Simulated Market key of the previous screen:

[Image: the Simulated Market table $Set#4]

One row is allocated to each market element, which in turn may be driven by more than one risk factor. Therefore, the total number of risk factors may exceed the number of table rows.

There are 7 columns that describe which risk factors are associated with each market element and how each of them should be evolved during the simulation.

The 1st column is titled #MarketFactor and identifies the market element associated with a given row by supplying the handle name of an object, the type of which must inherit from Valuation (which itself inherits from Market).

The 2nd column is titled #BucketsTreatment and allows you to apply a different Buckets Treatment (see above) setting on each row.

The 3rd column is titled #ModelledFactor and describes how the data of a market element (which may consist of a whole curve or of single numerical values) are produced out of the simulated value of the respective driving risk factor.

The 4th column is titled #ProcessType and indicates the stochastic process followed by the respective simulated risk factor.

The 5th column is titled #Drift and contains the drift associated with the stochastic process of the respective risk factor.

The 6th column is titled #Vol and contains the vol associated with the stochastic process of the respective risk factor.

The 7th column is titled #Process and gives you the option to define a custom stochastic process associated with the respective risk factor. If used, the entry here must be the handle name of an object of type Stoch Process, in which case the entries in the previous 3 columns are ignored. A rough sketch of such a process step follows.
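As an illustration of how a #ProcessType together with a #Drift and a #Vol might evolve a risk factor, here is a sketch of a single lognormal (geometric Brownian motion) step; the parameter values are hypothetical and Deriscope’s actual process types may differ in detail:

```python
import numpy as np

def gbm_step(x0, drift, vol, dt, z):
    # One lognormal step: x(t+dt) = x(t) * exp((drift - vol^2/2)*dt + vol*sqrt(dt)*z)
    return x0 * np.exp((drift - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * z)

rng = np.random.default_rng(5)
spot = gbm_step(x0=100.0, drift=0.0, vol=0.20, dt=1/252, z=rng.standard_normal())
print(spot)
```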

Correlation

This is the matrix of pairwise correlations between the risk factors.
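Correlated draws are commonly produced by multiplying independent standard normals with a Cholesky factor of the correlation matrix. A minimal sketch under that assumption, with a made-up 3×3 matrix:

```python
import numpy as np

corr = np.array([[1.0, 0.5, 0.2],
                 [0.5, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])
L = np.linalg.cholesky(corr)           # lower-triangular factor, L @ L.T = corr

rng = np.random.default_rng(11)
z = rng.standard_normal((10_000, 3))   # independent standard normals
correlated = z @ L.T                   # rows now carry the target correlations

print(np.corrcoef(correlated, rowvar=False).round(2))
```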

Report Horizon

Sets the period that determines the horizon date on which the reported VaR applies.

Actual Horizon

Sets the period that determines the horizon date on which the market data are evolved during simulation and the respective price of the referenced asset is calculated.

Percent

Controls whether the reported VaR refers to the absolute loss amount or its percentage relative to the current asset value.

Simulation Mdl

Contains specification details about the simulation as part of an object with the shown handle name $SimMdl#3. When you click on the little pen-on-a-pad sign, its contents appear as follows:

[Image: the contents of the simulation model object $SimMdl#3]

Random Engine

Allows you to choose between a pseudo-random or Sobol-based random number generator.
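For reference, this is how a Sobol sequence can be generated in Python with scipy’s qmc module (assuming scipy 1.7 or later); Deriscope’s own generator is internal, so this is only an illustration of the concept:

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=3, scramble=False)  # 3 dimensions, plain Sobol
points = sampler.random_base2(m=10)       # 2**10 = 1024 low-discrepancy points
print(points.shape)                       # (1024, 3)
```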

Scenarios

Sets the total number of scenarios (samples) produced during the simulation.

Steps

Equals the number of time steps used by the simulation in the construction of the paths realized by the risk factors. For a short horizon, an entry of 1 is sufficient. For longer horizons, a larger number of time steps can be useful, as it can lead to terminal risk factor values that are not normally distributed and are therefore more realistic.

Antithetic

Enables you to switch the antithetic method on and off.
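The antithetic method pairs every normal draw z with its mirror image -z, which leaves the distribution intact while reducing sampling noise. A one-line sketch:

```python
import numpy as np

z = np.random.default_rng(13).standard_normal(5_000)
z_antithetic = np.concatenate([z, -z])  # 10,000 draws with sample mean exactly 0
print(z_antithetic.mean())
```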

Seed

This is an arbitrary integer that acts as the seed for the random number generation. The special value of 0 results in a different random number series output every time the simulation runs.

Vol Time DC

Refers to the daycount convention used to calculate the time length associated with the increase in variance of the Monte Carlo simulated quantities.

Simulated Values

Controls whether the values generated during the simulation should be included in the produced report or not.

Report Quantiles

Controls whether a quantile of a specified probability (confidence level) is also calculated and reported. This must be checked, since VaR requires the calculation of a quantile.

Quantile Probs

Specifies the array of probabilities Pᵢ associated with the requested quantiles qᵢ.

The associated VaR confidence levels are given by 1 – Pᵢ.

So, the shown entry 0.05 corresponds to a VaR confidence level of 0.95 or 95%.

Report Distribution

Controls whether the cumulative probability and density functions of the simulated tradable price and the respective loss are also built and reported.

Histogram Buckets

Defines the number N of equidistant intervals into which the range of simulated values is partitioned for the purpose of counting the occurrences that fall in each interval. These intervals are used to construct charts of the probability distribution and density functions. They do not affect the computation of VaR, but they may have an effect on the estimation of its standard error, because the latter depends on the probability density function.

Report Level

Here you decide what kind of data should be generated and reported alongside with the main VaR output.

Execution and Final Result

At first glance, the great variety of input possibilities looks intimidating.

In fact, all displayed defaults are optimally chosen by the wizard for the task at hand. In other words, you would normally keep the suggested defaults and click on the Go button to generate and paste the applicable formulas in the spreadsheet.

The image below shows the VaR output of 0.700511285 at cell D1, as generated by the simple formula =ds(D2:E7). You also see the object of type Var Spec generated in cell G1 as &VaRSpec_G1:1.1 and entered as input to the VaR Function through cell E7.

[Image: the VaR output 0.700511285 in cell D1 and the Var Spec object in cell G1]

The formula in cell D1 returns only a number because of the Report Level entry of Level0 in cell H7 of the specification data. Setting the latter to Level1 produces some more interesting information. Then the output in cell D1, rather than just a number, turns into the handle name of an object of type Variant, the contents of which are shown by the wizard when the containing cell is selected.

Below you see that type of output, with the setting Scenarios= 10,000 as is usually done in production:

[Image: the Variant output produced with Scenarios= 10,000]

We notice that Report VaR= 1.044, which is substantially different from the previous result of 0.700511285, but this is not surprising, since the latter was based on Scenarios= 10.

Next, we see that Report Var %= 0.1056 or 10.56%, which is the VaR expressed as a percentage of the current asset value.

Very interesting is the entry Report VaR Error = 0.01248, which equals the standard deviation of the probability distribution associated with the VaR estimate itself. As such, the interpretation of this number is that the true VaR lies between 1.044 – 0.012 and 1.044 + 0.012 with a probability of about 68%.

The VaR standard error calculation is based on the mathematical fact that the estimated p-th quantile is asymptotically normal around the true p-th quantile α, with variance equal to p(1 – p)/[N f(α)²], where f is the probability density function of the related random variable and N is the sample size.

The standard deviation of the estimated p-th quantile equals the square root of that variance.
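The following Python sketch applies that formula to a hypothetical sample, estimating the density f at the quantile with a Gaussian kernel:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
N = 10_000
losses = rng.normal(size=N)         # hypothetical simulated losses

p = 0.95                            # quantile probability
alpha_hat = np.quantile(losses, p)  # estimated p-th quantile

f_alpha = gaussian_kde(losses)(alpha_hat)[0]  # density estimate at the quantile

# Asymptotic variance of the quantile estimator: p(1-p) / (N * f(alpha)^2)
std_err = np.sqrt(p * (1 - p) / (N * f_alpha**2))
print(f"quantile = {alpha_hat:.4f}, standard error = {std_err:.5f}")
```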

Running simulations with a very high number of scenarios indicates that the VaR converges to a value of around 1.056, which is within the reported standard error of 0.012.

The last two entries indicate that the calculation lasted 64 seconds.

I will show you below how to reduce the calculation time by a factor of 10 by selecting a Low Discrepancy random number generator of the Sobol type.

For now, it would be interesting to have a visual representation of the probability distribution that governs the asset value at horizon, out of which all the data mentioned above are derived. This can be achieved by setting Report Distribution= TRUE in cell K18, which produces the following additional output (the top two rows actually appear inside the $SimRep object shown in the previous screen):

[Image: the additional distribution output inside the $SimRep object]

Each of these elements contains the x and y coordinates of the respective function. For example, after clicking on the lens sign on Price Density=, the following data appear:

[Image: the x and y coordinates of the Price Density function]

We may now transfer these data to the spreadsheet in the usual way, by clicking on Go and selecting the Paste Object Contents option. The image below shows the array formula created by the wizard on the spreadsheet and the corresponding chart next to it:

[Image: the array formula with the price/density rows and the corresponding chart]

The reason there exist 21 rows of data lies in our VaR specification object, which has an optional key called Histogram Buckets with a default value of 20. The 20 equidistant intervals are bounded by the 21 x coordinates shown in the left column with the #Price title. The respective densities in the right column are estimated by counting the price points produced by the simulation that fall around these x coordinates. The sketch below reproduces this arithmetic.
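A short Python sketch with hypothetical prices makes the count explicit: numpy’s histogram with 20 equidistant bins returns 21 bin edges, which play the role of the x coordinates:

```python
import numpy as np

prices = np.random.default_rng(3).lognormal(mean=2.3, sigma=0.1, size=10_000)

counts, edges = np.histogram(prices, bins=20)       # 20 equidistant intervals
density = counts / (counts.sum() * np.diff(edges))  # normalized density values
print(len(edges))                                   # 21 bin edges
```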

The next image shows the data and chart associated with the Loss Density:

[Image: the data and chart associated with the Loss Density]

Finally, it is possible to inspect the full dataset comprising all the samples generated by the simulation by setting Simulated Values= TRUE in cell K13, which produces the following additional output inside the $SimRep object:

After clicking on the lens sign to the left of the $Set#4 object, the following two-dimensional array of data appears:

The above displays only the first 16 scenarios produced by the simulation.

The first column simply contains an index.

The next three columns contain the simulated factors that multiply the spot interest rate, volatility and spot stock price in each scenario.

The last column contains the stock option price that results from the respective modified market data.

Scrolling down, we reach the last 16 scenarios:

[Image: the last 16 scenarios produced by the simulation]

Using a Quasi Random Number Generator of the Sobol type

As I mentioned earlier, it is possible to reach a satisfactory VaR result with far fewer simulations by resorting to a Sobol sequence of quasi-random numbers.

For an in-depth comparative analysis of the quasi-random versus pseudo-random simulation methods, you may read my article Combining Randomized Quasi Monte Carlo (Sobol) and Parallel Processing (Multithreading) when Pricing Derivatives in Excel.

I do that by recreating my simulation model object in cell J1 as shown here:

[Image: the recreated simulation model object in cell J1]

Notice that I use 1023 scenarios, rather than 1000, due to the Sobol constraints that are explained in the above referenced article.

The result is shown below:

You notice that the Report VaR now displays as 1.06 with the actual stored value being 1.0599.

No standard error is reported because this would have required running a randomized simulation, as explained in my article referenced above.

It is impressive that this Sobol-based calculation took only 6 seconds to complete and still produced a result that is much closer to the limit of around 1.056 than the previous pseudo-random-based VaR of 1.044.

Click on ValueAtRisk.xlsx to download the spreadsheet produced with the above steps.