
REPORT 1

EXCLUSIVELY FOR

Herts Bank Plc.

Task:
Your team has the responsibility to manage the portfolio of Herts Bank plc. The portfolio consists of
two stocks from the FTSE100 index:
Stock and number of shares in the portfolio:
- HSBC Holdings: 1 million
- Vodafone Group: 2 million
Your bank has appointed a new chief risk officer who wants your team to report to her the risk of your
portfolio. You have decided to use Value at Risk to measure and explain the risk of your portfolio.
Prepare a report for the chief risk officer critically explaining why you use VaR, demonstrate the
application of any two VaR methods (commenting on their strengths and weaknesses) and evaluate
the robustness of your results.

Mohammed H. Khan - 08178887

Mishal J. Dave – 08176409

Kashan A. Rathore - 08198187

Evonne Cheng - 08197740


Table of Contents

- Introduction
- Value at Risk explained
- Methodology
- Selecting VaR approach
- Computing VaR:
  - Variance-Covariance VaR
  - Historical VaR
- Advantages and disadvantages of each approach
- Results
- Robustness of results
- Conclusion
- Appendix 1 - Preamble
- References

Introduction
What is the most I can lose on this investment? This is a question that almost every investor
who has invested or is considering investing in a risky asset asks at some point in time. Value
at Risk (VaR) tries to provide an answer, at least within a reasonable bound.

As risk analysts at Herts Bank plc, we have been given the task of working out the VaR of a
portfolio consisting of HSBC and Vodafone stock. We use two methods to calculate the
VaR and assess whether the results are accurate enough to be relied upon.

(Refer to appendix 1 for preamble)

Value at Risk explained


VaR is a statistical measure that estimates the maximum loss that may be incurred
on a portfolio with a given level of confidence¹. VaR comes with a probability that indicates the
likelihood of the losses being smaller than the amount given. VaR is a monetary amount
which may be lost over a specific period of time; that period depends on how long the
portfolio is held (the holding period). (Best: 1998)

VaR is often calculated using a 95% confidence level. This means that, on average, there is a
95% chance of the loss on the portfolio being lower than the calculated VaR. For example, if
the daily VaR is stated as £100,000 at a 95% confidence level, this means that during the day there is
only a 5% chance the loss will be greater than £100,000. (Choudhry: 2006)

Financial markets are notorious for volatile price changes. VaR is not designed to, and cannot,
cope with exuberant (abnormal) price changes, so it should be used in conjunction with another
risk metric to gain a more comprehensive market risk measurement. (Best: 1998)

Best (1998) suggests a key advantage of using VaR is that it gives an estimate of the likelihood
of a loss greater than a given figure occurring, i.e. VaR has a probability associated with it.

¹ The degree of certainty that a statistical prediction is accurate. (Marrison: 2002)

Methodology
Calculating VaR involves four steps. The first step is to set a time horizon over which the
firm wishes to estimate a potential loss. The time horizon used to calculate VaR should depend
on the liquidity of the securities in the portfolio and how frequently they are traded; less
liquid securities call for a longer time horizon. The most common time horizons used by
commercial and investment banks to calculate the VaRs of their trading rooms are one day, one
week, and two weeks (Simons: 2000). For the purpose of this report, we assume the
portfolio is traded actively and have therefore designated a one-day time horizon.

The second step is setting the degree of certainty: the confidence level that will apply to the
VaR estimate. The confidence level defines the percentage of time that the firm should not
lose more than the VaR amount.
Usual confidence levels are 95% and 99%; a 99% confidence level is more appropriate when one is
interested in potential losses arising from extreme events such as stock market crashes.
(Choudhry: 2006) As we are interested in calculating potential loss, we will use both the
95% and 99% confidence levels.
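
For reference, the critical values used with these confidence levels come from the inverse cumulative distribution function of the standard normal distribution. A quick check, assuming SciPy is available (illustrative, not part of the original calculations):

```python
from scipy.stats import norm

# One-sided critical values of the standard normal distribution
print(round(norm.ppf(0.95), 3))  # 1.645 -> 95% confidence level
print(round(norm.ppf(0.99), 3))  # 2.326 -> 99% confidence level
```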

The third step is gathering the historical data required to calculate VaR; this introduces the
problem of choosing the optimal historical period. It is generally assumed that risk factors are
stationary². Risk factors consist of any risk, especially in a macroeconomic situation, that may
affect an asset; examples include interest rate risk, exchange rate risk and
commodity risk. (Pike & Neale: 2006)

If the assumption of stationary risk factors is correct, then a longer time period would be
advocated, as it would provide a more accurate VaR calculation. However, in practice risk
factors are never stationary, so the historical data used should not span too long a period, in
order to avoid "paradigm shifts". (Sourd & Amenc: 2003) The amount of historical data used for this
task will be three years. Long periods of data, according to Fabozzi (1998), have a "richer
return distribution". This historical data will be collected in the form of daily rather than
monthly returns, since Coggins (2008) suggests that for VaR analysis "the use of daily returns
rather than monthly returns improves performance".

Fabozzi (1998) also suggests that the role of outliers³ in the data set should be considered,
for example the effect of war on oil prices. There is much contention as to whether such
extreme events should be included in the data series. One argument for including them
is that they reflect real history and add to the "fat tailed" richness of the data series. The
argument against is that the inclusion or exclusion of extreme events will result in different
VaRs. For example, if a firm were to use ten years of historical data, say between 1990 and
2000, to calculate VaR, the financial crisis of 1987 would be excluded, resulting in a
decrease in VaR which does not reflect the actual risk to the firm. A solution proposed by
Fabozzi (1998) is to use exponentially weighted data so as to give more weight to recent data,
allowing VaR to react rapidly to changing market conditions.⁴
² If a data series is mean reverting, it is said to be stationary. (Sourd & Amenc: 2003)

³ "Data objects that do not comply with the general behaviour or model of the data, i.e. data objects which are grossly different
from or inconsistent with the remaining set of data." (Han & Kamber: 2006)

⁴ Although extremities such as the 1987 financial crisis would not be included in data between 1990 and 2000, the exponential
weighting would give more weight to more recent data; thus there could be discrete data points in the data series, but their effect
would be diminished by the exponential weighting. (Fabozzi: 1998)

However, this process proposed by Fabozzi (1998) can intuitively be challenged: given that the
concept of VaR is to predict losses using past experience, disregarding volatile periods of the
past, which is the objective of smoothing, would produce a misleading VaR. We have therefore
decided to calculate returns as logarithmic returns rather than using exponential weightings.
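
As an illustration of this choice, daily logarithmic returns can be computed directly from a series of closing prices. The sketch below is a minimal Python example with made-up prices, not the report's actual dataset.

```python
import math

def log_returns(prices):
    """Daily logarithmic returns: r_t = ln(P_t / P_{t-1})."""
    return [math.log(today / yesterday)
            for yesterday, today in zip(prices, prices[1:])]

# Hypothetical closing prices in pence (illustrative only)
print(log_returns([687.7, 690.1, 684.3, 688.0]))
```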

Selecting VaR approach

Singh (2010) suggests that if one is studying VaR for portfolios which do not include derivatives
and have short time horizons, then the variance covariance approach should be used, since it
"does a reasonably good job", irrespective of its assumption of normality.

He goes on to state “if the Value at Risk is being computed for a risk source that is stable and
where there is substantial historical data (commodity prices, for instance), historical
simulations provide good estimates. In the most general case of computing VaR for nonlinear
portfolios (which include options) over longer time periods, where the historical data is
volatile and non-stationary and the normality assumption is questionable, Monte Carlo
simulations do best.”

Considering we are interested in calculating VaR on a day-to-day basis, we will use the
variance covariance method along with the historical simulation method, given that the data is
stationary and appears to be stable.

Computing VaR

- Variance-Covariance VaR

This method assumes that stock returns are normally distributed. It requires that we estimate
the expected volatility of a portfolio of two securities and then multiply by a factor
selected according to the desired confidence level. For two securities the VaR is:

Diagram 1: Variance-Covariance formula and example

N.B. The critical value for a 95% confidence interval is 1.645; the critical value for a 99%
confidence interval is 2.326.

Diagram 1 shows how to work out VaR using the variance covariance model; the data used is
fictional and does not represent the VaR of our portfolio.
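
For two securities, the standard variance covariance formula (which we take Diagram 1 to illustrate) combines the two position values, their volatilities and their correlation. The sketch below is a minimal Python rendering under that assumption; the inputs are fictional, in the same spirit as Diagram 1.

```python
import math

def var_two_assets(v1, v2, sigma1, sigma2, rho, z=1.645):
    """Variance-covariance VaR for a two-security portfolio.

    v1, v2         -- monetary value of each position
    sigma1, sigma2 -- daily volatility (standard deviation) of each security's returns
    rho            -- correlation between the two return series
    z              -- critical value (1.645 at 95%, 2.326 at 99%)
    """
    # Portfolio standard deviation in monetary terms
    sigma_p = math.sqrt((v1 * sigma1) ** 2 + (v2 * sigma2) ** 2
                        + 2 * v1 * v2 * sigma1 * sigma2 * rho)
    return z * sigma_p

# Fictional inputs: position values in pence, assumed daily volatilities
print(var_two_assets(687_700_000, 354_000_000, 0.015, 0.012, 0.034))
```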

A major benefit of the variance covariance approach is that sensitivity analysis⁵ can be performed.
That is, whilst the VaR captures the daily exposure to market risk, sensitivity analysis evaluates the
impact of possible changes in market risk. Thus a longer time frame will complement
sensitivity analysis and help assess market risk exposure. (Tohmatsu: 2007)

- Historical Simulation VaR


Unlike the variance covariance method, the historical simulation method does not depend
on calculating correlations or volatilities. Rather, it uses historical data of actual price
movements to determine the actual portfolio distribution. This is a major advantage of
historical VaR, in that the fat-tailed nature of a security's distribution is preserved.
(Alexander: 2008)

To calculate historical VaR, the returns of the portfolio are plotted onto a distribution; the
gains and losses are added across the portfolio for each day and then ranked in order.
The return corresponding to the desired confidence level is then selected as the VaR. For
example, if the confidence level is 95% and there are 1000 data points, the 50th lowest
return would be selected (50 = (100% - 95%) * 1000). (Fabozzi: 1998)
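
A minimal sketch of this ranking step, assuming the daily portfolio gains and losses are available as a plain Python list (the simulated data below is made up, purely to make the example runnable):

```python
import random

def historical_var(pnl, confidence=0.95):
    """Historical-simulation VaR: with n observations, select the
    k-th lowest daily P&L, where k = (1 - confidence) * n."""
    k = int(round((1 - confidence) * len(pnl)))
    return -sorted(pnl)[k - 1]  # losses are most negative; report VaR as a positive number

# With 1000 observations at 95% confidence, k = 50: the 50th lowest
# return is selected, matching the worked example above.
random.seed(0)
pnl = [random.gauss(0, 100_000) for _ in range(1000)]
print(f"95% one-day VaR: £{historical_var(pnl):,.2f}")
```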

N.B. This method is greatly dependent on the historical sample. For the VaR estimation to be
statistically significant, the sample period should be neither too short nor too long, as the
characteristics of the risk factors change over time.
⁵ Sensitivity analysis is very useful when attempting to determine the impact the actual outcome of a particular variable will have if it differs
from what was previously assumed. By creating a given set of scenarios, the analyst can determine how changes in one variable will
impact the target variable. (Investopedia: 2011)

Table 1: Advantages and disadvantages of variance covariance and historical simulation

Variance Covariance

Advantages:
- Fast and relatively easy to implement.
- Requires only portfolio-level sensitivities.
- Sensitivity analysis can be performed, helping to assess market risk exposure.

Disadvantages:
- It assumes all risk factors are normally distributed, which causes the fat-tail problem. Fabozzi (1998) suggests this is not a problem at the 95% confidence level but is problematic at 99%.
- Measures only linear risk. (Choudhry: 2006)
- Input error: even if the standardised return distribution assumption holds up, the VaR can still be wrong if the variances and covariances used to estimate it are incorrect.
- Non-stationary variables: a related problem occurs when the variances and covariances across assets change over time. This non-stationarity in values is not uncommon, because the fundamentals driving these numbers do change over time.

Historical simulation

Advantages:
- Makes no explicit assumption about the variances of portfolio assets or the correlations between them. (Choudhry: 2006)
- Makes no assumptions about the shape of the distribution of asset returns, including no assumption of normality, naturally addressing the fat-tails problem. (Choudhry: 2006)
- Can fully capture non-linear risks. (Choudhry: 2006)
- Requires no simplification or mapping of cash flows. (Choudhry: 2006)

Disadvantages:
- Relies on history, thus implicitly assuming that the shape of future returns will be the same as that of the past. (Urbani: 2007)
- Computationally and data intensive.
- The calculated VaR depends on the quality of the data: all that is required is a small amount of "incoherent data" to distort the results. (Sourd & Amenc: 2003)
- All data points are weighted equally, so price changes from trading days in 1992 affect the VaR in exactly the same proportion as price changes from trading days in 1998. To the extent that there is a trend of increasing volatility even within the historical time period, the Value at Risk will be understated.
- While this could also be a critique of the other approach to estimating VaR, the historical simulation approach has the most difficulty dealing with new risks and assets, for which there is no historical data available to compute the Value at Risk. Assessing the Value at Risk to a firm from developments in online commerce in the late 1990s would have been difficult, since the online business was in its nascent stage. (Singh: 2010)

Results

Share prices on 01/02/2011 (1st February 2011):
- HSBC Holdings: 687.7p
- Vodafone: 177p

Portfolio composition:
- HSBC Holdings: 1 million shares; value = 1 million * 687.7p = 687 700 000 p
- Vodafone: 2 million shares; value = 2 million * 177p = 354 000 000 p

Portfolio value: 3 million shares in total; 687 700 000 p + 354 000 000 p = 1 041 700 000 p
(1,041.7 million pence)
Results achieved using the historical method

Given there are 1,020 data points:

The return we would select at the 99% confidence level: (100% - 99%) * 1020 ≈ 10. This means
that with a 99% confidence level and 1,020 days of data, we would select the 10th lowest return.

Outcome: we are 99% confident that we will not lose more than £571 986.53 on 2nd February.

The return we would select at the 95% confidence level: (100% - 95%) * 1020 = 51. This means
that with a 95% confidence level and 1,020 days of data, we would select the 51st lowest return.

Outcome: we are 95% confident that we will not lose more than £308 215.87 on 2nd February.

Results achieved using the variance covariance method

To work out the VaR for 1 day at 99%:

VaR = 2.326 * σ⁶ - µ⁷

That is, 2.326 * 2 039 344 017.52 - (-14 944 470.36) = 4 758 458 655 p

1 day 99% VaR = £47 584 586.55

To work out the VaR for 1 day at 95%:

VaR = 1.645 * σ - µ

That is, 1.645 * 2 039 344 017.52 - (-14 944 470.36) = 3 369 665 379 p

1 day 95% VaR = £33 696 653.79

⁶ Standard deviation of the portfolio (in pence)

⁷ Mean of the portfolio (in pence)
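
Plugging the portfolio standard deviation and mean quoted above (both in pence) into VaR = z * σ - µ reproduces these figures; a quick arithmetic check, assuming those two inputs are correct:

```python
sigma_p = 2_039_344_017.52  # portfolio standard deviation, in pence
mu_p = -14_944_470.36       # portfolio mean, in pence

for label, z in (("99%", 2.326), ("95%", 1.645)):
    var_pence = z * sigma_p - mu_p  # VaR = z * sigma - mu
    print(f"1 day {label} VaR = £{var_pence / 100:,.2f}")
# -> £47,584,586.55 and £33,696,653.79, matching the figures above
```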

The results obtained from the two models are significantly different. The data used is likely to
be the cause of the major discrepancy. Singh (2010) states "that the answers we obtain from
the two approaches are a function of the inputs. For instance, the historical simulation and
variance-covariance methods will yield the same VaR if the historical returns data is
normally distributed". From the outset we assumed the data used in the analysis was normally
distributed; perhaps it is not, which would explain the major difference between the results.

Robustness of results
It is imperative to know the accuracy of VaR. If the VaR is underestimated, a firm may suffer
losses greater than expected, increasing bankruptcy risk. A simple way to evaluate the
accuracy of VaR forecasts is to count how many times losses exceed the value of the
VaR. If this count does not differ substantially from what is expected, then the VaR forecasts
are considered accurate.

One method to evaluate the accuracy of VaR is back testing. Underlying the simplest back
testing framework is the idea that for a VaR at the 1 - α confidence level, one expects to observe
exceptions on a fraction α of the days. For example, if α = 0.05 and the model is back tested using the
last 250 days' daily returns, the expected number of exceptions is 0.05 * 250 = 12.5. If the
number of exceptions is substantially greater than 12.5, the VaR estimate should be rejected as
inaccurate. (Pearson: 2006)
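
A sketch of this exception count, assuming the daily profit-and-loss figures and the VaR forecast are available (the example inputs are hypothetical):

```python
def count_exceptions(daily_pnl, var_estimate):
    """Number of days on which the realised loss exceeded the VaR forecast."""
    return sum(1 for pnl in daily_pnl if -pnl > var_estimate)

# Hypothetical daily P&L in pounds: one loss breaches a VaR of £571,986.53
print(count_exceptions([-600_000, 120_000, -50_000], 571_986.53))  # -> 1
```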

At the 99% confidence level the VaR was £571 986.53. This means that 1% of the time we
expect to lose at least £571 986.53. Assuming there are 250 trading days in the year, then on 1%
of days, i.e. 2.5 days in the year, we expect the portfolio to lose over £571 986.53.

In the back test we then look at the historical data and count how many times we exceeded
the VaR, compared with how many times we expected to exceed it. Under a 99% daily VaR and
250 days, we would expect to have lost at least £571 986.53 on 2.5 (1% of 250), or roughly 3,
days in the previous year; such a count would be in line with our expectation for a 99% VaR.

Unfortunately, the VaR obtained from the historical method at the 95% and 99% confidence levels
was not accurate, since there were numerous exceptions where losses exceeded the estimated
VaR. The same was true for the variance covariance method.

However, Pearson (2006) suggests there is a major problem with this back testing method,
stating that "a large sample of daily returns is required to have a reasonable number of exceptions
of 95% confidence VaR estimates", which this back testing method does not acknowledge. A
possible solution he proposes is to use a lower confidence level.

Furthermore, the VaR that we compute for a portfolio can of course be wrong, and sometimes
the errors can be large enough to make VaR a misleading measure of risk exposure. The
reasons for such errors vary.

The two measures used to calculate VaR use historical data to some extent. In the variance-
covariance method, historical data is used to compute the variance-covariance matrix that is
the basis for the computation of VaR. In historical simulations, the VaR is entirely based
upon the historical data with the likelihood of value losses computed from the time series of
returns. This means the VaR measure will be a function of the time period over which the
historical data was collected. If that time period was a relatively stable one, the computed
Value at Risk will be a low number and will understate the risk looking forward.

Going forward, Singh (2010) argues, firms could potentially be underprepared for large and
potentially catastrophic events that are extremely unlikely under a normal distribution but seem
to occur at regular intervals in the real world. This means the VaR estimate obtained using
variance covariance should not be considered totally accurate or reliable.

Conclusion

The objective of this study was to work out the maximum potential loss on a given day for a
portfolio of HSBC and Vodafone shares held by Herts Bank Plc, using Value at Risk: a
statistical measure that estimates the maximum loss that may be incurred on a portfolio with a
given level of confidence. The first of the two methods used to calculate the VaR was the
variance covariance method, which assumes that stock returns are normally distributed and so
requires estimates of only two factors, an expected (or average) return and a standard
deviation, which are then plotted on a normal distribution curve.

The other method, historical VaR, uses the returns of the portfolio plotted onto a distribution.
The gains and losses are added across the portfolio for each day and then ranked in
order; the return corresponding to the desired confidence level is then selected as the VaR.

The variance-covariance approach is simple to implement, but the normality assumption can
be tough to sustain, whilst the downfall of the historical method is that it assumes that the
past time periods used are representative of the future.

According to the back testing, the VaR estimates produced by the two models were deemed unreliable
and inaccurate, since the number of exceptions, i.e. the number of times losses exceeded the estimated
VaR, was greater than expected over the given time period. However, the back testing method used to
evaluate accuracy fails to acknowledge the inevitability of exceptions in daily returns.

It must be emphasised that VaR should not be the sole risk metric applied by the bank, given
that it relies on historical data and, in the variance covariance method, assumes returns
always follow a normal distribution. If returns are not normally distributed, the
computed VaR will understate the true VaR. In other words, if there are far more outliers in
the actual return distribution than would be expected under the normality assumption, the
actual Value at Risk will be much higher than the computed Value at Risk.

Appendix 1 – Preamble
- Normal Distribution
Choudhry (2006) suggests it is suitable to assume that returns from holding assets are normally
distributed⁸. The returns are then defined in logarithmic form as r_t = ln(P_t / P_(t-1)).

⁸ A bell-shaped, symmetrical frequency distribution curve. In a normal distribution, extremely large and
extremely small values are rare and occur near the tail ends; the most frequent values are clustered around the mean and fall off smoothly
on either side of it. (Choudhry: 2006)

It is convenient to assume that stock price returns are lognormally distributed⁹, since a normal
distribution admits all values, including negative ones, implying a small
probability of a stock having a negative price (i.e. a return below -100%), which is absurd.
That is why we assume returns are lognormally distributed: if the logarithmic return is
normally distributed, the stock price cannot be negative, due to the properties of the
exponential function. (Choudhry: 2006)
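
A brief numerical illustration of this point: because a price is recovered from a log return through the exponential function, even an extreme negative return maps to a strictly positive price. The values below are made up.

```python
import math, random

random.seed(1)
p0 = 687.7                   # starting price in pence
r = random.gauss(-0.5, 0.3)  # an extreme, negative log return
p1 = p0 * math.exp(r)        # price recovered via the exponential
print(p1, p1 > 0)            # exp() is strictly positive, so p1 > 0 always
```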

- Normal Distribution and VaR

Most VaR models use the normal curve to estimate losses over a specified time period.

It should be noted that although the markets assume a normal distribution of asset prices, in
practice it is apparent that prices follow a more skewed distribution¹⁰. Asset returns exhibit
leptokurtosis, also known as 'fat tails': their distribution has fatter tails than the theoretical
normal. In other words, extreme price movements such as stock market corrections occur more
frequently than the normal distribution would suggest.¹¹ (Choudhry: 2006)

Unfortunately, one of the methods used to calculate VaR, variance covariance, assumes stock
returns are normally distributed, which is not always correct. Fortunately, the other technique
used to calculate VaR, historical simulation, does not assume stock returns are normally
distributed.

- Correlation
Correlation is a measure of how much the price of one asset moves in relation to the price of
another asset. The VaR of a portfolio is mitigated where the correlation between, say, the two
assets in the portfolio is weak. (Geman: 2008)

The correlation between the two assets, HSBC and Vodafone, is 0.034.

Given that the number of observations in the study conducted for the purpose of this report is
1,020:

Degrees of freedom = 1020 - 2 = 1018

⁹ "The lognormal distribution (with two parameters) may be defined as the distribution of a random variable whose logarithm is normally
distributed." (A logarithm is simply the exponent required to produce a given number.) A lognormal distribution is a distribution that
becomes a normal distribution if one converts the values of the variable to their natural logarithms. (Crow & Shimizu: 1988)

¹⁰ A distribution is skewed if one of its tails is longer than the other.

¹¹ Fat tails are "those uncommon things that bunch way out on the extremities of bell curves. They are things which should not happen
very often, but tend to happen more often than people expect." (Bonner & Wiggin: 2006)

The critical t value for a 95% confidence interval is 1.96; for a 99% confidence interval it is 2.58.

The correlation between Vodafone and HSBC is 0.034, which is statistically insignificant at both
the 95% and 99% levels, meaning there is no significant correlation between the two variables.
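
The significance test behind this conclusion is the usual t-statistic for a sample correlation, t = r * sqrt(n - 2) / sqrt(1 - r^2); a sketch with the report's figures (r = 0.034, n = 1,020):

```python
import math

r, n = 0.034, 1020
t_stat = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
print(round(t_stat, 3))  # about 1.085: below 1.96 (95%) and 2.58 (99%),
                         # so the correlation is statistically insignificant
```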

References

Alexander, C. (2008) Market Risk Analysis, Volume IV: Value-at-Risk Models. West Sussex: John Wiley & Sons Ltd.

Best, P. (1998) Implementing Value at Risk. West Sussex: John Wiley & Sons Ltd.

Bonner, W. & Wiggin, A. (2006) Empire of Debt: The Rise of an Epic Financial Crisis. West Sussex: John Wiley & Sons Ltd.

Choudhry, M. (2006) An Introduction to Value-at-Risk. 4th edition. West Sussex: John Wiley & Sons Ltd.

Coggins, F. (2008) Performance and Conservatism of Monthly FHS VaR: An International Investigation. Available at: http://efmaefm.org/0EFMSYMPOSIUM/Nantes%202009/paper/Coggins.pdf [Accessed: 26 April 2011]

Crow, E.L. & Shimizu, K. (1988) Lognormal Distributions: Theory and Applications. New York: M. Dekker.

Fabozzi, F.J. (1998) Perspectives on Interest Rate Risk Management for Money Managers and Traders. Pennsylvania: Fabozzi Associates.

Geman, H. (2008) Risk Management in Commodity Markets. West Sussex: John Wiley & Sons Ltd.

Han, J. & Kamber, M. (2006) Data Mining: Concepts and Techniques. 2nd edition. San Francisco: Morgan Kaufmann Publishers.

Hendricks, D. (1996) 'Evaluation of value-at-risk models using historical data'. Federal Reserve Bank of New York Economic Policy Review, 2, pp. 39-70.

Investopedia (2011) Define Sensitivity Analysis. Available at: http://www.investopedia.com/terms/s/sensitivityanalysis.asp [Accessed: 5 April 2011]

Investopedia (2011) Define VaR. Available at: http://www.investopedia.com/terms/v/var.asp [Accessed: 22 March 2011]

Investopedia (2011) Conditional Value at Risk. Available at: http://www.investopedia.com/terms/c/conditional_value_at_risk.asp [Accessed: 18 April 2011]

Marrison, C.I. (2002) The Fundamentals of Risk Measurement. Harlow: McGraw-Hill.

Pearson, N.D. (2006) Risk Budgeting: Portfolio Problem Solving with Value-at-Risk. New York: John Wiley & Sons.

Pike, R. & Neale, B. (2006) Corporate Finance & Investment: Decisions & Strategies. 5th edition. Harlow: Pearson Education Limited.

Rahl, L. (2005) Risk Budgeting: The Next Step of the Risk Management Journey. Available at: http://www.colepartners.com/downloads/RiskBudgetingTheNextStepoftheRiskManagementJourney.pdf [Accessed: 28 March 2011]

Simons, K. (2000) 'The use of value at risk by institutional investors'. New England Economic Review, pp. 21-30.

Singh, P. (2010) Value at Risk. Available at: http://pages.stern.nyu.edu/~adamodar/pdfiles/papers/VAR.pdf [Accessed: 17 April 2011]

Sourd, V.L. & Amenc, N. (2003) Portfolio Theory and Performance Analysis. West Sussex: John Wiley & Sons Ltd.

Tohmatsu, D.T. (2007) Financial Reporting in Hong Kong: Illustrative Financial Statements. Hong Kong: Deloitte.

Urbani, P. (2007) All About Value at Risk. Available at: http://www.edge-fund.com/Urbani.pdf [Accessed: 4 March 2011]
