
Risk Management Models - what to use, and not to use

Peter Luk, April 2008

Fischer Black and Myron Scholes' 1973 paper on option pricing ushered in a new age in financial mathematics. Since then, thousands of mathematical models have been developed to describe the financial markets and to devise ways to reduce or eliminate risks. But since then the financial markets have also collapsed once every ten years or so because of these models. At least one well-known professor (whose book outsold Greenspan's) called this phony, bell-curve-style mathematics.

Portfolio insurance was designed in the 1980s, using mathematical models to keep stock portfolios immune from market ups and downs. Then, in October 1987, the market collapsed and the Dow Jones Industrial Average, the bellwether of the US stock market, dropped more than 22% in a single day, thanks to the models. About a decade later, in 1998, the famous LTCM (Long Term Capital Management, which prided itself on having two Nobel laureates as its partners, one of them the very same Myron Scholes) collapsed, having lost US$4.5 billion, including US$1.6 billion on swaps and US$1.3 billion on equity volatility. The DJIA lost 18% in about three months' time. One would have thought people had learned about risk management. But another ten years later, we now have the sub-prime crisis. Between October 2007 and March 2008, the DJIA lost 17%; one estimate put the total losses at US$400 billion.

Mathematical modeling played an important part in all of the above, the most popular ingredient being the normality assumption about stock price changes. As Nassim Taleb, professor and trader, said in Fortune (April 14, 2008), "We replaced so much experience and common sense with models that work worse than astrology... I noticed that while portfolio models got worse and worse in tracking reality, their use kept increasing as if nothing was happening. Why? Because in the past 15 years business schools accelerated their teaching of portfolio theory as a replacement for our experiences." Professor Eugene Fama (the thesis advisor to LTCM's Myron Scholes) had this to say: "If the population of price changes is strictly normal... an observation that is more than five standard deviations from the mean should be observed once every 7,000 years. In fact, such observations seem to occur about once every three or four years." In August 2007, the Wall Street Journal reported that events that (Lehman Brothers') models predicted would happen only once in 10,000 years happened every day for three days, and the Financial Times reported that Goldman Sachs witnessed something that, according to their model, only happens once every 100,000 years. It is time to have a look at what models can safely be used and what should only be used with caution.

Derivative Model

A typical model involves the following common assumptions:

- Risk-free interest rate: constant, or normally distributed with constant volatility
- Equity price change (or its log): normally distributed with constant volatility

In reality, these quantities are never normally distributed.
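These are precisely the assumptions underlying the standard option-pricing formulas discussed in this section. For reference, here is a minimal sketch in Python of the classic Black-Scholes call price built on them; the parameter values in the example call are illustrative only.

```python
# A sketch of the standard Black-Scholes European call price, which rests
# on the constant-rate, constant-volatility, lognormal-price assumptions
# listed above. Parameter values below are illustrative only.
from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(spot, strike, rate, vol, maturity):
    """European call under a constant risk-free rate and lognormal prices."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm.cdf(d1) - strike * exp(-rate * maturity) * norm.cdf(d2)

# Example: a 1-year at-the-money call, 5% rate, 20% volatility
print(black_scholes_call(spot=100, strike=100, rate=0.05, vol=0.20, maturity=1.0))
```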

For a standardized normal distribution (i.e., mean=0, standard deviation=1),


$$\alpha = Q(x) = \int_x^{\infty} Z(t)\,dt = \int_x^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\,dt, \qquad \text{or} \qquad x = Q^{-1}(\alpha)$$

When the distribution is not normal, i.e., when skewness, kurtosis and higher moments are not zero, the Cornish-Fisher expansion says,

$$x = \text{mean} + \sigma \left\{ Q^{-1}(\alpha) + \frac{\gamma_1}{6}\left([Q^{-1}(\alpha)]^2 - 1\right) + \left( \frac{\gamma_2}{24}\left([Q^{-1}(\alpha)]^3 - 3\,Q^{-1}(\alpha)\right) - \frac{\gamma_1^2}{36}\left(2[Q^{-1}(\alpha)]^3 - 5\,Q^{-1}(\alpha)\right) \right) \right\} + \cdots$$

where

$$\gamma_1 = \frac{\text{3rd central moment}}{\sigma^3} \quad \text{is the coefficient of skewness, and} \quad \gamma_2 = \frac{\text{4th central moment}}{\sigma^4} - 3 \quad \text{is the coefficient of (excess) kurtosis.}$$
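A minimal sketch in Python of the adjustment above (the function name and the example inputs are illustrative, not from the original paper):

```python
# A sketch of the Cornish-Fisher adjustment described above.
# alpha is an upper-tail probability, gamma1 the coefficient of skewness,
# gamma2 the coefficient of excess kurtosis.
from scipy.stats import norm

def cornish_fisher_quantile(alpha, mean, sigma, gamma1, gamma2):
    """Approximate upper-tail quantile x such that P(X > x) = alpha."""
    q = norm.isf(alpha)  # Q^{-1}(alpha): upper-tail quantile of N(0, 1)
    adj = (q
           + gamma1 / 6.0 * (q**2 - 1)
           + gamma2 / 24.0 * (q**3 - 3 * q)
           - gamma1**2 / 36.0 * (2 * q**3 - 5 * q))
    return mean + sigma * adj

# With zero skewness and zero excess kurtosis the adjustment collapses
# to the ordinary normal quantile, as noted in the text.
print(cornish_fisher_quantile(0.01, 0.0, 1.0, 0.0, 0.0))   # ~2.326
print(cornish_fisher_quantile(0.01, 0.0, 1.0, -0.5, 3.0))  # ~2.57: heavier tail raises the 1% quantile
```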

If skewness and the higher moments are zero, this expansion reduces to the normal case. In reality, these numbers are far from zero, but people tend to ignore this fact. Some empirical tests of normality were performed on five indices: the Hang Seng Index, the Nikkei 225, the Dow Jones Industrial Average, the Standard & Poor's 500 and the FTSE 100. For each of them, three 6-month periods were selected: January to June 2007, July to December 2007, and October 2007 to March 2008. For each of the 15 combinations, a set of three tests (chi-square, Kolmogorov-Smirnov and Anderson-Darling) was performed at the 5% level. About half of them failed the tests. It is not unreasonable to assume that many individual stocks would also fail the tests.

If you use the observed risk-free interest rate and market volatility, you will not get the correct derivative prices, because the normality assumption is wrong. So people work the other way round, calculating a volatility number from the observed risk-free interest rate and market prices. This number is often called "implied volatility". This is, of course, a misnomer: there is no such thing as implied volatility. It is actually a number representing the combined influence of volatility, skewness, kurtosis and higher moments. The conventional derivative formulas, particularly the famous Black-Scholes formula, are therefore wrong and useless, as you have one equation with two unknown variables. The relationship between implied volatility and historical volatility can be found from the following approximate formula, derived from the above Cornish-Fisher expansion:

$$\text{Implied volatility} = \text{historical volatility} \times \left\{ 1 + \frac{\gamma_1}{6}\left(Q^{-1}(\alpha) - \frac{1}{Q^{-1}(\alpha)}\right) + \left( \frac{\gamma_2}{24}\left([Q^{-1}(\alpha)]^2 - 3\right) - \frac{\gamma_1^2}{36}\left(2[Q^{-1}(\alpha)]^2 - 5\right) \right) \right\}$$

This relationship is only approximate, as we have no idea how important a role the higher moments play under any particular circumstance. When this approximation does not even come close to the implied volatility (which happens not infrequently), one can only surmise that the market factors anticipated future changes of volatility (which is basically a piece of guesswork) into this so-called implied volatility. That makes the implied volatility, and hence the Black-Scholes formula, even less reliable than they would otherwise be.

Another important thing (probably the most important) not often mentioned in textbooks is that all these models carry an implicit assumption about market liquidity. They all assume that for every seller there is a ready buyer. LTCM learned a painful lesson when it found there were no buyers when it was forced to sell. Similarly, many sub-prime failures are victims of illiquidity.
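As an illustration of the kind of empirical normality check described earlier in this section, here is a sketch in Python using SciPy; the simulated fat-tailed series stands in for actual index returns, which are not reproduced here.

```python
# A sketch of the normality checks described above, applied to a series of
# daily log returns. The data below are simulated placeholders, not the
# author's index data.
import numpy as np
from scipy import stats

def normality_checks(returns, level=0.05):
    """Run Kolmogorov-Smirnov and Anderson-Darling tests against a fitted normal."""
    mu, sigma = returns.mean(), returns.std(ddof=1)

    # Kolmogorov-Smirnov test against N(mu, sigma)
    ks_stat, ks_p = stats.kstest(returns, 'norm', args=(mu, sigma))

    # Anderson-Darling test: compare the statistic with the critical value
    # at the chosen significance level
    ad = stats.anderson(returns, dist='norm')
    ad_crit = dict(zip(ad.significance_level, ad.critical_values))[level * 100]

    return {
        'KS rejects normality': ks_p < level,
        'AD rejects normality': ad.statistic > ad_crit,
        'skewness': stats.skew(returns),
        'excess kurtosis': stats.kurtosis(returns),  # Fisher definition: 0 for a normal
    }

# Simulated fat-tailed returns standing in for roughly six months of index data
rng = np.random.default_rng(0)
fake_returns = stats.t.rvs(df=3, size=126, random_state=rng) * 0.01
print(normality_checks(fake_returns))
```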

Conclusions: People will continue to use these formulas and models because they are simple to use, and it is reasonable to hope that the so-called implied volatility will not change much over the near future. Other things being equal, preference should always be given to simpler models (this is called Occam's razor). If it is a short-term model, it is likely to be OK. If it is a long-term model, it could turn out unpleasant. The longer the term, the less reliable the model will become (it is useful to remember that insurance models are always of longer term than trading models). A model is more likely to be OK if it is about the average, but if it is about the tail expectation it is very likely to be wrong, since our knowledge of the tails of distributions is very limited. It is important NOT TO USE any mathematical model to calculate tail expectations. Financial markets are subject to the influence of mass human psychology, and history has proven us wrong every time we thought we got it right.

To use:
- short-term models
- simple models
- constant risk-free yield curve
- constant volatility

Not to use (or at least be wary of):
- complicated models (unless they have passed stress testing)
- long-term models
- variable risk-free yield curve
- fluctuating volatility
- any model for the calculation of the tail expectation

Credit Rating Model

The growth of the modern economy, notably the growth of our banking system, is founded on credit. As such, the credit standing of borrowers is of paramount importance. It is the usual practice to express such credit standing quantitatively as a probability of default or, more often, qualitatively using such words as "good" or "excellent", or alphabetically as AAA, B, etc. These are called credit ratings. Credit rating models are models that try to predict future failures by using past statistics. Here is an illustration of a simplified model. Suppose we have existing data as follows.

Default    Income (x)    Mortgage (y)    Mortgage / Income    Default experience
   0         50,000          9,000              18%                  0.0%
   0         55,000         11,000              20%                  0.0%
   0         60,000         13,000              22%                  0.0%
   1         20,000          8,000              40%                100.0%
   1         21,000         10,000              48%                100.0%
   1         22,000         10,000              45%                100.0%

There are six mortgages, with the mortgagors' incomes and the mortgage amounts given. The first three are fine but the last three end up in default. The model used here is called a logit model. The maximum likelihood method is used to establish the probability of default as follows.

$$\text{Probability of default } (P) = \frac{1}{1 + e^{-(32.9 \,-\, 0.0019x \,+\, 0.0041y)}}$$

The following table gives the results of this formula; the last column is the calculated default probability. For future mortgage applications, we can estimate the probability of default as illustrated in this table.

Income (x)    Mortgage (y)    Mortgage / Income    Predicted default probability
  50,000         20,000              40%                     100.0%
  50,000         18,000              36%                     100.0%
  50,000         16,000              32%                      83.7%
  50,000         14,000              28%                       0.1%
  50,000         12,000              24%                       0.0%
  20,000          8,000              40%                     100.0%
  20,000          6,000              30%                     100.0%
  20,000          4,000              20%                     100.0%
  20,000          2,000              10%                      90.2%
  20,000          1,000               5%                      12.9%
  20,000            500               3%                       1.9%
  20,000            200               1%                       0.5%
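A minimal sketch in Python of scoring applications with the logit formula above; the coefficients are the rounded values quoted in the text, so the output may differ slightly from the table.

```python
# A sketch of the logit default-probability formula quoted above.
# The coefficients (32.9, -0.0019, 0.0041) are the rounded values from the
# text; in practice they would be re-estimated by maximum likelihood.
import math

def default_probability(income, mortgage):
    """P(default) = 1 / (1 + exp(-(32.9 - 0.0019*income + 0.0041*mortgage)))."""
    z = 32.9 - 0.0019 * income + 0.0041 * mortgage
    return 1.0 / (1.0 + math.exp(-z))

# Score a few hypothetical mortgage applications
for income, mortgage in [(50_000, 16_000), (20_000, 2_000), (35_000, 10_000)]:
    print(f"income={income}, mortgage={mortgage}: "
          f"P(default) = {default_probability(income, mortgage):.1%}")
```

Note that with only six perfectly separated observations, the maximum likelihood estimates are not unique, which underlines the point about data size made below.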

This is, of course, a simplified model, which is far from perfect. It is, in fact, far from good at all. But it does illustrate an important characteristic of credit rating models. A credit rating model is a probability model. As such, it relies heavily on past statistics. Two important factors determine whether such a model has the potential to be a good model. First, the size of the past data: any model that does not have a large data set behind it cannot be a good model. The above model has R² = 1, indicating a very good fit, but the data size is too small for it to be a good predictor. Second, the data must be homogeneous. This condition, however, is not easy to meet. Credit behavior can be quite different under different economic conditions, and data collected during an economic boom can hardly be considered relevant for predicting default probability in a recession.

There are many international rating agencies, such as Standard & Poor's, Moody's, Fitch Ratings, A. M. Best, etc., that provide rating information on companies and government debt. They provide a useful service to investors at large. However, these credit rating agencies have received widespread criticism over the past decade, and more so during the current sub-prime crisis, for their lack of transparency. Many critics claim that many CDOs received their triple-A ratings too easily. Similarly, the history of credit default swaps is too short for their statistics to be meaningful (i.e., homogeneous). Indeed, the role played by rating agencies can be said to have been critical, or even instrumental, in the development of the sub-prime crisis. Without CDOs (collateralized debt obligations), CMOs (collateralized mortgage obligations) and CDSs (credit default swaps), banks could lend only a limited amount of money to sub-prime customers, subject to the size of their capital base. With the securitization of such mortgages into CDOs and CMOs (helped by good ratings and swaps), banks could lend much more to sub-prime customers, and a potential minor credit crunch was turned into a world-wide crisis. The rating agencies were also criticized during the 1997 Asian financial crisis, when a Thai company received their top rating just a few months before its collapse. For investors to continue to have faith in these ratings in future, it is important that the rating methods be transparent, showing the size of the data on which each rating is based and how far back such data go.

Conclusion: A simple credit rating model, either a logit model (where errors are assumed to be logistically distributed) or a probit model (where errors are assumed to be normally distributed), developed in-house, can often help a company in its decision-making, with the full knowledge that such ratings are not foolproof. While publicly available ratings are very useful in helping one's decision-making, we should not develop a blind faith in them. There has been sufficient evidence that caution is warranted when going through uncharted waters.

To use:
- in-house developed rating models (logit or probit models); the degree of confidence in such models depends on the size and homogeneity of the data
- publicly available ratings for securities issued by long-established industries
- publicly available ratings for familiar financial instruments

Not to use (or at least be wary of):
- publicly available ratings for securities issued by new industries
- publicly available ratings for securities issued in newly developed economies
- publicly available ratings for unfamiliar financial instruments
- publicly available ratings from new/unknown rating agencies

Asset and Liability Model

Relative to liability models, asset models are much easier. Generally speaking, once you have the necessary market-consistent assumptions such as the risk-free interest rate, stock return volatility, credit rating, etc., your model will come up with a number, i.e., the value of the asset. Asset modeling is usually a valuation model, which is one-dimensional. Most of the discussion in the previous section on derivative models relates to asset models. Liability models, on the other hand, are multi-dimensional. The model needs to tell you several things at one time, as will be demonstrated below.

Whereas you can spread open your Wall Street Journal or turn on your Bloomberg and find all the market data relating to your assets, market-consistent liability data are generally not available. Most of us use comparable asset data in our liability modeling, which is, strictly speaking, incorrect. For instance, my company issued a bond some time ago. It is due (a total of $100 million) in exactly one year's time. The risk-free interest rate for one year is 5%. I therefore value my liability at $95.23 million. The only asset my company has is an old building, which the realtor says is worth $80 million. Technically, my company is insolvent. Somehow, the market hears the rumor that my company is unable to repay the debt, and the bond is downgraded to junk status with a very high probability of default. The market assumes a 30% default probability, and the bond's market value drops to $70 million. The same bond, which as a liability has a value of $95.23 million, now has a market value of $70 million as an asset. This is what happened to some bonds in the Hong Kong market during the Asian financial crisis in 1997.

Furthermore, the term "market-consistent assumptions" is a misnomer. It is my guess that ten or fifteen years from now, people will stop talking about market-consistent assumptions. Why? Because under the principle of market-consistent assumptions there can be only one risk-free yield curve in one market, and everybody in that market must use the same yield curve for their valuation. It therefore follows that some authority (say, a regulatory or accounting authority) can or should declare the yield curve (a set of risk-free rates for various terms) for everybody to follow, thereby reducing the whole thing to a rule-based approach as opposed to the principle-based approach that is being advocated these days. I really cannot imagine that happening. But it is politically incorrect to do so under the current environment. Hence a shelf life of 15 years, in my estimation.

The second dimension of liability modeling is that an asset can stand alone without accompanying liabilities, whereas a liability always has accompanying assets, and modeling liabilities must take into account the nature of those accompanying assets.

In a way, a liability is not a liability until it is due. If you borrow money from your friend to open a restaurant, you can run the restaurant as long as you like, so long as you have positive cashflow and your friend does not ask for the money back. This is also why many defined-benefit pension schemes in many developed countries do not have their past-service benefits fully funded: those benefits are many years away from being due. This dimension of liability modeling requires one to consider whether there is sufficient cashflow to meet the liability that is about to become due. If a company does not have enough cashflow to meet the liability, it is technically insolvent regardless of the value of the liability. One of the main purposes of liability modeling is to avoid insolvency, which makes the due date an important consideration in liability modeling. When the due date changes unexpectedly, the nature of the liabilities changes, and very often the nature of the accompanying assets changes too. If a bank suddenly withdraws its lending facilities to a company, its liabilities become immediately due and their nature changes significantly. Similarly, if an otherwise sound and solid bank suddenly faces a bank run, the due dates and the nature of its liabilities change. The consequence of such untoward changes in liabilities is often a forced sale of the accompanying assets at greatly reduced prices. The cross-defaults and chain reactions in the recent sub-prime crisis (particularly those that led to the fire sale of Bear Stearns and the wind-up of Carlyle Capital Corp.) highlight the importance of observing the changing nature of the liabilities in liability modeling.

A third dimension of liability modeling is to be wary of the worst possible scenario. While conventional financial reporting requires us to report only the average of the liabilities, and modern capital adequacy tests may require us to be ready for the 95th or 99th percentile, such requirements are not necessarily adequate. The Financial Times recently reported that the turmoil reveals the inadequacy of Basel II. In fact, actuaries have known about this dimension of liability for a long time. For catastrophe insurance, no matter how small the probability of occurrence, reinsurance is always sought so that the maximum loss is containable in the event the unthinkable happens. The lesson one learns from this is that one can model the average (for the premium calculation), or model the conditional tail expectation (for the required capital), but before one takes on the risk one should always first look at the maximum possible loss, regardless of the probability of occurrence. Mathematical models may help a lot when dealing with natural events such as earthquakes, hurricanes, etc., by accumulating statistics over a long period of time, but they are of much less use when dealing with financial markets, where the most fundamental underlying variable is human behavior, which mutates against any well-developed model. In such cases, we need to look at the absolute maximum without considering the probability. Of course, this does not mean we must not take any risk where the maximum loss is considered too high, but we must be fully aware of the consequences. This is what risk management is all about.
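To make the distinction concrete, here is a sketch in Python of the three quantities contrasted above, using a purely illustrative simulated loss distribution.

```python
# A sketch contrasting the three measures discussed above: the mean loss
# (pricing), the conditional tail expectation (required capital) and the
# maximum loss (what to inspect before taking on the risk at all).
# The simulated losses are purely illustrative.
import numpy as np

def loss_measures(losses, level=0.99):
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, level)        # value-at-risk at `level`
    tail = losses[losses >= var]            # the worst (1 - level) of outcomes
    return {
        'mean loss': losses.mean(),
        f'CTE {level:.0%}': tail.mean(),    # average loss given we are in the tail
        'maximum loss': losses.max(),
    }

rng = np.random.default_rng(42)
simulated = rng.lognormal(mean=0.0, sigma=1.2, size=100_000)
print(loss_measures(simulated))
```

Of course, the maximum of a simulated sample is only as good as the model that generated it; as argued above, for financial markets the true maximum may lie well outside anything the model can produce.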

The fourth dimension is regulatory requirements, which may, and indeed often do, stipulate certain rules that can appear quite arbitrary from the point of view of those who practice asset-liability management. In the context of liability modeling, required capital (as used in the usual financial reporting) is just another form of liability and should be treated as such. The difference is just semantics.

The fifth dimension is the confidence level of the public, which we do not even know how to measure. What we do know is that when confidence sags, the market's behavior can change unpredictably: assets shrink in size and liabilities balloon beyond all expectations. This happens, for instance, when there is a bank run. Nobody yet knows how to model this dimension of liabilities. All that we know is that it does not happen very often, but when it happens its impact can be devastating.

Another difficulty encountered in liability modeling is that there is no publicly recognized methodology. There must be hundreds and possibly thousands of different models around the world, and there is no standard by which one can judge which is to be trusted. It is worth noting that the impact of the models used in pricing remains throughout the entire duration of the insurance (or any other financial) contract, while models used in liability valuation have an impact only at one particular moment in time. Liability models can change from time to time; only the latest results count, even though the models used in prior years might have been inappropriate. This is not the case for pricing models, which have a lasting impact. One therefore needs to be extremely careful in deciding on pricing models.

When dealing with contingent liabilities arising from guarantees, liability modeling is a necessity. When the guarantee is a short-term one, conventional modeling may be sufficient. When the guarantee is a long-term one, history tells us that conventional modeling, such as that commonly taught in textbooks, is no longer adequate. As of now, it appears that the only safe long-term guarantee is the mortality guarantee under life insurance policies (i.e., with geographical diversification). Nothing else has stood the test of time. Let us explore two examples here: the equity-linked guarantee and the interest guarantee.

Guaranteed equity-linked life insurance is not a new subject; the first book on it was written by an actuary some thirty years ago. This product has recently gained favor in Hong Kong. The pricing of this kind of product is fairly straightforward and most actuaries know how to do it. The investment execution, the delta hedging (buying or selling a portion of the equity portfolio to keep it delta-neutral), is however tricky. Let us roll back 20 years to October 1987. Then, people who understood the Black-Scholes formula programmed this delta-hedging buying and selling into their computers.

One day, it appeared, everybody was hit by the selling signal at the same time, and the US stock market dropped 22% in one day. Some companies I know of lost up to US$1 billion during that week. If you issued a guarantee on Japanese stocks at the end of 1989, when the Nikkei stood at 39,000, you are still in deep water today (i.e., after 18 years) with the Nikkei at 13,000.

For a while after that, people focused on the interest rate guarantee as we entered a declining interest rate environment. By 1997-98, prevailing interest rates had dropped to around 2%-3%, and interest rate guarantees of 7% or 8% provided in the early 1990s became a heavy burden for many insurance companies. By today's IFRS, many of these companies in Asia (perhaps in the whole world) would have been insolvent. There would have been a global catastrophe had those companies been forced into liquidation. Then the second dimension mentioned above came into play: almost all of them had positive cash flow, averting the need for forced liquidation. The regulatory requirements were relaxed (the fourth dimension mentioned above) so that such companies could stay in business. Today, many of them have a strong and thriving business.

Conclusion: While asset modeling is one-dimensional, there are at least five dimensions associated with liability modeling: market-consistent data, positive cashflow and due dates, the worst possible scenario, regulatory requirements and confidence crises. Pricing models (actually it is more the product design than the pricing) are far more important than valuation models. Unpleasant surprises often come from faulty product design or process management (such as very high leveraging) rather than from liability valuation. Modeling for long-term liabilities is unreliable.

To use:
- Regulatory requirements, whether considered reasonable or not, must be observed
- Always model cash flow under different scenarios
- Always consider the worst-case scenario
- Use stress testing or back testing

Not to use (or at least be wary of):
- Any pricing model without considering the worst-case scenario
- Long-term modeling for contingent liabilities

Macro and Micro Model

When Frederick Taylor introduced time-motion studies in 1881, unit cost became the centrepiece of management science, covering such giant industries as manufacturing, financial services, etc. Every actuary who masters his or her pricing work today knows how to derive the unit cost, whether expressed per thousand sum assured, per policy or as a percentage of premium. The twentieth century was the century of micro-modeling.

The introduction of the micro-computer about one hundred years later, together with the ubiquitous internet that became widely popular around a decade after that, launched the digital age as we know it today. This will be the beginning of the age of macro-modeling. Each unit in the macro model is no longer an independent unit. Take, for instance, the mobile phone in your pocket. The mobile operator did not incur a unit cost when it acquired you as its customer. In fact, once the necessary infrastructure was set up, acquiring a thousand or even a million customers incurs very little additional cost. There is no matching between the income and the cost in the normal business sense (we are not referring to matching in the accounting sense). In any such modeling, income and expenses have to be modeled separately, as they are not highly correlated. Modern IT systems are approaching the stage where micro-modeling our business is no longer appropriate. On the other hand, the market behavior of individuals, driven by mass psychology and ultra-fast information dissemination, can be highly correlated, as witnessed during the recent collapse of the financial markets due to the sub-prime crisis. Under such circumstances, the critically important assumption of independence of individual variables in actuarial modeling could be seriously violated, and modelers must be made aware of this possibility.

Macro-modeling is a relatively new concept, and there is very little actuarial literature on the subject except one paper that appeared in the journal of the Australian Institute of Actuaries some years ago. We will see more papers on this subject in the years to come. Let us look at one familiar example. In our pricing, we often incorporate an assumption called cost per policy, which is accompanied by an inflation factor. We then project our cash flow thirty or fifty years into the future using this assumption. How many of us ever attempt to verify whether such assumptions, made by us or by our predecessors in the company, were right or even close to reality? The answer is probably none. If we had an audience of 10,000 people today, the answer would probably still be none. We actuaries like to make long-term projections, but very few of us care afterwards whether such projections were right or not.

In fact, this does not apply to actuaries alone. It is a universal phenomenon, true of all scientists who make long-term projections. I had the opportunity of having a look at the alternative assumption: cost as a percentage of premiums. The data for US life insurance companies go back almost 100 years, and I found that the percentage stayed at around 20% throughout those 100 years. This can only be explained by a macro phenomenon: if we know a certain cost content is acceptable to consumers, there is no incentive to reduce it, and we must design our products with sufficient cost incentives to keep our salesmen alive; on the other hand, our cost content must not be so high as to drive away our customers. If you check the records of your company (to the extent they are available), you will find that cost as a percentage of premiums is a far more stable assumption than the cost-per-policy assumption. The underlying truth of this example is that the mutation of human behavior can be observed at the macro level, but not at the micro level.
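A sketch in Python of how the two expense assumptions diverge over a long projection horizon; all figures are illustrative, not taken from the article or from any company's records.

```python
# A sketch of the two expense assumptions discussed above over a long
# projection. All figures are illustrative only.
cost_per_policy = 50.0   # assumed year-1 maintenance cost per policy
inflation = 0.03         # assumed annual expense inflation
premium = 1_000.0        # assumed level annual premium per policy
cost_ratio = 0.20        # assumed cost as a percentage of premium

for year in (1, 10, 30, 50):
    projected_unit_cost = cost_per_policy * (1 + inflation) ** (year - 1)
    percent_of_premium_cost = cost_ratio * premium
    print(f"Year {year:2d}: per-policy assumption = {projected_unit_cost:7.2f}, "
          f"% of premium assumption = {percent_of_premium_cost:7.2f}")
```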

Conclusion: As we go deeper and deeper into the digital age, there are bound to be further changes at the macro level that will drastically alter our business models. A good understanding of macro-modeling will definitely improve our competitive edge vis-à-vis other actuaries and other financial professionals. There is something to be learned by the regulatory authorities as well. Maybe our future financial regulations should include something at the macro level: instead of just a trigger for individual companies (such as a capital adequacy trigger), there should be a trigger for the whole industry as well.


References:
- Random Walk in Stock Market Prices, Eugene Fama, Journal of Business, 1965
- The Pricing of Options and Corporate Liabilities, Fischer Black and Myron Scholes, Journal of Political Economy, 1973
- Pricing and Investment Strategies for Guaranteed Equity-Linked Life Insurance, Michael Brennan and Eduardo Schwartz, University of Pennsylvania, 1979
- Guaranteed Investment Contracts, Kenneth Walker, Richard D. Irwin, 1989
- Market Volatility, Robert Shiller, The MIT Press, 1989
- Continuous Univariate Distributions, Norman Johnson, Samuel Kotz and N. Balakrishnan, Wiley, 1994
- Option Volatility & Pricing, Sheldon Natenberg, McGraw-Hill, 1994
- Interest Rate Spreads & Market Analysis, Citicorp, 1994
- Beyond Value at Risk, Kevin Dowd, Wiley, 1998
- Interest Rate Modeling, Jessica James and Nick Webber, Wiley, 2000
- When Genius Failed, Roger Lowenstein, Random House, 2000
- Inventing Money, Nicholas Dunbar, Wiley, 2001
- Interest Rate Models - Theory and Practice, Damiano Brigo and Fabio Mercurio, Springer, 2001
- Building and Using Dynamic Interest Rate Models, Ken Kortanek and Vladimir Medvedev, Wiley, 2001
- The Fundamentals of Risk Measurement, Chris Marrison, McGraw-Hill, 2002
- Fooled by Randomness, Nassim N. Taleb, Random House, 2005
- Credit Risk Modeling using Excel and VBA, Gunter Loffler and Peter Posch, Wiley, 2007
- The Black Swan, Nassim N. Taleb, Random House, 2007
