
1 Probability Distributions and Insurance Applications

1.1 Introduction
This book is about risk theory, with particular emphasis on the two major
topics in the field, namely risk models and ruin theory. Risk theory provides
a mathematical basis for the study of general insurance risks, and so it is
appropriate to start with a brief description of the nature of general insurance
risks. The term general insurance essentially applies to an insurance risk
that is not a life insurance or health insurance risk, and so the term covers
familiar forms of personal insurance such as motor vehicle insurance, home
and contents insurance, and travel insurance.
Let us focus on how a motor vehicle insurance policy typically operates
from an insurer’s point of view. Under such a policy, the insured party pays
an amount of money (the premium) to the insurer at the start of the period of
insurance cover, which we assume to be one year. The insured party will make
a claim under the insurance policy each time the insured party has an accident
during the year which results in damage to the motor vehicle, and hence
requires repair costs. There are two sources of uncertainty for the insurer: how
many claims will the insured party make, and, if claims are made, what will the
amounts of these claims be? Thus, if the insurer were to build a probabilistic
model to represent its claims outgo under the policy, the model would require
a component that modelled the number of claims and another that modelled
the amounts of these claims. This is a general framework that applies to
modelling claims outgo under any general insurance policy, not just motor
vehicle insurance, and we will describe it in greater detail in later chapters.
In this chapter we start with a review of distributions, most of which are
commonly used to model either the number of claims arising from an insurance
risk or the amounts of individual claims. We then describe mixed distributions before introducing two simple forms of reinsurance arrangement and describing these in mathematical terms. We close the chapter by considering a problem that is important in the context of risk models, namely finding the distribution of a sum of independent and identically distributed random variables.

1.2 Important Discrete Distributions


1.2.1 The Poisson Distribution
When a random variable N has a Poisson distribution with parameter λ > 0,
its probability function is given by

$$\Pr(N = x) = e^{-\lambda}\,\frac{\lambda^x}{x!}$$
for x = 0, 1, 2, . . . . The moment generating function is

$$M_N(t) = \sum_{x=0}^{\infty} e^{tx}\,e^{-\lambda}\,\frac{\lambda^x}{x!} = e^{-\lambda}\sum_{x=0}^{\infty}\frac{(\lambda e^t)^x}{x!} = \exp\{\lambda(e^t - 1)\} \qquad (1.1)$$
and the probability generating function is

$$P_N(r) = \sum_{x=0}^{\infty} r^x\,e^{-\lambda}\,\frac{\lambda^x}{x!} = \exp\{\lambda(r - 1)\}.$$
The moments of N can be found from the moment generating function. For
example,
$$M_N'(t) = \lambda e^t M_N(t)$$

and

$$M_N''(t) = \lambda e^t M_N(t) + (\lambda e^t)^2 M_N(t),$$

from which it follows that $E[N] = \lambda$ and $E[N^2] = \lambda + \lambda^2$, so that $V[N] = \lambda$.
We use the notation P(λ) to denote a Poisson distribution with parameter λ.
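As a quick numerical illustration (a sketch with an arbitrary choice of λ, not part of the formal development), the probability function can be evaluated directly, and truncated sums recover E[N] = λ and V[N] = λ:

```python
import math

def poisson_pmf(x, lam):
    # Pr(N = x) = e^(-lambda) * lambda^x / x!
    return math.exp(-lam) * lam**x / math.factorial(x)

lam = 2.5  # illustrative value
xs = range(60)  # truncate far enough into the tail for the moments to converge
mean = sum(x * poisson_pmf(x, lam) for x in xs)
second_moment = sum(x**2 * poisson_pmf(x, lam) for x in xs)
print(mean)                     # approximately 2.5
print(second_moment - mean**2)  # approximately 2.5
```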

1.2.2 The Binomial Distribution


When a random variable N has a binomial distribution with parameters n
and q, where n is a positive integer and 0 < q < 1, its probability function
is given by

$$\Pr(N = x) = \binom{n}{x} q^x (1 - q)^{n-x}$$
for x = 0, 1, 2, . . . , n. The moment generating function is

$$M_N(t) = \sum_{x=0}^{n} e^{tx}\binom{n}{x} q^x (1-q)^{n-x} = \sum_{x=0}^{n}\binom{n}{x}(q e^t)^x (1-q)^{n-x} = \left(q e^t + 1 - q\right)^n,$$

and the probability generating function is

$$P_N(r) = (qr + 1 - q)^n.$$

As

$$M_N'(t) = n\left(q e^t + 1 - q\right)^{n-1} q e^t$$

and

$$M_N''(t) = n(n-1)\left(q e^t + 1 - q\right)^{n-2}\left(q e^t\right)^2 + n\left(q e^t + 1 - q\right)^{n-1} q e^t,$$

it follows that $E[N] = nq$, $E[N^2] = n(n-1)q^2 + nq$ and $V[N] = nq(1-q)$.
We use the notation B(n, q) to denote a binomial distribution with parame-
ters n and q.

1.2.3 The Negative Binomial Distribution


When a random variable N has a negative binomial distribution with parameters k > 0 and p, where 0 < p < 1, its probability function is given by

$$\Pr(N = x) = \binom{k+x-1}{x} p^k q^x$$
for x = 0, 1, 2, . . . , where q = 1 − p. When k is an integer, calculation
of the probability function is straightforward as the probability function can
be expressed in terms of factorials. An alternative method of calculating the
probability function, regardless of whether k is an integer, is recursively as
$$\Pr(N = x+1) = \frac{k+x}{x+1}\,q\Pr(N = x)$$

for x = 0, 1, 2, . . . , with starting value $\Pr(N = 0) = p^k$.
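The recursion translates directly into code. A minimal Python sketch (the function name and parameter values are illustrative only):

```python
def negbin_pmf(k, p, max_x):
    """Pr(N = 0), ..., Pr(N = max_x) for NB(k, p), built from
    Pr(N = 0) = p**k via Pr(N = x+1) = (k + x)/(x + 1) * q * Pr(N = x)."""
    q = 1.0 - p
    probs = [p**k]
    for x in range(max_x):
        probs.append((k + x) / (x + 1) * q * probs[-1])
    return probs

probs = negbin_pmf(k=1.5, p=0.4, max_x=200)       # k need not be an integer
print(sum(probs))                                  # close to 1
print(sum(x * pr for x, pr in enumerate(probs)))   # close to kq/p = 2.25
```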
The moment generating function can be found by making use of the identity

$$\sum_{x=0}^{\infty} \Pr(N = x) = 1. \qquad (1.2)$$

From this it follows that

$$\sum_{x=0}^{\infty}\binom{k+x-1}{x}(1 - q e^t)^k (q e^t)^x = 1,$$

provided that $0 < q e^t < 1$. Hence

$$M_N(t) = \sum_{x=0}^{\infty} e^{tx}\binom{k+x-1}{x} p^k q^x = \frac{p^k}{(1 - q e^t)^k}\sum_{x=0}^{\infty}\binom{k+x-1}{x}(1 - q e^t)^k (q e^t)^x = \left(\frac{p}{1 - q e^t}\right)^k,$$

provided that $0 < q e^t < 1$, or, equivalently, $t < -\log q$. Similarly, the probability generating function is

$$P_N(r) = \left(\frac{p}{1 - qr}\right)^k.$$
Moments of this distribution can be found by differentiating the moment generating function, and the mean and variance are given by $E[N] = kq/p$ and $V[N] = kq/p^2$.
Equality (1.2) trivially gives

$$\sum_{x=1}^{\infty}\binom{k+x-1}{x} p^k q^x = 1 - p^k, \qquad (1.3)$$

a result we shall use in Section 4.5.1.


We use the notation NB(k, p) to denote a negative binomial distribution with
parameters k and p.

1.2.4 The Geometric Distribution


The geometric distribution is a special case of the negative binomial distribu-
tion. When the negative binomial parameter k is 1, the distribution is called a
geometric distribution with parameter p and the probability function is

$$\Pr(N = x) = p q^x$$

for x = 0, 1, 2, . . . . From above, it follows that $E[N] = q/p$, $V[N] = q/p^2$ and

$$M_N(t) = \frac{p}{1 - q e^t}$$
for t < − log q.
This distribution plays an important role in ruin theory, as will be seen in
Chapter 7.


1.3 Important Continuous Distributions


1.3.1 The Gamma Distribution
When a random variable X has a gamma distribution with parameters α > 0
and λ > 0, its density function is given by

$$f(x) = \frac{\lambda^\alpha x^{\alpha-1} e^{-\lambda x}}{\Gamma(\alpha)}$$

for x > 0, where $\Gamma(\alpha)$ is the gamma function, defined as

$$\Gamma(\alpha) = \int_0^\infty x^{\alpha-1} e^{-x}\,dx.$$

In the special case when α is an integer the distribution is also known as an Erlang distribution, and repeated integration by parts gives the distribution function as

$$F(x) = 1 - e^{-\lambda x}\sum_{j=0}^{\alpha-1}\frac{(\lambda x)^j}{j!}$$

for x ≥ 0. The moments and moment generating function of the gamma distribution can be found by noting that

$$\int_0^\infty f(x)\,dx = 1$$

yields

$$\int_0^\infty x^{\alpha-1} e^{-\lambda x}\,dx = \frac{\Gamma(\alpha)}{\lambda^\alpha}. \qquad (1.4)$$
The nth moment is

$$E[X^n] = \int_0^\infty x^n\,\frac{\lambda^\alpha x^{\alpha-1} e^{-\lambda x}}{\Gamma(\alpha)}\,dx = \frac{\lambda^\alpha}{\Gamma(\alpha)}\int_0^\infty x^{n+\alpha-1} e^{-\lambda x}\,dx,$$

and from identity (1.4) it follows that

$$E[X^n] = \frac{\lambda^\alpha}{\Gamma(\alpha)}\,\frac{\Gamma(\alpha+n)}{\lambda^{\alpha+n}} = \frac{\Gamma(\alpha+n)}{\Gamma(\alpha)\,\lambda^n}. \qquad (1.5)$$

In particular, $E[X] = \alpha/\lambda$ and $E[X^2] = \alpha(\alpha+1)/\lambda^2$, so that $V[X] = \alpha/\lambda^2$.
We can find the moment generating function in a similar fashion. As

$$M_X(t) = \int_0^\infty e^{tx}\,\frac{\lambda^\alpha x^{\alpha-1} e^{-\lambda x}}{\Gamma(\alpha)}\,dx = \frac{\lambda^\alpha}{\Gamma(\alpha)}\int_0^\infty x^{\alpha-1} e^{-(\lambda-t)x}\,dx, \qquad (1.6)$$

application of identity (1.4) gives

$$M_X(t) = \frac{\lambda^\alpha}{\Gamma(\alpha)}\,\frac{\Gamma(\alpha)}{(\lambda-t)^\alpha} = \left(\frac{\lambda}{\lambda-t}\right)^\alpha. \qquad (1.7)$$

Note that in identity (1.4), λ > 0. Hence, in order to apply (1.4) to (1.6) we require that λ − t > 0, so that the moment generating function exists when t < λ.
A result that will be used in Section 4.8.2 is that the coefficient of skewness of X, which we denote by Sk[X], is $2/\sqrt{\alpha}$. This follows from the definition of the coefficient of skewness, namely third central moment divided by standard deviation cubed, and the fact that the third central moment is

$$E\left[\left(X - \frac{\alpha}{\lambda}\right)^3\right] = E[X^3] - 3\,\frac{\alpha}{\lambda}\,E[X^2] + 2\left(\frac{\alpha}{\lambda}\right)^3 = \frac{\alpha(\alpha+1)(\alpha+2) - 3\alpha^2(\alpha+1) + 2\alpha^3}{\lambda^3} = \frac{2\alpha}{\lambda^3}.$$
We use the notation γ (α, λ) to denote a gamma distribution with parameters
α and λ.
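Equation (1.5) is easy to evaluate with the gamma function in Python's standard library. The sketch below (illustrative parameter values only) recovers the mean, the variance and the coefficient of skewness $2/\sqrt{\alpha}$:

```python
import math

def gamma_moment(n, alpha, lam):
    # E[X^n] = Gamma(alpha + n) / (Gamma(alpha) * lam**n), equation (1.5)
    return math.gamma(alpha + n) / (math.gamma(alpha) * lam**n)

alpha, lam = 3.0, 0.5  # illustrative values
m1, m2, m3 = (gamma_moment(n, alpha, lam) for n in (1, 2, 3))
var = m2 - m1**2
third_central = m3 - 3 * m1 * m2 + 2 * m1**3
print(m1, var)                   # alpha/lam = 6.0 and alpha/lam**2 = 12.0
print(third_central / var**1.5)  # 2/sqrt(alpha) = 1.1547...
```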

1.3.2 The Exponential Distribution


The exponential distribution is a special case of the gamma distribution.
It is just a gamma distribution with parameter α = 1. Hence, the exponential
distribution with parameter λ > 0 has density function

$$f(x) = \lambda e^{-\lambda x}$$

for x > 0, and has distribution function

$$F(x) = 1 - e^{-\lambda x}$$

for x ≥ 0. From equation (1.5), the nth moment of the distribution is

$$E[X^n] = \frac{n!}{\lambda^n},$$

and from equation (1.7) the moment generating function is

$$M_X(t) = \frac{\lambda}{\lambda - t}$$
for t < λ.


1.3.3 The Pareto Distribution


When a random variable X has a Pareto distribution with parameters α > 0
and λ > 0, its density function is given by

$$f(x) = \frac{\alpha\lambda^\alpha}{(\lambda + x)^{\alpha+1}}$$
for x > 0. Integrating this density we find that the distribution function is

$$F(x) = 1 - \left(\frac{\lambda}{\lambda + x}\right)^\alpha$$
for x ≥ 0. Whenever moments of the distribution exist, they can be found from

$$E[X^n] = \int_0^\infty x^n f(x)\,dx$$

by integration by parts. However, they can also be found individually using the following approach. Since the integral of the density function over (0, ∞) equals 1, we have

$$\int_0^\infty \frac{dx}{(\lambda + x)^{\alpha+1}} = \frac{1}{\alpha\lambda^\alpha},$$

an identity which holds provided that α > 0. To find E[X], we can write

$$E[X] = \int_0^\infty x f(x)\,dx = \int_0^\infty (x + \lambda - \lambda) f(x)\,dx = \int_0^\infty (x + \lambda) f(x)\,dx - \lambda,$$

and inserting for f we have

$$E[X] = \int_0^\infty \frac{\alpha\lambda^\alpha}{(\lambda + x)^{\alpha}}\,dx - \lambda.$$

We can evaluate the integral expression by rewriting the integrand in terms of a Pareto density function with parameters α − 1 and λ. Thus,

$$E[X] = \frac{\alpha\lambda}{\alpha - 1}\int_0^\infty \frac{(\alpha - 1)\lambda^{\alpha-1}}{(\lambda + x)^{\alpha}}\,dx - \lambda, \qquad (1.8)$$

and since the integral equals 1,

$$E[X] = \frac{\alpha\lambda}{\alpha - 1} - \lambda = \frac{\lambda}{\alpha - 1}.$$
It is important to note that the integrand in equation (1.8) is a Pareto density function only if α > 1, and hence E[X] exists only for α > 1. Similarly, we can find $E[X^2]$ from

$$E[X^2] = \int_0^\infty \left((x + \lambda)^2 - 2\lambda x - \lambda^2\right) f(x)\,dx = \int_0^\infty (x + \lambda)^2 f(x)\,dx - 2\lambda E[X] - \lambda^2.$$

Proceeding as in the case of E[X] we can show that

$$E[X^2] = \frac{2\lambda^2}{(\alpha - 1)(\alpha - 2)}$$

provided that α > 2, and hence that

$$V[X] = \frac{\alpha\lambda^2}{(\alpha - 1)^2(\alpha - 2)}.$$
An alternative method of finding moments of the Pareto distribution is given
in Exercise 5 at the end of this chapter.
We use the notation Pa(α, λ) to denote a Pareto distribution with parameters
α and λ.
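The closed form of F also makes the Pareto distribution convenient to simulate: solving u = F(x) gives $x = \lambda\left((1-u)^{-1/\alpha} - 1\right)$ for u uniform on (0, 1). A sketch (illustrative values, with α > 1 so that the mean exists):

```python
import random

def pareto_sample(alpha, lam, rng):
    # invert F(x) = 1 - (lam/(lam + x))**alpha
    u = rng.random()
    return lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

rng = random.Random(1)
alpha, lam = 3.0, 200.0  # illustrative values
xs = [pareto_sample(alpha, lam, rng) for _ in range(200_000)]
print(sum(xs) / len(xs))  # approximately lam/(alpha - 1) = 100
```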

1.3.4 The Normal Distribution


When a random variable X has a normal distribution with parameters μ and $\sigma^2$, its density function is given by

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(x - \mu)^2}{2\sigma^2}\right\}$$

for −∞ < x < ∞. We use the notation $N(\mu, \sigma^2)$ to denote a normal distribution with parameters μ and $\sigma^2$.
The standard normal distribution has parameters 0 and 1, and its distribution function is denoted $\Phi$, where

$$\Phi(x) = \int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\exp\left\{-z^2/2\right\}dz.$$

A key relationship is that if $X \sim N(\mu, \sigma^2)$ and if $Z = (X - \mu)/\sigma$, then $Z \sim N(0, 1)$.
The moment generating function is

$$M_X(t) = \exp\left\{\mu t + \tfrac{1}{2}\sigma^2 t^2\right\} \qquad (1.9)$$

from which it can be shown (see Exercise 7) that $E[X] = \mu$ and $V[X] = \sigma^2$.


1.3.5 The Lognormal Distribution


When a random variable X has a lognormal distribution with parameters μ and σ, where −∞ < μ < ∞ and σ > 0, its density function is given by

$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left\{-\frac{(\log x - \mu)^2}{2\sigma^2}\right\}$$
for x > 0. The distribution function can be obtained by integrating the density function as follows:

$$F(x) = \int_0^x \frac{1}{y\sigma\sqrt{2\pi}}\exp\left\{-\frac{(\log y - \mu)^2}{2\sigma^2}\right\}dy,$$

and the substitution z = log y yields

$$F(x) = \int_{-\infty}^{\log x}\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(z - \mu)^2}{2\sigma^2}\right\}dz.$$

As the integrand is the $N(\mu, \sigma^2)$ density function,

$$F(x) = \Phi\left(\frac{\log x - \mu}{\sigma}\right).$$
Thus, probabilities under a lognormal distribution can be calculated from the
standard normal distribution function.
We use the notation LN(μ, σ ) to denote a lognormal distribution with
parameters μ and σ. From the preceding argument it follows that if X ∼ LN(μ, σ), then $\log X \sim N(\mu, \sigma^2)$.
This relationship between normal and lognormal distributions is extremely
useful, particularly in deriving moments. If X ∼ LN(μ, σ) and Y = log X, then

$$E[X^n] = E\left[e^{nY}\right] = M_Y(n) = \exp\left\{\mu n + \tfrac{1}{2}\sigma^2 n^2\right\},$$

where the final equality follows by equation (1.9).
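This moment formula can be checked by simulation, since a lognormal variable can be generated as $e^Y$ with $Y \sim N(\mu, \sigma^2)$. A sketch with arbitrary parameter values:

```python
import math
import random

def lognormal_moment(n, mu, sigma):
    # E[X^n] = M_Y(n) = exp(mu*n + 0.5 * sigma**2 * n**2), with Y = log X
    return math.exp(mu * n + 0.5 * sigma**2 * n**2)

mu, sigma = 0.0, 0.5  # illustrative values
rng = random.Random(42)
xs = [math.exp(rng.gauss(mu, sigma)) for _ in range(100_000)]
print(sum(xs) / len(xs), lognormal_moment(1, mu, sigma))
print(sum(x * x for x in xs) / len(xs), lognormal_moment(2, mu, sigma))
```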

1.4 Mixed Distributions


Many of the distributions encountered in this book are mixed distributions. To
illustrate the idea of a mixed distribution, let X be exponentially distributed
with mean 100, and let the random variable Y be defined by

$$Y = \begin{cases} 0 & \text{if } X < 20 \\ X - 20 & \text{if } 20 \le X < 300 \\ 280 & \text{if } X \ge 300. \end{cases}$$


[Figure 1.1 The distribution function H, plotted as H(x) against x for 0 ≤ x ≤ 300.]

Then

$$\Pr(Y = 0) = \Pr(X < 20) = 1 - e^{-0.2} = 0.1813,$$

and similarly Pr(Y = 280) = 0.0498. Thus, Y has masses of probability at the points 0 and 280. However, in the interval (0, 280), the distribution of Y is continuous, with, for example,

$$\Pr(30 < Y \le 100) = \Pr(50 < X \le 120) = 0.3053.$$
Figure 1.1 shows the distribution function, H, of Y. Note that there are jumps
at 0 and 280, corresponding to the masses of probability at these points.
As the distribution function is differentiable in the interval (0, 280), Y has a
density function in this interval. Letting h denote the density function of Y, the
moments of Y can be found from

$$E[Y^r] = \int_0^{280} x^r h(x)\,dx + 280^r\,\Pr(Y = 280).$$
At certain points in this book, it will be convenient to use Stieltjes integral
notation, so that we do not have to specify whether a distribution is discrete,
continuous or mixed. In this notation, we write the rth moment of Y as

$$E[Y^r] = \int_0^\infty x^r\,dH(x).$$


More generally, if K(x) = Pr(Z ≤ x) is a mixed distribution on [0, ∞), and m is a function, then

$$E[m(Z)] = \int_0^\infty m(x)\,dK(x),$$

where we interpret the integral as

$$\sum_{x_i} m(x_i)\Pr(Z = x_i) + \int m(x)\,k(x)\,dx,$$

where summation is over the points $\{x_i\}$ at which there is a mass of probability, and integration is over the intervals in which K is continuous with density function k.
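This decomposition is straightforward to compute with. The sketch below (an illustration using a simple midpoint rule for the continuous part) evaluates moments of the Y of this section, whose density on (0, 280) is $h(y) = \lambda e^{-\lambda(y+20)}$ with λ = 1/100:

```python
import math

# Y from Section 1.4: X ~ exponential with mean 100; Y = 0 for X < 20,
# Y = X - 20 for 20 <= X < 300, and Y = 280 for X >= 300.
lam = 1.0 / 100.0
mass = {0.0: 1.0 - math.exp(-20 * lam), 280.0: math.exp(-300 * lam)}

def density(y):
    # density of Y on the interval (0, 280)
    return lam * math.exp(-lam * (y + 20))

def expect(m, steps=100_000):
    # E[m(Y)] = sum over the probability masses + integral over the density
    total = sum(m(y) * p for y, p in mass.items())
    dy = 280.0 / steps
    total += sum(m((i + 0.5) * dy) * density((i + 0.5) * dy)
                 for i in range(steps)) * dy
    return total

print(expect(lambda y: y))      # E[Y], approximately 76.9
print(expect(lambda y: y * y))  # E[Y^2]
```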

1.5 Insurance Applications


In this section we discuss some functions of random variables. In particular, we
focus on functions that are natural in the context of reinsurance. Throughout
this section we let X denote the amount of a claim, and let X have distribution
function F. Further, we assume that all claim amounts are non-negative
quantities, so that F(x) = 0 for x < 0, and, with the exception of Example 1.7,
we assume that X is a continuous random variable, with density function f .
A reinsurance arrangement is an agreement between an insurer and a
reinsurer under which claims that occur in a fixed period of time (e.g. one year)
are split between the insurer and the reinsurer in an agreed manner. Thus, the
insurer is effectively insuring part of a risk with a reinsurer and, of course, pays
a premium to the reinsurer for this cover. One effect of reinsurance is that it
reduces the variability of claim payments by the insurer.

1.5.1 Proportional Reinsurance


Under a proportional reinsurance arrangement, the insurer pays a fixed pro-
portion, say, a, of each claim that occurs during the period of the reinsurance
arrangement. The remaining proportion, 1 − a, of each claim is paid by the
reinsurer.
Let Y denote the part of a claim paid by the insurer under this proportional
reinsurance arrangement, and let Z denote the part paid by the reinsurer.
In terms of random variables, Y = aX and Z = (1 − a)X, and trivially
Y + Z = X. Thus, the random variables Y and Z are both scale transformations
of the random variable X. The distribution function of Y is given by

$$\Pr(Y \le x) = \Pr(aX \le x) = \Pr(X \le x/a) = F(x/a),$$

and the density function is $\frac{1}{a}\,f(x/a)$.

Example 1.1 Let X ∼ γ (α, λ). What is the distribution of aX?


Solution 1.1 As

$$f(x) = \frac{\lambda^\alpha x^{\alpha-1} e^{-\lambda x}}{\Gamma(\alpha)},$$

it follows that the density function of aX is

$$\frac{\lambda^\alpha x^{\alpha-1} e^{-\lambda x/a}}{a^\alpha\,\Gamma(\alpha)}.$$

Thus, the distribution of aX is γ(α, λ/a).
Example 1.2 Let X ∼ LN(μ, σ ). What is the distribution of aX?
Solution 1.2 As

$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left\{-\frac{(\log x - \mu)^2}{2\sigma^2}\right\},$$

it follows that the density function of aX is

$$\frac{1}{x\sigma\sqrt{2\pi}}\exp\left\{-\frac{(\log x - \log a - \mu)^2}{2\sigma^2}\right\}.$$

Thus, the distribution of aX is LN(μ + log a, σ).
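Example 1.2 can be verified by simulation: log(aX) should have mean μ + log a and standard deviation σ. A sketch with arbitrary values:

```python
import math
import random

mu, sigma, a = 1.0, 0.4, 0.7  # illustrative values
rng = random.Random(0)
logs = [math.log(a * math.exp(rng.gauss(mu, sigma))) for _ in range(100_000)]
mean = sum(logs) / len(logs)
sd = (sum((v - mean) ** 2 for v in logs) / len(logs)) ** 0.5
print(mean, mu + math.log(a))  # both approximately 0.643
print(sd, sigma)               # both approximately 0.4
```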

1.5.2 Excess of Loss Reinsurance


Under an excess of loss reinsurance arrangement, a claim is shared between
the insurer and the reinsurer only if the claim exceeds a fixed amount called
the retention level. Otherwise, the insurer pays the claim in full. Let M denote
the retention level, and let Y and Z denote the amounts paid by the insurer and
the reinsurer respectively under this reinsurance arrangement. Mathematically,
this arrangement can be represented as the insurer pays Y = min(X, M) and
the reinsurer pays Z = max(0, X − M), with Y + Z = X.

The Insurer’s Position


Let $F_Y$ be the distribution function of Y. Then it follows from the definition of Y that

$$F_Y(x) = \begin{cases} F(x) & \text{for } x < M \\ 1 & \text{for } x \ge M. \end{cases}$$

Thus, the distribution of Y is mixed, with a density function f (x) for 0 <
x < M, and a mass of probability at M, with Pr(Y = M) = 1 − F(M).
As Y is a function of X, the moments of Y can be calculated from

$$E[Y^n] = \int_0^\infty \left(\min(x, M)\right)^n f(x)\,dx,$$

and this integral can be split into two parts since min(x, M) equals x for 0 ≤ x < M and equals M for x ≥ M. Hence

$$E[Y^n] = \int_0^M x^n f(x)\,dx + M^n\int_M^\infty f(x)\,dx = \int_0^M x^n f(x)\,dx + M^n\left(1 - F(M)\right). \qquad (1.10)$$
In particular,

$$E[Y] = \int_0^M x f(x)\,dx + M\left(1 - F(M)\right),$$

so that

$$\frac{d}{dM}\,E[Y] = 1 - F(M) > 0.$$

Thus, as a function of M, E[Y] increases from 0 when M = 0 to E[X] as M → ∞.
Example 1.3 Let $F(x) = 1 - e^{-\lambda x}$, x ≥ 0. Find E[Y].

Solution 1.3 We have

$$E[Y] = \int_0^M x\lambda e^{-\lambda x}\,dx + M e^{-\lambda M},$$

and integration by parts yields

$$E[Y] = \frac{1}{\lambda}\left(1 - e^{-\lambda M}\right).$$
Example 1.4 Let X ∼ LN(μ, σ). Find $E[Y^n]$.

Solution 1.4 Inserting the lognormal density function into the integral in equation (1.10) we get

$$E[Y^n] = \int_0^M x^n\,\frac{1}{x\sigma\sqrt{2\pi}}\exp\left\{-\frac{(\log x - \mu)^2}{2\sigma^2}\right\}dx + M^n\left(1 - F(M)\right). \qquad (1.11)$$
To evaluate this, we consider separately each term on the right-hand side of equation (1.11). Let

$$I = \int_0^M x^n\,\frac{1}{x\sigma\sqrt{2\pi}}\exp\left\{-\frac{(\log x - \mu)^2}{2\sigma^2}\right\}dx.$$

To deal with an integral of this type, there is a standard substitution, namely y = log x. This gives

$$I = \int_{-\infty}^{\log M} e^{yn}\,\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{(y - \mu)^2}{2\sigma^2}\right\}dy.$$

The technique in evaluating this integral is to write the integrand in terms of a normal density function (different to the $N(\mu, \sigma^2)$ density function). To achieve this we apply the technique of "completing the square" in the exponent, as follows:

$$yn - \frac{(y - \mu)^2}{2\sigma^2} = \frac{-1}{2\sigma^2}\left[(y - \mu)^2 - 2\sigma^2 yn\right] = \frac{-1}{2\sigma^2}\left[y^2 - 2\mu y + \mu^2 - 2\sigma^2 yn\right] = \frac{-1}{2\sigma^2}\left[y^2 - 2y(\mu + \sigma^2 n) + \mu^2\right].$$

Noting that the terms inside the square brackets would give the square of $y - (\mu + \sigma^2 n)$ if the final term were $(\mu + \sigma^2 n)^2$ instead of $\mu^2$, we can write the exponent as

$$\frac{-1}{2\sigma^2}\left[\left(y - (\mu + \sigma^2 n)\right)^2 - (\mu + \sigma^2 n)^2 + \mu^2\right] = \frac{-1}{2\sigma^2}\left[\left(y - (\mu + \sigma^2 n)\right)^2 - 2\mu\sigma^2 n - \sigma^4 n^2\right] = \mu n + \tfrac{1}{2}\sigma^2 n^2 - \frac{1}{2\sigma^2}\left(y - (\mu + \sigma^2 n)\right)^2.$$
Hence

$$I = \exp\left\{\mu n + \tfrac{1}{2}\sigma^2 n^2\right\}\int_{-\infty}^{\log M}\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{1}{2\sigma^2}\left(y - (\mu + \sigma^2 n)\right)^2\right\}dy,$$

and as the integrand is the $N(\mu + \sigma^2 n, \sigma^2)$ density function,

$$I = \exp\left\{\mu n + \tfrac{1}{2}\sigma^2 n^2\right\}\Phi\left(\frac{\log M - \mu - \sigma^2 n}{\sigma}\right).$$

Finally, using the relationship between normal and lognormal distributions,

$$1 - F(M) = 1 - \Phi\left(\frac{\log M - \mu}{\sigma}\right),$$

so that

$$E[Y^n] = \exp\left\{\mu n + \tfrac{1}{2}\sigma^2 n^2\right\}\Phi\left(\frac{\log M - \mu - \sigma^2 n}{\sigma}\right) + M^n\left(1 - \Phi\left(\frac{\log M - \mu}{\sigma}\right)\right).$$
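The result translates directly into code. A minimal sketch (function names and parameter values are illustrative; Φ is computed from the error function):

```python
import math

def Phi(x):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def insurer_moment(n, mu, sigma, M):
    # E[Y^n] for Y = min(X, M) with X ~ LN(mu, sigma), as in Example 1.4
    a = math.exp(mu * n + 0.5 * sigma**2 * n**2)
    a *= Phi((math.log(M) - mu - sigma**2 * n) / sigma)
    b = M**n * (1.0 - Phi((math.log(M) - mu) / sigma))
    return a + b

mu, sigma, M = 4.0, 0.8, 100.0  # illustrative values
print(insurer_moment(1, mu, sigma, M))  # insurer's expected payment per claim
print(insurer_moment(2, mu, sigma, M))  # second moment, e.g. for a variance
```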

The Reinsurer’s Position


From the definition of Z it follows that Z takes the value zero if X ≤ M, and
takes the value X − M if X > M. Hence, if FZ denotes the distribution function
of Z, then FZ (0) = F(M), and for x > 0, FZ (x) = F(x + M). Thus, FZ is a
mixed distribution with a mass of probability at 0.
The moments of Z can be found in a similar fashion to those of Y. We have

$$E[Z^n] = \int_0^\infty \left(\max(0, x - M)\right)^n f(x)\,dx,$$

and since max(0, x − M) is 0 for 0 ≤ x ≤ M, we have

$$E[Z^n] = \int_M^\infty (x - M)^n f(x)\,dx. \qquad (1.12)$$

Example 1.5 Let $F(x) = 1 - e^{-\lambda x}$, x ≥ 0. Find E[Z].

Solution 1.5 Setting n = 1 in equation (1.12) we have

$$E[Z] = \int_M^\infty (x - M)\lambda e^{-\lambda x}\,dx = \int_0^\infty y\lambda e^{-\lambda(y+M)}\,dy = e^{-\lambda M} E[X] = \frac{1}{\lambda}\,e^{-\lambda M}.$$

Alternatively, the identity E[Z] = E[X] − E[Y] yields the answer with E[X] = 1/λ and E[Y] given by the solution to Example 1.3.
Example 1.6 Let $F(x) = 1 - e^{-\lambda x}$, x ≥ 0. Find $M_Z(t)$.

Solution 1.6 By definition, $M_Z(t) = E\left[e^{tZ}\right]$, and as Z = max(0, X − M),

$$M_Z(t) = \int_0^\infty e^{t\max(0, x - M)}\lambda e^{-\lambda x}\,dx = \int_0^M \lambda e^{-\lambda x}\,dx + \int_M^\infty e^{t(x-M)}\lambda e^{-\lambda x}\,dx = 1 - e^{-\lambda M} + \lambda\int_0^\infty e^{ty - \lambda(y+M)}\,dy = 1 - e^{-\lambda M} + \frac{\lambda e^{-\lambda M}}{\lambda - t}$$

provided that t < λ.

The above approach is a slightly artificial way of looking at the reinsurer's position since it includes zero as a possible "claim amount" for the reinsurer.
An alternative, and more realistic, way of considering the reinsurer’s position
is to consider the distribution of the non-zero amounts paid by the reinsurer. In
practice, the reinsurer is likely to have information only on these amounts, as
the insurer is unlikely to inform the reinsurer each time there is a claim whose
amount is less than M.

Example 1.7 Let X have a discrete distribution as follows:

Pr(X = 100) = 0.6
Pr(X = 175) = 0.3
Pr(X = 200) = 0.1.

If the insurer effects excess of loss reinsurance with retention level 150, what
is the distribution of the non-zero payments made by the reinsurer?

Solution 1.7 First, we note that the distribution of Z is given by

Pr(Z = 0) = 0.6
Pr(Z = 25) = 0.3
Pr(Z = 50) = 0.1.

Now let W denote the amount of a non-zero payment made by the reinsurer. Then W can take one of two values: 25 and 50. Since payments of amount 25 are three times as likely as payments of amount 50, we can write the distribution of W as

Pr(W = 25) = 0.75
Pr(W = 50) = 0.25.
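The conditioning step in Solution 1.7 amounts to discarding the mass at zero and renormalising, which a few lines of Python make explicit:

```python
# distribution of Z from Example 1.7
z_dist = {0: 0.6, 25: 0.3, 50: 0.1}

# condition on Z > 0: drop the mass at zero and renormalise
p_positive = sum(p for z, p in z_dist.items() if z > 0)
w_dist = {z: p / p_positive for z, p in z_dist.items() if z > 0}
print(w_dist)  # {25: 0.75, 50: 0.25}
```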

The argument in Example 1.7 can be formalised, as follows. Let W denote the amount of a non-zero payment by the reinsurer under an excess of loss reinsurance arrangement with retention level M. The distribution of W is identical to that of Z|Z > 0. Hence

$$\Pr(W \le x) = \Pr(Z \le x \mid Z > 0) = \Pr(X \le x + M \mid X > M),$$

from which it follows that

$$\Pr(W \le x) = \frac{\Pr(M < X \le x + M)}{\Pr(X > M)} = \frac{F(x + M) - F(M)}{1 - F(M)}. \qquad (1.13)$$

Differentiation gives the density function of W as

$$\frac{f(x + M)}{1 - F(M)}. \qquad (1.14)$$

Example 1.8 Let $F(x) = 1 - e^{-\lambda x}$, x ≥ 0. What is the distribution of the non-zero claim payments made by the reinsurer?

Solution 1.8 By formula (1.14), the density function is

$$\frac{\lambda e^{-\lambda(x+M)}}{e^{-\lambda M}} = \lambda e^{-\lambda x},$$
so that the distribution of W is the same as that of X. (This rather surprising
result is a consequence of the “memoryless” property of the exponential
distribution.)

Example 1.9 Let X ∼ Pa(α, λ). What is the distribution of the non-zero claim
payments made by the reinsurer?

Solution 1.9 Again applying formula (1.14), the density function is

$$\frac{\alpha\lambda^\alpha}{(\lambda + M + x)^{\alpha+1}}\left(\frac{\lambda + M}{\lambda}\right)^\alpha = \frac{\alpha(\lambda + M)^\alpha}{(\lambda + M + x)^{\alpha+1}},$$

so that the distribution of W is Pa(α, λ + M).
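Example 1.9 can also be seen by simulation: conditional on X > M, the excess X − M behaves like a Pa(α, λ + M) variable. A sketch (illustrative values):

```python
import random

alpha, lam, M = 3.0, 200.0, 150.0  # illustrative values
rng = random.Random(7)
excesses = []
while len(excesses) < 100_000:
    # inverse-transform sample from Pa(alpha, lam)
    x = lam * ((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0)
    if x > M:
        excesses.append(x - M)
# the sample mean should be near the Pa(alpha, lam + M) mean (lam + M)/(alpha - 1)
print(sum(excesses) / len(excesses))  # approximately 175
```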

1.5.3 Policy Excess


Insurance policies with a policy excess are very common, particularly in motor
vehicle insurance. If a policy is issued with an excess of d, then the insured
party pays any loss of amount less than or equal to d in full, and pays d on
any loss in excess of d. Thus, if X represents the amount of a loss, when a loss
occurs the insured party pays min(X, d) and the insurer pays max(0, X − d).
These quantities are of the same form as the amounts paid by the insurer and
the reinsurer when a claim occurs (for the insurer) under an excess of loss
reinsurance arrangement. Hence there are no new mathematical considerations
involved. It is important, however, to recognise that X represents the amount
of a loss, and not the amount of a claim.


1.6 Sums of Random Variables


In many insurance applications we are interested in the distribution of the
sum of independent and identically distributed random variables. For example,
suppose that an insurer issues n policies, and the claim amount from policy
i, i = 1, 2, . . . , n, is a random variable $X_i$. Then the total amount the insurer pays in claims from these n policies is $S_n = \sum_{i=1}^n X_i$. An obvious question to ask is: what is the distribution of $S_n$? This is the question we consider in this section, on the assumption that $\{X_i\}_{i=1}^n$ are independent and identically
distributed random variables. When the distribution of Sn exists in a closed
form, we can usually find it by one of the methods described in the next two
sections.

1.6.1 Moment Generating Function Method


This is a very neat way of finding the distribution of $S_n$. Define $M_S$ to be the moment generating function of $S_n$, and define $M_X$ to be the moment generating function of $X_1$. Then

$$M_S(t) = E\left[e^{tS_n}\right] = E\left[e^{t(X_1 + X_2 + \cdots + X_n)}\right].$$

Using independence, it follows that

$$M_S(t) = E\left[e^{tX_1}\right]E\left[e^{tX_2}\right]\cdots E\left[e^{tX_n}\right],$$

and as the $X_i$'s are identically distributed,

$$M_S(t) = M_X(t)^n.$$

Hence, if we can identify $M_X(t)^n$ as the moment generating function of a distribution, we know the distribution of $S_n$ by the uniqueness property of moment generating functions.

Example 1.10 Let $X_1$ have a Poisson distribution with parameter λ. What is the distribution of $S_n$?

Solution 1.10 As

$$M_X(t) = \exp\left\{\lambda(e^t - 1)\right\},$$

we have

$$M_S(t) = \exp\left\{\lambda n(e^t - 1)\right\},$$

and so $S_n$ has a Poisson distribution with parameter λn.


Example 1.11 Let $X_1$ have an exponential distribution with mean 1/λ. What is the distribution of $S_n$?

Solution 1.11 As

$$M_X(t) = \frac{\lambda}{\lambda - t}$$

for t < λ, we have

$$M_S(t) = \left(\frac{\lambda}{\lambda - t}\right)^n,$$

and so $S_n$ has a γ(n, λ) distribution.
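A simulation check of Example 1.11 (with arbitrary n and λ): the sample mean and variance of the sums should be close to the γ(n, λ) values n/λ and n/λ²:

```python
import random

n, lam = 5, 2.0  # illustrative values
rng = random.Random(3)
sums = [sum(rng.expovariate(lam) for _ in range(n)) for _ in range(100_000)]
mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
print(mean, n / lam)    # approximately 2.5
print(var, n / lam**2)  # approximately 1.25
```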

1.6.2 Direct Convolution of Distributions


Direct convolution is a more direct, and less elegant, method of finding the distribution of $S_n$. Let us first assume that $\{X_i\}_{i=1}^n$ are discrete random variables, distributed on the non-negative integers, so that $S_n$ is also distributed on the non-negative integers.
Let x be a non-negative integer, and consider first the distribution of S2 . The
convolution approach to finding Pr(S2 ≤ x) considers how the event {S2 ≤ x}
can occur. This event occurs when X2 takes the value j, where j can be any
value from 0 up to x, and when X1 takes a value less than or equal to x − j, so
that their sum is less than or equal to x. Summing over all possible values of j
and using the fact that $X_1$ and $X_2$ are independent, we have

$$\Pr(S_2 \le x) = \sum_{j=0}^{x}\Pr(X_1 \le x - j)\Pr(X_2 = j).$$

The same argument can be applied to find $\Pr(S_3 \le x)$ by writing $S_3 = S_2 + X_3$, and by noting that $S_2$ and $X_3$ are independent (as $S_2 = X_1 + X_2$). Thus,

$$\Pr(S_3 \le x) = \sum_{j=0}^{x}\Pr(S_2 \le x - j)\Pr(X_3 = j),$$

and, in general,

$$\Pr(S_n \le x) = \sum_{j=0}^{x}\Pr(S_{n-1} \le x - j)\Pr(X_n = j). \qquad (1.15)$$

The same reasoning gives

$$\Pr(S_n = x) = \sum_{j=0}^{x}\Pr(S_{n-1} = x - j)\Pr(X_n = j).$$


Now let F be the distribution function of $X_1$ and let $f_j = \Pr(X_1 = j)$. We define

$$F^{n*}(x) = \Pr(S_n \le x)$$

and call $F^{n*}$ the n-fold convolution of the distribution F with itself. Then by equation (1.15),

$$F^{n*}(x) = \sum_{j=0}^{x} F^{(n-1)*}(x - j)\,f_j.$$

Note that $F^{1*} = F$, and, by convention, we define $F^{0*}(x) = 1$ for x ≥ 0 with $F^{0*}(x) = 0$ for x < 0. Similarly, we define $f_x^{n*} = \Pr(S_n = x)$, so that

$$f_x^{n*} = \sum_{j=0}^{x} f_{x-j}^{(n-1)*}\,f_j$$

with $f^{1*} = f$.
When F is a continuous distribution on (0, ∞) with density function f, the analogues of the above results are

$$F^{n*}(x) = \int_0^x F^{(n-1)*}(x - y)\,f(y)\,dy$$

and

$$f^{n*}(x) = \int_0^x f^{(n-1)*}(x - y)\,f(y)\,dy. \qquad (1.16)$$

These results can be used to find the distribution of Sn directly.
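For discrete distributions the convolutions can be computed directly. A minimal sketch, building $f^{n*}$ one convolution at a time (the probability function chosen here is the one used in Example 1.13 below):

```python
def convolve(f, g):
    """Probability function of X + Y, where f[j] = Pr(X = j) and
    g[j] = Pr(Y = j) on the non-negative integers."""
    h = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def nfold(f, n):
    # n-fold convolution f^{n*}; f^{0*} places all mass at 0
    result = [1.0]
    for _ in range(n):
        result = convolve(result, f)
    return result

f = [0.4, 0.3, 0.2, 0.1]
print(nfold(f, 4)[:5])  # Pr(S_4 = 0), ..., Pr(S_4 = 4)
```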

Example 1.12 What is the distribution of $S_n$ when $\{X_i\}_{i=1}^n$ are independent exponentially distributed random variables, each with mean 1/λ?

Solution 1.12 Setting n = 2 in equation (1.16) we get

$$f^{2*}(x) = \int_0^x f(x - y)f(y)\,dy = \int_0^x \lambda e^{-\lambda(x-y)}\lambda e^{-\lambda y}\,dy = \lambda^2 e^{-\lambda x}\int_0^x dy = \lambda^2 x e^{-\lambda x},$$

so that $S_2$ has a γ(2, λ) distribution. Next, setting n = 3 in equation (1.16) we get

$$f^{3*}(x) = \int_0^x f^{2*}(x - y)f(y)\,dy = \int_0^x f^{2*}(y)f(x - y)\,dy = \int_0^x \lambda^2 y e^{-\lambda y}\,\lambda e^{-\lambda(x-y)}\,dy = \tfrac{1}{2}\lambda^3 x^2 e^{-\lambda x},$$

so that the distribution of $S_3$ is γ(3, λ). An inductive argument can now be used to show that for a general value of n, $S_n$ has a γ(n, λ) distribution.

In general, it is much easier to apply the moment generating function method to find the distribution of $S_n$.

1.6.3 Recursive Calculation for Discrete Random Variables


In the case when $X_1$ is a discrete random variable, distributed on the non-negative integers, it is possible to calculate the probability function of $S_n$ recursively. Define

$$f_j = \Pr(X_1 = j) \quad \text{and} \quad g_j = \Pr(S_n = j),$$

each for j = 0, 1, 2, . . . . We denote the probability generating function of $X_1$ by $P_X$, so that

$$P_X(r) = \sum_{j=0}^{\infty} r^j f_j,$$

and the probability generating function of $S_n$ by $P_S$, so that

$$P_S(r) = \sum_{k=0}^{\infty} r^k g_k.$$

Using arguments that have previously been applied to moment generating functions, we have

$$P_S(r) = P_X(r)^n,$$

and differentiation with respect to r gives

$$P_S'(r) = n P_X(r)^{n-1} P_X'(r).$$


When we multiply each side of the above identity by $r P_X(r)$, we get

$$P_X(r)\,r P_S'(r) = n P_S(r)\,r P_X'(r),$$

which can be expressed as

$$\left(\sum_{j=0}^{\infty} r^j f_j\right)\left(\sum_{k=1}^{\infty} k r^k g_k\right) = n\left(\sum_{k=0}^{\infty} r^k g_k\right)\left(\sum_{j=1}^{\infty} j r^j f_j\right). \qquad (1.17)$$

To find an expression for $g_x$, we consider the coefficient of $r^x$ on each side of equation (1.17), where x is a positive integer. On the left-hand side, the coefficient of $r^x$ can be found as follows. For j = 0, 1, 2, . . . , x − 1, multiply together the coefficient of $r^j$ in the first sum with the coefficient of $r^{x-j}$ in the second sum. Adding these products together gives the coefficient of $r^x$, namely

$$f_0\,x g_x + f_1(x-1)g_{x-1} + \cdots + f_{x-1}\,g_1 = \sum_{j=0}^{x-1}(x - j)f_j\,g_{x-j}.$$

Similarly, on the right-hand side of equation (1.17) the coefficient of $r^x$ is

$$n\left(g_0\,x f_x + g_1(x-1)f_{x-1} + \cdots + g_{x-1}f_1\right) = n\sum_{j=1}^{x} j f_j\,g_{x-j}.$$

Since these coefficients must be equal we have

$$x g_x f_0 + \sum_{j=1}^{x-1}(x - j)f_j\,g_{x-j} = n\sum_{j=1}^{x} j f_j\,g_{x-j},$$

which gives (noting that the sum on the left-hand side is unaltered when the upper limit of summation is increased to x)

$$g_x = \frac{1}{f_0}\sum_{j=1}^{x}\left(\frac{(n+1)j}{x} - 1\right)f_j\,g_{x-j}. \qquad (1.18)$$

The important point about this result is that it gives a recursive method of calculating the probability function $\{g_x\}_{x=0}^{\infty}$. Given the values $\{f_j\}_{j=0}^{\infty}$ we can use the value of $g_0$ to calculate $g_1$, then the values of $g_0$ and $g_1$ to calculate $g_2$, and so on. The starting value for the recursive calculation is $g_0$, which is given by $f_0^n$ since $S_n$ takes the value 0 if and only if each $X_i$, i = 1, 2, . . . , n, takes the value 0.
This is a very useful result as it permits much more efficient evaluation of the probability function of $S_n$ than the direct convolution approach of the previous section.


We conclude with three remarks about this result:


(i) Computer implementation of formula (1.18) is necessary, especially when
n is large. It is, however, an easy task to program this formula.
(ii) It is straightforward (see Exercise 12) to adapt this result to the situation
when X1 is distributed on m, m + 1, m + 2, . . . , where m is a positive
integer.
(iii) The recursion formula is unstable. That is, it may give numerical answers
which do not make sense. Thus, caution should be employed when
applying this formula. However, for most practical purposes, numerical
stability is not an issue.

Example 1.13 Let $\{X_i\}_{i=1}^4$ be independent and identically distributed random variables with common probability function $f_j = \Pr(X_1 = j)$ given by

$$f_0 = 0.4, \quad f_1 = 0.3, \quad f_2 = 0.2, \quad f_3 = 0.1.$$

Let $S_4 = \sum_{i=1}^4 X_i$. Recursively calculate $\Pr(S_4 = r)$ for r = 1, 2, 3 and 4.

Solution 1.13 The starting value for the recursive calculation is

$$g_0 = \Pr(S_4 = 0) = f_0^4 = 0.4^4 = 0.0256.$$

Now note that as $f_j = 0$ for j = 4, 5, 6, . . . , equation (1.18) can be written with a different upper limit of summation as

$$g_x = \frac{1}{f_0}\sum_{j=1}^{\min(3,x)}\left(\frac{5j}{x} - 1\right)f_j\,g_{x-j},$$

and so

$$g_1 = \frac{1}{f_0}\,4 f_1 g_0 = 0.0768,$$

$$g_2 = \frac{1}{f_0}\left(\tfrac{3}{2} f_1 g_1 + 4 f_2 g_0\right) = 0.1376,$$

$$g_3 = \frac{1}{f_0}\left(\tfrac{2}{3} f_1 g_2 + \tfrac{7}{3} f_2 g_1 + 4 f_3 g_0\right) = 0.1840,$$

$$g_4 = \frac{1}{f_0}\left(\tfrac{1}{4} f_1 g_3 + \tfrac{3}{2} f_2 g_2 + \tfrac{11}{4} f_3 g_1\right) = 0.1905.$$
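The whole calculation is a few lines of Python. The sketch below (an illustration for probability functions on the non-negative integers with $f_0 > 0$; the function name is arbitrary) implements equation (1.18) and reproduces the values above:

```python
def recursive_pmf(f, n, max_x):
    """Pr(S_n = x) for x = 0, ..., max_x via equation (1.18),
    where f[j] = Pr(X_1 = j) and f[0] > 0."""
    g = [f[0] ** n]
    for x in range(1, max_x + 1):
        s = sum(((n + 1) * j / x - 1.0) * f[j] * g[x - j]
                for j in range(1, min(x, len(f) - 1) + 1))
        g.append(s / f[0])
    return g

f = [0.4, 0.3, 0.2, 0.1]
print(recursive_pmf(f, n=4, max_x=4))
# [0.0256, 0.0768, 0.1376, 0.184, 0.1905...]
```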


1.7 Notes and References


Further details of the distributions discussed in this chapter, including a
discussion of how to fit parameters to these distributions, can be found in
Hogg and Klugman (1984). See also Klugman et al. (1998).
The recursive formula of Section 1.6.3 was derived by De Pril (1985), and a
very elegant proof of the result can be found in his paper.

1.8 Exercises
1. A random variable X has a logarithmic distribution with parameter θ ,
where 0 < θ < 1, if its probability function is

$$\Pr(X = x) = \frac{-1}{\log(1 - \theta)}\,\frac{\theta^x}{x}$$

for x = 1, 2, 3, . . . . Show that

$$M_X(t) = \frac{\log(1 - \theta e^t)}{\log(1 - \theta)}$$
for t < − log θ . Hence, or otherwise, find the mean and variance of this
distribution.
2. A random variable X has a beta distribution with parameters α > 0 and
β > 0 if its density function is

$$f(x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,x^{\alpha-1}(1 - x)^{\beta-1}$$

for 0 < x < 1. Show that

$$E[X^n] = \frac{\Gamma(\alpha+\beta)\,\Gamma(n+\alpha)}{\Gamma(\alpha)\,\Gamma(n+\alpha+\beta)}$$
and hence find the mean and variance of X.
3. A random variable X has a Weibull distribution with parameters c > 0 and
γ > 0 if its density function is

$$f(x) = c\gamma x^{\gamma-1}\exp\{-c x^\gamma\}$$

for x > 0.
(a) Show that X has distribution function

$$F(x) = 1 - \exp\{-c x^\gamma\}$$

for x ≥ 0.


(b) Let $Y = X^\gamma$. Show that Y has an exponential distribution with mean 1/c. Hence show that

$$E[X^n] = \frac{\Gamma(1 + n/\gamma)}{c^{n/\gamma}}.$$

4. Let $\gamma_n(x) = \beta^n x^{n-1} e^{-\beta x}/\Gamma(n)$ denote the Erlang(n, β) density function, where n is a positive integer. Show that

$$\gamma_n(x + y) = \frac{1}{\beta}\sum_{j=1}^{n}\gamma_{n-j+1}(x)\,\gamma_j(y).$$

5. The random variable X has a generalised Pareto distribution with parameters α > 0, λ > 0 and k > 0 if its density function is

$$f(x) = \frac{\Gamma(\alpha + k)\,\lambda^\alpha\,x^{k-1}}{\Gamma(\alpha)\,\Gamma(k)\,(\lambda + x)^{k+\alpha}}$$

for x > 0. Use the fact that the integral of this density function over (0, ∞) equals 1 to find the first three moments of a Pa(α, λ) distribution, where α > 3.
6. The random variable X has a Pa(α, λ) distribution. Let M be a positive
constant. Show that

$$E[\min(X, M)] = \frac{\lambda}{\alpha - 1}\left(1 - \left(\frac{\lambda}{\lambda + M}\right)^{\alpha-1}\right).$$

7. Use the technique of completing the square from Example 1.4 to show that when $X \sim N(\mu, \sigma^2)$, $M_X(t) = \exp\left\{\mu t + \tfrac{1}{2}\sigma^2 t^2\right\}$. Verify that $E[X] = \mu$ and $V[X] = \sigma^2$ by differentiating this moment generating function.


8. Let the random variable X have distribution function F given by

$$F(x) = \begin{cases} 0 & \text{for } x < 20 \\ (x + 20)/80 & \text{for } 20 \le x < 40 \\ 1 & \text{for } x \ge 40. \end{cases}$$

Calculate
(a) Pr(X ≤ 30),
(b) Pr(X = 40),
(c) E[X] and
(d) V[X].


9. The random variable X has a lognormal distribution with mean 100 and
variance 30,000. Calculate
(a) E[min(X, 250)],
(b) E[max(0, X − 250)],
(c) V[min(X, 250)] and
(d) E[X|X > 250].
10. Let $\{X_i\}_{i=1}^n$ be independent and identically distributed random variables. Find the distribution of $\sum_{i=1}^n X_i$ when
(a) $X_1 \sim B(m, q)$ and
(b) $X_1 \sim N(\mu, \sigma^2)$.
11. $\{X_i\}_{i=1}^4$ are independent and identically distributed random variables. $X_1$ has a geometric distribution with

$$\Pr(X_1 = x) = 0.75\,(0.25^x)$$

for x = 0, 1, 2, . . . . Calculate $\Pr\left(\sum_{i=1}^4 X_i \le 4\right)$
(a) by finding the distribution of $\sum_{i=1}^4 X_i$ and
(b) by applying the recursion formula of Section 1.6.3.
12. Let $\{X_i\}_{i=1}^n$ be independent and identically distributed random variables, each distributed on m, m + 1, m + 2, . . . , where m is a positive integer. Let $S_n = \sum_{i=1}^n X_i$ and define $f_j = \Pr(X_1 = j)$ for j = m, m + 1, m + 2, . . . and $g_j = \Pr(S_n = j)$ for j = mn, mn + 1, mn + 2, . . . . Show that

$$g_{mn} = f_m^n,$$

and for r = mn + 1, mn + 2, mn + 3, . . . ,

$$g_r = \frac{1}{f_m}\sum_{j=1}^{r-mn}\left(\frac{(n+1)j}{r - mn} - 1\right)f_{j+m}\,g_{r-j}.$$
