Derivation of the Poisson Distribution
RHUL Physics
1 December, 2009
In this note we derive the functional form of the Poisson distribution and investigate some
of its properties. Consider a time t in which some number n of events may occur. Examples
are the number of photons collected by a telescope or the number of decays of a large sample
of radioactive nuclei. Suppose that the events are independent, i.e., the occurrence of one
event has no influence on the probability for the occurrence of another. Furthermore, suppose
that the probability of a single event in any short time interval δt is

P(1; δt) = λ δt ,   (1)
where λ is a constant. In Section 1 we will show that the probability for n events in the time
t is given by

P(n; ν) = (ν^n / n!) e^{−ν} ,   (2)

where the parameter ν is related to λ by

ν = λt .   (3)
We will follow the convention that arguments in a probability distribution to the left of the
semi-colon are random variables, that is, outcomes of a repeatable experiment, such as the
number of events n. Arguments to the right of the semi-colon are parameters, i.e., constants.
The Poisson distribution is shown in Fig. 1 for several values of the parameter ν. In
Section 2 we will show that the mean value ⟨n⟩ of the Poisson distribution is given by

⟨n⟩ = ν ,   (4)

and that the standard deviation σ is

σ = √ν .   (5)

The mean roughly indicates the central region of the distribution, but this is not the same
as the most probable value of n. Indeed n is an integer but ν in general is not. The standard
deviation σ is a measure of the width of the distribution.
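The shape of the distribution can also be explored numerically. The following is a minimal sketch (assuming Python; the values of ν are those shown in Fig. 1) that evaluates P(n; ν) from equation (2) and locates the most probable n, illustrating that the mode need not equal the mean:

```python
import math

def poisson_pmf(n, nu):
    # P(n; nu) = nu^n / n! * exp(-nu), equation (2)
    return nu**n / math.factorial(n) * math.exp(-nu)

for nu in (2, 5, 10):
    probs = [poisson_pmf(n, nu) for n in range(21)]
    mode = max(range(21), key=lambda n: probs[n])
    print(f"nu = {nu}: most probable n = {mode}, P = {probs[mode]:.3f}")
```

Note that for integer ν the values P(ν − 1; ν) and P(ν; ν) are equal, so the most probable value is not unique, even though the mean is exactly ν.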
1 Derivation of the Poisson distribution
Consider the time interval t broken into small subintervals of length δt. If δt is sufficiently
short then we can neglect the probability that two events will occur in it. We will find one
event with probability
[Figure 1: The Poisson distribution P(n; ν) for several values of the mean: ν = 2, 5, and 10, each plotted for n = 0 to 20.]
P(1; δt) = λ δt   (6)

and no events with probability

P(0; δt) = 1 − λ δt .   (7)
What we want to find is the probability to find n events in t. We can start by finding the
probability to find zero events in t, P(0; t), and then generalize this result by induction.
Suppose we knew P(0; t). We could then ask what is the probability to find no events
in the time t + δt. Since the events are independent, the probability for no events in both
intervals, first none in t and then none in δt, is given by the product of the two individual
probabilities. That is,

P(0; t + δt) = P(0; t)(1 − λ δt) .   (8)
This can be rewritten as

[P(0; t + δt) − P(0; t)] / δt = −λ P(0; t) ,   (9)

which in the limit of small δt becomes a differential equation,

dP(0; t)/dt = −λ P(0; t) .   (10)
Integrating to find the solution gives

ln P(0; t) = −λt + C .   (11)

For a length of time t = 0 we must have zero events, i.e., we require the boundary condition
P(0; 0) = 1. The constant C must therefore be zero and we obtain
P(0; t) = e^{−λt} .   (12)
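Equation (12) can be checked numerically: dividing t into N subintervals and multiplying the N no-event probabilities (1 − λδt) should approach e^{−λt} as N grows. A quick sketch (Python assumed; λ = 1.5 and t = 2 are arbitrary illustrative values):

```python
import math

lam, t = 1.5, 2.0                # illustrative rate and time interval
for N in (10, 100, 100000):
    dt = t / N
    p0 = (1.0 - lam * dt) ** N   # no event in each of the N subintervals
    print(f"N = {N:6d}: P(0; t) ~ {p0:.6f}")
print(f"exact: exp(-lam*t) = {math.exp(-lam * t):.6f}")
```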
Now consider the case where the number of events n is not zero. The probability of finding
n events in a time t + δt is given by the sum of two terms:

P(n; t + δt) = P(n; t)(1 − λ δt) + P(n − 1; t) λ δt .   (13)

The first term gives the probability to have all n events in the first subinterval of time t
and then no events in the final δt. The second term corresponds to having n − 1 events in t
followed by one event in the last δt. In the limit of small δt this gives a differential equation
for P(n; t):

dP(n; t)/dt + λ P(n; t) = λ P(n − 1; t) .   (14)
We can solve equation (14) by finding an integrating factor μ(t), i.e., a function which
when multiplied by the left-hand side of the equation results in a total derivative with respect
to t. That is, we want a function μ(t) such that

μ(t) [dP(n; t)/dt + λ P(n; t)] = d/dt [μ(t) P(n; t)] .   (15)
We can easily show that the function

μ(t) = e^{λt}   (16)

has the desired property and therefore we find

d/dt [e^{λt} P(n; t)] = λ e^{λt} P(n − 1; t) .   (17)
We can use this result, for example, with n = 1 to find

d/dt [e^{λt} P(1; t)] = λ e^{λt} P(0; t) = λ e^{λt} e^{−λt} = λ ,   (18)

where we substituted our previous result (12) for P(0; t). Integrating equation (18) gives

e^{λt} P(1; t) = ∫ λ dt = λt + C .   (19)

Now the probability to find one event in zero time must be zero, i.e., P(1; 0) = 0 and therefore
C = 0, so we find

P(1; t) = λt e^{−λt} .   (20)
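Result (20) admits the same kind of subinterval check as (12): exactly one event means one subinterval with probability λδt and no event in the other N − 1, in any one of the N positions. A sketch with illustrative values (Python assumed; λ = 1.5 and t = 2 are arbitrary):

```python
import math

lam, t = 1.5, 2.0            # illustrative values, so lam*t = 3
N = 100000                   # number of subintervals
dt = t / N
# one event in exactly one of the N subintervals, none in the other N - 1
p1 = N * (lam * dt) * (1.0 - lam * dt) ** (N - 1)
print(p1, "vs exact", lam * t * math.exp(-lam * t))
```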
We can generalize this result to arbitrary n by induction. We assert that the probability
to find n events in a time t is

P(n; t) = ((λt)^n / n!) e^{−λt} .   (21)
We have already shown that this is true for n = 0 as well as for n = 1. Using the differential
equation (17) with n + 1 on the left-hand side and substituting (21) on the right, we find

d/dt [e^{λt} P(n + 1; t)] = λ e^{λt} P(n; t) = λ e^{λt} ((λt)^n / n!) e^{−λt} = λ (λt)^n / n! .   (22)
Integrating equation (22) gives

e^{λt} P(n + 1; t) = ∫ λ (λt)^n / n! dt = (λt)^{n+1} / (n + 1)! + C .   (23)

Imposing the boundary condition P(n + 1; 0) = 0 implies C = 0 and therefore

P(n + 1; t) = ((λt)^{n+1} / (n + 1)!) e^{−λt} .   (24)
Thus the assertion (21) for n also holds for n + 1 and the result is proved by induction.
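The induction result (21) is the δt → 0 limit of a binomial distribution with N = t/δt trials and success probability λδt per trial, and the convergence can be seen numerically. A sketch (Python assumed; λ, t, and N are illustrative choices):

```python
import math

def binom_pmf(n, N, p):
    # probability of n successes in N independent trials
    return math.comb(N, n) * p**n * (1.0 - p) ** (N - n)

def poisson_pmf(n, nu):
    # equation (21) with nu = lam*t
    return nu**n / math.factorial(n) * math.exp(-nu)

lam, t = 1.5, 2.0            # illustrative values; nu = lam*t = 3
N = 100000                   # number of subintervals
for n in range(6):
    print(n, binom_pmf(n, N, lam * t / N), poisson_pmf(n, lam * t))
```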
2 Mean and standard deviation of the Poisson distribution
First we can verify that the sum of the probabilities for all n is equal to unity. Using now
ν = λt, we find

Σ_{n=0}^{∞} P(n; ν) = Σ_{n=0}^{∞} (ν^n / n!) e^{−ν} = e^{−ν} Σ_{n=0}^{∞} ν^n / n! = e^{−ν} e^{ν} = 1 ,   (25)

where we have identified the final sum with the Taylor expansion of e^{ν}.
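The normalization (25) can be confirmed by direct summation (Python assumed; ν = 5 is an arbitrary choice, and truncating the series at n = 100 leaves a negligible tail):

```python
import math

nu = 5.0   # illustrative mean
total = sum(nu**n / math.factorial(n) * math.exp(-nu) for n in range(100))
print(total)   # approximately 1
```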
The mean value (or expectation value) of a discrete random variable n is defined as

⟨n⟩ = Σ_n n P(n) ,   (26)

where P(n) is the probability to observe n and the sum extends over all possible outcomes.
In the case of the Poisson distribution this is

⟨n⟩ = Σ_{n=0}^{∞} n P(n; ν) = Σ_{n=0}^{∞} n (ν^n / n!) e^{−ν} .   (27)
To carry out the sum note first that the n = 0 term is zero and therefore
⟨n⟩ = e^{−ν} Σ_{n=1}^{∞} n ν^n / n! = ν e^{−ν} Σ_{n=1}^{∞} ν^{n−1} / (n − 1)! = ν e^{−ν} Σ_{m=0}^{∞} ν^m / m! = ν e^{−ν} e^{ν} = ν .   (28)

Here in the third step we simply relabelled the index with the replacement m = n − 1 and
then we again identified the Taylor expansion of e^{ν}.
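The same kind of truncated sum confirms (28) numerically (Python assumed, with an illustrative ν):

```python
import math

nu = 5.0   # illustrative mean
pmf = [nu**n / math.factorial(n) * math.exp(-nu) for n in range(100)]
mean = sum(n * p for n, p in enumerate(pmf))
print(mean)   # approximately nu
```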
To find the standard deviation σ of n we use the defining relation

σ² = ⟨n²⟩ − ⟨n⟩² .   (29)

We already have ⟨n⟩, and we can find ⟨n²⟩ using the following trick:

⟨n²⟩ = ⟨n(n − 1)⟩ + ⟨n⟩ .   (30)
We can find ⟨n(n − 1)⟩ in a manner similar to that used to find ⟨n⟩, namely,

⟨n(n − 1)⟩ = Σ_{n=0}^{∞} n(n − 1) (ν^n / n!) e^{−ν} = ν² e^{−ν} Σ_{n=2}^{∞} ν^{n−2} / (n − 2)! = ν² e^{−ν} Σ_{m=0}^{∞} ν^m / m! = ν² e^{−ν} e^{ν} = ν² ,   (31)

where we used the fact that the n = 0 and n = 1 terms are zero. In the third step we
relabelled the index using m = n − 2 and identified the resulting series with e^{ν}. Putting this
into equation (29) for σ² gives

σ² = ν² + ν − ν² = ν , or σ = √ν .   (32)
This is the important result that the standard deviation of a Poisson distribution is equal to
the square root of its mean.
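The trick in (30) translates directly into a numerical check of (32) (Python assumed; ν = 5 is illustrative, and the series is truncated where the tail is negligible):

```python
import math

nu = 5.0   # illustrative mean
pmf = [nu**n / math.factorial(n) * math.exp(-nu) for n in range(100)]
mean = sum(n * p for n, p in enumerate(pmf))            # <n>
nn1  = sum(n * (n - 1) * p for n, p in enumerate(pmf))  # <n(n-1)>
var  = nn1 + mean - mean**2                             # eq. (29) with (30)
print(mean, var, math.sqrt(var))   # mean = nu, sigma = sqrt(nu)
```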