


G14FOS University of Nottingham

Common probability distributions

1 Discrete distributions

1.1 Bernoulli distribution

Probability model Let X be a random variable which can only assume two outcomes: 0 if the outcome of the experiment is a “failure”, 1 if the outcome of the experiment is a “success”. Furthermore, let p denote the probability of a success. Hence 1 − p denotes the probability of a failure.

Probability function

p(x) = p^x (1 − p)^{1−x}   if x = 0, 1,
       0                   otherwise.

Notation X ∼ Ber(p)

Properties
1. E(X) = p
2. Var(X) = p(1 − p)
3. M_X(t) = 1 − p + p e^t for all real t.
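Properties 1 and 2 are easy to check by simulation; a minimal sketch in Python (standard library only), estimating E(X) and Var(X) from simulated Bernoulli trials with an assumed illustrative value p = 0.3:

```python
import random

random.seed(0)
p = 0.3          # illustrative success probability (assumed value)
n_sim = 100_000  # number of simulated trials

# simulate X: 1 ("success") with probability p, 0 ("failure") otherwise
draws = [1 if random.random() < p else 0 for _ in range(n_sim)]

mean = sum(draws) / n_sim
var = sum((x - mean) ** 2 for x in draws) / n_sim

print(mean)  # close to E(X) = p = 0.3
print(var)   # close to Var(X) = p(1 - p) = 0.21
```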

1.2 Binomial distribution

Probability model Suppose an experiment has only two possible outcomes: “success” or “failure”, with probability p and 1 − p, respectively. Furthermore, assume that the experiment is repeated n times (n trials) in such a way that each trial is independent and the probability of success remains the same for each trial. Define the random variable X to be the number of successes observed in the n trials. Then X is said to have a binomial distribution.

Probability function

p(x) = (n choose x) p^x (1 − p)^{n−x}   if x = 0, 1, . . . , n,
       0                                otherwise.

Notation X ∼ Bin(n, p)

Properties
1. E(X) = np
2. Var(X) = np(1 − p)
3. M_X(t) = (1 − p + p e^t)^n for all real t.
4. Let X_i ∼ Bin(n_i, p) be independent random variables for i = 1, . . . , k; then

∑_{i=1}^k X_i ∼ Bin(∑_{i=1}^k n_i, p).

Alternative definition Let X_1, X_2, . . . , X_n be a random sample of Bernoulli random variables, each with probability of success p; then ∑_{i=1}^n X_i ∼ Bin(n, p).
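Property 4 can be verified numerically: convolving the probability functions of two independent binomials with a common p must reproduce the probability function of a single binomial on the pooled trials. A sketch with assumed illustrative values n1 = 3, n2 = 5 and p = 0.4:

```python
from math import comb

p = 0.4        # shared success probability (assumed value)
n1, n2 = 3, 5  # trial counts for the two independent binomials

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def conv_pmf(k):
    """pmf of X1 + X2 by discrete convolution of the two pmfs."""
    return sum(binom_pmf(j, n1, p) * binom_pmf(k - j, n2, p)
               for j in range(max(0, k - n2), min(n1, k) + 1))

# property 4: X1 + X2 ~ Bin(n1 + n2, p)
ok = all(abs(conv_pmf(k) - binom_pmf(k, n1 + n2, p)) < 1e-12
         for k in range(n1 + n2 + 1))
print(ok)  # True
```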

1.3 Poisson distribution

Definition A discrete random variable X which can take on one of the values 0, 1, 2, . . . is said to be a Poisson random variable if its probability function is given by

p(x) = e^{−λ} λ^x / x!   if x = 0, 1, 2, . . . (for some λ > 0),
       0                 otherwise.

Notation X ∼ Poi(λ)

Remark The Poisson model was originally developed by Siméon-Denis Poisson (1837) as an approximation to the binomial distribution when the number of trials n was large and the probability of success p was small. It was not until 1889 that the German-Russian mathematician Bortkiewicz showed that the Poisson model itself was a probability distribution.

Probability model Since the Poisson distribution arose out of the search for an approximation to the binomial distribution when n is large and p is small, one can use a Poisson random variable to model experiments where one wants to count the number of occurrences of an event over some interval (e.g. of time or area). This arises since one can partition the interval into n small subintervals, where n is a fairly large number, and the probability of success, p, over any one of those subintervals is small. The number of occurrences over the entire interval is then approximately binomial, and hence approximately Poisson. For example, the number of accidents that occur at an intersection per hour, the number of misprints in a document, or the number of particles emitted by a radioactive substance in one second can all be modeled by a Poisson random variable.

Properties
1. E(X) = λ
2. Var(X) = λ
3. M_X(t) = e^{λ(e^t − 1)} for all real t.
4. Let X_i ∼ Poi(λ_i) be independent random variables for i = 1, . . . , k; then

∑_{i=1}^k X_i ∼ Poi(∑_{i=1}^k λ_i).

2 Continuous distributions

2.1 Uniform distribution

Definition A random variable X is said to be uniformly distributed over an interval (a, b) if its probability density function is given by

f(x) = 1/(b − a)   if a < x < b,
       0           otherwise.

Notation X ∼ U (a, b)

Properties
1. E(X) = (a + b)/2
2. Var(X) = (b − a)^2/12
3. M_X(t) = (e^{tb} − e^{ta}) / (t(b − a))   if t ≠ 0,
   M_X(t) = 1                                if t = 0.
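A quick simulation check of properties 1 and 2, with assumed illustrative endpoints a = 2 and b = 5:

```python
import random

random.seed(1)
a, b = 2.0, 5.0  # interval endpoints (assumed values)
n_sim = 200_000

# random.uniform(a, b) draws from U(a, b)
draws = [random.uniform(a, b) for _ in range(n_sim)]
mean = sum(draws) / n_sim
var = sum((x - mean) ** 2 for x in draws) / n_sim

print(mean)  # close to E(X) = (a + b)/2 = 3.5
print(var)   # close to Var(X) = (b - a)^2/12 = 0.75
```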

2.2 Normal distribution

Definition A random variable X is said to have a normal distribution with parameters µ and σ^2 if its probability density function is given by

f(x) = (1/√(2πσ^2)) e^{−(x − µ)^2/(2σ^2)}   for −∞ < x < ∞.

Notation X ∼ N(µ, σ^2)

Properties
1. E(X) = µ
2. Var(X) = σ^2
3. M_X(t) = e^{µt + σ^2 t^2/2} for all real t.
4. If X ∼ N(µ, σ^2) and Y = aX + b for any constants a, b (a ≠ 0), then Y ∼ N(aµ + b, a^2σ^2).
5. If X_1, X_2, . . . , X_k are independent random variables where X_i ∼ N(µ_i, σ_i^2) for i = 1, . . . , k, then

∑_{i=1}^k X_i ∼ N(∑_{i=1}^k µ_i, ∑_{i=1}^k σ_i^2).
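Property 4 can be verified through the distribution function: for a > 0, P(aX + b ≤ y) = P(X ≤ (y − b)/a), so the two normal cdfs must agree. A sketch using the closed form Φ(x) = (1 + erf(x/√2))/2, with assumed illustrative values µ = 1, σ = 2, a = 3, b = −1:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """P(X <= x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma = 1.0, 2.0  # parameters of X (assumed values)
a, b = 3.0, -1.0      # transform Y = aX + b, with a > 0 here

# property 4: Y = aX + b ~ N(a*mu + b, a^2 * sigma^2)
# for a > 0, P(Y <= y) = P(X <= (y - b)/a), so the two cdfs must agree
ok = all(abs(norm_cdf(y, a * mu + b, a * sigma)
             - norm_cdf((y - b) / a, mu, sigma)) < 1e-12
         for y in [-2.0, 0.0, 2.0, 4.0])
print(ok)  # True
```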

2.3 Exponential distribution

Definition A random variable X is said to have an exponential distribution with parameter λ > 0 if its probability density function is given by

f(x) = λ e^{−λx}   if x ≥ 0,
       0           otherwise.

Notation X ∼ Exp(λ)

Properties
1. E(X) = 1/λ
2. Var(X) = 1/λ^2
3. M_X(t) = λ/(λ − t) if t < λ.
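A simulation check of properties 1 and 2, taking λ = 0.5 as an assumed illustrative rate (note that Python's random.expovariate takes the rate λ directly):

```python
import random

random.seed(2)
lam = 0.5        # rate parameter (assumed value)
n_sim = 200_000

# random.expovariate takes the rate lambda, not the scale 1/lambda
draws = [random.expovariate(lam) for _ in range(n_sim)]
mean = sum(draws) / n_sim
var = sum((x - mean) ** 2 for x in draws) / n_sim

print(mean)  # close to E(X) = 1/lam = 2.0
print(var)   # close to Var(X) = 1/lam^2 = 4.0
```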

3 Sampling distributions

3.1 Chi-squared distribution

Definition 1 Let Z be a standard normal random variable, i.e. Z ∼ N(0, 1). Define Y = Z^2; then Y is said to have a chi-squared distribution with 1 degree of freedom.

Notation Y ∼ χ^2_1

Definition 2 Let Y_1, Y_2, . . . , Y_n be independent χ^2_1 random variables. Define X = ∑_{i=1}^n Y_i; then X is said to have a chi-squared distribution with n degrees of freedom.

Notation X ∼ χ^2_n

Remark An alternative to Definition 2: suppose that Z_1, Z_2, . . . , Z_n are independent standard normal random variables; then X = Z_1^2 + Z_2^2 + · · · + Z_n^2 is a chi-squared random variable with n degrees of freedom.

Probability density function

f(x) = (1/(2^{n/2} Γ(n/2))) x^{n/2 − 1} e^{−x/2}   if x ≥ 0,

where

Γ(n/2) = (1/2^{n/2}) ∫_0^∞ x^{n/2 − 1} e^{−x/2} dx.

Properties
1. E(X) = n
2. Var(X) = 2n
3. M_X(t) = (1 − 2t)^{−n/2} if t < 1/2.
4. If U and V are independent random variables such that U ∼ χ^2_n and V ∼ χ^2_m, then Y = U + V ∼ χ^2_{n+m}.
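The density above can be checked by numerically integrating x f(x), which should recover property 1, E(X) = n; a sketch with the assumed illustrative value n = 5, using the standard library's math.gamma:

```python
from math import gamma, exp

n = 5  # degrees of freedom (assumed illustrative value)

def chi2_pdf(x, n):
    """Density of the chi-squared distribution, as in the formula above."""
    return x ** (n / 2 - 1) * exp(-x / 2) / (2 ** (n / 2) * gamma(n / 2))

# property 1: E(X) = n, via midpoint-rule integration of x * f(x) on [0, 100]
# (the tail beyond 100 is negligible for n = 5)
width = 0.001
mean = sum(x * chi2_pdf(x, n) * width
           for x in (width * (i + 0.5) for i in range(100_000)))
print(mean)  # close to n = 5
```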

3.2 Student’s t distribution

Definition Let Z be a standard normal random variable, i.e. Z ∼ N(0, 1), and let U be an independent chi-squared random variable with n degrees of freedom. Define T = Z/√(U/n); then T is said to have a t distribution with n degrees of freedom.

Notation T ∼ t_n

Probability density function

f(t) = [Γ((n + 1)/2) / (√(nπ) Γ(n/2))] (1 + t^2/n)^{−(n+1)/2}   for −∞ < t < ∞.

Properties
1. E(T) = 0 if n > 1. (If n = 1, E(T) does not exist.)
2. Var(T) = n/(n − 2) if n > 2.
3. M_T(t) does not exist since not all moments are finite.
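The density is symmetric about 0 (which is why E(T) = 0 for n > 1) and must integrate to 1; both are easy to check numerically, taking n = 7 as an assumed illustrative value:

```python
from math import gamma, sqrt, pi

n = 7  # degrees of freedom (assumed illustrative value)

def t_pdf(t, n):
    """Density of Student's t distribution, as in the formula above."""
    return (gamma((n + 1) / 2) / (sqrt(n * pi) * gamma(n / 2))
            * (1 + t ** 2 / n) ** (-(n + 1) / 2))

# the density is symmetric about 0 ...
print(t_pdf(1.3, n) == t_pdf(-1.3, n))  # True

# ... and integrates to 1 (midpoint rule on [-50, 50]; the tails are tiny)
width = 0.001
total = sum(t_pdf(-50 + width * (i + 0.5), n) * width
            for i in range(100_000))
print(total)  # close to 1
```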

3.3 F distribution

Definition Let U and V be independent random variables such that U ∼ χ^2_n and V ∼ χ^2_m. Define X = (U/n)/(V/m); then X is said to have an F distribution with n degrees of freedom in the numerator and m degrees of freedom in the denominator.

Notation X ∼ F_{n,m}

Probability density function

f(x) = [Γ((n + m)/2) / (Γ(n/2) Γ(m/2))] (n/m)^{n/2} x^{n/2 − 1} (1 + (n/m)x)^{−(n+m)/2}   for x ≥ 0.

Properties
1. E(X) = m/(m − 2) if m > 2.
2. Var(X) = 2m^2(n + m − 2) / (n(m − 2)^2(m − 4)) if m > 4.
3. M_X(t) does not exist.
4. If T ∼ t_n, then T^2 ∼ F_{1,n}.
5. If X ∼ F_{n,m}, then 1/X ∼ F_{m,n}.
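The definition can be simulated directly: drawing U ∼ χ^2_n and V ∼ χ^2_m as sums of squared standard normals and forming (U/n)/(V/m) should give a sample mean near m/(m − 2), as in property 1. A sketch with assumed illustrative values n = 4, m = 9:

```python
import random

random.seed(3)
n, m = 4, 9      # degrees of freedom (assumed values)
n_sim = 100_000  # number of simulated F draws

def chi2_draw(k):
    """A chi-squared draw as a sum of k squared standard normals."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

# build F-distributed draws straight from the definition X = (U/n)/(V/m)
draws = [(chi2_draw(n) / n) / (chi2_draw(m) / m) for _ in range(n_sim)]

mean = sum(draws) / n_sim
print(mean)  # close to m/(m - 2) = 9/7
```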
