Common Probability Distributions
1 Discrete distributions

1.1 Bernoulli distribution

Probability function
$$p(x) = \begin{cases} p^x (1 - p)^{1 - x} & \text{if } x = 0, 1 \\ 0 & \text{otherwise.} \end{cases}$$
Notation X ∼ Ber(p)
Properties
1. E(X) = p
2. Var(X) = p(1 − p)
3. M_X(t) = 1 − p + p e^t for all real t.
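As a quick numerical check, the mean, variance, and MGF above can be recovered directly from the probability function. A minimal Python sketch (the parameter values p = 0.3 and t = 0.7 are illustrative, not from the text):

```python
import math

def bernoulli_pmf(x, p):
    # p(x) = p^x * (1 - p)^(1 - x) for x in {0, 1}, 0 otherwise
    return p**x * (1 - p)**(1 - x) if x in (0, 1) else 0.0

p, t = 0.3, 0.7  # illustrative parameter and MGF argument
support = [0, 1]
mean = sum(x * bernoulli_pmf(x, p) for x in support)               # E(X) = p
var = sum((x - mean)**2 * bernoulli_pmf(x, p) for x in support)    # Var(X) = p(1 - p)
mgf = sum(math.exp(t * x) * bernoulli_pmf(x, p) for x in support)  # 1 - p + p e^t
```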
1.2 Binomial distribution

Probability function
$$p(x) = \begin{cases} \binom{n}{x} p^x (1 - p)^{n - x} & \text{if } x = 0, 1, \ldots, n \\ 0 & \text{otherwise.} \end{cases}$$
Notation X ∼ Bin(n, p)
Properties
1. E(X) = np
2. Var(X) = np(1 − p)
3. M_X(t) = (1 − p + p e^t)^n for all real t.
4. Let X_i ∼ Bin(n_i, p) be independent random variables for i = 1, . . . , k; then
$$\sum_{i=1}^{k} X_i \sim \mathrm{Bin}\left(\sum_{i=1}^{k} n_i,\; p\right).$$
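The moments and the additivity property can both be checked numerically from the probability function. A Python sketch (the choices n = 10, p = 0.4 and the split 3 + 7 are illustrative):

```python
import math

def binom_pmf(x, n, p):
    # p(x) = C(n, x) p^x (1 - p)^(n - x) for x = 0, 1, ..., n
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.4
mean = sum(x * binom_pmf(x, n, p) for x in range(n + 1))             # np = 4.0
var = sum((x - mean)**2 * binom_pmf(x, n, p) for x in range(n + 1))  # np(1-p) = 2.4

# Property 4: for independent X1 ~ Bin(3, p) and X2 ~ Bin(7, p), the pmf of
# X1 + X2 is the convolution of the two pmfs and matches Bin(10, p) pointwise.
conv = [sum(binom_pmf(i, 3, p) * binom_pmf(k - i, 7, p)
            for i in range(max(0, k - 7), min(3, k) + 1))
        for k in range(11)]
```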
1.3 Poisson distribution

Notation X ∼ Poi(λ)
Probability model Since the Poisson distribution arose out of the search
for an approximation for the binomial distribution when n is large and p
is small, one can use the Poisson random variable to model experiments
where one wants to count the number of occurrences of an event over
some interval (e.g. time, area). This arises since one can partition the
interval into n small subintervals, where n is a fairly large number and
the probability of success, p, over any one of those subintervals is small.
Then the number of occurrences over the entire interval is approximately
binomial, and hence approximately Poisson. For example, the number of
accidents that occur at an intersection per hour, the number of misprints in
a document, or the number of particles emitted by a radioactive substance
in one second can all be modeled by Poisson random variables.
Properties
1. E(X) = λ
2. Var(X) = λ
3. M_X(t) = e^{λ(e^t − 1)} for all real t.
4. Let X_i ∼ Poi(λ_i) be independent random variables for i = 1, . . . , k; then
$$\sum_{i=1}^{k} X_i \sim \mathrm{Poi}\left(\sum_{i=1}^{k} \lambda_i\right).$$
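The approximation argument above can be made concrete: for large n and small p, the Bin(n, p) probabilities are pointwise close to the Poi(np) probabilities. A Python sketch (n = 1000, p = 0.003 are illustrative choices):

```python
import math

def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    # p(x) = e^{-lam} lam^x / x!
    return math.exp(-lam) * lam**x / math.factorial(x)

n, p = 1000, 0.003          # large n, small p
lam = n * p                 # matching rate: lambda = np = 3
# Largest pointwise gap between the two pmfs over the bulk of the support
max_diff = max(abs(binom_pmf(x, n, p) - poisson_pmf(x, lam))
               for x in range(21))
```

With these values the gap is small (Le Cam's inequality bounds the total variation distance by np² = 0.009).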
2 Continuous distributions
2.1 Uniform distribution

Notation X ∼ U (a, b)
Properties
1. E(X) = (a + b)/2
2. Var(X) = (b − a)^2/12
3. $$M_X(t) = \begin{cases} \dfrac{e^{tb} - e^{ta}}{t(b - a)} & \text{if } t \neq 0 \\ 1 & \text{if } t = 0. \end{cases}$$
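The mean and variance above can be verified by integrating against the constant density 1/(b − a). A Python sketch using a midpoint Riemann sum (the endpoints a = 2, b = 5 are illustrative):

```python
a, b = 2.0, 5.0
N = 100_000
dx = (b - a) / N
density = 1.0 / (b - a)                        # the U(a, b) density on [a, b]
xs = [a + (i + 0.5) * dx for i in range(N)]    # midpoint rule nodes
mean = sum(x * density * dx for x in xs)              # ≈ (a + b)/2 = 3.5
var = sum((x - mean)**2 * density * dx for x in xs)   # ≈ (b - a)^2/12 = 0.75
```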
2.2 Normal distribution
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$
for −∞ < x < ∞.
Notation X ∼ N(µ, σ²)
Properties
1. E(X) = µ
2. Var(X) = σ 2
3. M_X(t) = e^{µt + σ²t²/2} for all real t.
4. If X ∼ N(µ, σ²) and Y = aX + b for any constants a, b (a ≠ 0), then
Y ∼ N (aµ + b, a2 σ 2 ).
5. If X_1, X_2, . . . , X_k are independent random variables where X_i ∼ N(µ_i, σ_i²) for i = 1, . . . , k, then
$$\sum_{i=1}^{k} X_i \sim N\left(\sum_{i=1}^{k} \mu_i,\; \sum_{i=1}^{k} \sigma_i^2\right).$$
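Property 4 can be illustrated by simulation: a linear transform of normal draws should have mean aµ + b and variance a²σ². A Python sketch (the values µ = 1, σ = 2, a = 3, b = −1 are illustrative):

```python
import random

random.seed(0)
mu, sigma = 1.0, 2.0
a, b = 3.0, -1.0
reps = 100_000
# Y = aX + b with X ~ N(mu, sigma^2) should be N(a*mu + b, a^2 * sigma^2)
ys = [a * random.gauss(mu, sigma) + b for _ in range(reps)]
mean = sum(ys) / reps                        # ≈ a*mu + b = 2.0
var = sum((y - mean)**2 for y in ys) / reps  # ≈ a^2 * sigma^2 = 36.0
```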
2.3 Exponential distribution

Notation X ∼ Exp(λ)
Properties
1. E(X) = 1/λ
2. Var(X) = 1/λ²
3. M_X(t) = λ/(λ − t) if t < λ.
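The mean and variance can be recovered by integrating against the density f(x) = λe^{−λx}. A Python sketch using a truncated midpoint sum (λ = 2 and the truncation point T = 40 are illustrative; the neglected tail mass e^{−λT} is negligible):

```python
import math

lam = 2.0
T, N = 40.0, 200_000
dx = T / N
pts = [(i + 0.5) * dx for i in range(N)]          # midpoint nodes on [0, T]
dens = [lam * math.exp(-lam * x) for x in pts]    # f(x) = lam * e^{-lam x}, x >= 0
mean = sum(x * f * dx for x, f in zip(pts, dens))             # ≈ 1/lam = 0.5
var = sum((x - mean)**2 * f * dx for x, f in zip(pts, dens))  # ≈ 1/lam^2 = 0.25
```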
3 Sampling distributions
3.1 Chi-square distribution

Notation Y ∼ χ²_1

Notation X ∼ χ²_n
Properties
1. E(X) = n
2. Var(X) = 2n
3. M_X(t) = (1 − 2t)^{−n/2} if t < 1/2.
4. If U and V are independent random variables such that U ∼ χ²_n and
V ∼ χ²_m, then Y = U + V ∼ χ²_{n+m}.
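Since a χ²_k variable arises as a sum of k independent squared N(0, 1) draws, properties 1, 2, and 4 can be checked together by simulation. A Python sketch (df = 7, read as n + m = 3 + 4 for property 4, is illustrative):

```python
import random

random.seed(1)
df = 7             # e.g. n + m = 3 + 4 in the notation of property 4
reps = 50_000
# Each sample: a chi-square_df draw built as a sum of df squared N(0,1) draws
samples = [sum(random.gauss(0.0, 1.0)**2 for _ in range(df))
           for _ in range(reps)]
mean = sum(samples) / reps                        # ≈ df = 7
var = sum((s - mean)**2 for s in samples) / reps  # ≈ 2*df = 14
```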
3.2 t distribution

Notation T ∼ t_n
Properties
1. E(T ) = 0 if n > 1. (If n = 1, E(T ) does not exist.)
2. Var(T) = n/(n − 2) if n > 2.
3. MT (t) does not exist since not all moments are finite.
3.3 F distribution
Notation X ∼ F_{n,m}
$$f(x) = \frac{\Gamma\!\left(\frac{n + m}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)\Gamma\!\left(\frac{m}{2}\right)} \left(\frac{n}{m}\right)^{n/2} x^{\frac{n}{2} - 1} \left(1 + \frac{n}{m}\,x\right)^{-\frac{n + m}{2}}$$
for x ≥ 0.
Properties
1. E(X) = m/(m − 2) if m > 2.
2. $$\mathrm{Var}(X) = \frac{2m^2(n + m - 2)}{n(m - 2)^2(m - 4)} \quad \text{if } m > 4.$$
3. MX (t) does not exist.
4. If T ∼ t_n, then T² ∼ F_{1,n}.
5. If X ∼ F_{n,m}, then 1/X ∼ F_{m,n}.
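Property 4 can be illustrated by simulation, building T as Z/√(W/n) with Z standard normal and W chi-square on n degrees of freedom, then checking that the mean of T² matches E(F_{1,n}) = n/(n − 2). A Python sketch (n = 10 is an illustrative choice):

```python
import math
import random

random.seed(2)
n = 10                # denominator degrees of freedom
reps = 100_000

def t_draw(n):
    # T = Z / sqrt(W/n), Z ~ N(0,1), W ~ chi-square_n, independent
    z = random.gauss(0.0, 1.0)
    w = sum(random.gauss(0.0, 1.0)**2 for _ in range(n))
    return z / math.sqrt(w / n)

sq = [t_draw(n)**2 for _ in range(reps)]   # by property 4, T^2 ~ F_{1,n}
mean = sum(sq) / reps                      # ≈ E(F_{1,10}) = 10/8 = 1.25
```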