
Discrete Random Variables (DRV) Summary


CHAPTER 16

DISCRETE RANDOM VARIABLES


AN EXECUTIVE SUMMARY

Discrete random variable

A random variable that assumes countable values $x_1, x_2, x_3, \dots$ (the list of values may be finite or infinite).

Probability Distribution
Since the values of the random variable are determined by chance, there is a distribution
associated with them. We call this distribution a probability distribution. A probability
distribution describes all possible values of the random variable and their corresponding
probabilities.
Example: A single die is thrown. Let X be the random variable representing the number of
dots showing on the die. The possible values of X are given by x = 1, 2, 3, 4, 5, 6. The
probability distribution associated with X can be given in table form:

x          1     2     3     4     5     6
P(X = x)   1/6   1/6   1/6   1/6   1/6   1/6

Definition: Conditions for a Discrete Random Variable


For a random variable X that
1. assumes only countable (finite or infinitely many) values,
2. with a probability distribution such that $\sum_{\text{all } x} P(X = x) = 1$,

we say that X is a discrete random variable.

The notation P(X ≤ x)

Definition: If X is a discrete random variable taking values $x_1, x_2, \dots, x_n$, then

$$P(X \le x) = P(X = x_1) + P(X = x_2) + \dots = \sum_{\text{all } x_i \le x} P(X = x_i), \qquad x_i \in \{x_1, x_2, \dots, x_n\}.$$

EXPECTATION OF A DISCRETE RANDOM VARIABLE, E(X)


The expectation of a random variable X is denoted as E(X). It is referred to as the mean, long-term average or expected value of X.

$$E(X) = \sum_{\text{all } x} x \, P(X = x)$$

We commonly use the symbol μ, where μ = E(X).
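
As an illustration, the die distribution above gives E(X) = 3.5; a brief Python sketch:

```python
from fractions import Fraction

dist = {x: Fraction(1, 6) for x in range(1, 7)}  # fair-die distribution

# E(X) = sum over all x of x * P(X = x)
mu = sum(x * p for x, p in dist.items())
print(mu)  # Fraction(7, 2), i.e. E(X) = 3.5
```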


Expectation of g(X), i.e. E(g(X))
Definition: If g(X) is any function of the discrete random variable X, then

$$E[g(X)] = \sum_{\text{all } x} g(x) \, P(X = x)$$

For example:

$$E(10X) = \sum_{\text{all } x} 10x \, P(X = x)$$
$$E(X^2) = \sum_{\text{all } x} x^2 \, P(X = x)$$
$$E(X + 4) = \sum_{\text{all } x} (x + 4) \, P(X = x)$$

Important results:
Given that a and b are constants,

Result 1: E(a) = a

Result 2: E(aX) = a E(X)

Result 3: E(aX + b) = a E(X) + b

Result 4: E(f₁(X) ± f₂(X)) = E(f₁(X)) ± E(f₂(X)), where f₁ and f₂ are functions of X

Note: In general, E(g(X)) ≠ g(E(X)), e.g. E(X²) ≠ [E(X)]².
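
The following Python sketch (illustrative, using the fair-die distribution) checks Result 2 and the note that E(X²) ≠ [E(X)]²:

```python
from fractions import Fraction

dist = {x: Fraction(1, 6) for x in range(1, 7)}  # fair-die distribution

def expectation(dist, g=lambda x: x):
    """E(g(X)) = sum over all x of g(x) * P(X = x)."""
    return sum(g(x) * p for x, p in dist.items())

e_x = expectation(dist)                   # E(X)   = 7/2
e_x2 = expectation(dist, lambda x: x**2)  # E(X^2) = 91/6

print(expectation(dist, lambda x: 10 * x) == 10 * e_x)  # True:  E(aX) = a E(X)
print(e_x2 == e_x ** 2)                                 # False: E(X^2) != [E(X)]^2
```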

VARIANCE OF A DISCRETE RANDOM VARIABLE, VAR(X)

The population variance σ², or Var(X) if X is the random variable, is defined as the average of the squared distance of x from the population mean μ. Since X is a random variable, the squared distance (X − μ)² is also a random variable, so we have

$$\operatorname{Var}(X) = E\big[(X - \mu)^2\big] = \sum_{\text{all } x} (x - \mu)^2 \, P(X = x)$$

A preferred form: Var(X) = E(X²) − [E(X)]²

A small value for the variance indicates that most of the values that X can take are clustered about the mean. On the other hand, a larger value for the variance indicates that the values that X can assume are spread over a wider range about the mean.
Note: the standard deviation is the positive square root of the variance and is denoted by σ.
Important results:
Given that a and b are constants,
Result 1: Var(a) = 0
Result 2: Var(aX) = a² Var(X)
Result 3: Var(aX + b) = a² Var(X)
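
A Python sketch (illustrative) computing Var(X) for the die by both the definition and the preferred form, and checking Result 3 with arbitrarily chosen a = 2, b = 5:

```python
from fractions import Fraction

dist = {x: Fraction(1, 6) for x in range(1, 7)}  # fair-die distribution
mu = sum(x * p for x, p in dist.items())         # E(X) = 7/2

# Definition: Var(X) = E[(X - mu)^2]
var_def = sum((x - mu) ** 2 * p for x, p in dist.items())

# Preferred form: Var(X) = E(X^2) - [E(X)]^2
var_alt = sum(x ** 2 * p for x, p in dist.items()) - mu ** 2

print(var_def, var_alt)  # both Fraction(35, 12)

# Result 3: Var(aX + b) = a^2 Var(X); the shift b does not affect the spread.
a, b = 2, 5
shifted = {a * x + b: p for x, p in dist.items()}  # distribution of aX + b
mu_s = sum(v * p for v, p in shifted.items())
var_s = sum(v ** 2 * p for v, p in shifted.items()) - mu_s ** 2
print(var_s == a ** 2 * var_def)  # True
```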
EXPECTATION AND VARIANCE OF MORE THAN 1 RANDOM VARIABLE

If X and Y are two random variables, then

E(aX ± bY) = a E(X) ± b E(Y)

If X and Y are two independent random variables, then

Var(aX ± bY) = a² Var(X) + b² Var(Y)

If X1 , X2 , … , Xn are n independent random variables, we can then use the above results to
extend to:
E(X1 + X2 + ….. + Xn) = E(X1) + E(X2) + ….. + E(Xn)

Var(X1 + X2 +…..+ Xn) = Var(X1) + Var(X2) + ….. + Var(Xn)

Note: Var(2X) ≠ Var(X₁) + Var(X₂); in fact Var(2X) = 4 Var(X).
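
The die example makes this note concrete: for two independent dice X₁, X₂, Var(X₁ + X₂) = 2 Var(X), whereas Var(2X) = 4 Var(X). A Python sketch (illustrative only):

```python
from fractions import Fraction

dist = {x: Fraction(1, 6) for x in range(1, 7)}  # fair-die distribution

def variance(d):
    mu = sum(x * p for x, p in d.items())
    return sum(x ** 2 * p for x, p in d.items()) - mu ** 2

# Distribution of X1 + X2 for two independent dice.
sum_dist = {}
for x1, p1 in dist.items():
    for x2, p2 in dist.items():
        sum_dist[x1 + x2] = sum_dist.get(x1 + x2, 0) + p1 * p2

# Distribution of 2X: one die, its value doubled -- a different random variable.
double_dist = {2 * x: p for x, p in dist.items()}

print(variance(sum_dist))     # 35/6 = Var(X1) + Var(X2)
print(variance(double_dist))  # 35/3 = 4 Var(X), not Var(X1) + Var(X2)
```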

BINOMIAL DISTRIBUTION
The conditions for a Binomial model are as follows:
• a fixed, finite number of trials, n, is carried out;
• the trials are independent;
• the outcome of each trial is classified as either a ‘success’ or a ‘failure’;
• the probability of success, p, is the same for each trial.
If these conditions are satisfied, we define the discrete random variable X as
X: the number of trials, out of n trials, that are successful.

X is said to follow a binomial distribution, written as X ~ B (n, p).


(Note: We need both the number of trials n and the probability of success p to define the
distribution completely. They are also known as the parameters of the binomial distribution.)

In general, if X ~ B(n, p),


$$P(X = x) = \binom{n}{x} p^x (1 - p)^{n - x}, \quad \text{where } \binom{n}{x} = \frac{n!}{x!\,(n - x)!}, \quad \text{for } x = 0, 1, 2, \dots, n.$$

Using a GC:
To calculate P(X = r):
binompdf(n, p, r)

To calculate P(X ≤ r):
binomcdf(n, p, r)
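
For readers working in Python rather than on a GC, scipy.stats.binom plays a similar role (an analogue, not the GC syntax itself; note that the value r is passed first):

```python
from scipy.stats import binom

n, p, r = 10, 0.3, 4
print(binom.pmf(r, n, p))  # P(X = r),  analogous to binompdf(n, p, r)
print(binom.cdf(r, n, p))  # P(X <= r), analogous to binomcdf(n, p, r)
```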
Expectation and Variance of a Binomial Distribution

For X ~ B(n, p), E(X) = np and Var(X) = np(1 − p).


(They can be found in MF 26)
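
These results can also be checked numerically by summing over the full distribution; a Python sketch with arbitrarily chosen n = 20, p = 0.3:

```python
from math import comb

n, p = 20, 0.3
pmf = [comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)]

mean = sum(x * q for x, q in enumerate(pmf))
var = sum(x ** 2 * q for x, q in enumerate(pmf)) - mean ** 2

print(mean, n * p)           # both 6.0 (up to floating-point error)
print(var, n * p * (1 - p))  # both 4.2 (up to floating-point error)
```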

Graphs of the Probability Distribution of a Binomial Random Variable


Given that X  B( n, p ) , the graphs of the probability distribution of X for various values of n
and p are shown below.

[Figure: six bar graphs of P(X = x) against x. Top row: n = 5 with p = 0.2, 0.5 and 0.9. Bottom row: n = 25 with p = 0.2, 0.5 and 0.9.]

Mode(s) for the Binomial Distribution


The mode is the value of X that is most likely to occur (i.e. most probable).
To find the mode of the binomial distribution, we can calculate all the binomial probabilities
and find the value of X with the highest probability.
The mode typically lies close to the mean, np.
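
A Python sketch of this brute-force approach, using the n = 25, p = 0.2 case from the graphs above (illustrative only):

```python
from math import comb

def binom_pmf(n, p, x):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

n, p = 25, 0.2
# Compute every probability P(X = 0), ..., P(X = n) and take the largest.
mode = max(range(n + 1), key=lambda x: binom_pmf(n, p, x))
print(mode, n * p)  # mode = 5, close to the mean np = 5
```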
