
Module-1

Continuous and Discrete


Multiple Random Variables

1 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Synopsis
 Introduction to Random Variables
 Vector Random Variables
 Joint Distribution and its Properties
 Joint Density and its Properties
 Joint Probability Mass Function
 Conditional Distribution and Density
 Statistical Independence
 Distribution and Density of Function of Random Variables
 Central Limit Theorem.

2 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Introduction
 In the previous discussion, we studied the concept of probability
space that completely describes the outcome of a random
experiment.

 We also observed that in many of the examples of random


experiments, the result was not a numerical quantity, but in
descriptive form.

 For example, in the experiment of tossing a coin, the outcome is head or

tail. Similarly, in classifying a manufactured item we used categories
like 'defective' and 'non-defective'.

 In some experiments, the description of the outcomes is sufficient.

However, in many cases it is required to record the outcome of the
experiment as a number.

3 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Introduction
 Consider a random experiment E with sample space S.

 Let s be any outcome of the experiment.

 Then we can write s ∈ S.

 We are interested in assigning a real number for every s.

 Let us consider a function X that assigns real number X(s) to every s.

 Then X(s) is called a random variable.

 Thus, the random variable X(s) maps every point in the sample space
to a real value.

4 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Introduction
 For example, in the experiment of tossing a coin, we can assign a
number to each outcome.

 We can assign zero to tail X(tails) = 0 and one to head X(heads) = 1

 The rule that assigns such a real number to each outcome of an

experiment is known as a random variable

Sample space of coin-tossing experiment mapping to real value

5 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Introduction
 Consider another experiment in which we toss two coins. Then the
sample space S = {HH,TH, HT,TT}

 Now define a random variable X to denote the number of heads


obtained in the two tosses. Then X (HH) = 2; X(HT) = X(TH) = 1
and X(TT) = 0.

 Now the range space RX = {x: x = 0, 1, 2}.

6 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Assigning Probability to a Random Variable

 We know that for each outcome in the sample space S; a probability


may be associated.

 When we assign a real number for each outcome, the probability


associated with the outcome will be assigned to the real number.

 That is, we assign probability to events associated with RX in terms


of probabilities defined over S.

7 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Assigning Probability to a Random Variable
 Consider the two-coin tossing experiment discussed earlier.
 The sample space S = {TT, HT, TH, HH}. Since all outcomes are equally likely,
P(TT) = P(HT) = P(TH) = P(HH) = 1/4

 Let X be a random variable that gives the number of heads in each outcome.

 Then X(TT) = 0; X(HT) = X(TH) = 1; X(HH) = 2. The range RX = {0, 1, 2}.

 Since P(HT) = P(TH) = 1/4, P(HT, TH) = 1/4 + 1/4 = 1/2

 If we consider the event (X = 0) then we have P(X = 0) = P(TT) = 1/4

 The event (X = 1) is equivalent to the event (TH, HT). Therefore, P(X = 1) = 1/2

 Similarly, P(X = 2) = P(HH) = 1/4
8 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Assigning Probability to a Random Variable
Example

 The sample space for an experiment is S = {0, 2, 4, 6}. List all the
possible values of the following random variable

 Solution:

 Given S = {0, 2, 4, 6}.

9 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Events Defined by Random Variable
 Let X be a random variable and x be a fixed real value.

 Let us consider the event in which the random variable X takes a value equal to x, denoted by (X = x).

 Then the probability associated with that event is denoted by P(X = x).

 Similarly, P(X ≤ x) is the probability that X takes a value less than or equal to x.

 P(X > x) is the probability that X takes a value greater than x. This is equal to 1 − P(X ≤ x).

 P(x1 < X < x2) is the probability that X takes a value between x1 and x2.
10 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Conditions for a Function to be a Random
Variable
 The random variable X is a function that maps the sample points in
the sample space to real values on the real axis.

 For a function to be a random variable, it has to satisfy two


conditions:

1. The set (X ≤ x) shall be an event for any real number x. The probability of this event, P(X ≤ x), is equal to the sum of the probabilities of all the elementary events corresponding to (X ≤ x).

2. The probability of the events (X = ∞) and (X = −∞) must be zero. That is, P(X = ∞) = P(X = −∞) = 0.

11 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Classification of Random Variables

 A random variable assigns a real value to each outcome in the


sample space of a random experiment.

1. If the range of the random variable takes continuous range of


values then the random variable is a continuous random
variable.

2. Whenever the random variable takes on only discrete values then


the random variable is a discrete random variable.

3. A mixed random variable is one for which some of its values are
discrete and some are continuous.

12 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Classification of Random Variables

Examples of discrete random variables:
 number of scratches on a surface
 number of defective parts among 1000 tested
 number of transmitted bits received in error

Examples of continuous random variables:
 electrical current
 length
 time
 temperature
 weight
13 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Discrete Random Variable
 Example 1: Machine Breakdowns
 Sample space: S = {electrical, mechanical, misuse}
 Each of these failures may be associated with a repair cost

 State space: {50, 200, 350}

 Cost is a random variable taking the values 50, 200 and 350

14 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Probability Mass Function
 Probability Mass Function (p.m.f.) [represented as 𝒇(𝒙)]

 A probability mass function (pmf) provides a simple description of the probabilities associated with a discrete random variable.
 Unlike the density of a continuous random variable, the pmf gives the probability that X takes a particular value directly.
 A set of probability values p_i is assigned to each of the values x_i taken by the discrete random variable.

 0 ≤ p_i ≤ 1 and Σ_i p_i = 1

 Probability: P(X = x_i) = p_i

15 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Probability Mass Function
Example 1 : Machine Breakdowns

 P(cost = 50) = 0.3, P(cost = 200) = 0.2, P(cost = 350) = 0.5

 0.3 + 0.2 + 0.5 = 1

x_i : 50, 200, 350
p_i : 0.3, 0.2, 0.5

[Bar chart of the pmf f(x): f(50) = 0.3, f(200) = 0.2, f(350) = 0.5, cost in $]


16 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
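As a quick numerical companion to this example, the short Python sketch below builds the repair-cost pmf and its cumulative distribution. It only restates the values from the slide; the variable names and the plain-Python style are illustrative, not from the slides.

```python
# Sketch: pmf and CDF of the repair-cost random variable from the
# machine-breakdown example (values and probabilities taken from the slide).
pmf = {50: 0.3, 200: 0.2, 350: 0.5}

# A valid pmf: every p_i lies in [0, 1] and the probabilities sum to 1.
assert all(0.0 <= p <= 1.0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# CDF: F(x) = P(cost <= x), obtained by accumulating the pmf.
def cdf(x):
    return sum(p for xi, p in pmf.items() if xi <= x)

for x in (49, 50, 199, 200, 350, 400):
    print(f"F({x}) = {cdf(x):.2f}")
# Expected: F(49)=0.00, F(50)=0.30, F(199)=0.30, F(200)=0.50, F(350)=1.00, F(400)=1.00
```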
Cumulative Distribution Function

 Let X be a random variable and x be a number. Let the event be X


taking a value less than or equal to x.

 Then the Cumulative Distribution Function (CDF) of X is denoted


by
F_X(x) = P(X ≤ x), −∞ < x < ∞

F_X(x) = Σ_{y: y ≤ x} P(X = y)

17 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Cumulative Distribution Function
 Example 1 : Machine Breakdowns
x_i : 50, 200, 350
p_i : 0.3, 0.2, 0.5

x < 50          ⇒ F(x) = P(cost ≤ x) = 0
50 ≤ x < 200    ⇒ F(x) = P(cost ≤ x) = 0.3
200 ≤ x < 350   ⇒ F(x) = P(cost ≤ x) = 0.3 + 0.2 = 0.5
350 ≤ x < ∞     ⇒ F(x) = P(cost ≤ x) = 0.3 + 0.2 + 0.5 = 1.0

[Staircase plot of F(x): jumps at x = 50, 200, 350 up to the levels 0.3, 0.5, 1.0]


18 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Cumulative Distribution Function
Properties:

1. 0 ≤ F_X(x) ≤ 1
 Since the CDF is a probability, it must take on values between 0 and 1.

2. F_X(−∞) = 0 and F_X(∞) = 1

3. F_X(x) is a monotonic non-decreasing function

4. If we consider two values x1 and x2 such that x1 < x2, then the event (X ≤ x1) is a subset of (X ≤ x2)

 Hence, F_X(x1) ≤ F_X(x2)

19 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Cumulative Distribution Function
 Consider the following probability distribution function of a random
variable X shown

20 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Cumulative Distribution Function
5. P(x1 < X ≤ x2) = F_X(x2) − F_X(x1)

6. P(X > x) = 1 − F_X(x)
21 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Cumulative Distribution Function
 For a discrete random variable X, the probability mass function (pmf) P_X(x) is defined as follows:

 P_X(x) = P(X = x)

 The CDF of X can be expressed in terms of P_X(x) as follows:

F_X(x) = Σ_{i=1}^{N} P_X(x_i) u(x − x_i)

 where u(x − x_i) is a step function equal to 1 for x ≥ x_i and 0 otherwise

22 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Continuous Random Variables

Example of Continuous Random Variables

 Suppose that the random variable 𝑋 is the diameter of a randomly


chosen cylinder manufactured by the company. Since this random
variable can take any value between 49.5 and 50.5, it is a
continuous random variable.

23 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Probability Density Function
 A probability density function (pdf) provides a simple description of the probabilities associated with a continuous random variable.

 It is defined as the probability that the random variable X lies in an infinitesimal interval about the point X = x, normalized by the length of the interval.

 Probabilistic properties of a continuous random variable:

f(x) ≥ 0

∫_{state space} f(x) dx = 1

24 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Probability Density Function
 Example
 Suppose that the diameter of a metal cylinder has the p.d.f.

f(x) = 1.5 − 6(x − 50.0)²  for 49.5 ≤ x ≤ 50.5
f(x) = 0, elsewhere

[Plot of f(x) over 49.5 ≤ x ≤ 50.5]
25 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Probability Density Function
 This is a valid p.d.f.

∫_{49.5}^{50.5} (1.5 − 6(x − 50.0)²) dx = [1.5x − 2(x − 50.0)³] evaluated from 49.5 to 50.5

= [1.5 × 50.5 − 2(50.5 − 50.0)³] − [1.5 × 49.5 − 2(49.5 − 50.0)³]

= 75.5 − 74.5 = 1.0

26 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
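A minimal numerical check of this integral (and of the probability computed on the next slide) is sketched below. SciPy is assumed to be available; the local function name f is just a label.

```python
# Sketch: verify that f(x) = 1.5 - 6(x - 50.0)^2 on [49.5, 50.5] is a valid pdf
# and reproduce P(49.8 < X < 50.1) from the following slide.
from scipy.integrate import quad

def f(x):
    return 1.5 - 6.0 * (x - 50.0) ** 2   # pdf on 49.5 <= x <= 50.5, 0 elsewhere

total, _ = quad(f, 49.5, 50.5)
prob, _ = quad(f, 49.8, 50.1)
print(round(total, 4))   # ~1.0
print(round(prob, 4))    # ~0.432, matching the hand calculation
```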


Probability Density Function
 The probability that a metal cylinder has a diameter between 49.8
and 50.1 mm can be calculated to be

∫_{49.8}^{50.1} (1.5 − 6(x − 50.0)²) dx = [1.5x − 2(x − 50.0)³] evaluated from 49.8 to 50.1

= [1.5 × 50.1 − 2(50.1 − 50.0)³] − [1.5 × 49.8 − 2(49.8 − 50.0)³]

= 75.148 − 74.716 = 0.432

[Plot of f(x) with the area between x = 49.8 and x = 50.1 shaded]


27 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Probability Density Function
Properties

1. f_X(x) ≥ 0
 From the definition of the pdf, it is apparent that the pdf is a non-negative function.

2. ∫_{−∞}^{∞} f_X(x) dx = 1

 The area under the density function is unity. This property, along with property 1, can be used to check whether a given function is a valid density function or not.

3. F_X(x) = ∫_{−∞}^{x} f_X(u) du

 The above equation states that the distribution function F_X(x) is equal to the integral of the density function up to the value x.
28 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Probability Density Function

29 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Probability Density Function
4. ∫_{a}^{b} f_X(x) dx = F_X(b) − F_X(a) = P(a < X ≤ b)

 This property states that the probability that the value of X lies between a and b is the area under the density curve from a to b.

30 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Cumulative Distribution Function
 F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(y) dy

 f(x) = dF(x)/dx

 That is, the probability density function (pdf) of a random variable is the derivative of its CDF.

 Therefore, the CDF of a random variable can be obtained by integrating its pdf.

 P(a < X ≤ b) = P(X ≤ b) − P(X ≤ a) = F(b) − F(a)

 For a continuous random variable, P(a ≤ X ≤ b) = P(a < X ≤ b)

31 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Cumulative Distribution Function
 Example: Given the pdf f(x) = 1.5 − 6(x − 50.0)² for 49.5 ≤ x ≤ 50.5 and f(x) = 0 elsewhere,

F(x) = P(X ≤ x) = ∫_{49.5}^{x} (1.5 − 6(y − 50.0)²) dy

= [1.5y − 2(y − 50.0)³] evaluated from 49.5 to x

= [1.5x − 2(x − 50.0)³] − [1.5 × 49.5 − 2(49.5 − 50.0)³]

= 1.5x − 2(x − 50.0)³ − 74.5

P(49.7 ≤ X ≤ 50.0) = F(50.0) − F(49.7)
= (1.5 × 50.0 − 2(50.0 − 50.0)³ − 74.5) − (1.5 × 49.7 − 2(49.7 − 50.0)³ − 74.5)
= 0.5 − 0.104 = 0.396
32 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Cumulative Distribution Function

P(49.7  X  50.0)  0.396


1

P( X  50.0)  0.5
F ( x)

P( X  49.7)  0.104

49.5 49.7 50.0 50.5 x

33 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


The Expectation of a Random Variable
 Expectation of a discrete random variable with p.m.f. P(X = x_i) = p_i:

E(X) = Σ_i p_i x_i

 Expectation of a continuous random variable with p.d.f. f(x):

E(X) = ∫_{state space} x f(x) dx

 The expected value of a random variable is also called the MEAN of the random variable

34 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Expectations of Discrete Random Variables

 Example 1 (discrete random variable)


 The expected repair cost is

E(cost) = ($50 × 0.3) + ($200 × 0.2) + ($350 × 0.5) = $230

35 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Expectations of Continuous Random
Variables
 Example 2 (continuous random variable)
 The expected diameter of a metal cylinder is

E(X) = ∫_{49.5}^{50.5} x (1.5 − 6(x − 50.0)²) dx

 Change of variable: y = x − 50

E(X) = ∫_{−0.5}^{0.5} (y + 50)(1.5 − 6y²) dy

= ∫_{−0.5}^{0.5} (−6y³ − 300y² + 1.5y + 75) dy

= [−(3/2)y⁴ − 100y³ + 0.75y² + 75y] evaluated from −0.5 to 0.5

= [25.09375] − [−24.90625] = 50.0


36 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
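The two expectations above can be checked with a short sketch (SciPy assumed for the integral; names are illustrative): the discrete sum gives 230 and the integral of x·f(x) gives 50.0.

```python
# Sketch: expectations for the two running examples
# (discrete repair cost, continuous cylinder diameter).
from scipy.integrate import quad

# Discrete: E[X] = sum_i p_i * x_i
pmf = {50: 0.3, 200: 0.2, 350: 0.5}
e_cost = sum(p * x for x, p in pmf.items())
print(e_cost)            # 230.0

# Continuous: E[X] = integral of x * f(x) over the state space
f = lambda x: 1.5 - 6.0 * (x - 50.0) ** 2
e_diam, _ = quad(lambda x: x * f(x), 49.5, 50.5)
print(round(e_diam, 4))  # ~50.0
```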
Expectations of Continuous Random
Variables
 Symmetric Random Variables
 If X has a p.d.f. f(x) that is symmetric about a point μ so that
f(μ + x) = f(μ − x)
 then E(X) = μ

 So the expectation of the random variable is equal to the point of symmetry

[Plot of a symmetric pdf f(x) with E(X) = μ marked at the centre]
37 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Medians of Random Variables
 Median
 Information about the "middle" value of the random variable:
F(x) = 0.5

 Symmetric Random Variable
 If a continuous random variable is symmetric about a point μ, then both the median and the expectation of the random variable are equal to μ

 For the metal-cylinder example:
F(x) = 1.5x − 2(x − 50.0)³ − 74.5 = 0.5  ⇒  x = 50.0
38 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Moments
Moments of a random variable X are of two types:

 (i) Moments about origin

 (ii) Central moments

Moments about Origin

 Let X is a random variable with pdf 𝑓𝑋 (𝑥).

 Then the nth order moment about the origin is given by

m_n = E[Xⁿ] = ∫_{−∞}^{∞} xⁿ f_X(x) dx

39 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Moments
 If n = 0, we get the area under the function f_X(x), which is equal to 1.

 While n = 1 gives E[X], the mean.

 The second moment about the origin is known as the mean square of X (mean square value), given by

E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx

 If X is a discrete random variable, then the nth order moment about the origin is given by

m_n = E[Xⁿ] = Σ_i x_iⁿ P(X = x_i)

40 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Moments
Central Moments

 In central moments, the mean is subtracted from the variable before


the moment is taken in order to remove bias in the higher moments
due to the mean.

 For a random variable X with pdf f_X(x), the nth order central moment is given by

μ_n = E[(X − E[X])ⁿ] = ∫_{−∞}^{∞} (x − E[X])ⁿ f_X(x) dx

41 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Moments
 For a discrete random variable the nth order central moment is given by

μ_n = Σ_i (x_i − E[X])ⁿ P(X = x_i)

 For a random variable the first-order central moment is zero. That is, E[X − E[X]] = 0.

 Therefore, the lowest central moment of any real interest is the second central moment

42 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


The variance of a Random Variable
Variance (σ²)

 The central moment for the case n = 2 is very important and is known as the variance, denoted by σ²

 The variance is a positive quantity that measures the spread of the distribution of the random variable about its mean value

Var(X) = E((X − E(X))²)
       = E(X²) − (E(X))²

 Standard Deviation
 The positive square root of the variance
 Denoted by σ

43 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


The variance of a Random Variable
 [Figure: three distributions with identical mean values but different variances]

 Example 1:

Var(X) = E((X − E(X))²) = Σ_i p_i (x_i − E(X))²

= 0.3(50 − 230)² + 0.2(200 − 230)² + 0.5(350 − 230)²

= 17,100  ⇒  σ² = 17,100,  σ = √17,100 = 130.77
44 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
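A small sketch confirming this variance both from its definition and from E(X²) − (E(X))², using only the slide's pmf (variable names are illustrative):

```python
# Sketch: variance of the repair cost, computed two equivalent ways.
pmf = {50: 0.3, 200: 0.2, 350: 0.5}
mean = sum(p * x for x, p in pmf.items())                          # 230
var_direct = sum(p * (x - mean) ** 2 for x, p in pmf.items())      # definition
var_moment = sum(p * x ** 2 for x, p in pmf.items()) - mean ** 2   # E[X^2] - (E[X])^2
print(var_direct, var_moment)        # 17100.0 17100.0
print(round(var_direct ** 0.5, 2))   # standard deviation ~130.77
```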
Types of Discrete Distributions
 These distributions model the probabilities of random variables that
can have discrete values as outcomes.

1. Bernoulli Distribution

2. Binomial Distribution

3. Poisson Distribution

4. Hypergeometric Distribution

5. Negative Binomial Distribution

6. Geometric Distribution

7. Multinomial Distribution and so on…

45 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Bernoulli Distribution
 This distribution is generated when we perform an experiment once

 It has only two possible outcomes – success and failure.

 The trials of this type are called Bernoulli trials, which form the basis
for many distributions discussed below.

 Let p be the probability of success and 1 – p the probability of failure. The pmf is

p_X(1) = p,  p_X(0) = 1 − p

 The pmf is also written compactly as

p_X(x) = pˣ (1 − p)^(1−x),  x ∈ {0, 1}

46 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Bernoulli Distribution
 The cumulative distribution function of a Bernoulli random variable
X is given by

F_X(x) = 0 if x < 0;  1 − p if 0 ≤ x < 1;  1 if x ≥ 1

 Mean of Bernoulli Distribution: E[X] = p

 Variance of Bernoulli Distribution: Var[X] = p(1 − p)

47 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Bernoulli Distribution

48 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Binomial Distribution
 This is generated for random variables with only two possible
outcomes.

 Let p denote the probability that an event is a success, which implies 1 – p is the probability that the event is a failure.

 Performing the experiment repeatedly and plotting the probability each time gives us the Binomial distribution.

 The distribution of a binomial random variable with parameters (n, p) is given by

P(X = k) = C(n, k) pᵏ (1 − p)^(n−k),  k = 0, 1, …, n

49 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Binomial Distribution
 The binomial density and distribution functions are also given by

 Mean of Binomial Distribution: E[X] = np

 Variance of Binomial Distribution: Var[X] = np(1 − p)

50 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Binomial Distribution

51 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Poisson Distribution
 This distribution describes the events that occur in a fixed interval of
time or space.

Example

 Consider the case of the number of calls received by a customer


care center per hour.

 We can estimate the average number of calls per hour but we


cannot determine the exact number and the exact time at which
there is a call.

 Each occurrence of an event is independent of the other


occurrences.

52 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Poisson Distribution
 X is a Poisson random variable with parameter λ, where λ > 0, if its distribution is of the form

P(X = k) = e^(−λ) λᵏ / k!,  k = 0, 1, 2, …

 The Poisson random variable density and distribution functions

53 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Poisson Distribution
 Mean of Poisson Distribution: E[X] = λ

 Variance of Poisson Distribution: Var[X] = λ

 For the Poisson distribution the mean and variance are the same.

54 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
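A short sketch using scipy.stats (an assumption; any statistics library would do) to confirm the stated mean/variance formulas; note in particular that the binomial variance is np(1 − p). The parameter values n = 10, p = 0.3 and λ = 4 are illustrative only.

```python
# Sketch: mean and variance of Binomial(n, p) and Poisson(lambda) using SciPy.
from scipy.stats import binom, poisson

n, p = 10, 0.3
lam = 4.0

b = binom(n, p)
print(b.mean(), b.var())        # 3.0  2.1   -> np and np(1 - p)

ps = poisson(lam)
print(ps.mean(), ps.var())      # 4.0  4.0   -> mean and variance both equal lambda

# pmf values, e.g. P(X = 2) under each model
print(round(b.pmf(2), 4), round(ps.pmf(2), 4))
```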


Types of Continuous Distributions
 These distributions model the probabilities of random variables that
can have continuous values as outcomes.

1. Uniform Distribution

2. Normal/Gaussian Distribution

3. Exponential Distribution

4. Rayleigh Distribution

5. Gamma Distribution

6. Weibull Distribution

7. Chi-square Distribution and so on…

55 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Uniform Distribution
 This distribution plots the random variables whose values have equal
probabilities of occurring.

 A familiar (discrete) analogue is rolling a fair die: all 6 outcomes are equally likely, so the probability is constant.

 The probability density function of a uniform random variable X over the interval (a, b) is

f_X(x) = 1 / (b − a) for a < x < b, and 0 otherwise

 The CDF of a continuous uniform random variable is

F_X(x) = 0 for x < a;  (x − a) / (b − a) for a ≤ x ≤ b;  1 for x > b

56 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Uniform Distribution
 Consider the example where a = 10 and b = 20,

 Mean of Uniform Distribution: E[X] = (a + b) / 2

 Variance of Uniform Distribution: Var[X] = (b − a)² / 12
57 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
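For the slide's example a = 10, b = 20, a quick numerical check (SciPy assumed) reproduces the mean (a + b)/2 = 15 and the variance (b − a)²/12 ≈ 8.33:

```python
# Sketch: mean and variance of a continuous uniform random variable on (a, b),
# checked numerically for a = 10, b = 20.
from scipy.integrate import quad

a, b = 10.0, 20.0
f = lambda x: 1.0 / (b - a)                      # pdf on (a, b)

mean, _ = quad(lambda x: x * f(x), a, b)
var, _ = quad(lambda x: (x - mean) ** 2 * f(x), a, b)
print(mean)                      # 15.0      = (a + b) / 2
print(round(var, 4))             # ~8.3333   = (b - a)^2 / 12
```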
Normal/Gaussian Distribution
 This is the most commonly discussed distribution and most often
found in the real world.

 Many continuous distributions often reach normal distribution given


a large enough sample.

 This has two parameters namely mean and standard deviation.

 The pdf of a normal random variable is given by

f_X(x) = (1 / (σ_x √(2π))) e^(−(x − μ_x)² / (2σ_x²))

 The CDF of X is the integral of this pdf, usually expressed through the standard normal CDF as F_X(x) = Φ((x − μ_x) / σ_x)

58 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Normal/Gaussian Distribution

 Mean of Normal Distribution: 𝑬[𝑿] = 𝝁𝒙

 Variance of Normal Distribution: 𝑽𝒂𝒓[𝑿] = 𝝈𝟐𝒙

 Mean of the standard Gaussian Distribution: E[X] = μ_x = 0

 Variance of the standard Gaussian Distribution: Var[X] = σ_x² = 1

59 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Normal/Gaussian Distribution

60 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Exponential Distribution
 Exponential distribution is a good model for the time between two
consecutive occurrences of independent events.

 In the Poisson distribution, we took the example of calls received by


the customer care center.

 In that example, we considered the average number of calls per hour.

 Now, in this distribution, the time between successive calls is


explained.

 The exponential distribution can be seen as the companion of the Poisson distribution: the Poisson distribution counts events in a fixed interval, while the exponential distribution models the waiting time between consecutive events.

 The events in consideration are independent of each other.

61 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Exponential Distribution
 The pdf of an exponential random variable is given by

f_X(x) = λ e^(−λx) for x ≥ 0, and 0 otherwise

 The CDF of an exponential random variable is given by

F_X(x) = 1 − e^(−λx) for x ≥ 0, and 0 otherwise

62 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Exponential Distribution
 Mean of Exponential Distribution: E[X] = 1/λ

 Variance of Exponential Distribution: Var[X] = 1/λ²

63 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
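These two results can be reproduced by direct integration of the exponential pdf; the sketch below assumes NumPy/SciPy and an illustrative rate λ = 2.

```python
# Sketch: mean and variance of an exponential random variable with rate lambda,
# obtained by integrating the pdf f(x) = lambda * exp(-lambda * x), x >= 0.
import numpy as np
from scipy.integrate import quad

lam = 2.0
f = lambda x: lam * np.exp(-lam * x)

mean, _ = quad(lambda x: x * f(x), 0, np.inf)
ex2, _ = quad(lambda x: x ** 2 * f(x), 0, np.inf)
print(round(mean, 4))            # ~0.5   = 1 / lambda
print(round(ex2 - mean ** 2, 4)) # ~0.25  = 1 / lambda^2
```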


Multiple Random Variables

1 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Introduction
 In the previous chapter we studied the properties of a single random
variable defined on a given sample space.

 But in many random experiments, we may have to deal with two or


more random variables defined on the same sample space.

 Let us consider another example in which we collect the details of


students.

 If we concentrate only on the age of students then we deal with a


single random variable.

 On the other hand, if we collect details like age, weight and height
then we deal with multiple random variables.

2 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Vector Random Variables
 Consider two random variables X and Y on a sample space.

 Let X and Y represent age and height of students respectively.

 If X and Y represent the age and the height of a specific student, then this value (X, Y) can be represented as a random point in the x-y plane.

 If the number of students is n, there will be n such values (x1, y1), (x2, y2), … (xn, yn) that can be represented in the x-y plane.

 This ordered pair of numbers (x, y) is known as a specific value of the vector random variable, where (X, Y) denotes a two-dimensional vector random variable.

3 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Vector Random Variables
 The figure below shows the mapping from the sample space S to the x-y plane.

 The plane of all points (x, y) in the ranges of X and Y is known as the joint sample space, denoted by S_J.

4 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Probability Mass Function
 Consider a two-dimensional random variable (X, Y). Let the possible values
of (X,Y) be from a countable set 𝑆 = 𝑥𝑗 , 𝑦𝑘
 That is, the possible values of (X, Y) may be represented as
𝑥𝑗 , 𝑦𝑘 𝑤ℎ𝑒𝑟𝑒 𝑗 = 1,2, … , 𝑛 𝑎𝑛𝑑 𝑘 = 1,2, … , 𝑚
 For each possible outcome of the two-dimensional random variable, we assign a number p_{X,Y}(x_j, y_k), the joint Probability Mass Function (pmf), given by

p_{X,Y}(x_j, y_k) = P(X = x_j, Y = y_k)

 Thus, the joint pmf gives the probability of occurrence of the pair (x_j, y_k)

 Since the probability of the sample space S is 1, we can write

Σ_{j=1}^{n} Σ_{k=1}^{m} p_{X,Y}(x_j, y_k) = 1

5 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Marginal Probability Mass Function
 We are also interested in finding the probability of events involving
each of the random variables in isolation.

 The pmf of the random variable X alone (the marginal pmf) is given by

p_X(x_j) = Σ_k p_{X,Y}(x_j, y_k)

 Similarly,

p_Y(y_k) = Σ_j p_{X,Y}(x_j, y_k)

6 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Probability Mass Function

Properties of Joint pmf

1. The pmf can neither be negative nor exceed unity, that is

0 ≤ p_{X,Y}(x_j, y_k) ≤ 1

2. Σ_{x_j} Σ_{y_k} p_{X,Y}(x_j, y_k) = 1 (sum over all values)

3. Σ_{x_j ≤ a} Σ_{y_k ≤ b} p_{X,Y}(x_j, y_k) = F_{X,Y}(a, b)

4. If X and Y are independent random variables,

p_{X,Y}(x_j, y_k) = p_X(x_j) p_Y(y_k)

7 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Probability Mass Function
 If X has N possible values x_1, x_2, …, x_N and Y has M possible values y_1, y_2, …, y_M, then

F_{X,Y}(x, y) = Σ_{j=1}^{N} Σ_{k=1}^{M} p_{X,Y}(x_j, y_k) u(x − x_j) u(y − y_k)

 where u(x) is a unit step function, equal to 1 for x ≥ 0 and 0 for x < 0

 The shifted unit step function u(x − x_j) equals 1 for x ≥ x_j and 0 otherwise

8 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


JOINT PROBABILITY MATRIX

 Consider two discrete random variables X and Y with joint


probability distribution P(X,Y).

 Let X take the values x_i, where i = 1, 2, …, m

 Let Y take the values y_j, where j = 1, 2, …, n

 Then the joint probability distribution can be represented by an m × n matrix, with X indexing the rows and Y indexing the columns.

9 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


JOINT PROBABILITY MATRIX

 From the joint distribution matrix, we can find the distribution of


individual random variables.
 The distribution of X can be obtained by summing each row.
 The distribution of Y can be obtained by summing each column.

10 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


JOINT PROBABILITY MATRIX
 From the joint distribution matrix, we can observe.

11 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Examples

 The joint probability distribution of two random variables X and Y is


represented by a joint probability matrix given by

 Find the marginal distribution of X and Y. Find P(X ≤ 2, Y ≤ 4).

12 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Examples
Given the joint distribution,
 P(Y = 1) = 0.1 + 0 + 0.2 + 0 = 0.3 (sum of first column)
 P(Y = 2) = 0 + 0.1 + 0 + 0 = 0.1 (sum of second column)
 P(Y = 3) = 0.2 + 0 + 0.3 + 0 = 0.5 (sum of third column)
 P(Y = 4) = 0 + 0 + 0 + 0.1 = 0.1 (sum of fourth column)
 P(X = 1) = 0.1 + 0 + 0.2 + 0 = 0.3 (sum of first row)
 P(X = 2) = 0 + 0.1 + 0 + 0 = 0.1 (sum of second row)
 P(X = 3) = 0.2 + 0 + 0.3 + 0 = 0.5 (sum of third row)
 P(X = 4) = 0 + 0 + 0 + 0.1 = 0.1 (sum of fourth row)
 P(X ≤ 2, Y ≤ 4) = 0.1 + 0 + 0.2 + 0 + 0 + 0.1 + 0 + 0 = 0.4 (sum of the elements in rows 1 and 2)

13 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
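The same marginals drop out of row/column sums in code. The matrix entries below are a hedged reconstruction inferred from the sums worked out above (the matrix itself is shown only as an image on the slide); NumPy is assumed.

```python
# Sketch: joint probability matrix P(X = i, Y = j), rows indexed by X, columns by Y,
# with entries inferred from the marginal sums in the solution above.
import numpy as np

P = np.array([[0.1, 0.0, 0.2, 0.0],   # X = 1
              [0.0, 0.1, 0.0, 0.0],   # X = 2
              [0.2, 0.0, 0.3, 0.0],   # X = 3
              [0.0, 0.0, 0.0, 0.1]])  # X = 4

print(P.sum())            # 1.0  (valid joint pmf)
print(P.sum(axis=1))      # marginal of X: [0.3 0.1 0.5 0.1]
print(P.sum(axis=0))      # marginal of Y: [0.3 0.1 0.5 0.1]
print(P[:2, :].sum())     # P(X <= 2, Y <= 4) = 0.4
```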


Tutorial Examples

 The joint pmf of (X,Y) is given by

 where k is constant.

 (a) Find the value of k.

 (b) Find the marginal pmf of X and Y.

14 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Examples
 Solution:

15 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Examples
 The marginal pmf of X:

 The marginal pmf of Y:

16 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Distribution(CDF) and its Properties
 Consider two random variables X and Y.

 Let us define two events A and B as A = X ≤ x and B = Y ≤ y

 We already know the probability of these two events can be defined


as 𝐹𝑋 𝑥 = 𝑃(𝑋 ≤ 𝑥) and 𝐹𝑌 𝑦 = 𝑃(𝑌 ≤ 𝑦)

 Now let us define a joint event {X ≤ x, Y ≤ y}

 The probability of this joint event, as a function of the numbers x and y, is the joint probability distribution function F_XY(x, y), given by

F_XY(x, y) = P(X ≤ x, Y ≤ y)

17 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Distribution(CDF) and its Properties
1. Since 𝐅𝐗𝐘 𝐱, 𝐲 is a probability, the CDF of two random variables is
also bounded between 0 and 1.
 That is, 𝟎 ≤ 𝐅𝐗𝐘 𝐱, 𝐲 ≤ 𝟏 for −∞ < x < ∞, −∞ < y < ∞

2. 𝐅𝐗𝐘 𝐱, 𝐲 is a non-decreasing function of both x and y.

3. 𝐅𝐗𝐘 −∞, ∞ = 𝐅𝐗𝐘 −∞, 𝐲 = 𝐅𝐗𝐘 𝐱, −∞ = 𝟎 & 𝐅𝐗𝐘 ∞, ∞ = 𝟏


 The joint CDF of two random variables X and Y is

18 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Distribution(CDF) and its Properties
4. If x1 ≤ x2 and y1 ≤ y2, then
 F_XY(x1, y1) ≤ F_XY(x1, y2) ≤ F_XY(x2, y2)
 Similarly, F_XY(x1, y1) ≤ F_XY(x2, y1) ≤ F_XY(x2, y2)
5. P(x1 < X ≤ x2, Y ≤ y) = F_XY(x2, y) − F_XY(x1, y)
6. P(X ≤ x, y1 < Y ≤ y2) = F_XY(x, y2) − F_XY(x, y1)
7. P(x1 < X ≤ x2, y1 < Y ≤ y2) = F_XY(x2, y2) + F_XY(x1, y1) − F_XY(x2, y1) − F_XY(x1, y2)
8. lim_{x→a+} F_XY(x, y) = F_XY(a, y)
9. lim_{y→b+} F_XY(x, y) = F_XY(x, b)

19 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Distribution(CDF) and its Properties

 The marginal CDFs are obtained as


FX x = FXY x, ∞
FY y = FXY ∞, 𝑦

20 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 The joint distribution function for two random variables X and Y is

 Assuming a = 0.6,

 Find 𝑃(𝑋 ≤ 1, 𝑌 ≤ 1),

 P(1 < X ≤ 2),

 𝑃(−1 < 𝑋 ≤ 2,1 < 𝑌 ≤ 2)

21 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
Solution:The joint distribution function is shown below

22 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

= (1 − e^(−2(0.6))) − (1 − e^(−0.6)) = 0.2476

23 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Density Function and its Properties
 The joint pdf f_{X,Y}(x, y) can be obtained from the joint CDF F_{X,Y}(x, y) by taking a partial derivative with respect to each variable.

 The joint pdf of two random variables X and Y is defined as the second (mixed) partial derivative of the joint distribution function, whenever it exists:

f_XY(x, y) = ∂²F_XY(x, y) / ∂x ∂y

 The joint CDF can be obtained in terms of the joint pdf using the equation

F_XY(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f_XY(u, v) du dv
1 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Joint Density Function and its Properties
1. For all x and y, f_XY(x, y) ≥ 0

2. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_XY(x, y) dx dy = 1

3. F_XY(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f_XY(u, v) du dv - JCDF

4. F_X(x) = ∫_{−∞}^{∞} ∫_{−∞}^{x} f_XY(u, v) du dv - MCDF of X

5. F_Y(y) = ∫_{−∞}^{y} ∫_{−∞}^{∞} f_XY(u, v) du dv - MCDF of Y

6. f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy - Mpdf of X

7. f_Y(y) = ∫_{−∞}^{∞} f_XY(x, y) dx - Mpdf of Y

2 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Joint Density Function and its Properties

8. f_XY(x, y) is continuous for all except possibly finite values of x and y.

9. P(x1 < X ≤ x2, y1 < Y ≤ y2) = ∫_{y1}^{y2} ∫_{x1}^{x2} f_XY(x, y) dx dy

10. If X and Y are statistically independent random variables,

f_XY(x, y) = f_X(x) f_Y(y)

3 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 If the probability density function is given by

 obtain the marginal pdf of X and that of Y.


 Hence, find P(1/4 ≤ Y ≤ 3/4)

4 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

5 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

6 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

7 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 Joint probabilities of two random variables X and Y are given in the


table:

 Find out joint and marginal distribution functions.

 Plot joint and marginal density functions.

8 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

9 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 The expression for joint distribution function is given by
𝐹𝑋,𝑌 𝑥, 𝑦
= 0.2 𝑢(𝑥 – 1) 𝑢(𝑦 – 1) + 0.1 𝑢(𝑥 – 2) 𝑢(𝑦 – 1)
+ 0.2 𝑢(𝑥 – 3) 𝑢(𝑦 – 1) + 0.15 𝑢(𝑥 – 1) 𝑢(𝑦 – 2)
+ 0.2 𝑢(𝑥 – 2) 𝑢(𝑦 – 2) + 0.15 𝑢(𝑥 – 3) 𝑢(𝑦 – 2)

10 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


11 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Tutorial Example
 The marginal distribution function is given by,
𝐹𝑋 𝑥 = 𝐹𝑋 𝑥, ∞
= 0.2 𝑢(𝑥 – 1) + 0.1 𝑢(𝑥 – 2) + 0.2 𝑢(𝑥 – 3) + 0.15 𝑢(𝑥 – 1)
+ 0.2 𝑢(𝑥 – 2) + 0.15 𝑢(𝑥 – 3)
= 0.35 𝑢(𝑥 – 1) + 0.3 𝑢(𝑥 – 2) + 0.35 𝑢(𝑥 – 3)

12 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 The marginal distribution function is given by,

𝐹𝑌 𝑦 = 𝐹𝑋 ∞, 𝑦
= 0.2 𝑢 𝑦 – 1 + 0.1 𝑢 𝑦 – 1 + 0.2 𝑢 𝑦 – 1 + 0.15 𝑢 𝑦 – 2
+ 0.2 𝑢 𝑦 – 2 + 0.15 𝑢 𝑦 – 2 = 0.5 u(y – 1) + 0.5 u(y – 2)

13 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 The joint pdf of the random variables X and Y can be obtained by
using

 We know if we differentiate a unit step function, we get an impulse.

 Therefore, 𝑓𝑋𝑌 (𝑥, 𝑦) can be written as


𝑓𝑋𝑌 𝑥, 𝑦
= 0.2 𝛿 𝑥 – 1 𝛿 𝑦 – 1 + 0.1𝛿 𝑥 – 2 𝛿 𝑦 – 1
+ 0.2𝛿(𝑥 – 3)𝛿(𝑦 – 1) + 0.15𝛿(𝑥 – 1)𝛿(𝑦 – 2)
+ 0.2𝛿(𝑥 – 2)𝛿(𝑦 – 2) + 0.15𝛿(𝑥 – 3)𝛿(𝑦 – 2)

14 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
𝑓𝑋𝑌 𝑥, 𝑦
= 0.2 𝛿 𝑥 – 1 𝛿 𝑦 – 1
+ 0.1𝛿 𝑥 – 2 𝛿 𝑦 – 1
+ 0.2𝛿(𝑥 – 3)𝛿(𝑦 – 1)
+ 0.15𝛿(𝑥 – 1)𝛿(𝑦 – 2)
+ 0.2𝛿(𝑥 – 2)𝛿(𝑦 – 2)
+ 0.15𝛿(𝑥 – 3)𝛿(𝑦 – 2)

15 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 Similarly, by differentiating 𝐹𝑋 (𝑥) and 𝐹𝑌 (𝑦), we get,

 𝑓𝑋 (𝑥) = 0.35 𝛿(𝑥 – 1) + 0.3𝛿(𝑥 – 2) + 0.35𝛿(𝑥 – 3) and

 𝑓𝑌 (𝑦) = 0.5𝛿(𝑦 – 1) + 0.5𝛿(𝑦 – 2)

16 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 A random vector (X, Y) is uniformly distributed, f_XY(x, y) = k, in the region shown in the figure below and zero elsewhere.

 (a) Find the value of k.

 (b) Find the marginal pdfs of X and Y.

17 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
Solution:
 We know that ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_XY(x, y) dx dy = 1

 Rewriting the above expression over the given region (x from 0 to 1 − y, y from 0 to 1):

∫_{0}^{1} ∫_{0}^{1−y} k dx dy = 1

18 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 The marginal pdf of X and Y

19 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
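As a numerical cross-check (SciPy assumed), the triangular region implied by the limits above, 0 ≤ y ≤ 1 and 0 ≤ x ≤ 1 − y, has area 1/2, so a constant density over it must be k = 1/area = 2.

```python
# Sketch: for a density that is constant (= k) on the triangle 0 <= y <= 1,
# 0 <= x <= 1 - y, the total probability k * area must equal 1, so k = 1 / area.
from scipy.integrate import dblquad

# dblquad integrates func(inner, outer); here inner = x, outer = y.
area, _ = dblquad(lambda x, y: 1.0, 0.0, 1.0, lambda y: 0.0, lambda y: 1.0 - y)
k = 1.0 / area
print(area, k)            # 0.5  2.0
```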


Tutorial Example

 The joint CDF of a bivariate random variable is given by


F_XY(x, y) = (1 − e^(−ax))(1 − e^(−by)) for x ≥ 0, y ≥ 0; a, b > 0
F_XY(x, y) = 0 otherwise
 Find marginal CDFs of X and Y.

 Find whether X and Y are independent.

 Find 𝑃 𝑋 ≤ 1, 𝑌 ≤ 1 ; 𝑃 𝑋 ≤ 1 ; 𝑃 𝑌 > 1 ; 𝑃 𝑋 > 𝑥, 𝑌 > 𝑦

20 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 Solution:

21 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 𝑷 𝑿 ≤ 𝟏, 𝒀 ≤ 𝟏 ; 𝑷 𝑿 ≤ 𝟏 ; 𝑷 𝒀 > 𝟏

22 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 𝑃 𝑋 > 𝑥, 𝑌 > 𝑦 = 𝑃 𝑥 < 𝑋 ≤ ∞, 𝑦 < 𝑌 ≤ ∞

 Using the property P x1 ≤ X ≤ x2 , y1 ≤ Y ≤ y2 = FXY x2 , y2 +


FXY x1 , y1 − FXY x2 , y1 − FXY x1 , y2

23 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 The joint pdf of (X,Y) is given by
f_XY(x, y) = ab e^(−(ax+by)) for x > 0, y > 0; a, b > 0
f_XY(x, y) = 0 otherwise
 Find 𝑃(𝑋 > 𝑌)

24 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

25 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 The joint pdf of (X,Y) is given by


f_XY(x, y) = k e^(−(x+2y)) for x > 0, y > 0
f_XY(x, y) = 0 otherwise
 Where k is a constant

 Find the value of k,

 Find 𝑃(𝑋 > 1, 𝑌 < 1), 𝑃(𝑋 < 𝑌) and 𝑃(𝑋 ≤ 2).

26 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 For a valid pdf,

27 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

0.865

0.865
28 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Tutorial Example

29 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

30 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Conditional Distribution & Density Function
 The conditional distribution function of a random variable X, given some event A with P(A) ≠ 0, is defined as

F_X(x | A) = P(X ≤ x, A) / P(A)

 The conditional density function is given by

f_X(x | A) = dF_X(x | A) / dx

1 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


For Continuous Random Variable
 The conditional CDF of X given Y = y is given by

F_X(x | Y = y) = [∫_{−∞}^{x} f_XY(u, y) du] / f_Y(y)

 Differentiating the above expression on both sides, we get the conditional pdf of X given Y = y:

f_X(x | Y = y) = f_XY(x, y) / f_Y(y)

which holds good for every value of y such that f_Y(y) ≠ 0

2 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


For Continuous Random Variable
 The conditional CDF of Y given X = x is given by

F_Y(y | X = x) = [∫_{−∞}^{y} f_XY(x, v) dv] / f_X(x)

 Differentiating the above expression on both sides, we get the conditional pdf of Y given X = x:

f_Y(y | X = x) = f_XY(x, y) / f_X(x)

which holds good for every value of x such that f_X(x) ≠ 0

3 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


For Discrete Random Variable

 The conditional CDF of X given Y = y_k is given by

F_X(x | y_k) = [Σ_{i=1}^{N} p(x_i, y_k) u(x − x_i)] / P(y_k)

 Differentiating the above expression on both sides, we get the conditional density of X given Y = y_k:

f_X(x | y_k) = [Σ_{i=1}^{N} p(x_i, y_k) δ(x − x_i)] / P(y_k)

4 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


For Discrete Random Variable

 The conditional CDF of Y given X = x_k is given by

F_Y(y | x_k) = [Σ_{j=1}^{N} p(x_k, y_j) u(y − y_j)] / P(x_k)

 Differentiating the above expression on both sides, we get the conditional density of Y given X = x_k:

f_Y(y | x_k) = [Σ_{j=1}^{N} p(x_k, y_j) δ(y − y_j)] / P(x_k)

5 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Interval Conditioning
CDF

F_X(x | y1 ≤ Y ≤ y2) = [F_{X,Y}(x, y2) − F_{X,Y}(x, y1)] / [F_Y(y2) − F_Y(y1)]

= [∫_{y1}^{y2} ∫_{−∞}^{x} f_{X,Y}(x, y) dx dy] / [∫_{y1}^{y2} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy]

pdf/pmf

 After differentiation, we can get the conditional density function as

f_X(x | y1 ≤ Y ≤ y2) = [∫_{y1}^{y2} f_{X,Y}(x, y) dy] / [∫_{y1}^{y2} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy]

6 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Properties of Conditional Density Functions

7 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 A product is classified according to the number of defects it contains
(X1) and the factors that produce it (X2).

 The joint probability distribution is

3
 Find the marginal distribution function of X1.

 Find the conditional CDF of X1 when X2 is equal to 1.

8 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 The marginal distribution of X1 can be found by summing each row

 The conditional distribution is,

9 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

10 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 The joint density function of the random variables X and Y is given


by
f_XY(x, y) = 8xy, 0 < x < 1; 0 < y < x

 Find P(Y < 1/8 | X < 1/2).

 Also find the conditional density function f_Y(y | x)

11 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

12 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 The intervals of integration for x and y are (y, 1/2) and (0, 1/8) respectively.

13 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

14 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

15 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
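A numeric cross-check of this conditional probability (SciPy assumed): integrating 8xy over the region given by the limits stated above and dividing by P(X < 1/2) gives 31/256 ≈ 0.121, which the worked solution should reproduce.

```python
# Sketch: P(Y < 1/8 | X < 1/2) for f(x, y) = 8xy on 0 < y < x < 1,
# computed as P(Y < 1/8, X < 1/2) / P(X < 1/2).
from scipy.integrate import dblquad

f = lambda x, y: 8.0 * x * y

# Numerator: y from 0 to 1/8 (outer), x from y to 1/2 (inner).
num, _ = dblquad(lambda x, y: f(x, y), 0.0, 0.125, lambda y: y, lambda y: 0.5)
# Denominator: x from 0 to 1/2 (outer), y from 0 to x (inner).
den, _ = dblquad(lambda y, x: f(x, y), 0.0, 0.5, lambda x: 0.0, lambda x: x)

print(round(num / den, 4))   # ~0.1211  (= 31/256)
```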


Statistical Independence
 If X and Y are independent, the conditional distribution function reduces to the marginal one:

F_X(x | Y = y) = F_X(x)

 By using the condition for independence, we get

F_XY(x, y) = F_X(x) F_Y(y)

16 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Statistical Independence

 Similarly, for independent random variables, the density function factorizes:

f_XY(x, y) = f_X(x) f_Y(y),  and hence  f_X(x | Y = y) = f_X(x)

17 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 The joint density function of two random variables X and Y is given


by

 Find the value of K and

 Prove also that X and Y are independent.

18 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 Solution:

19 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

20 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

21 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

22 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Functions of Random Variables
 In the study of the systems, it is often required to find pdf of the
output variable Y if the pdf of input variable X is known.

 If the input is a set of random variables (RVs), then the output will
also be a set of RVs.
 In the analysis of electrical systems, we will be often interested in
finding the properties of a signal after it has been processed by a
system.
 These signal processing operations may be viewed as
transformations of a set of input variables to a set of output
variables.
1 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Function of One Random Variable
 Let X be an RV with the associated sample space Sx and a known
probability distribution.

 Let g be a scalar function that maps each 𝑥𝜖𝑆𝑥 into y = g (x).

 The expression Y = g(X) defines a new random variable Y.

 For a given outcome, X(s) is a number x and g [X(s)] is another


number specified by g (x).

 This number is the value of the RV Y, i.e., Y(s) = y = g(x).

 The sample space Sy of Y is the set

2 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Function of One Random Variable
How to find 𝒇𝒀 (𝒚), when 𝒇𝑿 (𝒙)is known

 Let us now derive a procedure to find 𝑓𝑌 (𝑦), the pdf of Y, when 𝑦 =


𝑔(𝑋), where X is a continuous RV with pdf 𝑓𝑋 (𝑥), and g(x) is a
strictly monotonic function of x.

Case (i): g(x) is a strictly increasing function of x.

3 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Function of One Random Variable
 Case (ii): g(x) is a strictly decreasing function of x

 Combining both cases, we get

f_Y(y) = f_X(x) / |dg(x)/dx|, evaluated at x = g⁻¹(y)

4 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


One Function of Two Random Variables
 If a random variable Z is defined as 𝒁 = 𝒈(𝑿, 𝒀) where X and Y are
RVs, we proceed to find 𝒇𝒁 (𝒛) in the following way.
 If z is a given number, we can find a region D_Z in the xy-plane such that all points in D_Z satisfy the condition g(x, y) ≤ z. Then

F_Z(z) = P(Z ≤ z) = ∬_{D_Z} f(x, y) dx dy

 where f(x, y) is the joint pdf of (X, Y).


 Thus, to find 𝑭𝒁 (𝒛) it is sufficient to find the region 𝑫𝒁 for every Z
and to evaluate the above integral.
 𝒇𝒁 (𝒛) is then found out as usual.

5 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


One Function of Two Random Variables
Theorem 1

 If two RVs are independent, then the density function of their sum is
given by the convolution of their density functions.

6 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


One Function of Two Random Variables
 Since X and Y are independent

7 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


One Function of Two Random Variables
 The pdf of z is obtained by differentiating the CDF.

 The cumulative distribution is called the convolution of the


distributions 𝐹𝑋 (𝑥) and 𝐹𝑌 (𝑦)

 Differentiating both sides with respect to z.

8 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


One Function of Two Random Variables

 Now we can get f_Z(z) by using the above expression as

f_Z(z) = ∫_{−∞}^{∞} f_X(x) f_Y(z − x) dx

 The above expression is known as the convolution integral.

 That is, the density function of the sum of two statistically


independent random variables is the convolution of their individual
density functions.

9 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


One Function of Two Random Variables
Theorem 2

 If two RVs X and Y are independent, find the pdf of Z = XY in terms


of the density functions of X and Y.

 Let the joint pdf of (X,Y) be 𝑓(𝑥, 𝑦)

xy = z is a rectangular hyperbola
10 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
One Function of Two Random Variables

11 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


One Function of Two Random Variables
Theorem 3
 If two RVs X and Y are independent, find the pdf of Z = X/Y in terms of the density functions of X and Y.

12 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


One Function of Two Random Variables

 Since X and Y are independent variables

13 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 Find the density of W = X + Y where the densities of X and Y are assumed to be f_X(x) = u(x) − u(x − 1); f_Y(y) = u(y) − u(y − 1)

 Solution:

 The pdfs of random variables X and Y are shown as below

14 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 Given:W = X + Y

 The pdf of the sum of two random variables is the convolution of


their individual density functions.That is,

 The sketch of 𝑓𝑋 (−𝑦)and 𝑓𝑋 (𝑤 − 𝑦) are shown below

15 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 Sketch 𝒇𝒀 (𝒚)and 𝒇𝑿 (𝒘 − 𝒚) on the same axis as below.

 From the figure it is observed that 𝒇𝑾 𝒘 = 𝟎 for w < 0, since the


functions 𝑓𝑋 (𝑤 − 𝑦) and 𝑓𝑌 (𝑦) do not overlap.

16 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 For 0 < w < 1, the function 𝒇𝑿 (𝒘 − 𝒚) and 𝒇𝒀 (𝒚) are drawn as
below

 The overlapping interval is from 0 to w.Therefore,

17 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 For 1 < w < 2, the functions 𝒇𝑿 (𝒘 − 𝒚) and 𝒇𝒀 (𝒚) are sketched as

 The overlapping interval is from w – 1 to 1.

18 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 For w > 2, the functions 𝒇𝑿 (𝒘 − 𝒚) and 𝒇𝒀 (𝒚) do not overlap.
Therefore,

 Hence we can write the pdf of W as,

19 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
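The triangular pdf of W derived above can be reproduced by a discrete approximation of the convolution integral; the sketch below assumes NumPy and an illustrative grid step of 0.001.

```python
# Sketch: numerical convolution of two unit-rectangle densities on [0, 1),
# reproducing the triangular density of W = X + Y derived above.
import numpy as np

dx = 0.001
x = np.arange(0.0, 1.0, dx)
fX = np.ones_like(x)                  # f_X = 1 on [0, 1)
fY = np.ones_like(x)                  # f_Y = 1 on [0, 1)

fW = np.convolve(fX, fY) * dx         # discrete approximation of the convolution integral
# Support of W runs from 0 to 2; f_W(w) = w on [0, 1] and 2 - w on [1, 2].
for target in (0.5, 1.0, 1.5):
    i = int(round(target / dx))
    print(target, round(fW[i], 3))    # ~0.5, ~1.0, ~0.5
```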


Tutorial Example
 If X and Y are independent random variables with density functions
𝑓𝑋 𝑥 = 𝑒 −𝑥 𝑢(𝑥) and 𝑓𝑌 𝑦 = 𝑒 −2𝑦 𝑢 𝑦 . find the density function
of Z = X + Y.

 Solution:

20 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

21 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 Two independent random variables X and Y have the probability
density functions respectively as

𝑓𝑋 𝑥 = 𝑥𝑒 −𝑥 𝑥 > 0 ; 𝑓𝑌 𝑦 = 1 (0 ≤ 𝑦 ≤ 1)

 Calculate the probability distribution and density functions of the


random variable Z = X + Y.

 Solution:

22 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

23 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 For z < 0, f_Z(z) = 0 because there is no overlap


24 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Tutorial Example

25 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 For z > 1,

26 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 X and Y are two independent random variables with uniform density
over (–2, 2) and (–1, 1) respectively.

 Find P(Z < –1) where Z = X + Y.

 Solution:

 The pdf s of X and Y random variables are shown as,

27 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 For z < –3, 𝑓𝑍 𝑧 = 0 since no overlap

28 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

29 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

30 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

31 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

32 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
The Normal Distribution

 Bell shaped
 Symmetrical
 Mean, Median and Mode are equal (Mean = Median = Mode)
 μ = mean
 σ = standard deviation

 The random variable has an infinite theoretical range: −∞ to +∞

[Figure: bell-shaped curve p(X) centred at μ with spread σ]

1 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Small standard deviation

2 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Larger standard deviation

3 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Even larger standard deviation

4 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 The Normal Distribution as a mathematical function (pdf):

f(x) = (1 / (σ √(2π))) e^(−(1/2)((x − μ)/σ)²)

Note constants: π = 3.14159, e = 2.71828

This is a bell-shaped curve with different centers and spreads depending on μ and σ
5 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
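The formula can be typed in directly and checked to integrate to 1 for any μ and σ; the sketch below assumes NumPy/SciPy, and the two (μ, σ) pairs are illustrative.

```python
# Sketch: the Normal pdf written out from the formula on this slide,
# with a numerical check that its total area is 1.
import numpy as np
from scipy.integrate import quad

def normal_pdf(x, mu, sigma):
    return (1.0 / (sigma * np.sqrt(2.0 * np.pi))) * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

for mu, sigma in [(0.0, 1.0), (21.0, 2.236)]:
    total, _ = quad(normal_pdf, mu - 10 * sigma, mu + 10 * sigma, args=(mu, sigma))
    print(mu, sigma, round(total, 6))   # area under the curve is ~1 in each case
```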
The Central Limit Theorem is not a
result about
individual observations

 Individual observations of a random sample:

X1, X2, X3, …, Xn

6 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Assume there is a population of four individuals A, B, C, D

 Population size N = 4

 Random variable X is the age of individuals

 Values of X: 18, 20, 22, 24 (years)

7 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Summary Measures for the Population Distribution:

μ = (Σ_i X_i) / N = (18 + 20 + 22 + 24) / 4 = 21

σ = √((1/N) Σ_i (X_i − μ)²) = 2.236

[Histogram: uniform distribution, p(x) = 0.25 at each of x = 18, 20, 22, 24 for A, B, C, D]

8 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Now consider all possible samples of size n = 2 (sampling with replacement): 16 possible samples

Samples (1st obs, 2nd obs):
1st\2nd    18      20      22      24
18       18,18   18,20   18,22   18,24
20       20,18   20,20   20,22   20,24
22       22,18   22,20   22,22   22,24
24       24,18   24,20   24,22   24,24

16 sample means:
1st\2nd   18  20  22  24
18        18  19  20  21
20        19  20  21  22
22        20  21  22  23
24        21  22  23  24
9 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
 Sampling Distribution of All Sample Means

The 16 sample means in the table above form the sample-means distribution:

X̄:     18    19    20    21    22    23    24
P(X̄): 1/16  2/16  3/16  4/16  3/16  2/16  1/16

[Histogram of P(X̄) versus X̄]

10 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Summary Measures of this Sampling Distribution:

E(X̄) = (1/N) Σ X̄_i = (18 + 19 + 21 + … + 24) / 16 = 21 = μ

σ_X̄ = √((1/N) Σ (X̄_i − μ)²) = √(((18 − 21)² + (19 − 21)² + … + (24 − 21)²) / 16) = 1.58

11 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
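The enumeration of all 16 samples and the two summary measures can be checked with a few lines of Python (standard library only; names are illustrative):

```python
# Sketch: enumerate all 16 samples of size n = 2 (with replacement) from the
# ages {18, 20, 22, 24} and confirm the summary measures quoted above.
from itertools import product
import math

ages = [18, 20, 22, 24]
mu = sum(ages) / len(ages)                                        # 21
sigma = math.sqrt(sum((a - mu) ** 2 for a in ages) / len(ages))   # ~2.236

means = [(a + b) / 2 for a, b in product(ages, repeat=2)]         # 16 sample means
mu_xbar = sum(means) / len(means)
sigma_xbar = math.sqrt(sum((m - mu_xbar) ** 2 for m in means) / len(means))

print(mu, round(sigma, 3))              # 21.0 2.236
print(mu_xbar, round(sigma_xbar, 3))    # 21.0 1.581  (= sigma / sqrt(2))
```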


Central Limit Theorem (CLT)
 Comparing the Population with its Sampling Distribution

Population (N = 4): μ = 21, σ = 2.236
Sample means distribution (n = 2): μ_X̄ = 21, σ_X̄ = 1.58

[Side-by-side histograms: p(X) flat over 18, 20, 22, 24 (A, B, C, D) versus p(X̄) peaked at 21 over 18–24]
12 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
Expected Value of Sample Mean
 Let X1, X2, . . ., Xn represent a random sample from a
population
 That is, each observation is the realization of a random variable Xi

 If the sample is random all Xi are independent and follow the same
distribution.

 The sample mean of these observations is defined as

X̄ = (1/n) Σ_{i=1}^{n} X_i
13 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
Standard Error of the Mean
 Different samples of the same size from the same population will
yield different sample means
 A measure of the variability in the mean from sample to sample is
given by the Standard Error of the Mean:

σ_X̄ = σ / √n
 Note that the standard error of the mean decreases as the sample
size increases

14 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)

 If a population is normal with mean μ and standard deviation σ, the


sampling distribution of X̄ is also normally distributed with

μ_X̄ = μ  and  σ_X̄ = σ / √n

15 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)

As n increases, σ_X̄ decreases.

[Figure: sampling distributions of X̄ centred at μ_X̄; narrower for larger sample size, wider for smaller sample size]
16 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
 We can apply the Central Limit Theorem:

 Even if the population is not normal,

 …sample means from the population will be approximately normal


as long as the sample size is large enough.

 Properties of the sampling distribution:

μ_X̄ = μ  and  σ_X̄ = σ / √n
17 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
[Figure: an arbitrary (non-normal) population distribution centred at μ, and the sampling distribution of X̄, which becomes normal as n increases and is narrower for larger sample size, centred at μ_X̄]
18 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
Example Using the Sample Mean X̄
 Population: 65 Random Process Students.
17 18 18 18 19 19 19 19
 Variable: CAT-1 marks
20 20 20 20 20 20 20 20
20 20 21 21 21 21 21 21
21 21 21 21 22 22 22 22
22 22 22 22 22 23 23 23
23 23 24 24 24 24 24 25
25 25 26 26 27 27 29 29
30 32 32 33 33 34 38 43
50
19 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
 Displaying the population using a histogram
 Population skewed right

marks
20 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
Sample of Size 1

 The sample mean of a sample of size n = 1 is X̄ = X1 / 1 = X1

[Population marks table with the single selected observation highlighted]
21 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
Sample of Size 1

 The sample mean of a sample of size n = 1 is X̄ = 21 / 1 = 21

[Population marks table with the observation 21 highlighted]
22 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
Sample of Size 1

 The sample mean of a sample of size n = 1 is X̄ = 33 / 1 = 33

[Population marks table with the observation 33 highlighted]
23 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
Distribution of 𝑿

 There are 65 possible


samples of size n=1.

 Therefore, there are


65 values of 𝑋.

 The distribution of all


possible values of 𝑋 is
called the sampling
distribution of 𝑋 .

24 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
Sample of Size 2

 The sample mean of a sample of size n = 2 is X̄ = (X1 + X2) / 2

[Population marks table with two selected observations highlighted]
25 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
Sample of Size 2

 The sample mean of a sample of size n = 2 is X̄ = (21 + 27) / 2 = 24

[Population marks table with the observations 21 and 27 highlighted]
26 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
 There are 2080 possible
samples of size n=2.

 Therefore, there are


2080 values of 𝑋.

 The distribution of all possible values of X̄ is called the sampling distribution of X̄

27 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
Sample of Size 3
 The sample mean of a sample of size n = 3 is X̄ = (X1 + X2 + X3) / 3

[Population marks table with three selected observations highlighted]
28 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
Central Limit Theorem (CLT)
 There are 43,680
possible samples of size
n=3.

 Therefore, there are


43,680 values of 𝑋.

 The distribution of all


possible values of 𝑋 is
called the sampling
distribution of 𝑋.

29 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 The number of possible samples increases very quickly as n
increases.

 For n=5 there are 8,259,888 possible samples.

30 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Sampling distribution
of 𝑋 for n=10.

 Note: shape starting


to look very
symmetric.

 Note: range of
sample mean is
decreasing.

 Shape of the normal


distribution
appearing.

31 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)

32 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)

 The original distribution of marks is very skewed and disjoint.

 The distribution of possible 𝑋 values is called the sampling


distribution of 𝑋.

 As sample size n increases the sampling distribution of 𝑋 becomes


symmetrical.

 The larger the sample size, the more closely the distribution
resembles the normal distribution.

33 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)

 The mean of the sampling distribution of 𝑋 equals the mean of the


original distribution.

mX  m

The mean of the sampling distribution


equals the mean of the original
distribution

34 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
Mean = 23.9 Mean = 23.9

Mean = 23.9 Mean = 23.9

35 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 The standard deviation (spread) of the sampling distribution of 𝑋
equals the standard deviation of the original distribution divided by
the square root of the sample size.

sX s / n
The standard deviation of the
sampling distribution
decreases as n increases.

36 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
The standard deviation of the sampling distribution
decreases as n increases.

Standard Deviation = 5.88

37 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
The standard deviation of the sampling distribution
decreases as n increases.

Standard Deviation = 2.63

38 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
The standard deviation of the sampling distribution
decreases as n increases.

Standard Deviation = 1.86

39 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
The standard deviation of the sampling distribution
decreases as n increases.

Standard Deviation = 1.52

40 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)

Central Limit Theorem (Liapounoff ’s Form)

 If X1, X2, …, Xn is a sequence of independent RVs with E(X_i) = μ_i and Var(X_i) = σ_i², i = 1, 2, …, n, and if W_n = X1 + X2 + ⋯ + Xn,

 then under certain general conditions, W_n follows a normal distribution with

Mean, μ = Σ_{i=1}^{n} μ_i  and  Variance, σ² = Σ_{i=1}^{n} σ_i²

 as n tends to infinity

41 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)

Central Limit Theorem (Lindeberg-Levy's Form)

 If X1, X2, …, XN is a sequence of independent, identically distributed RVs with E(X_i) = μ and Var(X_i) = σ², i = 1, 2, …, N, and if W_N = X1 + X2 + ⋯ + XN,

 then under certain general conditions, W_N follows a normal distribution with mean Nμ and variance Nσ² as N tends to infinity

42 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
Corollary
 If X̄ = (1/n)(X1 + X2 + ⋯ + Xn), then

E(X̄) = μ  and  Var(X̄) = (1/n²)(nσ²) = σ²/n

43 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Central Limit Theorem (CLT)
 Since the sample mean is a (scaled) sum of random variables, the CLT says that it tends to be asymptotically normal regardless of the distribution of the individual random variables.

 Let X_i, i = 1, 2, …, N. If we define the standard normal score of the sample mean X̄ as

Z = (X̄ − μ_x) / (σ_x / √N)

 then the CDF of the sample mean is

F_X̄(x) = P(X̄ < x) = Φ((x − μ_x) / (σ_x / √N))

44 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 Ten dice are thrown. Find the approximate probability that
the sum obtained is between 40 and 50.

SOLUTION

 Let X_i be the outcome of the i-th die. Then we can define the sum as a random variable Y_N = Σ_{i=1}^{N} X_i.

 We know that when we roll a die, the possible outcomes are 1, 2, 3, 4, 5 and 6, each with probability 1/6. Therefore, we can find

45 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

E(X_i) = 3.5 and Var(X_i) = 35/12, so E(Y_N) = 10 × 3.5 = 35 and σ²_{Y_N} = 10 × 35/12 ≈ 29.17

46 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 If the sum approximates the normal random variable then the CDF
of the normal random variable YN is

47 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
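A sketch comparing the CLT approximation with a simulation of the same experiment (NumPy/SciPy assumed; whether a continuity correction is used affects the approximation, and none is applied here):

```python
# Sketch: normal approximation vs. simulation for the sum of 10 dice,
# P(40 <= sum <= 50). SciPy's standard normal CDF plays the role of Phi.
import numpy as np
from scipy.stats import norm

n = 10
mean_die, var_die = 3.5, 35.0 / 12.0          # E[X_i] and Var[X_i] for one fair die
mu, sigma = n * mean_die, np.sqrt(n * var_die)

# CLT approximation (no continuity correction)
approx = norm.cdf((50 - mu) / sigma) - norm.cdf((40 - mu) / sigma)

# Monte Carlo check (the two numbers get closer if a continuity correction is used)
rng = np.random.default_rng(0)
sums = rng.integers(1, 7, size=(200_000, n)).sum(axis=1)
mc = np.mean((sums >= 40) & (sums <= 50))

print(round(approx, 4), round(mc, 4))
```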


Tutorial Example
 The lifetime of a certain brand of an electric bulb may be considered
a RV with mean 1200 hrs and standard deviation 250 hrs. Find the
probability, using central limit theorem, that the average lifetime of 60
bulbs exceeds 1250 hrs.

 SOLUTION

48 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

49 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
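As a hedged numerical sketch of this solution (SciPy assumed): by the corollary above, X̄ is approximately normal with mean μ and standard deviation σ/√n, so the required probability is 1 − Φ((1250 − 1200)/(250/√60)).

```python
# Sketch: CLT for the average lifetime of n = 60 bulbs with mu = 1200 h, sigma = 250 h.
import math
from scipy.stats import norm

mu, sigma, n = 1200.0, 250.0, 60
se = sigma / math.sqrt(n)                 # standard error of the mean, ~32.27 h

p = 1.0 - norm.cdf((1250.0 - mu) / se)    # P(X-bar > 1250)
print(round(se, 2), round(p, 4))          # ~32.27  ~0.061
```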


Tutorial Example
 Suppose that orders at a restaurant are independent, identically distributed random variables with mean μ = Rs. 8 and standard deviation Rs. 2.

 Estimate,

 (a) The probability that the first 100 customers spend a total of more
than Rs. 840,

50 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 SOLUTION

51 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

52 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example
 The round-off error to the second decimal place has the uniform distribution on the interval (–0.05, 0.05).

 What is the probability that the absolute error in the sum of 1000
numbers is less than 1?

SOLUTION

 Since the error is uniformly distributed on (–0.05, 0.05), the mean value is zero and the variance is (0.1)²/12 = 1/1200

53 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

54 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

0.816

55 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 Solution

56 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

 If the sum approximates the normal random variable, the CDF of the
normalized random variable of YN becomes

57 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT


Tutorial Example

58 Dr. R.K.Mugelan, Associate Professor, SENSE, VIT
