Functions of Continuous Random Variables - PDF - CDF

This document discusses how to find the probability distribution of a random variable Y that is a function of another random variable X. It provides examples of using both the CDF method and the method of transformations. The CDF method involves finding the CDF of Y in terms of the CDF of X and then taking the derivative to get the PDF. The method of transformations can be used if the function g(x) is differentiable and strictly increasing, and involves using the change of variables formula to directly find the PDF of Y in terms of the PDF of X.


27/02/2024, 23:39 Functions of Continuous Random Variables | PDF | CDF

4.1.3 Functions of Continuous Random Variables


If X is a continuous random variable and Y = g(X) is a function of X, then Y itself is a random variable. Thus, we should be able to find
the CDF and PDF of Y . It is usually more straightforward to start from the CDF and then to find the PDF by taking the derivative of the CDF.
Note that before differentiating the CDF, we should check that the CDF is continuous. As we will see later, the function of a continuous
random variable might be a non-continuous random variable. Let's look at an example.

Example 4.7

Let X be a Uniform(0, 1) random variable, and let Y = e^X.

a. Find the CDF of Y.
b. Find the PDF of Y.
c. Find E[Y].

Solution
First, note that we already know the CDF and PDF of X. In particular,

$$F_X(x) = \begin{cases} 0 & \text{for } x < 0 \\ x & \text{for } 0 \le x \le 1 \\ 1 & \text{for } x > 1 \end{cases}$$

It is a good idea to think about the range of Y before finding the distribution. Since e^x is an increasing function of x and R_X = [0, 1], we conclude that R_Y = [1, e]. So we immediately know that

$$F_Y(y) = P(Y \le y) = 0, \quad \text{for } y < 1,$$
$$F_Y(y) = P(Y \le y) = 1, \quad \text{for } y \ge e.$$

a. To find F_Y(y) for y ∈ [1, e], we can write

$$\begin{aligned}
F_Y(y) &= P(Y \le y) \\
&= P(e^X \le y) \\
&= P(X \le \ln y) && \text{since } e^x \text{ is an increasing function} \\
&= F_X(\ln y) = \ln y && \text{since } 0 \le \ln y \le 1.
\end{aligned}$$

To summarize,

$$F_Y(y) = \begin{cases} 0 & \text{for } y < 1 \\ \ln y & \text{for } 1 \le y < e \\ 1 & \text{for } y \ge e \end{cases}$$

b. The above CDF is a continuous function, so we can obtain the PDF of Y by taking its derivative. We have

$$f_Y(y) = F_Y'(y) = \begin{cases} \dfrac{1}{y} & \text{for } 1 \le y \le e \\[4pt] 0 & \text{otherwise} \end{cases}$$

Note that the CDF is not technically differentiable at points 1 and e, but as we mentioned earlier we do not worry about
this since this is a continuous random variable and changing the PDF at a finite number of points does not change
probabilities.

c. To find E[Y], we can directly apply LOTUS:


$$\begin{aligned}
E[Y] = E[e^X] &= \int_{-\infty}^{\infty} e^x f_X(x)\,dx \\
&= \int_0^1 e^x\,dx \\
&= e - 1.
\end{aligned}$$

For this problem, we could also find E[Y] using the PDF of Y,



$$\begin{aligned}
E[Y] &= \int_{-\infty}^{\infty} y f_Y(y)\,dy \\
&= \int_1^e y \cdot \frac{1}{y}\,dy \\
&= e - 1.
\end{aligned}$$

https://www.probabilitycourse.com/chapter4/4_1_3_functions_continuous_var.php 1/5

Note that since we have already found the PDF of Y it did not matter which method we used to find E [Y ] . However, if
the problem only asked for E [Y ] without asking for the PDF of Y , then using LOTUS would be much easier.
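As a quick sanity check, the answers in Example 4.7 can be verified by simulation. The sketch below assumes X ~ Uniform(0, 1) as in the example; the variable names are my own.

```python
import math
import random

# Monte Carlo check of Example 4.7: Y = e^X with X ~ Uniform(0, 1).
random.seed(0)
n = 200_000
samples = [math.exp(random.random()) for _ in range(n)]

# Empirical CDF at y = 2 should be close to F_Y(2) = ln 2.
empirical_cdf = sum(s <= 2.0 for s in samples) / n

# Sample mean should be close to E[Y] = e - 1.
mean_y = sum(samples) / n
```

With 200,000 samples, both quantities typically land within about 0.005 of the theoretical values ln 2 and e − 1.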

Example 4.8

Let X ∼ Uniform(−1, 1) and Y = X^2. Find the CDF and PDF of Y.

Solution
First, we note that R_Y = [0, 1]. As usual, we start with the CDF. For y ∈ [0, 1], we have

$$\begin{aligned}
F_Y(y) &= P(Y \le y) \\
&= P(X^2 \le y) \\
&= P(-\sqrt{y} \le X \le \sqrt{y}) \\
&= \frac{\sqrt{y} - (-\sqrt{y})}{1 - (-1)} && \text{since } X \sim Uniform(-1, 1) \\
&= \sqrt{y}.
\end{aligned}$$

Thus, the CDF of Y is given by

$$F_Y(y) = \begin{cases} 0 & \text{for } y < 0 \\ \sqrt{y} & \text{for } 0 \le y \le 1 \\ 1 & \text{for } y > 1 \end{cases}$$

Note that the CDF is a continuous function of y, so Y is a continuous random variable. Thus, we can find the PDF of Y by differentiating F_Y(y):

$$f_Y(y) = F_Y'(y) = \begin{cases} \dfrac{1}{2\sqrt{y}} & \text{for } 0 \le y \le 1 \\[4pt] 0 & \text{otherwise} \end{cases}$$
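A quick Monte Carlo check of Example 4.8; this is a sketch under the stated model X ~ Uniform(−1, 1), with names of my own choosing.

```python
import random

# Simulate Y = X^2 with X ~ Uniform(-1, 1) and compare the empirical CDF
# at y = 0.25 with the derived value F_Y(0.25) = sqrt(0.25) = 0.5.
random.seed(1)
n = 200_000
samples = [random.uniform(-1.0, 1.0) ** 2 for _ in range(n)]
empirical_cdf = sum(s <= 0.25 for s in samples) / n
```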

The Method of Transformations

So far, we have discussed how we can find the distribution of a function of a continuous random variable starting from finding the CDF. If we
are interested in finding the PDF of Y = g(X), and the function g satisfies some properties, it might be easier to use a method called the
method of transformations. Let's start with the case where g is a function satisfying the following properties:

g(x) is differentiable;
g(x) is a strictly increasing function; that is, if x_1 < x_2, then g(x_1) < g(x_2).

Now, let X be a continuous random variable and Y = g(X) . We will show that you can directly find the PDF of Y using the following
formula.

$$f_Y(y) = \begin{cases} \dfrac{f_X(x_1)}{g'(x_1)} = f_X(x_1)\,\dfrac{dx_1}{dy} & \text{where } g(x_1) = y \\[6pt] 0 & \text{if } g(x) = y \text{ does not have a solution} \end{cases}$$

Note that since g is strictly increasing, its inverse function g^{-1} is well defined. That is, for each y ∈ R_Y, there exists a unique x_1 such that g(x_1) = y. We can write x_1 = g^{-1}(y). To see why the formula holds, start from the CDF:

$$\begin{aligned}
F_Y(y) &= P(Y \le y) \\
&= P(g(X) \le y) \\
&= P(X \le g^{-1}(y)) && \text{since } g \text{ is strictly increasing} \\
&= F_X(g^{-1}(y)).
\end{aligned}$$

To find the PDF of Y , we differentiate


$$\begin{aligned}
f_Y(y) &= \frac{d}{dy} F_X(x_1) && \text{where } g(x_1) = y \\
&= \frac{dx_1}{dy} \cdot F_X'(x_1) \\
&= f_X(x_1)\,\frac{dx_1}{dy} \\
&= \frac{f_X(x_1)}{g'(x_1)} && \text{since } \frac{dx_1}{dy} = \frac{1}{\,dy/dx_1\,}.
\end{aligned}$$


We can repeat the same argument for the case where g is strictly decreasing. In that case, g'(x_1) will be negative, so we need to use |g'(x_1)|. Thus, we can state the following theorem for a strictly monotonic function. (A function g: R → R is called strictly monotonic if it is strictly increasing or strictly decreasing.)

Theorem 4.1
Suppose that X is a continuous random variable and g: R → R is a strictly monotonic differentiable function. Let Y = g(X). Then the PDF of Y is given by

$$f_Y(y) = \begin{cases} \dfrac{f_X(x_1)}{|g'(x_1)|} = f_X(x_1)\left|\dfrac{dx_1}{dy}\right| & \text{where } g(x_1) = y \\[6pt] 0 & \text{if } g(x) = y \text{ does not have a solution} \end{cases} \qquad (4.5)$$
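Equation 4.5 can also be evaluated numerically. The helper below is a hypothetical sketch (not from the text): `pdf_of_monotone_transform` and its parameters are my own names, and the derivative dx_1/dy is approximated by a central finite difference.

```python
import math

def pdf_of_monotone_transform(f_X, g_inv, y, h=1e-6):
    """Approximate Equation 4.5: f_Y(y) = f_X(x1) * |dx1/dy|, x1 = g^{-1}(y)."""
    x1 = g_inv(y)
    # Central finite-difference approximation of dx1/dy = d/dy g^{-1}(y).
    dx1_dy = (g_inv(y + h) - g_inv(y - h)) / (2.0 * h)
    return f_X(x1) * abs(dx1_dy)

# Sanity check against Example 4.7: X ~ Uniform(0, 1), g(x) = e^x,
# so g^{-1}(y) = ln y and f_Y(2) should equal 1/2.
def uniform01_pdf(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

f_Y_at_2 = pdf_of_monotone_transform(uniform01_pdf, math.log, 2.0)
```

The same helper works for strictly decreasing g, since the absolute value in Equation 4.5 is applied to the numeric derivative.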

To see how to use the formula, let's look at an example.

Example 4.9

Let X be a continuous random variable with PDF


$$f_X(x) = \begin{cases} 4x^3 & 0 < x \le 1 \\ 0 & \text{otherwise} \end{cases}$$

and let Y = 1/X. Find f_Y(y).

Solution
First note that R_Y = [1, ∞). Also, note that g(x) = 1/x is a strictly decreasing and differentiable function on (0, 1], so we may use Equation 4.5. We have g'(x) = −1/x². For any y ∈ [1, ∞), x_1 = g^{-1}(y) = 1/y. So, for y ∈ [1, ∞),

$$\begin{aligned}
f_Y(y) &= \frac{f_X(x_1)}{|g'(x_1)|} \\
&= \frac{4x_1^3}{\left|-\dfrac{1}{x_1^2}\right|} \\
&= 4x_1^5 \\
&= \frac{4}{y^5}.
\end{aligned}$$

Thus, we conclude

$$f_Y(y) = \begin{cases} \dfrac{4}{y^5} & y \ge 1 \\[4pt] 0 & \text{otherwise} \end{cases}$$
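To gain some confidence in this answer, one can simulate X by inverse-CDF sampling and check the implied CDF of Y = 1/X. This is a sketch of my own, not from the text; it uses the fact that F_X(x) = x^4 on (0, 1], so X = U^{1/4} for uniform U.

```python
import random

random.seed(2)
n = 200_000
# Inverse-CDF sampling: F_X(x) = x^4 on (0, 1], so X = U^(1/4).
# Using 1 - random.random() keeps U in (0, 1], avoiding division by zero below.
samples = []
for _ in range(n):
    u = 1.0 - random.random()
    samples.append(1.0 / (u ** 0.25))  # Y = 1/X

# Integrating f_Y(t) = 4/t^5 from 1 to y gives F_Y(y) = 1 - y^(-4);
# at y = 2 that is 1 - 1/16 = 0.9375.
empirical_cdf = sum(s <= 2.0 for s in samples) / n
```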

Theorem 4.1 can be extended to a more general case. In particular, if g is not monotonic, we can usually divide it into a finite number of
monotonic differentiable functions. Figure 4.3 shows a function g that has been divided into monotonic parts. We may state a more general
form of Theorem 4.1.


Fig. 4.3 - Partitioning a function into monotone parts.

Theorem 4.2
Consider a continuous random variable X with domain RX , and let Y = g(X). Suppose that we can partition RX into a finite
number of intervals such that g(x) is strictly monotone and differentiable on each partition. Then the PDF of Y is given by
$$f_Y(y) = \sum_{i=1}^{n} \frac{f_X(x_i)}{|g'(x_i)|} = \sum_{i=1}^{n} f_X(x_i) \left|\frac{dx_i}{dy}\right| \qquad (4.6)$$

where x_1, x_2, ..., x_n are real solutions to g(x) = y.

Let us look at an example to see how we can use Theorem 4.2.

Example 4.10

Let X be a continuous random variable with PDF


$$f_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}, \quad \text{for all } x \in \mathbb{R}$$

and let Y = X^2. Find f_Y(y).

Solution
We note that the function g(x) = x² is strictly decreasing on the interval (−∞, 0), strictly increasing on the interval (0, ∞), and differentiable on both intervals, with g'(x) = 2x. Thus, we can use Equation 4.6. First, note that R_Y = (0, ∞). Next, for any y ∈ (0, ∞) we have two solutions for y = g(x), in particular,

$$x_1 = \sqrt{y}, \qquad x_2 = -\sqrt{y}.$$

Note that although 0 ∈ R_X, it has not been included in our partition of R_X. This is not a problem, since P(X = 0) = 0. Indeed, in the statement of Theorem 4.2, we could replace R_X by R_X − A, where A is any set for which P(X ∈ A) = 0. In particular, this is convenient when we exclude the endpoints of the intervals. Thus, we have
$$\begin{aligned}
f_Y(y) &= \frac{f_X(x_1)}{|g'(x_1)|} + \frac{f_X(x_2)}{|g'(x_2)|} \\
&= \frac{f_X(\sqrt{y})}{|2\sqrt{y}|} + \frac{f_X(-\sqrt{y})}{|-2\sqrt{y}|} \\
&= \frac{1}{2\sqrt{2\pi y}}\, e^{-\frac{y}{2}} + \frac{1}{2\sqrt{2\pi y}}\, e^{-\frac{y}{2}} \\
&= \frac{1}{\sqrt{2\pi y}}\, e^{-\frac{y}{2}}, \quad \text{for } y \in (0, \infty).
\end{aligned}$$
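The density just derived is the chi-squared distribution with one degree of freedom. A small simulation check, sketched with my own variable names:

```python
import random

# Simulate Y = Z^2 for a standard normal Z and compare the empirical CDF
# at y = 1 with the known value P(-1 <= Z <= 1), which is about 0.6827.
random.seed(3)
n = 200_000
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]
empirical_cdf = sum(s <= 1.0 for s in samples) / n
```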
