Functions of Continuous Random Variables - PDF - CDF
Example 4.7
Let $X \sim Uniform(0, 1)$ and $Y = e^X$. Find (a) the CDF of $Y$, (b) the PDF of $Y$, and (c) $E[Y]$.
Solution
First, note that we already know the CDF and PDF of $X$. In particular,
$$F_X(x) = \begin{cases} 0 & \text{for } x < 0 \\ x & \text{for } 0 \le x \le 1 \\ 1 & \text{for } x > 1 \end{cases}$$
It is a good idea to think about the range of $Y$ before finding the distribution. Since $e^x$ is an increasing function of $x$ and $R_X = [0, 1]$, we conclude that $R_Y = [1, e]$. So we immediately know that
$$F_Y(y) = 0 \quad \text{for } y < 1, \qquad F_Y(y) = 1 \quad \text{for } y \ge e.$$
a. For $1 \le y \le e$, we have
$$\begin{aligned}
F_Y(y) &= P(Y \le y) \\
&= P(e^X \le y) \\
&= P(X \le \ln y) \quad \text{since } e^x \text{ is an increasing function} \\
&= F_X(\ln y) = \ln y, \quad \text{since } 0 \le \ln y \le 1.
\end{aligned}$$
To summarize,
$$F_Y(y) = \begin{cases} 0 & \text{for } y < 1 \\ \ln y & \text{for } 1 \le y \le e \\ 1 & \text{for } y > e \end{cases}$$
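The CDF above is easy to spot-check numerically. The following Python snippet (a sketch, not part of the original text; it uses only the standard library) compares the empirical CDF of simulated values of $Y = e^X$ with $\ln y$ at a few points:

```python
import math
import random

random.seed(4)

# Y = e^X with X ~ Uniform(0, 1): the empirical CDF should track ln(y) on [1, e].
n = 200_000
ys = [math.exp(random.random()) for _ in range(n)]

for y in (1.5, 2.0, 2.5):
    empirical = sum(v <= y for v in ys) / n
    print(y, empirical, math.log(y))
```

Each empirical frequency should agree with $\ln y$ up to Monte Carlo noise of order $1/\sqrt{n}$.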
b. The above CDF is a continuous function, so we can obtain the PDF of Y by taking its derivative. We have
$$f_Y(y) = F_Y'(y) = \begin{cases} \frac{1}{y} & \text{for } 1 \le y \le e \\ 0 & \text{otherwise} \end{cases}$$
Note that the CDF is not technically differentiable at the points 1 and $e$, but, as we mentioned earlier, we do not worry about this: $Y$ is a continuous random variable, and changing the PDF at a finite number of points does not change probabilities.
c. To find $E[Y]$, we can directly apply LOTUS:
$$E[Y] = E[e^X] = \int_0^1 e^x \, dx = e - 1.$$
Alternatively, since we have found the PDF of $Y$, we can write
$$E[Y] = \int_1^e y \cdot \frac{1}{y} \, dy = e - 1.$$
https://www.probabilitycourse.com/chapter4/4_1_3_functions_continuous_var.php (retrieved 27/02/2024)
Note that since we have already found the PDF of $Y$, it did not matter which method we used to find $E[Y]$. However, if the problem only asked for $E[Y]$ without asking for the PDF of $Y$, then using LOTUS would be much easier.
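Both routes to $E[Y]$ can be checked numerically. The Python sketch below (not part of the original text) estimates $E[Y]$ by plain Monte Carlo and by a midpoint Riemann sum for the LOTUS integral, and compares both to the exact value $e - 1$:

```python
import math
import random

random.seed(0)

# Y = e^X with X ~ Uniform(0, 1): estimate E[Y] two ways.

# Method 1: plain Monte Carlo average of e^X
n = 200_000
mc = sum(math.exp(random.random()) for _ in range(n)) / n

# Method 2 (LOTUS): E[Y] = integral of e^x over [0, 1], midpoint Riemann sum
m = 10_000
lotus = sum(math.exp((k + 0.5) / m) for k in range(m)) / m

print(mc, lotus, math.e - 1)
```

The Riemann sum converges much faster than the Monte Carlo average, which is off by a term of order $1/\sqrt{n}$.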
Example 4.8
Let $X \sim Uniform(-1, 1)$ and $Y = X^2$. Find the CDF and PDF of $Y$.
Solution
First, we note that $R_Y = [0, 1]$. As usual, we start with the CDF. For $y \in [0, 1]$, we have
$$\begin{aligned}
F_Y(y) &= P(Y \le y) \\
&= P(X^2 \le y) \\
&= P(-\sqrt{y} \le X \le \sqrt{y}) \\
&= \frac{\sqrt{y} - (-\sqrt{y})}{1 - (-1)} \quad \text{since } X \sim Uniform(-1, 1) \\
&= \sqrt{y}.
\end{aligned}$$
$$F_Y(y) = \begin{cases} 0 & \text{for } y < 0 \\ \sqrt{y} & \text{for } 0 \le y \le 1 \\ 1 & \text{for } y > 1 \end{cases}$$
Note that the CDF is a continuous function of $y$, so $Y$ is a continuous random variable. Thus, we can find the PDF of $Y$ by differentiating $F_Y(y)$:
$$f_Y(y) = F_Y'(y) = \begin{cases} \frac{1}{2\sqrt{y}} & \text{for } 0 < y \le 1 \\ 0 & \text{otherwise} \end{cases}$$
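The CDF $F_Y(y) = \sqrt{y}$ can be verified by simulation. The following Python sketch (not part of the original text) squares uniform samples on $(-1, 1)$ and compares empirical frequencies with $\sqrt{y}$:

```python
import math
import random

random.seed(1)

# Y = X^2 with X ~ Uniform(-1, 1): the empirical CDF should track sqrt(y).
n = 200_000
samples = [random.uniform(-1.0, 1.0) ** 2 for _ in range(n)]

for y in (0.04, 0.25, 0.81):
    empirical = sum(s <= y for s in samples) / n
    print(y, empirical, math.sqrt(y))
```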
So far, we have discussed how we can find the distribution of a function of a continuous random variable starting from finding the CDF. If we
are interested in finding the PDF of Y = g(X), and the function g satisfies some properties, it might be easier to use a method called the
method of transformations. Let's start with the case where g is a function satisfying the following properties:
$g(x)$ is differentiable;
$g(x)$ is a strictly increasing function; that is, if $x_1 < x_2$, then $g(x_1) < g(x_2)$.
Now, let X be a continuous random variable and Y = g(X) . We will show that you can directly find the PDF of Y using the following
formula.
$$f_Y(y) = \begin{cases} \frac{f_X(x_1)}{g'(x_1)} = f_X(x_1) \frac{dx_1}{dy} & \text{where } g(x_1) = y \\ 0 & \text{if } g(x) = y \text{ does not have a solution} \end{cases}$$
Note that since $g$ is strictly increasing, its inverse function $g^{-1}$ is well defined. That is, for each $y \in R_Y$, there exists a unique $x_1$ such that $g(x_1) = y$. We can write $x_1 = g^{-1}(y)$.
$$\begin{aligned}
F_Y(y) &= P(Y \le y) \\
&= P(g(X) \le y) \\
&= P(X \le g^{-1}(y)) \quad \text{since } g \text{ is strictly increasing} \\
&= F_X(g^{-1}(y)).
\end{aligned}$$
Taking the derivative with respect to $y$, and using the chain rule, we obtain
$$\begin{aligned}
f_Y(y) &= \frac{d}{dy} F_X(g^{-1}(y)) \\
&= \frac{dx_1}{dy} \cdot F_X'(x_1) \\
&= f_X(x_1) \frac{dx_1}{dy} \\
&= \frac{f_X(x_1)}{g'(x_1)}, \quad \text{since } \frac{dx_1}{dy} = \frac{1}{\;\frac{dy}{dx_1}\;} = \frac{1}{g'(x_1)}.
\end{aligned}$$
We can repeat the same argument for the case where $g$ is strictly decreasing. In that case, $g'(x_1)$ will be negative, so we need to use $|g'(x_1)|$. Thus, we can state the following theorem for a strictly monotonic function. (A function $g: \mathbb{R} \to \mathbb{R}$ is called strictly monotonic if it is strictly increasing or strictly decreasing.)
Theorem 4.1
Suppose that $X$ is a continuous random variable and $g: \mathbb{R} \to \mathbb{R}$ is a strictly monotonic differentiable function. Let $Y = g(X)$. Then the PDF of $Y$ is given by
$$f_Y(y) = \begin{cases} \frac{f_X(x_1)}{|g'(x_1)|} = f_X(x_1) \left| \frac{dx_1}{dy} \right| & \text{where } g(x_1) = y \\ 0 & \text{if } g(x) = y \text{ does not have a solution} \end{cases} \tag{4.5}$$
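Equation 4.5 can be sanity-checked numerically against Example 4.7 (a sketch, not part of the original text): for $g(x) = e^x$ with $X \sim Uniform(0, 1)$ we have $F_Y(y) = \ln y$ on $[1, e]$, so a numerical derivative of the CDF should match the formula's prediction $f_X(\ln y)/g'(\ln y) = 1/y$:

```python
import math

# Numerical check of Equation 4.5 for g(x) = e^x, X ~ Uniform(0, 1).
# F_Y(y) = ln(y) on [1, e]; the formula predicts f_Y(y) = 1/y.
h = 1e-6
for y in (1.5, 2.0, 2.5):
    numeric = (math.log(y + h) - math.log(y - h)) / (2 * h)  # central difference of the CDF
    formula = 1.0 / y                                        # Equation 4.5
    print(y, numeric, formula)
```

The central difference agrees with $1/y$ to well below the step size squared.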
Example 4.9
Let $X$ be a continuous random variable with PDF
$$f_X(x) = \begin{cases} 4x^3 & \text{for } 0 < x \le 1 \\ 0 & \text{otherwise} \end{cases}$$
and let $Y = \frac{1}{X}$. Find $f_Y(y)$.
Solution
First note that $R_Y = [1, \infty)$. Also, note that $g(x)$ is a strictly decreasing and differentiable function on $(0, 1]$, so we may use Equation 4.5. We have $g'(x) = -\frac{1}{x^2}$. For any $y \in [1, \infty)$, $x_1 = g^{-1}(y) = \frac{1}{y}$. So, for $y \in [1, \infty)$,
$$\begin{aligned}
f_Y(y) &= \frac{f_X(x_1)}{|g'(x_1)|} \\
&= \frac{4x_1^3}{\left| -\frac{1}{x_1^2} \right|} \\
&= 4x_1^5 \\
&= \frac{4}{y^5}.
\end{aligned}$$
Thus, we conclude
$$f_Y(y) = \begin{cases} \frac{4}{y^5} & y \ge 1 \\ 0 & \text{otherwise} \end{cases}$$
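This answer can be checked by simulation (a Python sketch, not part of the original text). Since $F_X(x) = x^4$ on $(0, 1]$, inverse-CDF sampling gives $X = U^{1/4}$; the claimed density $4/y^5$ integrates to the CDF $F_Y(y) = 1 - 1/y^4$, which we compare with empirical frequencies:

```python
import random

random.seed(2)

# f_X(x) = 4x^3 on (0, 1] has CDF F_X(x) = x^4, so inverse-CDF sampling
# gives X = U^(1/4) with U ~ Uniform(0, 1]. Then Y = 1/X.
n = 200_000
ys = [1.0 / (1.0 - random.random()) ** 0.25 for _ in range(n)]

# f_Y(y) = 4/y^5 for y >= 1 integrates to the CDF F_Y(y) = 1 - 1/y^4.
for y in (1.2, 2.0, 4.0):
    empirical = sum(v <= y for v in ys) / n
    exact = 1.0 - 1.0 / y ** 4
    print(y, empirical, exact)
```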
Theorem 4.1 can be extended to a more general case. In particular, if g is not monotonic, we can usually divide it into a finite number of
monotonic differentiable functions. Figure 4.3 shows a function g that has been divided into monotonic parts. We may state a more general
form of Theorem 4.1.
Theorem 4.2
Consider a continuous random variable $X$ with domain $R_X$, and let $Y = g(X)$. Suppose that we can partition $R_X$ into a finite number of intervals such that $g(x)$ is strictly monotone and differentiable on each interval. Then the PDF of $Y$ is given by
$$f_Y(y) = \sum_{i=1}^{n} \frac{f_X(x_i)}{|g'(x_i)|} = \sum_{i=1}^{n} f_X(x_i) \left| \frac{dx_i}{dy} \right| \tag{4.6}$$
where $x_1, x_2, \ldots, x_n$ are real solutions to $g(x) = y$.
Example 4.10
Let $X \sim N(0, 1)$ and $Y = X^2$. Find $f_Y(y)$.
Solution
We note that the function $g(x) = x^2$ is strictly decreasing on the interval $(-\infty, 0)$, strictly increasing on the interval $(0, \infty)$, and differentiable on both intervals, with $g'(x) = 2x$. Thus, we can use Equation 4.6. First, note that $R_Y = (0, \infty)$. Next, for any $y \in (0, \infty)$ we have two solutions for $y = g(x)$, in particular,
$$x_1 = \sqrt{y}, \qquad x_2 = -\sqrt{y}.$$
Note that although $0 \in R_X$, it has not been included in our partition of $R_X$. This is not a problem, since $P(X = 0) = 0$. Indeed, in the statement of Theorem 4.2, we could replace $R_X$ by $R_X - A$, where $A$ is any set for which $P(X \in A) = 0$. In particular, this is convenient when we exclude the endpoints of the intervals. Thus, we have
$$\begin{aligned}
f_Y(y) &= \frac{f_X(x_1)}{|g'(x_1)|} + \frac{f_X(x_2)}{|g'(x_2)|} \\
&= \frac{f_X(\sqrt{y})}{|2\sqrt{y}|} + \frac{f_X(-\sqrt{y})}{|-2\sqrt{y}|} \\
&= \frac{1}{2\sqrt{2\pi y}} e^{-\frac{y}{2}} + \frac{1}{2\sqrt{2\pi y}} e^{-\frac{y}{2}} \\
&= \frac{1}{\sqrt{2\pi y}} e^{-\frac{y}{2}}, \quad \text{for } y \in (0, \infty).
\end{aligned}$$
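This density can be spot-checked by simulation (a Python sketch, not part of the original text). Integrating $e^{-t/2}/\sqrt{2\pi t}$ up to $y$ gives $P(Y \le y) = P(|X| \le \sqrt{y}) = \operatorname{erf}(\sqrt{y/2})$, which we compare with the empirical CDF of squared standard normal samples:

```python
import math
import random

random.seed(3)

# Y = X^2 with X ~ N(0, 1): the derived density e^(-y/2)/sqrt(2*pi*y)
# integrates to P(Y <= y) = P(|X| <= sqrt(y)) = erf(sqrt(y/2)).
n = 200_000
ys = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

for y in (0.5, 1.0, 3.0):
    empirical = sum(v <= y for v in ys) / n
    exact = math.erf(math.sqrt(y / 2.0))
    print(y, empirical, exact)
```

(This is the chi-squared distribution with one degree of freedom.)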