
Carnegie Mellon University Department of Statistics

36-401 Modern Regression Homework #1 Solutions

DUE: September 8, 2017

Problem 1 [20 pts.]


First consider $X_1$ and $X_2$ with joint probability function $p(x_1, x_2)$.

\begin{align}
E[a_1 X_1 + a_2 X_2] &= \sum_{x_1} \sum_{x_2} (a_1 x_1 + a_2 x_2) \cdot p(x_1, x_2) \tag{1} \\
&= a_1 \sum_{x_1} \sum_{x_2} x_1 \cdot p(x_1, x_2) + a_2 \sum_{x_1} \sum_{x_2} x_2 \cdot p(x_1, x_2) \tag{2} \\
&= a_1 \sum_{x_1} x_1 \sum_{x_2} p(x_1, x_2) + a_2 \sum_{x_2} x_2 \sum_{x_1} p(x_1, x_2) \tag{3} \\
&= a_1 \sum_{x_1} x_1 \cdot p(x_1) + a_2 \sum_{x_2} x_2 \cdot p(x_2) \tag{4} \\
&= a_1 E[X_1] + a_2 E[X_2] \tag{5}
\end{align}

Now assume
$$E\!\left[\sum_{j=1}^{k} a_j X_j\right] = \sum_{j=1}^{k} a_j E[X_j]$$
holds for some $k \in \mathbb{Z}^+$, and define
$$Y := \sum_{j=1}^{k} a_j X_j.$$

Then
\begin{align}
E\!\left[\sum_{j=1}^{k+1} a_j X_j\right] &= E[Y + a_{k+1} X_{k+1}] \\
&= E[Y] + a_{k+1} E[X_{k+1}] \tag{6} \\
&= \sum_{j=1}^{k+1} a_j E[X_j],
\end{align}

where (6) follows from (1)-(5). Therefore,
$$E\!\left[\sum_{j=1}^{m} a_j X_j\right] = \sum_{j=1}^{m} a_j E[X_j]$$
for any $m \in \mathbb{Z}^+$, by induction.

Notice (5) also implies
$$E[a_1 X + a_2] = a_1 \cdot E[X] + a_2$$
by letting $X_2$ be a degenerate random variable with $P(X_2 = 1) = 1$.
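Since linearity of expectation requires no independence assumption, a quick Monte Carlo sanity check (not part of the original solution; the distributions below are arbitrary illustrations) can even use deliberately dependent variables. A minimal Python/NumPy sketch:

import numpy as np

rng = np.random.default_rng(0)
n = 10**6
a1, a2 = 2.0, -3.0

# X1 and X2 are deliberately dependent (X2 is built from X1);
# linearity of expectation holds regardless.
x1 = rng.integers(1, 7, size=n)        # a fair die roll
x2 = x1 + rng.integers(0, 2, size=n)   # depends on X1

lhs = np.mean(a1 * x1 + a2 * x2)            # estimates E[a1 X1 + a2 X2]
rhs = a1 * np.mean(x1) + a2 * np.mean(x2)   # estimates a1 E[X1] + a2 E[X2]
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error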


Problem 2 [20 pts.]

\begin{align}
\sum_{j=1}^{\infty} P(X \ge j) &= \sum_{j=1}^{\infty} \sum_{k=j}^{\infty} P(X = k) \tag{7} \\
&= \sum_{k=1}^{\infty} \sum_{j=1}^{k} P(X = k) \tag{8} \\
&= \sum_{k=1}^{\infty} k \cdot P(X = k) \\
&= E[X]
\end{align}

To understand what is happening with the interchange of summations in (8), it may help to write (7) as the triangular array

P(X=1) + P(X=2) + P(X=3) + P(X=4) + P(X=5) + P(X=6) + ⋯
         P(X=2) + P(X=3) + P(X=4) + P(X=5) + P(X=6) + ⋯
                  P(X=3) + P(X=4) + P(X=5) + P(X=6) + ⋯
                           P(X=4) + P(X=5) + P(X=6) + ⋯
                                    P(X=5) + P(X=6) + ⋯
                                             P(X=6) + ⋯

(7) sums over this “matrix” by row and (8) sums over it by column.

Alternate approach
One could also sum over all entries of the above matrix with zeros plugged into the lower triangle, i.e.
\begin{align}
\sum_{j=1}^{\infty} P(X \ge j) &= \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} P(X = k) \cdot \mathbf{1}\{k \ge j\} \\
&= \sum_{k=1}^{\infty} \sum_{j=1}^{\infty} P(X = k) \cdot \mathbf{1}\{k \ge j\} \\
&= \sum_{k=1}^{\infty} k \cdot P(X = k) \\
&= E[X].
\end{align}
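The identity is easy to verify numerically. A minimal sketch, assuming (purely for illustration) that X is Geometric(p) on {1, 2, ...}, so E[X] = 1/p, and truncating the infinite sums:

import numpy as np

p = 0.3
N = 2000                              # truncation point for the infinite sums
k = np.arange(1, N + 1)
pmf = p * (1 - p) ** (k - 1)          # P(X = k) for X ~ Geometric(p) on {1, 2, ...}

tail_sum = sum(pmf[k >= j].sum() for j in range(1, N + 1))  # sum_j P(X >= j)
expectation = (k * pmf).sum()                               # sum_k k * P(X = k)
print(tail_sum, expectation, 1 / p)                         # all ≈ E[X] = 1/p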


Problem 3 [20 pts.]


(a)

\begin{align}
\mathrm{Var}(X) &= E[(X - E[X])^2] \\
&= \int_{\mathbb{R}} (x - E[X])^2 \cdot p(x)\,dx \\
&= (a - E[X])^2 \cdot p(a) + \int_{\mathbb{R}\setminus\{a\}} \underbrace{(x - E[X])^2 \cdot p(x)}_{=\,0 \text{ for all } x \in \mathbb{R}\setminus\{a\}}\,dx \\
&= (a - E[X])^2 \cdot 1 \\
&= \left(a - \int_{\mathbb{R}} x \cdot p(x)\,dx\right)^2 \\
&= (a - a \cdot p(a))^2 \\
&= (a - a)^2 \\
&= 0.
\end{align}

(b)

Here we assume X is discrete and $\mathrm{Var}(X) = 0$, i.e.
\begin{align}
\mathrm{Var}(X) &= \sum_{x} (x - E[X])^2 \cdot p(x) \tag{9} \\
&= 0.
\end{align}

Since every term in (9) is nonnegative, the above implies
$$(x - E[X])^2 \cdot p(x) = 0 \tag{10}$$
for all $x$.

For (10) to hold, any time $p(x) > 0$ we must have $(x - E[X])^2 = 0$.

Now assume there are two distinct values $x_1$ and $x_2$ for which $p(x_1) > 0$ and $p(x_2) > 0$. But (10) implies
$$x_1 = x_2 = E[X],$$
a contradiction. Therefore,
$$P(X = a) = 1,$$
where $a = E[X]$.
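As a trivial numerical illustration of part (a) (assuming Python/NumPy; not part of the original solution), a sample drawn from a point mass has zero variance:

import numpy as np

a = 3.7
x = np.full(10**6, a)   # every draw from a point mass at a equals a
print(np.var(x))        # 0.0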


Problem 4 [20 pts.]

\begin{align}
\mathrm{Cov}(a + bX, c + dY) &= E[(a + bX - E[a + bX])(c + dY - E[c + dY])] \\
&= E[(a + bX - a - b \cdot E[X])(c + dY - c - d \cdot E[Y])] \\
&= E[(bX - b \cdot E[X])(dY - d \cdot E[Y])] \\
&= E[b \cdot (X - E[X]) \cdot d \cdot (Y - E[Y])] \\
&= bd \cdot E[(X - E[X])(Y - E[Y])] \\
&= bd \cdot \mathrm{Cov}(X, Y),
\end{align}

where we have used the linearity of expectation, established in Problem 1.
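A Monte Carlo sketch of the identity (the constants and the correlated bivariate normal below are arbitrary choices, not part of the original problem):

import numpy as np

rng = np.random.default_rng(1)
n = 10**6
a, b, c, d = 1.0, 2.0, -3.0, 0.5

# Draw correlated (X, Y) pairs from a bivariate normal.
cov = [[1.0, 0.6], [0.6, 2.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

lhs = np.cov(a + b * x, c + d * y)[0, 1]   # Cov(a + bX, c + dY)
rhs = b * d * np.cov(x, y)[0, 1]           # bd * Cov(X, Y)
print(lhs, rhs)                            # agree up to Monte Carlo error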


Problem 5 [20 pts.]


(a)

\begin{align}
E[Y] &= 5E[X] + E[\epsilon] \quad \text{(by linearity of expectation)} \\
&= 5 \cdot 0 + 0 \\
&= 0
\end{align}

\begin{align}
\mathrm{Var}(Y) &= E[(Y - E[Y])^2] \\
&= E[(5X + \epsilon)^2] \\
&= E[25X^2 + 10X\epsilon + \epsilon^2] \\
&= 25 \cdot E[X^2] + 10 \cdot E[X\epsilon] + E[\epsilon^2] \\
&= 25 \cdot E[X^2] + 10 \cdot E[X] \cdot E[\epsilon] + E[\epsilon^2] \quad \text{(by independence)} \\
&= 25 \cdot (\mathrm{Var}(X) + E[X]^2) + 10 \cdot E[X] \cdot E[\epsilon] + \mathrm{Var}(\epsilon) + E[\epsilon]^2 \quad \text{(by the variance formula)} \\
&= 25 \cdot \left(\tfrac{1}{3} + 0^2\right) + 10 \cdot 0 \cdot 0 + 1 + 0^2 \\
&= \tfrac{28}{3}
\end{align}

(b)

\begin{align}
E[Y^2] &= \mathrm{Var}(Y) + E[Y]^2 \\
&= \tfrac{28}{3} + 0^2 \\
&= \tfrac{28}{3}
\end{align}
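Parts (a) and (b) are easy to confirm by simulation. A minimal sketch, assuming (as the moments used above suggest) that X ~ Uniform(−1, 1), that ε is standard normal, and that X and ε are independent:

import numpy as np

rng = np.random.default_rng(2)
n = 10**7
x = rng.uniform(-1, 1, size=n)    # X ~ Uniform(-1, 1): E[X] = 0, Var(X) = 1/3
eps = rng.standard_normal(n)      # eps ~ N(0, 1): E[eps] = 0, Var(eps) = 1
y = 5 * x + eps

print(y.mean())        # ≈ 0       (part a)
print(y.var())         # ≈ 28/3    (part a)
print((y**2).mean())   # ≈ 28/3    (part b)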

(c)

\begin{align}
E[Y \mid X = x] &= E[5X + \epsilon \mid X = x] \\
&= E[5X \mid X = x] + E[\epsilon \mid X = x] \quad \text{(by linearity of conditional expectation)} \\
&= 5x + E[\epsilon \mid X = x] \\
&= 5x + E[\epsilon] \quad \text{(by independence of } X \text{ and } \epsilon\text{)} \\
&= 5x
\end{align}
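The regression function $E[Y \mid X = x] = 5x$ can also be checked empirically by averaging Y over samples whose X value falls near a chosen x (same hypothetical distributions as in the sketch above):

import numpy as np

rng = np.random.default_rng(3)
n = 10**7
x = rng.uniform(-1, 1, size=n)
eps = rng.standard_normal(n)
y = 5 * x + eps

x0 = 0.4
near = np.abs(x - x0) < 0.01     # samples with X in a small window around x0
print(y[near].mean(), 5 * x0)    # ≈ 5 * x0 = 2.0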

(d)

\begin{align}
E[Y^3] &= E[(5X + \epsilon)^3] \\
&= E[125X^3] + E[75X^2 \epsilon] + E[15X\epsilon^2] + E[\epsilon^3] \\
&= 125E[X^3] + 75E[X^2 \epsilon] + 15E[X\epsilon^2] + E[\epsilon^3] \\
&= 125 \int_{-1}^{1} x^3 \cdot \frac{1}{2}\,dx + 75E[X^2 \epsilon] + 15E[X\epsilon^2] + \mathrm{Skew}[Z] \\
&= 75E[X^2 \epsilon] + 15E[X\epsilon^2] \quad \text{(the integral of the odd function is 0, and } \mathrm{Skew}[Z] = 0\text{)} \\
&= 75E[X^2] \cdot E[\epsilon] + 15E[X] \cdot E[\epsilon^2] \quad \text{(by independence; see A1)} \\
&= 0
\end{align}
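A quick numerical check of $E[Y^3] = 0$ under the same hypothetical setup:

import numpy as np

rng = np.random.default_rng(4)
n = 10**7
x = rng.uniform(-1, 1, size=n)
eps = rng.standard_normal(n)
y = 5 * x + eps

print((y**3).mean())   # ≈ 0, up to Monte Carlo error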


(e)

\begin{align}
\mathrm{Cov}(\epsilon, \epsilon^2) &= E[(\epsilon - E[\epsilon])(\epsilon^2 - E[\epsilon^2])] \\
&= E[\epsilon \cdot (\epsilon^2 - 1)] \\
&= E[\epsilon^3] - E[\epsilon] \\
&= \mathrm{Skew}[Z] \\
&= 0.
\end{align}

Although $\mathrm{Cov}(\epsilon, \epsilon^2) = 0$, this does not imply that they are independent! For example, since $\{\epsilon^2 \le 1\} \subset \{\epsilon \le 1\}$,

\begin{align}
P(\epsilon \le 1, \epsilon^2 \le 1) &= P(\epsilon^2 \le 1) \\
&\ne P(\epsilon \le 1) \cdot P(\epsilon^2 \le 1),
\end{align}

because $P(\epsilon \le 1) < 1$ and $P(\epsilon^2 \le 1) > 0$. Hence $\epsilon$ and $\epsilon^2$ are not independent.


Appendix

(A1) Let X and Y be independent random variables. Then, by definition, for any $(x, y)$,
$$P(X \le x, Y \le y) = P(X \le x) \cdot P(Y \le y). \tag{11}$$

Now consider $X^2$ and $Y$. For $x \ge 0$ (for $x < 0$ both sides below are zero) we have

\begin{align}
P(X^2 \le x, Y \le y) &= P(-\sqrt{x} \le X \le \sqrt{x}, Y \le y) \\
&= P(X \le \sqrt{x}, Y \le y) - P(X \le -\sqrt{x}, Y \le y) \\
&= P(X \le \sqrt{x}) \cdot P(Y \le y) - P(X \le -\sqrt{x}) \cdot P(Y \le y) \quad \text{(by independence of } X \text{ and } Y\text{)} \\
&= \left(P(X \le \sqrt{x}) - P(X \le -\sqrt{x})\right) \cdot P(Y \le y) \\
&= P(X^2 \le x) \cdot P(Y \le y).
\end{align}
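An empirical spot check of this factorization (assuming, for illustration only, independent standard normal X and Y and one arbitrary evaluation point):

import numpy as np

rng = np.random.default_rng(6)
n = 10**7
x = rng.standard_normal(n)
y = rng.standard_normal(n)

x0, y0 = 1.5, 0.3
joint = np.mean((x**2 <= x0) & (y <= y0))         # P(X^2 <= x0, Y <= y0)
product = np.mean(x**2 <= x0) * np.mean(y <= y0)  # P(X^2 <= x0) * P(Y <= y0)
print(joint, product)   # agree up to Monte Carlo error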
