Random Variables Cheatsheet

Uploaded by mariyam saeed

                Discrete: pmf p(·)                        Continuous: pdf f(·)

Support Set     Countable set of values                   Uncountable set of values

Probabilities   p(x) = P(X = x)                           f(x) ≠ P(X = x) = 0 for all x;
                                                          P(a ≤ X ≤ b) = ∫_a^b f(x) dx = F(b) − F(a)

Joint           p_XY(x, y) = P(X = x, Y = y)              P(X ∈ [a, b], Y ∈ [c, d]) = ∫_a^b ∫_c^d f_XY(x, y) dy dx

Marginal        p_X(x) = Σ_y p_XY(x, y)                   f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy

Conditional     p_{X|Y}(x|y) = p_XY(x, y)/p_Y(y)          f_{X|Y}(x|y) = f_XY(x, y)/f_Y(y)

Independence    p_XY(x, y) = p_X(x) p_Y(y)                f_XY(x, y) = f_X(x) f_Y(y)

Expected Value  µ_X = E[X] = Σ_x x p(x)                   µ_X = E[X] = ∫_{−∞}^{∞} x f(x) dx
                E[g(X)] = Σ_x g(x) p(x)                   E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx
                E[g(X, Y)] = Σ_x Σ_y g(x, y) p(x, y)      E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f_XY(x, y) dx dy

Table 1: Differences between Discrete and Continuous Random Variables
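The two expectation formulas in Table 1 can be checked numerically. A minimal sketch, assuming NumPy is available; the fair-die pmf and the pdf f(x) = 2x on [0, 1] are illustrative choices, not taken from the table:

```python
import numpy as np

# Discrete: fair six-sided die, p(x) = 1/6 for x = 1..6
x = np.arange(1, 7)
p = np.full(6, 1 / 6)
mu_discrete = np.sum(x * p)            # E[X] = Σ_x x p(x)

# Continuous: pdf f(x) = 2x on [0, 1]; exact E[X] = ∫_0^1 x · 2x dx = 2/3
dx = 1e-6
grid = np.arange(dx / 2, 1, dx)        # midpoint rule over [0, 1]
mu_continuous = np.sum(grid * 2 * grid) * dx

print(round(mu_discrete, 4))           # 3.5
print(round(mu_continuous, 4))         # 0.6667
```

The same sum-versus-integral swap carries through every row of the table: replacing Σ_x with ∫ dx converts each discrete formula into its continuous counterpart.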

Probability Mass Function p(x)            Probability Density Function f(x)

Discrete Random Variables                 Continuous Random Variables
p(x) = P(X = x)                           f(x) ≠ P(X = x) = 0
p(x) ≥ 0                                  f(x) ≥ 0
p(x) ≤ 1                                  f(x) can be greater than one!
Σ_x p(x) = 1                              ∫_{−∞}^{∞} f(x) dx = 1
F(x_0) = Σ_{x ≤ x_0} p(x)                 F(x) = ∫_{−∞}^{x} f(t) dt

Table 2: Probability mass function (pmf) versus probability density function (pdf).
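The surprising entry in Table 2 is that a density may exceed one, since only its integral is constrained. A small check, assuming NumPy; Uniform(0, 1/2), whose pdf is f(x) = 2 on its support, is an illustrative choice:

```python
import numpy as np

# Uniform(0, 0.5): f(x) = 2 on [0, 0.5], 0 elsewhere
dx = 1e-6
grid = np.arange(dx / 2, 0.5, dx)      # midpoints covering [0, 0.5]
f = np.full_like(grid, 2.0)
area = np.sum(f) * dx                  # numerical ∫ f(x) dx

print(f.max() > 1)                     # True: the density exceeds one...
print(round(area, 6))                  # 1.0: ...yet it still integrates to 1
```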


Definition of R.V.            X : S → R (a RV is a fixed function from the sample space to the reals)
Support Set                   Collection of all possible realizations of a RV
CDF                           F(x_0) = P(X ≤ x_0)
Expectation of a Function     In general, E[g(X)] ≠ g(E[X])
Linearity of Expectation      E[a + X] = a + E[X],  E[bX] = bE[X],  E[X_1 + … + X_k] = E[X_1] + … + E[X_k]
Variance                      σ²_X ≡ Var(X) ≡ E[(X − E[X])²] = E[X²] − (E[X])² = E[X(X − µ_X)]
Standard Deviation            σ_X = √(σ²_X)
Var. of Linear Combination    Var(a + X) = Var(X),  Var(bX) = b² Var(X),
                              Var(aX + bY + c) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y),
                              X_1, …, X_k uncorrelated ⇒ Var(X_1 + … + X_k) = Var(X_1) + … + Var(X_k)
Covariance                    σ_XY ≡ Cov(X, Y) ≡ E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y] = E[X(Y − µ_Y)] = E[(X − µ_X)Y]
Correlation                   ρ_XY = Corr(X, Y) = σ_XY / (σ_X σ_Y)
Covariance and Independence   X, Y independent ⇒ Cov(X, Y) = 0, but Cov(X, Y) = 0 ⇏ X, Y independent
Functions and Independence    X, Y independent ⇒ g(X), h(Y) independent
Bilinearity of Covariance     Cov(a + X, Y) = Cov(X, a + Y) = Cov(X, Y),  Cov(bX, Y) = Cov(X, bY) = b Cov(X, Y),
                              Cov(X, Y + Z) = Cov(X, Y) + Cov(X, Z) and Cov(X + Z, Y) = Cov(X, Y) + Cov(Z, Y)
Linearity of Conditional E    E[a + Y|X] = a + E[Y|X],  E[bY|X] = bE[Y|X],  E[X_1 + … + X_k|Z] = E[X_1|Z] + … + E[X_k|Z]
Taking Out What is Known      E[g(X)Y|X] = g(X)E[Y|X]
Law of Iterated Expectations  E[Y] = E[E(Y|X)]
Conditional Variance          Var(Y|X) ≡ E[(Y − E[Y|X])²|X] = E[Y²|X] − (E[Y|X])²
Law of Total Variance         Var(Y) = E[Var(Y|X)] + Var(E[Y|X])

Table 3: Essential facts that hold for all random variables, continuous or discrete: X, Y, Z and X_1, …, X_k are random variables; a, b, c, d are
constants; µ, σ, ρ are parameters; and g(·), h(·) are functions.
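The law of total variance in Table 3 can be checked by simulation. A sketch, assuming NumPy; the model Y = X + noise is an illustrative choice, picked so that E[Y|X] = X and Var(Y|X) = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.integers(1, 7, size=n).astype(float)   # X: fair die roll
y = x + rng.normal(0.0, 1.0, size=n)           # Y | X ~ Normal(X, 1)

# Law of total variance: Var(Y) = E[Var(Y|X)] + Var(E[Y|X]) = 1 + Var(X)
lhs = y.var()
rhs = 1.0 + x.var()
print(abs(lhs - rhs) < 0.05)   # True: both sides agree up to sampling noise
```

The same simulation also illustrates the law of iterated expectations: the sample mean of y is close to the sample mean of x, since E[Y] = E[E(Y|X)] = E[X].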
             Sample Statistic                             Population Parameter                          Population Parameter

Setup        Sample of size n < N from a popn.            Population viewed as list of N objects        Population viewed as a RV

Mean         x̄ = (1/n) Σ_{i=1}^{n} x_i                   µ_X = (1/N) Σ_{i=1}^{N} x_i                   Discrete: µ_X = Σ_x x p(x)
                                                                                                        Continuous: µ_X = ∫_{−∞}^{∞} x f(x) dx

Variance     s²_X = (1/(n−1)) Σ_{i=1}^{n} (x_i − x̄)²     σ²_X = (1/N) Σ_{i=1}^{N} (x_i − µ_X)²         σ²_X = E[(X − E[X])²]

Std. Dev.    s_X = √(s²_X)                                σ_X = √(σ²_X)                                 σ_X = √(σ²_X)

Covariance   s_XY = (1/(n−1)) Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ)
                                                          σ_XY = (1/N) Σ_{i=1}^{N} (x_i − µ_X)(y_i − µ_Y)
                                                                                                        σ_XY = E[(X − µ_X)(Y − µ_Y)]

Correlation  r_XY = s_XY / (s_X s_Y)                      ρ_XY = σ_XY / (σ_X σ_Y)                       ρ_XY = σ_XY / (σ_X σ_Y)

Table 4: Sample statistics versus population parameters.
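The n − 1 versus N divisor in the variance column maps directly onto NumPy's ddof argument. A small sketch; the data values are illustrative:

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])   # x̄ = 5, squared deviations sum to 20

s2 = x.var(ddof=1)       # sample variance: divides by n - 1, so 20 / 3
sigma2 = x.var(ddof=0)   # population variance: divides by N, so 20 / 4
print(round(s2, 4), sigma2)   # 6.6667 5.0
```

The same ddof convention applies to np.std and np.cov (where ddof=1 is the default), matching the s versus σ distinction in the table.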
