Probability Density Functions
Any function p(x) satisfying p(x) ≥ 0 for all x and \int_{-∞}^{∞} p(x) dx = 1 is a valid PDF. I will use the convention of upper-case P for discrete probabilities, and lower-case
p for PDFs.
With the PDF we can specify the probability that the random variable x falls within a given range:
P(x_0 ≤ x ≤ x_1) = \int_{x_0}^{x_1} p(x) dx    (3)
This can be visualized by plotting the curve p(x). Then, to determine the probability that x falls
within a range, we compute the area under the curve for that range.
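As a quick numerical illustration of Eq. 3, here is a minimal sketch (an example of my own, assuming the exponential density p(x) = e^{-x} for x ≥ 0 as the PDF, and assuming scipy is available): the area under the curve between x_0 and x_1 matches the closed-form probability.

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import expon

    x0, x1 = 0.5, 2.0
    area, _ = quad(expon.pdf, x0, x1)     # integrate p(x) from x0 to x1, as in Eq. 3
    exact = np.exp(-x0) - np.exp(-x1)     # closed form for this particular p(x)
    print(area, exact)                    # both approximately 0.4712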
The PDF can be thought of as the infinite limit of a discrete distribution, i.e., a discrete dis-
tribution with an infinite number of possible outcomes. Specifically, suppose we create a discrete
distribution with N possible outcomes, each corresponding to a range on the real number line.
Then, suppose we increase N towards infinity, so that each outcome shrinks to a single real num-
ber; a PDF is defined as the limiting case of this discrete distribution.
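To make the limiting argument concrete, here is a small sketch (again my own example, using the exponential density p(x) = e^{-x} and scipy): as the number of discrete outcomes N grows, each outcome's probability divided by the width of its range approaches the density at that point.

    import numpy as np
    from scipy.stats import expon

    for N in (10, 100, 1000):
        edges = np.linspace(0.0, 10.0, N + 1)                     # N ranges on the real line
        centres = 0.5 * (edges[:-1] + edges[1:])
        widths = np.diff(edges)
        bin_prob = expon.cdf(edges[1:]) - expon.cdf(edges[:-1])   # discrete outcome probabilities
        density_est = bin_prob / widths                           # probability per unit length
        print(N, np.abs(density_est - expon.pdf(centres)).max())  # error shrinks as N grows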
There is an important subtlety here: a probability density is not a probability per se. For one thing, there is no requirement that p(x) ≤ 1. Moreover, the probability that x attains any one specific value out of the infinite set of possible values is always zero, e.g., P(x = 5) = \int_5^5 p(x) dx = 0 for any PDF p(x). People (myself included) are sometimes sloppy in referring to p(x) as a probability, but it is not a probability; rather, it is a function that can be used in computing probabilities.
Joint distributions are defined in a natural way. For two variables x and y, the joint PDF p(x, y)
defines the probability that (x, y) lies in a given domain D:
P((x, y) ∈ D) = \int_{(x,y) ∈ D} p(x, y) dx dy    (4)
For conditional distributions, we can write p(x|y), which provides a PDF for x for every value of y. (It must be the case that \int p(x|y) dx = 1, since p(x|y) is a PDF over values of x.)
In general, for each of the rules for manipulating discrete distributions there is an analogous rule for continuous distributions (a numerical check of two of these rules is sketched after the list):
Probability rules for PDFs:
• p(x) ≥ 0, for all x
• \int_{-∞}^{∞} p(x) dx = 1
• P(x_0 ≤ x ≤ x_1) = \int_{x_0}^{x_1} p(x) dx
• Sum rule: \int_{-∞}^{∞} p(x) dx = 1
• Product rule: p(x, y) = p(x|y)p(y) = p(y|x)p(x).
• Marginalization: p(y) = \int_{-∞}^{∞} p(x, y) dx
• We can also add conditional information, e.g. p(y|z) = \int_{-∞}^{∞} p(x, y|z) dx
• Independence: Variables x and y are independent if: p(x, y) = p(x)p(y).
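As mentioned above, two of these rules are easy to check numerically. The sketch below uses a correlated bivariate Gaussian as the joint p(x, y) (an example of my own choosing, assuming numpy and scipy are available) and verifies marginalization and the product rule on a grid.

    import numpy as np
    from scipy.stats import multivariate_normal, norm

    mu = np.array([0.0, 1.0])
    Sigma = np.array([[1.0, 0.6],
                      [0.6, 2.0]])
    xs = np.linspace(-8.0, 8.0, 2001)
    dx = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    joint = multivariate_normal(mu, Sigma).pdf(np.dstack([X, Y]))   # p(x, y) on a grid

    # Marginalization: integrating p(x, y) over x recovers the marginal p(y).
    p_y = joint.sum(axis=0) * dx
    print(np.abs(p_y - norm.pdf(xs, mu[1], np.sqrt(Sigma[1, 1]))).max())   # tiny

    # Product rule: p(x|y) = p(x, y) / p(y) integrates to one for every y.
    p_x_given_y = joint / p_y
    print(np.abs(p_x_given_y.sum(axis=0) * dx - 1.0).max())                # tiny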
The variance of a scalar variable x is the expected squared deviation from the mean:
E_{p(x)}[(x − µ)^2] = \int (x − µ)^2 p(x) dx    (7)
The variance of a distribution tells us how uncertain, or “spread-out,” the distribution is. For a very narrow distribution, E_{p(x)}[(x − µ)^2] will be small.
The covariance of a vector x is a matrix:
Σ = cov(x) = E_{p(x)}[(x − µ)(x − µ)^T] = \int (x − µ)(x − µ)^T p(x) dx    (8)
By inspection, we can see that the diagonal entries of the covariance matrix are the variances of
the individual entries of the vector:
Σ_{ii} = var(x_i) = E_{p(x)}[(x_i − µ_i)^2]    (9)
The off-diagonal entries give the covariance between variables x_i and x_j. If the covariance is a large positive number, then we expect x_i to be
larger than µi when xj is larger than µj . If the covariance is zero and we know no other information,
then knowing xi > µi does not tell us whether or not it is likely that xj > µj .
One goal of statistics is to infer properties of distributions. In the simplest case, the sample mean of a collection of N data points x_{1:N} is just their average: x̄ = \frac{1}{N} \sum_i x_i. The sample covariance of a set of data points is \frac{1}{N} \sum_i (x_i − x̄)(x_i − x̄)^T. The covariance of the data points tells us how “spread-out” the data points are.
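A minimal numpy sketch of these two estimates, assuming the data points are stacked as the rows of an N×D matrix (the data here are synthetic, just for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))             # N = 500 synthetic 3-D data points

    x_bar = X.mean(axis=0)                    # sample mean: (1/N) sum_i x_i
    centred = X - x_bar
    S = centred.T @ centred / X.shape[0]      # sample covariance: (1/N) sum_i (x_i - x_bar)(x_i - x_bar)^T
    print(x_bar)                              # near zero for this data
    print(S)                                  # near the 3x3 identity for this data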
The uniform distribution over an interval [x_0, x_1] has the PDF
p(x) = \frac{1}{x_1 − x_0} for x_0 ≤ x ≤ x_1, and 0 otherwise    (11)
which is also written as
x ∼ U(x_0, x_1)    (12)
Equations 11 and 12 are equivalent. The latter simply says: x is distributed uniformly in the range x_0 to x_1, and it is impossible that x lies outside of that range.
The mean of a uniform distribution U(x0 , x1 ) is (x1 + x0 )/2. The variance is (x1 − x0 )2 /12.
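Both facts are easy to check by simulation; a quick sketch with arbitrarily chosen endpoints x_0 = 2 and x_1 = 5:

    import numpy as np

    x0, x1 = 2.0, 5.0
    rng = np.random.default_rng(0)
    samples = rng.uniform(x0, x1, size=1_000_000)
    print(samples.mean(), (x0 + x1) / 2.0)        # both near 3.5
    print(samples.var(), (x1 - x0) ** 2 / 12.0)   # both near 0.75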
[Figure 1 panels: histograms for N = 1, N = 2, and N = 10, each over the range 0 to 1.]
Figure 1: Histogram plots of the mean of N uniformly distributed numbers for various values of
N . The effect of the Central Limit Theorem is seen: as N increases, the distribution becomes more
Gaussian. (Figure from Pattern Recognition and Machine Learning by Chris Bishop.)
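The experiment behind Figure 1 is easy to reproduce; a short sketch of my own (not code from the notes):

    import numpy as np

    rng = np.random.default_rng(0)
    for N in (1, 2, 10):
        means = rng.uniform(0.0, 1.0, size=(100_000, N)).mean(axis=1)   # mean of N uniform numbers
        hist, edges = np.histogram(means, bins=20, range=(0.0, 1.0), density=True)
        print(N, means.std(), hist.max())   # spread shrinks and the peak grows as N increases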
The simplest case is a Gaussian PDF over a scalar value x, in which case the PDF is:
p(x|µ, σ^2) = \frac{1}{\sqrt{2πσ^2}} \exp\left(−\frac{1}{2σ^2}(x − µ)^2\right)    (13)
(The notation exp(a) is the same as e^a.) The Gaussian has two parameters, the mean µ and the variance σ^2. The mean specifies the center of the distribution, and the variance tells us how
“spread-out” the PDF is.
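A direct implementation of Eq. 13 is a useful sanity check. Here is a minimal sketch (the function name gaussian_pdf is my own), compared against scipy's built-in normal PDF:

    import numpy as np
    from scipy.stats import norm

    def gaussian_pdf(x, mu, sigma2):
        # Scalar Gaussian density p(x | mu, sigma^2), as in Eq. 13.
        return np.exp(-0.5 * (x - mu) ** 2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)

    x = np.linspace(-3.0, 7.0, 11)
    print(np.allclose(gaussian_pdf(x, mu=2.0, sigma2=4.0),
                      norm.pdf(x, loc=2.0, scale=2.0)))   # True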
The PDF for a D-dimensional vector x, the elements of which are jointly distributed with the Gaussian density function, is given by
p(x|µ, Σ) = \frac{1}{\sqrt{(2π)^D |Σ|}} \exp\left(−(x − µ)^T Σ^{-1} (x − µ)/2\right)    (14)
where µ is the mean vector, Σ is the D×D covariance matrix, and |A| denotes the determinant
of matrix A. An important special case is when the Gaussian is isotropic (rotationally invariant).
In this case the covariance matrix can be written as Σ = σ^2 I, where I is the identity matrix. This is
called a spherical or isotropic covariance matrix. In this case, the PDF reduces to:
p(x|µ, σ^2) = \frac{1}{\sqrt{(2π)^D σ^{2D}}} \exp\left(−\frac{1}{2σ^2} ||x − µ||^2\right).    (15)
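The D-dimensional density in Eq. 14 can be implemented the same way. Below is a sketch (the helper gaussian_density and the particular µ, Σ, and x are my own choices), checked against scipy and against the isotropic special case in Eq. 15:

    import numpy as np
    from scipy.stats import multivariate_normal

    def gaussian_density(x, mu, Sigma):
        # Evaluate Eq. 14 at a single D-dimensional point x.
        D = len(mu)
        diff = x - mu
        quad = diff @ np.linalg.solve(Sigma, diff)   # (x - mu)^T Sigma^{-1} (x - mu)
        return np.exp(-0.5 * quad) / np.sqrt((2.0 * np.pi) ** D * np.linalg.det(Sigma))

    mu = np.array([1.0, -1.0, 0.5])
    x = np.array([0.3, 0.0, 1.0])
    Sigma = np.array([[2.0, 0.3, 0.0],
                      [0.3, 1.0, 0.2],
                      [0.0, 0.2, 1.5]])
    print(gaussian_density(x, mu, Sigma), multivariate_normal(mu, Sigma).pdf(x))   # equal

    sigma2 = 0.7                                     # spherical covariance, as in Eq. 15
    iso = np.exp(-0.5 * np.sum((x - mu) ** 2) / sigma2) / np.sqrt((2.0 * np.pi) ** 3 * sigma2 ** 3)
    print(gaussian_density(x, mu, sigma2 * np.eye(3)), iso)                        # equal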
The Gaussian distribution is used frequently enough that it is useful to denote its PDF in a
simple way. We will define a function G to be the Gaussian density function, i.e.,
G(x; µ, Σ) ≡ \frac{1}{\sqrt{(2π)^D |Σ|}} \exp\left(−(x − µ)^T Σ^{-1} (x − µ)/2\right)    (16)
When formulating problems and manipulating PDFs, this functional notation will be useful. When we want to specify that a random vector has a Gaussian PDF, it is common to use the notation:
x ∼ N(µ, Σ)    (17)
Equations 14 and 17 essentially say the same thing. Equation 17 says that x is Gaussian, and
Equation 14 specifies (evaluates) the density for an input x. The covariance matrix Σ of a Gaussian
must be symmetric and positive definite.
6.3.1 Diagonalization
A useful way to understand a Gaussian is to diagonalize the exponent. The exponent of the Gaus-
sian is quadratic, and so its shape is essentially elliptical. Through diagonalization we find the
major axes of the ellipse, and the variance of the distribution along those axes. Seeing the Gaus-
sian this way often makes it easier to interpret the distribution.
As a reminder, the eigendecomposition of a real-valued symmetric matrix Σ yields a set of orthonormal vectors u_i and scalars λ_i such that
Σ u_i = λ_i u_i    (18)
Equivalently, if we combine the eigenvalues and eigenvectors into matrices U = [u_1, ..., u_N] and Λ = diag(λ_1, ..., λ_N), then we have
ΣU = UΛ    (19)
Since U is orthonormal:
Σ = UΛU^T    (20)
The inverse of Σ is straightforward, since U is orthonormal, and hence U^{-1} = U^T:
Σ^{-1} = (UΛU^T)^{-1} = UΛ^{-1}U^T    (21)
(If any of these steps are not familiar to you, you should refresh your memory of them.)
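If a numerical refresher helps, the identities in Eqs. 18-21 can be checked with numpy on an arbitrary symmetric positive definite matrix (the particular Σ below is made up just for the check):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    Sigma = A @ A.T + 4.0 * np.eye(4)                   # symmetric positive definite

    lam, U = np.linalg.eigh(Sigma)                      # eigenvalues lam, orthonormal columns of U
    Lam = np.diag(lam)
    print(np.allclose(Sigma @ U, U @ Lam))              # Eq. 19
    print(np.allclose(Sigma, U @ Lam @ U.T))            # Eq. 20
    print(np.allclose(np.linalg.inv(Sigma), U @ np.diag(1.0 / lam) @ U.T))   # Eq. 21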
Now, consider the negative log of the Gaussian (up to an additive constant, this is just its exponent); let
f(x) = \frac{1}{2} (x − µ)^T Σ^{-1} (x − µ).    (22)
Substituting in the diagonalization gives:
f(x) = \frac{1}{2} (x − µ)^T UΛ^{-1}U^T (x − µ)    (23)
     = \frac{1}{2} z^T z    (24)
where
z = diag(λ_1^{-1/2}, ..., λ_N^{-1/2}) U^T (x − µ)    (25)
This new function f(z) = z^T z/2 = \sum_i z_i^2/2 is quadratic in the new variables z_i. Given variables x, we can convert them to the z representation by applying Eq. 25.
Figure 2: The red curve shows the elliptical surface of constant probability density for a Gaussian
in a two-dimensional space on which the density is exp(−1/2) of its value at x = µ. The major
axes of the ellipse are defined by the eigenvectors ui of the covariance matrix, with corresponding
eigenvalues λ_i. (Figure from Pattern Recognition and Machine Learning by Chris Bishop. Note: y1 and y2 in the figure should read z1 and z2.)
If all the eigenvalues are nonzero, we can convert back by inverting Eq. 25. Hence, we can write our Gaussian in this new coordinate system as¹:
\frac{1}{\sqrt{(2π)^N}} \exp\left(−\frac{1}{2} ||z||^2\right) = \prod_i \frac{1}{\sqrt{2π}} \exp\left(−\frac{1}{2} z_i^2\right)    (26)
It is easy to see that for the quadratic form of f(z), its level sets (i.e., the surfaces f(z) = c for constant c) are hyperspheres. Equivalently, it is clear from Eq. 26 that z is a Gaussian random vector with an isotropic covariance, so the different elements of z are uncorrelated. In other words, the value of this transformation is that we have decomposed the original N-dimensional quadratic, with its many interactions between the variables, into a much simpler Gaussian composed of N independent variables. This convenient geometrical form can be seen in Figure 2. For example, if we consider an
individual zi variable in isolation (i.e., consider a slice of the function f (z)), that slice will look
like a 1D bowl.
We can also understand the local curvature of f with a slightly different diagonalization.
Specifically, let v = U^T(x − µ). Then,
f(x) = \frac{1}{2} v^T Λ^{-1} v = \frac{1}{2} \sum_i \frac{v_i^2}{λ_i}    (27)
If we plot a cross-section of this function, then we have a 1D bowl shape with variance given by λ_i. In other words, the eigenvalues tell us the variance of the Gaussian in different dimensions.
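A numerical illustration of this change of variables, using an arbitrary 2-D Gaussian of my own choosing: samples mapped through Eq. 25 become uncorrelated with unit variance, and the variance along each eigenvector direction matches the corresponding eigenvalue.

    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([1.0, -2.0])
    Sigma = np.array([[3.0, 1.2],
                      [1.2, 1.0]])
    X = rng.multivariate_normal(mu, Sigma, size=200_000)   # samples x ~ N(mu, Sigma)

    lam, U = np.linalg.eigh(Sigma)
    Z = (X - mu) @ U @ np.diag(lam ** -0.5)                 # Eq. 25 applied to every sample
    print(np.cov(Z.T, bias=True))                           # approximately the identity

    V = (X - mu) @ U                                        # v = U^T (x - mu)
    print(np.var(V, axis=0), lam)                           # per-axis variances match the eigenvalues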
¹The normalizing |Σ| disappears due to the nature of change-of-variables in PDFs, which we won't discuss here.
Figure 3: Left: The contours of a Gaussian distribution p(xa , xb ) over two variables. Right: The
marginal distribution p(xa ) (blue curve) and the conditional distribution p(xa |xb ) for xb = 0.7 (red
curve). (Figure from Pattern Recognition and Machine Learning by Chris Bishop.)
If a Gaussian random vector x is partitioned into two components x_a and x_b, with corresponding mean components µ_a and µ_b and covariance blocks Σ_{aa} and Σ_{bb}, then one can show straightforwardly that the marginal PDFs for the components x_a and x_b are also Gaussian, i.e.,
x_a ∼ N(µ_a, Σ_{aa}),    x_b ∼ N(µ_b, Σ_{bb}).    (29)
With a little more work one can also show that the conditional distributions are Gaussian. For
example, the conditional distribution of xa given xb satisfies
where
µ = Σ (Σ_1^{-1} µ_1 + Σ_2^{-1} µ_2),    (32)
Σ = (Σ_1^{-1} + Σ_2^{-1})^{-1}.    (33)
Note that a linear transformation of a Gaussian random variable is also Gaussian. For example, if we apply a transformation such that y = Ax where x ∼ N(x|µ, Σ), we have y ∼ N(y|Aµ, AΣA^T).
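A quick Monte Carlo check of this property, with an arbitrary matrix A and Gaussian parameters of my own choosing:

    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([1.0, 0.0, -2.0])
    Sigma = np.array([[2.0, 0.5, 0.0],
                      [0.5, 1.0, 0.3],
                      [0.0, 0.3, 1.5]])
    A = np.array([[1.0, -1.0, 0.0],
                  [0.5,  2.0, 1.0]])

    X = rng.multivariate_normal(mu, Sigma, size=500_000)
    Y = X @ A.T                                  # y = A x, applied to every sample
    print(Y.mean(axis=0), A @ mu)                # empirical mean vs. A mu
    print(np.cov(Y.T, bias=True))                # approximately A Sigma A^T
    print(A @ Sigma @ A.T)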