PCA, Eigenfaces, and Face Detection: CSC320: Introduction To Visual Computing Michael Guerzhoy
Lighting
• Squared reconstruction error: ‖𝑥𝑘 − 𝑥‖²
Reconstruction cont’d
• 𝑥𝑘 = (𝑥 ⋅ 𝑣0)𝑣0 + (𝑥 ⋅ 𝑣1)𝑣1 + ⋯ + (𝑥 ⋅ 𝑣𝑘)𝑣𝑘
• Note: in (𝑥 ⋅ 𝑣0)𝑣0,
– 𝑥 ⋅ 𝑣0 is a measure of how similar 𝑥 is to 𝑣0
– The more similar 𝑥 is to 𝑣0, the larger the contribution from 𝑣0 is to the sum
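A minimal NumPy sketch of this point (toy vectors, not course code): the dot product 𝑥 ⋅ 𝑣0 acts as a similarity score, and each score scales its basis vector's contribution to the sum.

```python
import numpy as np

# Hypothetical 3-D example: two orthonormal basis vectors v0, v1.
v0 = np.array([1.0, 0.0, 0.0])
v1 = np.array([0.0, 1.0, 0.0])

# x is much more aligned with v0 than with v1.
x = np.array([3.0, 0.5, 0.0])

w0 = x @ v0   # large: x is similar to v0
w1 = x @ v1   # small: x is not very similar to v1

# Each weight scales its basis vector's contribution to the sum.
x_hat = w0 * v0 + w1 * v1
```

Since 𝑥 here lies entirely in the span of 𝑣0 and 𝑣1, the two terms reconstruct it exactly.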
Representation and reconstruction
• Reconstruction:
x̂ = µ + w1u1 + w2u2 + w3u3 + w4u4 + …
[Figure: reconstructions using P = 4, 200, and 400 principal components]
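The effect of increasing P can be sketched in NumPy (synthetic data standing in for face images; the PCs come from an SVD of the mean-centered data, an assumption, not the course's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 points in 10 dimensions.
X = rng.normal(size=(50, 10))
mu = X.mean(axis=0)

# Principal components: right singular vectors of the centered data.
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
U = Vt.T                      # U[:, i] is the i-th principal component

x = X[0]
errors = []
for P in (1, 3, 10):
    w = U[:, :P].T @ (x - mu)         # weights w1..wP
    x_hat = mu + U[:, :P] @ w         # x_hat = mu + w1*u1 + ... + wP*uP
    errors.append(np.linalg.norm(x - x_hat))
```

The reconstruction error shrinks as P grows; with all 10 components it is numerically zero, mirroring the P = 4 / 200 / 400 progression above.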
• (Fairly easy calculus – look it up, or we can talk in office hours, or possibly
we’ll do it next week)
Obtaining the Principal Components
• 𝑋𝑋ᵀ can be huge
• There are tricks to compute its eigenvectors anyway
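One such trick, sketched in NumPy (sizes are made up for illustration): if (𝑋ᵀ𝑋)𝑢 = λ𝑢, then (𝑋𝑋ᵀ)(𝑋𝑢) = λ(𝑋𝑢), so eigenvectors of the huge d×d matrix can be obtained from the small n×n one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: d = 4096 pixels, n = 20 images, so XX^T is
# 4096 x 4096 but X^T X is only 20 x 20.
d, n = 4096, 20
X = rng.normal(size=(d, n))   # columns are (centered) images

# Small eigenproblem: (X^T X) u = lam * u
lam, U = np.linalg.eigh(X.T @ X)

# Trick: if (X^T X) u = lam u, then (X X^T)(X u) = lam (X u),
# so X u (normalized) is an eigenvector of the huge matrix XX^T.
v = X @ U[:, -1]              # lift the top small eigenvector to d dims
v /= np.linalg.norm(v)

# Verify without ever forming the d x d matrix XX^T.
ok = np.allclose(X @ (X.T @ v), lam[-1] * v)
```

Note that the check multiplies 𝑋ᵀ𝑣 first, so the 4096×4096 matrix is never materialized.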
PCA as dimensionality reduction
Mean: μ
Dimensionality reduction
• We can represent the orange points with only their v1 coordinates
– since v2 coordinates are all essentially 0
• This makes it much cheaper to store and compare points
• A bigger deal for higher dimensional problems
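A small NumPy sketch of the idea, with made-up 2-D points lying almost on a line: storing only each point's 𝑣1 coordinate halves the storage while losing almost nothing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-D points lying almost on a line (v2 coordinate ~ 0).
t = rng.normal(size=100)
pts = np.outer(t, [3.0, 1.0]) + rng.normal(scale=0.01, size=(100, 2))
pts -= pts.mean(axis=0)

# v1 = direction of largest variance, v2 = direction of smallest.
_, _, Vt = np.linalg.svd(pts, full_matrices=False)
v1, v2 = Vt[0], Vt[1]

# Keep only the v1 coordinate: 1 number per point instead of 2.
coords1 = pts @ v1
approx = np.outer(coords1, v1)

max_v2 = np.abs(pts @ v2).max()            # discarded coordinates ~ 0
recon_err = np.linalg.norm(pts - approx)   # total error is tiny
```

In 2-D the savings are trivial; for 64×64 face images the same move replaces 4096 numbers per image with a handful of coordinates.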
Another Interpretation of PCA
The eigenvectors of the covariance matrix define a new
coordinate system
• eigenvector with largest eigenvalue captures the most
variation among training vectors x
• eigenvector with smallest eigenvalue has least variation
• The eigenvectors are known as principal components
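This interpretation can be checked directly in NumPy (toy data, stretched far more along one axis than the other): the variance of the data projected onto an eigenvector equals that eigenvector's eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data with much more variance along the first axis.
X = rng.normal(size=(500, 2)) * np.array([10.0, 1.0])
X -= X.mean(axis=0)

# Eigen-decompose the covariance matrix (eigh: ascending eigenvalues).
cov = (X.T @ X) / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)

# Eigenvector with the largest eigenvalue = first principal component;
# projecting onto it captures the most variation.
pc1 = eigvecs[:, -1]
proj_var = np.var(X @ pc1, ddof=1)
```

Here `proj_var` matches `eigvals[-1]`, and the smallest eigenvalue's eigenvector would give the least variation.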
Data Compression using PCA
• For each data point 𝑥, store 𝑉𝑘ᵀ𝑥 (a k-dimensional vector). Among all representations of 𝑥 using k numbers, this one gives the smallest reconstruction error
Face Detection using PCA
• For each (centered) window 𝑥 and for a set of principal components 𝑉, compute the Euclidean distance ‖𝑉𝑉ᵀ𝑥 − 𝑥‖
• That is the distance between the reconstruction of x
and x. The reconstruction of x is similar to x if x lies in
the face subspace
• Note: the reconstruction is always in the face subspace
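The detection criterion can be sketched in NumPy. The "face subspace" below is a random orthonormal basis standing in for principal components learned from real face windows; the threshold logic is the same.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical "face subspace": k orthonormal directions in d dims,
# standing in for PCs learned from face windows.
d, k = 100, 5
V, _ = np.linalg.qr(rng.normal(size=(d, k)))   # columns are orthonormal

def face_distance(x):
    """Euclidean distance between x and its reconstruction V V^T x."""
    return np.linalg.norm(V @ (V.T @ x) - x)

# A window lying in the subspace reconstructs (almost) perfectly...
face_like = V @ rng.normal(size=k)
# ...while a generic window does not.
non_face = rng.normal(size=d)

d_face = face_distance(face_like)   # ~ 0
d_other = face_distance(non_face)   # large
```

Thresholding `face_distance` then classifies windows: small distance means 𝑥 lies close to the face subspace.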
Issues: dimensionality
What if your space isn’t flat?
• PCA may not help
Nonlinear methods
LLE, MDS, etc.
Moving forward
• Faces are pretty well-behaved
– Mostly the same basic shape
– Lie close to a low-dimensional subspace