
1.

Assuming that the classes are normally distributed, in subset selection, when one variable is added or removed, how can the new discriminant be calculated quickly? For example, how can the new $S_{new}^{-1}$ be calculated from $S_{old}^{-1}$?
ANSWER

When we add a new variable, we are adding a row and a column to the covariance matrix. It can be shown that if we partition a symmetric nonsingular matrix A as (Rencher, 1995; p. 28)

\[
A = \begin{pmatrix} A_{11} & a_{12} \\ a_{12}^T & a_{22} \end{pmatrix}
\]

the inverse is given by

\[
A^{-1} = \begin{pmatrix}
A_{11}^{-1} + \frac{1}{b} A_{11}^{-1} a_{12} a_{12}^T A_{11}^{-1} & -\frac{1}{b} A_{11}^{-1} a_{12} \\[4pt]
-\frac{1}{b} a_{12}^T A_{11}^{-1} & \frac{1}{b}
\end{pmatrix}
\]

where $b = a_{22} - a_{12}^T A_{11}^{-1} a_{12}$. So given $A_{11}$ and $A_{11}^{-1}$, we can easily calculate $A^{-1}$ when $a_{12}$ and $a_{22}$ (the covariances of the new variable with the existing variables, and its variance) are added.
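
As an illustration, here is a minimal MATLAB sketch of this update (it is not part of the original solution; the data and all variable names are made up for demonstration):

% Sketch: updating the inverse covariance when one variable is added,
% using the partitioned-inverse identity above. All names are illustrative.
d = 5;
X = randn(200, d+1);           % synthetic data; the last column plays the new variable
S = cov(X);                    % full (d+1)x(d+1) covariance
A11 = S(1:d, 1:d);             % covariance of the existing variables
a12 = S(1:d, d+1);             % covariances of the new variable with the old ones
a22 = S(d+1, d+1);             % variance of the new variable
A11inv = inv(A11);             % assumed to be already stored

u = A11inv * a12;
b = a22 - a12' * u;            % b = a22 - a12' * A11^{-1} * a12
Ainv_new = [A11inv + (u*u')/b, -u/b; ...
            -u'/b,             1/b];

norm(Ainv_new - inv(S))        % should be close to zero (round-off only)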

Another possibility is to break down the Mahalanobis distance calculation (if we have the previous values stored). Let A be partitioned as above (Cormen, Leiserson, Rivest, and Stein, 2001, Introduction to Algorithms, 2nd ed., The MIT Press, p. 761).
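
Using the same partition and the same b, the quadratic form splits as follows (a worked step added here for clarity; it is not taken from the original solution text):

\[
x^T A^{-1} x = x_1^T A_{11}^{-1} x_1 + \frac{1}{b}\left(a_{12}^T A_{11}^{-1} x_1 - x_2\right)^2
\]

where $x = (x_1^T, x_2)^T$ is split the same way as A. The new Mahalanobis quadratic form is therefore the previously stored value $x_1^T A_{11}^{-1} x_1$ plus a single scalar correction term, which is cheap to compute.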

2. Using Optdigits from the UCI repository, implement PCA. For various numbers of eigenvectors, reconstruct the digit images and calculate the reconstruction error (equation 6.12).

ANSWER
The Matlab code is given in ex6_2.m (which also generates figure 6.3 in the book). The contour plot of an example ‘6’ and its reconstruction is given in figure 6.1. The original 64-dimensional data and its reconstruction are as follows:
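
For reference, here is a minimal MATLAB sketch of the same computation (this is not the ex6_2.m referred to above; the file name 'optdigits.tra' and all variable names are assumptions):

% Sketch: PCA reconstruction of Optdigits and reconstruction error
% as a function of the number of eigenvectors k.
D = dlmread('optdigits.tra', ',');   % UCI Optdigits training set (assumed file name)
X = D(:, 1:64);                      % 64 pixel features; the last column is the class label
mu = mean(X, 1);
Xc = bsxfun(@minus, X, mu);          % center the data

[V, L] = eig(cov(Xc));               % eigenvectors/eigenvalues of the covariance
[~, idx] = sort(diag(L), 'descend');
V = V(:, idx);                       % order columns by decreasing eigenvalue

err = zeros(64, 1);
for k = 1:64
    W = V(:, 1:k);                    % first k eigenvectors
    Z = Xc * W;                       % project to k dimensions
    Xhat = bsxfun(@plus, Z * W', mu); % reconstruct in the original 64-d space
    err(k) = mean(sum((X - Xhat).^2, 2));   % reconstruction error (cf. equation 6.12)
end
plot(1:64, err); xlabel('number of eigenvectors k'); ylabel('reconstruction error');

As more eigenvectors are used the reconstruction error decreases, reaching zero when all 64 are kept.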
