The Rank-Deficient Least Squares Problem: QR with Column Pivoting
MAT 610
Summer Session 2009-10
Lecture 11 Notes
These notes correspond to Sections 5.4, 5.5 and 5.7 in the text.
For example, the matrix
\[
A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 0 \\ 1 & 2 & 3 \end{bmatrix}
\]
has rank 2, because the first two columns are parallel, and therefore linearly dependent, while
the third column is not parallel to either of the first two. Columns 1 and 3, or columns 2 and 3,
form linearly independent sets.
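A quick numerical check of this claim, using NumPy (the snippet is only an illustration):

    import numpy as np

    A = np.array([[1, 2, 1],
                  [2, 4, 0],
                  [1, 2, 3]])
    print(np.linalg.matrix_rank(A))   # prints 2: the first two columns are parallel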
Therefore, in the case where $\mathrm{rank}(A) = r < n$, we seek a decomposition of the form $AP = QR$,
where $P$ is a permutation matrix chosen so that the diagonal elements of $R$ are maximized at each
stage. Specifically, suppose $H_1$ is a Householder reflection chosen so that
\[
H_1 \mathbf{a}_1 = \begin{bmatrix} r_{11} \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad r_{11} = \|\mathbf{a}_1\|_2.
\]
To maximize $r_{11}$, we choose $P_1$ so that in the column-permuted matrix $\tilde{A} = AP_1$, we have $\|\tilde{\mathbf{a}}_1\|_2 \geq
\|\tilde{\mathbf{a}}_j\|_2$ for $j \geq 2$. For $r_{22}$, we examine the lengths of the columns of the submatrix of $H_1\tilde{A}$ obtained by
removing the first row and column. It is not necessary to recompute the lengths of the columns,
because we can update them by subtracting the square of the first component of each transformed column from the square of
its total length.
Continuing this process on successively smaller submatrices, we obtain a decomposition of the form
\[
AP = Q \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix},
\]
where $Q = H_1 H_2 \cdots H_r$, $P = P_1 P_2 \cdots P_r$, $R$ is an upper-triangular, nonsingular, $r \times r$ matrix, and $S$ is $r \times (n - r)$. The last $m - r$
rows are necessarily zero, because every column of $A$ is a linear combination of the first $r$ columns
of $Q$.
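The following NumPy sketch illustrates this process, including the downdating of the squared column lengths; the routine house_qr_pivot, its tolerance parameter, and its interface are illustrative choices rather than a standard library function:

    import numpy as np

    def house_qr_pivot(A, tol=1e-12):
        """Householder QR with column pivoting, A P = Q R (illustrative sketch)."""
        A = np.array(A, dtype=float)
        m, n = A.shape
        Q = np.eye(m)
        perm = np.arange(n)
        norms = np.sum(A**2, axis=0)          # squared column lengths
        rank = 0
        for k in range(min(m, n)):
            # Bring the column of largest remaining squared length to position k.
            j = k + int(np.argmax(norms[k:]))
            if norms[j] <= tol:
                break
            A[:, [k, j]] = A[:, [j, k]]
            norms[[k, j]] = norms[[j, k]]
            perm[[k, j]] = perm[[j, k]]
            # Householder reflection that zeros A[k+1:, k].
            x = A[k:, k].copy()
            v = x.copy()
            v[0] += np.linalg.norm(x) if x[0] >= 0 else -np.linalg.norm(x)
            v /= np.linalg.norm(v)
            A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])
            Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
            # Downdate: subtract the square of the new first component of each column.
            norms[k+1:] -= A[k, k+1:]**2
            rank += 1
        # Tiny residual entries below the diagonal are discarded.
        return Q, np.triu(A), np.eye(n)[:, perm], rank

    A = np.array([[1.0, 3, 5, 1], [2, -1, 2, 1], [1, 4, 6, 1], [4, 5, 10, 1]])
    Q, R, P, r = house_qr_pivot(A)
    print(r)                            # 3, the numerical rank
    print(np.allclose(A @ P, Q @ R))    # True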
Example We perform QR with column pivoting on the matrix
\[
A = \begin{bmatrix} 1 & 3 & 5 & 1 \\ 2 & -1 & 2 & 1 \\ 1 & 4 & 6 & 1 \\ 4 & 5 & 10 & 1 \end{bmatrix}.
\]
Computing the (squared) 2-norms of the columns yields
\[
\|\mathbf{a}_1\|_2^2 = 22, \qquad \|\mathbf{a}_2\|_2^2 = 51, \qquad \|\mathbf{a}_3\|_2^2 = 165, \qquad \|\mathbf{a}_4\|_2^2 = 4.
\]
The third column has the largest length, so we interchange the first and third columns:
\[
A^{(1)} = AP_1 = \begin{bmatrix} 5 & 3 & 1 & 1 \\ 2 & -1 & 2 & 1 \\ 6 & 4 & 1 & 1 \\ 10 & 5 & 4 & 1 \end{bmatrix}, \qquad
P_1 = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.
\]
We then apply a Householder transformation $H_1$ to $A^{(1)}$ to make the first column a multiple of $\mathbf{e}_1$,
which yields
\[
H_1 A^{(1)} = \begin{bmatrix}
-12.8452 & -6.7730 & -4.2818 & -1.7905 \\
0 & -2.0953 & 1.4080 & 0.6873 \\
0 & 0.7141 & -0.7759 & 0.0618 \\
0 & -0.4765 & 1.0402 & -0.5637
\end{bmatrix}.
\]
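The same reflection can be formed explicitly in NumPy (an illustrative check; the variable names are arbitrary):

    import numpy as np

    A1 = np.array([[5.0, 3, 1, 1], [2, -1, 2, 1], [6, 4, 1, 1], [10, 5, 4, 1]])
    x = A1[:, 0]
    v = x.copy()
    v[0] += np.sign(x[0]) * np.linalg.norm(x)   # v = x + sign(x1)*||x||*e1
    v /= np.linalg.norm(v)
    H1 = np.eye(4) - 2.0 * np.outer(v, v)       # Householder reflection
    print(np.round(H1 @ A1, 4))                 # first column becomes -sqrt(165) e1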
Next, we consider the submatrix obtained by removing the first row and column of $H_1 A^{(1)}$:
\[
A^{(2)} = \begin{bmatrix}
-2.0953 & 1.4080 & 0.6873 \\
0.7141 & -0.7759 & 0.0618 \\
-0.4765 & 1.0402 & -0.5637
\end{bmatrix}.
\]
We compute the lengths of the columns, as before, except that this time, we update the lengths of
the columns of $A^{(1)}$, rather than recomputing them. This yields
\begin{align*}
\|\mathbf{a}_1^{(2)}\|_2^2 &= \|\mathbf{a}_2^{(1)}\|_2^2 - (-6.7730)^2 \approx 5.1265, \\
\|\mathbf{a}_2^{(2)}\|_2^2 &= \|\mathbf{a}_3^{(1)}\|_2^2 - (-4.2818)^2 \approx 3.6662, \\
\|\mathbf{a}_3^{(2)}\|_2^2 &= \|\mathbf{a}_4^{(1)}\|_2^2 - (-1.7905)^2 \approx 0.7941,
\end{align*}
where the subtracted quantities are the entries of the first row of $H_1 A^{(1)}$.
The second column (that is, the first column of $A^{(2)}$) is the largest, so there is no need for a column interchange this time. We
apply a Householder transformation $\tilde{H}_2$ to the first column of $A^{(2)}$ so that the updated column is a
multiple of $\mathbf{e}_1$, which is equivalent to applying a $4 \times 4$ Householder transformation $H_2 = I - 2\mathbf{v}_2\mathbf{v}_2^T$,
where the first component of $\mathbf{v}_2$ is zero, to the second column of $H_1 A^{(1)}$ so that the updated column
is a linear combination of $\mathbf{e}_1$ and $\mathbf{e}_2$. This yields
\[
\tilde{H}_2 A^{(2)} = \begin{bmatrix}
2.2644 & -1.7666 & -0.4978 \\
0 & -0.2559 & 0.2559 \\
0 & 0.6933 & -0.6933
\end{bmatrix}.
\]
Then, we consider the submatrix obtained by removing the first row and column of $\tilde{H}_2 A^{(2)}$:
\[
A^{(3)} = \begin{bmatrix} -0.2559 & 0.2559 \\ 0.6933 & -0.6933 \end{bmatrix}.
\]
Both columns have the same length, so no column interchange is required. Applying a Householder
reflection $\tilde{H}_3$ to the first column to make it a multiple of $\mathbf{e}_1$ will have the same effect on the second
column, because the columns are parallel. We have
\[
\tilde{H}_3 A^{(3)} = \begin{bmatrix} 0.7390 & -0.7390 \\ 0 & 0 \end{bmatrix}.
\]
It follows that the matrix $A^{(4)}$ obtained by removing the first row and column of $\tilde{H}_3 A^{(3)}$ will be
the zero matrix. We conclude that $\mathrm{rank}(A) = 3$, and that $A$ has the factorization
\[
AP = Q \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix},
\]
where
\[
P = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
R = \begin{bmatrix} -12.8452 & -6.7730 & -4.2818 \\ 0 & 2.2644 & -1.7666 \\ 0 & 0 & 0.7390 \end{bmatrix}, \qquad
S = \begin{bmatrix} -1.7905 \\ -0.4978 \\ -0.7390 \end{bmatrix},
\]
and $Q = H_1 H_2 H_3$ is the product of the Householder reflections used to reduce $A$ to upper-triangular form.
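SciPy's pivoted QR factorization can be used to check this example; the snippet below is only an illustration, and the computed factors may differ from those above in the signs of the rows of $R$ and columns of $Q$:

    import numpy as np
    from scipy.linalg import qr

    A = np.array([[1.0,  3,  5, 1],
                  [2.0, -1,  2, 1],
                  [1.0,  4,  6, 1],
                  [4.0,  5, 10, 1]])
    Q, R, piv = qr(A, pivoting=True)        # factors A[:, piv] = Q @ R
    assert np.allclose(A[:, piv], Q @ R)
    print(piv)              # the third column (index 2) is chosen as the first pivot
    print(np.round(R, 4))   # the last row of R is zero to machine precision: rank(A) = 3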
Using this decomposition, we can solve the linear least squares problem $A\mathbf{x} = \mathbf{b}$ by observing
that
\begin{align*}
\|\mathbf{b} - A\mathbf{x}\|_2^2 &= \left\| \mathbf{b} - Q \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix} P^T \mathbf{x} \right\|_2^2 \\
&= \left\| Q^T\mathbf{b} - \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{u} \\ \mathbf{v} \end{bmatrix} \right\|_2^2 \\
&= \left\| \begin{bmatrix} \mathbf{c} \\ \mathbf{d} \end{bmatrix} - \begin{bmatrix} R\mathbf{u} + S\mathbf{v} \\ 0 \end{bmatrix} \right\|_2^2 \\
&= \|\mathbf{c} - R\mathbf{u} - S\mathbf{v}\|_2^2 + \|\mathbf{d}\|_2^2,
\end{align*}
where
\[
Q^T\mathbf{b} = \begin{bmatrix} \mathbf{c} \\ \mathbf{d} \end{bmatrix}, \qquad P^T\mathbf{x} = \begin{bmatrix} \mathbf{u} \\ \mathbf{v} \end{bmatrix},
\]
with $\mathbf{c}$ and $\mathbf{u}$ being $r$-vectors. Thus $\min \|\mathbf{b} - A\mathbf{x}\|_2^2 = \|\mathbf{d}\|_2^2$, provided that $R\mathbf{u} + S\mathbf{v} = \mathbf{c}$. A basic
solution is obtained by choosing $\mathbf{v} = 0$. A second solution is to choose $\mathbf{u}$ and $\mathbf{v}$ so that $\|\mathbf{u}\|_2^2 + \|\mathbf{v}\|_2^2$
is minimized. This criterion is related to the pseudo-inverse of $A$.
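As a sketch of the basic solution (illustrative only; the helper basic_solution, its tolerance, and the example right-hand side are illustrative choices), one can set $\mathbf{v} = 0$ and solve $R\mathbf{u} = \mathbf{c}$ after a pivoted QR factorization:

    import numpy as np
    from scipy.linalg import qr, solve_triangular

    def basic_solution(A, b, tol=1e-12):
        """Basic least squares solution via QR with column pivoting (v = 0)."""
        Q, Rfull, piv = qr(A, pivoting=True)
        diag = np.abs(np.diag(Rfull))
        r = int(np.sum(diag > tol * diag[0]))   # estimated rank from the diagonal of R
        R = Rfull[:r, :r]                       # leading r x r upper-triangular block
        c = (Q.T @ b)[:r]
        u = solve_triangular(R, c)              # solve R u = c
        x = np.zeros(A.shape[1])
        x[piv[:r]] = u                          # undo the column permutation
        return x

    A = np.array([[1.0, 3, 5, 1], [2, -1, 2, 1], [1, 4, 6, 1], [4, 5, 10, 1]])
    b = np.array([1.0, 2, 3, 4])                # an example right-hand side
    x = basic_solution(A, b)
    print(np.linalg.norm(b - A @ x))            # the residual; equals ||d||_2 above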
The pseudo-inverse is most conveniently described in terms of the complete orthogonal decomposition, which we now construct. Suppose that
\[
Q^T A P = \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix},
\]
where $R$ is upper triangular. Then, we can use Householder reflections to annihilate $S$ by computing a QR factorization of $\begin{bmatrix} R & S \end{bmatrix}^T$,
\[
\begin{bmatrix} U \\ 0 \end{bmatrix} = Z_r \cdots Z_2 Z_1 \begin{bmatrix} R^T \\ S^T \end{bmatrix},
\]
where $U$ is upper-triangular. Then, setting $Z = Z_1 Z_2 \cdots Z_r$ and $L = U^T$, we obtain
\[
Q^T A P Z = \begin{bmatrix} L & 0 \\ 0 & 0 \end{bmatrix},
\]
where $L$ is a lower-triangular matrix of size $r \times r$, where $r$ is the rank of $A$. This is the complete
orthogonal decomposition of $A$.
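A complete orthogonal decomposition can be assembled numerically along these lines (an illustrative sketch; complete_orth_decomp is not a library routine, and SciPy's qr is used for both factorizations):

    import numpy as np
    from scipy.linalg import qr

    def complete_orth_decomp(A, tol=1e-12):
        """Return Q, L, Z, P, r with Q^T A P Z = [[L, 0], [0, 0]], L lower triangular."""
        Q, Rfull, piv = qr(A, pivoting=True)
        diag = np.abs(np.diag(Rfull))
        r = int(np.sum(diag > tol * diag[0]))
        RS = Rfull[:r, :]                   # the r x n block [R  S]
        Z, U = qr(RS.T)                     # [R  S]^T = Z [U; 0], U upper triangular
        L = U[:r, :r].T                     # lower-triangular r x r factor
        P = np.eye(A.shape[1])[:, piv]
        return Q, L, Z, P, r

    A = np.array([[1.0, 3, 5, 1], [2, -1, 2, 1], [1, 4, 6, 1], [4, 5, 10, 1]])
    Q, L, Z, P, r = complete_orth_decomp(A)
    T = Q.T @ A @ P @ Z
    print(np.round(T, 4))   # only the leading r x r lower-triangular block is nonzero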
Recall that $X$ is the pseudo-inverse of $A$ if

1. $AXA = A$
2. $XAX = X$
3. $(XA)^T = XA$
4. $(AX)^T = AX$

Given the above complete orthogonal decomposition of $A$, the pseudo-inverse of $A$, denoted $A^+$, is
given by
\[
A^+ = PZ \begin{bmatrix} L^{-1} & 0 \\ 0 & 0 \end{bmatrix} Q^T.
\]
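As an illustrative check, the pseudo-inverse built from this decomposition agrees with the SVD-based pinv; the rank $r = 3$ is taken from the example above:

    import numpy as np
    from scipy.linalg import qr

    A = np.array([[1.0, 3, 5, 1], [2, -1, 2, 1], [1, 4, 6, 1], [4, 5, 10, 1]])
    Q, Rfull, piv = qr(A, pivoting=True)
    r = 3                                   # the rank found in the example
    Z, U = qr(Rfull[:r, :].T)
    L = U[:r, :r].T
    P = np.eye(4)[:, piv]
    M = np.zeros((4, 4))                    # the block matrix [[L^{-1}, 0], [0, 0]]
    M[:r, :r] = np.linalg.inv(L)
    A_pinv = P @ Z @ M @ Q.T
    print(np.allclose(A_pinv, np.linalg.pinv(A, rcond=1e-10)))   # True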
Let $\mathcal{S} = \{\mathbf{x} \,:\, \|\mathbf{b} - A\mathbf{x}\|_2 = \min\}$. If $\mathbf{x} \in \mathcal{S}$ and we desire $\|\mathbf{x}\|_2 = \min$, then $\mathbf{x} = A^+\mathbf{b}$. Note that in
this case,
\[
\mathbf{r} = \mathbf{b} - A\mathbf{x} = \mathbf{b} - AA^+\mathbf{b} = (I - AA^+)\mathbf{b},
\]
where the matrix $(I - AA^+)$ is a projection matrix $P^{\perp}$. To see that $P^{\perp}$ is a projection, note that
\begin{align*}
AA^+ &= Q \begin{bmatrix} L & 0 \\ 0 & 0 \end{bmatrix} Z^T P^T \, PZ \begin{bmatrix} L^{-1} & 0 \\ 0 & 0 \end{bmatrix} Q^T \\
&= Q \begin{bmatrix} L & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} L^{-1} & 0 \\ 0 & 0 \end{bmatrix} Q^T \\
&= Q \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} Q^T,
\end{align*}
so $(AA^+)^2 = AA^+$, and therefore
\[
(P^{\perp})^2 = (I - AA^+)^2 = I - 2AA^+ + (AA^+)^2 = I - AA^+ = P^{\perp}.
\]
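This projection property is easy to verify numerically (an illustrative check using pinv on the example matrix):

    import numpy as np

    A = np.array([[1.0, 3, 5, 1], [2, -1, 2, 1], [1, 4, 6, 1], [4, 5, 10, 1]])
    P_perp = np.eye(4) - A @ np.linalg.pinv(A, rcond=1e-10)
    print(np.allclose(P_perp @ P_perp, P_perp))   # True: P_perp is idempotent
    print(np.allclose(P_perp, P_perp.T))          # True: it is an orthogonal projection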
Low-Rank Approximations

Let $A$ be an $m \times n$ matrix of rank $r$, with SVD $A = U\Sigma V^T$, and suppose that we wish to approximate $A$ by a matrix $B$ of rank $k < r$. Writing $C = U^T B V$, so that $B = UCV^T$, we have
\[
\|A - B\|_F^2 = \|U^T(A - B)V\|_F^2 = \|\Sigma - C\|_F^2 \geq (\sigma_1 - c_{11})^2 + \cdots + (\sigma_n - c_{nn})^2,
\]
since the off-diagonal entries of $C$ can only increase the error. It can be shown that the minimum over all matrices of rank $k$ is attained by $C = \Sigma_k$, where
\[
\Sigma_k = \begin{bmatrix} \sigma_1 & & & & \\ & \ddots & & & \\ & & \sigma_k & & \\ & & & 0 & \\ & & & & \ddots \end{bmatrix},
\]
that is, by $A_k = U\Sigma_k V^T$. Then
\[
\|A - A_k\|_F^2 = \|U^T(A - A_k)V\|_F^2 = \|\Sigma - \Sigma_k\|_F^2 = \sigma_{k+1}^2 + \cdots + \sigma_r^2.
\]
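The truncated SVD and this error identity can be checked directly (an illustrative sketch; the test matrix and the cutoff k are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 6))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 3
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k truncated SVD
    err2 = np.linalg.norm(A - A_k, 'fro')**2
    print(np.isclose(err2, np.sum(s[k:]**2)))     # True: error^2 = sigma_{k+1}^2 + ... + sigma_r^2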
We now consider a variation of this problem. We wish to find a matrix $B$ of rank $k$, with $k$ as small as possible, such that
\[
\|A - B\|_F^2 \leq \epsilon^2,
\]
for a given tolerance $\epsilon$. It follows that $B = A_k$ is the solution if $k$ is chosen so that
\[
\sigma_{k+1}^2 + \cdots + \sigma_r^2 \leq \epsilon^2, \qquad \sigma_k^2 + \cdots + \sigma_r^2 > \epsilon^2.
\]
Note that
\[
\|A - A_k\|_F = \left(\sigma_{k+1}^2 + \cdots + \sigma_r^2\right)^{1/2},
\]
so the smallest such $k$ can be found by accumulating the squares of the singular values, starting from the smallest, until the sum exceeds $\epsilon^2$.
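A sketch of this rank-selection rule (illustrative; the helper truncate_to_tolerance, the tolerance eps, and the test matrix are arbitrary choices):

    import numpy as np

    def truncate_to_tolerance(A, eps):
        """Smallest-rank truncated SVD A_k with ||A - A_k||_F <= eps (a sketch)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        # err2[k] = sigma_{k+1}^2 + ... + sigma_r^2, the squared error of a rank-k truncation
        err2 = np.concatenate([np.cumsum((s**2)[::-1])[::-1], [0.0]])
        k = int(np.argmax(err2 <= eps**2))      # smallest k meeting the tolerance
        A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
        return A_k, k

    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))   # rank at most 4
    A_k, k = truncate_to_tolerance(A, eps=1e-8)
    print(k, np.linalg.norm(A - A_k, 'fro'))    # k is at most 4, and the error is below eps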