Lecture 9
Amit Tripathi
Remark
The 0 vector is orthogonal to every vector.
Example
With respect to the dot product on R3 , the set {i, j, k} is an orthogonal
set.
Example
The vector (x, y ) in R2 is orthogonal to the vector (−y , x) (again with dot
product).
(IIT) Elementary Linear Algebra (2024-25 @IITH) 2 / 28
Orthogonal vectors
Theorem
Any orthogonal set of non-zero vectors is linearly independent.
Proof.
Let S = {α1 , · · · , αn } be an orthogonal set of non-zero vectors. Suppose for some scalars
c1 , · · · , cn ∈ R,

c1 α1 + · · · + cn αn = 0.

Taking the inner product with α1 on both sides, we get

0 = < c1 α1 + · · · + cn αn , α1 > = c1 < α1 , α1 > + · · · + cn < αn , α1 >.

By orthogonality, all the inner products except the first one vanish, so c1 ||α1 ||² = 0. Since
α1 ̸= 0, we get c1 = 0. Similarly, we conclude

c1 = · · · = cn = 0.
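The argument in the proof can be checked concretely: for an orthogonal set, taking inner products recovers each coefficient of a linear combination, so a combination equal to 0 forces every coefficient to be 0. A small sketch in pure Python (the vectors and scalars below are illustrative choices, not from the lecture):

```python
def dot(u, v):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(u, v))

# An orthogonal set of non-zero vectors in R^3.
a1 = [1, 1, 0]
a2 = [1, -1, 0]
assert dot(a1, a2) == 0

# Form a combination beta = c1*a1 + c2*a2, then recover the scalars
# by taking inner products, exactly as in the proof.
c1, c2 = 3, -5
beta = [c1 * x + c2 * y for x, y in zip(a1, a2)]
assert dot(beta, a1) / dot(a1, a1) == c1
assert dot(beta, a2) / dot(a2, a2) == c2
# In particular, beta = 0 would force c1 = c2 = 0.
```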
Corollary
If dim(V ) = n, then any orthogonal set of non-zero vectors will have at
most n vectors.
Example
The standard ordered basis B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is an
orthogonal basis of R3 .
Proof.
Write α = c1 α1 + · · · + cn αn . Taking the inner product with αi on both sides, we get
< α, αi > = ci ||αi ||², so ci = < α, αi > / ||αi ||².
Example
With respect to the dot product on R3 , the set {i, j, k} is an orthonormal
set.
Example
If S = {α1 , · · · , αn } is an orthogonal set of non-zero vectors, then the set

S ′ = { α1 / ||α1 || , · · · , αn / ||αn || }

is an orthonormal set.
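The normalization step is easy to verify numerically; a minimal sketch in pure Python, assuming an illustrative orthogonal set in R3 (the helper names `dot` and `normalize` are ours):

```python
import math

def dot(u, v):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    """Divide a non-zero vector by its norm."""
    n = math.sqrt(dot(v, v))
    return [a / n for a in v]

# An orthogonal (but not orthonormal) set in R^3.
S = [[2, 0, 0], [0, 3, 0], [0, 0, 5]]
S_prime = [normalize(v) for v in S]

# Every vector in S' has norm 1, and distinct vectors stay orthogonal.
for i, u in enumerate(S_prime):
    for j, v in enumerate(S_prime):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(u, v) - expected) < 1e-12
```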
(c) Then {β1 , β2 } is orthogonal, linearly independent and spans the same
space as {α1 , α2 }.
Example
(a) Is {β1 , β2 , α3 } orthogonal and linearly independent? Consider the set
{β1 , β2 , α3 + aβ1 + bβ2 } for some a, b ̸= 0. Taking

a = − < β1 , α3 > / ||β1 ||²  and  b = − < β2 , α3 > / ||β2 ||²

works. Set β3 = α3 + aβ1 + bβ2 .
Theorem
Let (V , <, >) be an inner product space. Let S = {α1 , · · · , αn } be a
linearly independent set.
Then one can construct an orthogonal set S ′ = {β1 , · · · , βn } of non-zero
vectors such that S ′ is an orthogonal basis for the space spanned by S.
Proof.
(a) Can βk+1 = 0 ? No: if βk+1 = 0, then αk+1 would lie in the span of {β1 , · · · , βk },
which equals the span of {α1 , · · · , αk }, contradicting the linear independence of S.
Thus, {β1 , · · · , βk+1 } are linearly independent.
(b) How to get an orthonormal basis? Divide each βi by its norm ||βi ||.
Corollary
If B = {β1 , · · · , βn } is an orthonormal basis, then for any

α = x1 β1 + · · · + xn βn
α′ = y1 β1 + · · · + yn βn ,

we have < α, α′ > = x1 y1 + · · · + xn yn .
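The corollary's identity < α, α′ > = x1 y1 + · · · + xn yn can be checked numerically; a sketch in pure Python, using an orthonormal basis of R2 obtained by rotating the standard basis (the angle and coordinates are illustrative choices):

```python
import math

def dot(u, v):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(u, v))

# An orthonormal basis of R^2: the standard basis rotated by 30 degrees.
t = math.pi / 6
b1 = [math.cos(t), math.sin(t)]
b2 = [-math.sin(t), math.cos(t)]

# Coordinates of two vectors alpha, alpha' in this basis.
x1, x2 = 2.0, -1.0
y1, y2 = 0.5, 3.0
alpha = [x1 * b1[k] + x2 * b2[k] for k in range(2)]
alpha_p = [y1 * b1[k] + y2 * b2[k] for k in range(2)]

# <alpha, alpha'> equals the dot product of the coordinate tuples.
assert abs(dot(alpha, alpha_p) - (x1 * y1 + x2 * y2)) < 1e-12
```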
Remark
Suppose our starting (non-orthonormal) basis set is {α1 , · · · , αn }.
Suppose we have applied the above procedure for m steps to get an
orthonormal set {β1 , · · · , βm }. Then the span of {β1 , · · · , βm } equals
the span of {α1 , · · · , αm }.
Example
Suppose we are given vectors v1 , v2 ∈ R2 . We set u1 = v1 . Next we
write v2 in terms of u1 and a vector u2 which is perpendicular to u1 .
Example
Let

v1 = (1, 1, −1),  v2 = (1, 0, 2),  v3 = (2, −2, 3).

This is a basis of R3 . Start by setting w1 = v1 . Next set

w2 = v2 − ( < v2 , w1 > / ||w1 ||² ) w1 = v2 + (1/3) w1 = (4/3, 1/3, 5/3).

We now set

w3 = v3 − ( < v3 , w1 > / ||w1 ||² ) w1 − ( < v3 , w2 > / ||w2 ||² ) w2 = v3 + w1 − (3/2) w2 = (1, −3/2, −1/2).
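The worked computation can be reproduced exactly in pure Python with rational arithmetic; a minimal sketch (the function name `gram_schmidt` is ours, and the routine produces the orthogonal — not normalized — vectors wi):

```python
from fractions import Fraction

def dot(u, v):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize a list of vectors without normalizing them."""
    ws = []
    for v in vectors:
        w = list(v)
        for u in ws:
            c = dot(v, u) / dot(u, u)   # projection coefficient <v, u>/||u||^2
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ws.append(w)
    return ws

v1 = [Fraction(1), Fraction(1), Fraction(-1)]
v2 = [Fraction(1), Fraction(0), Fraction(2)]
v3 = [Fraction(2), Fraction(-2), Fraction(3)]

w1, w2, w3 = gram_schmidt([v1, v2, v3])
assert w1 == [1, 1, -1]
assert w2 == [Fraction(4, 3), Fraction(1, 3), Fraction(5, 3)]
assert w3 == [1, Fraction(-3, 2), Fraction(-1, 2)]
```

Using `Fraction` keeps the arithmetic exact, so the output matches the fractions 4/3, 1/3, 5/3 and 1, −3/2, −1/2 above with no floating-point error.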
Example
(a) Let V = R2 and let W be the subspace consisting of y = 0.
(b) Let W ′ = {x = 0} and W ′′ = {y = x}.
(c) It is clear that V ≅ W ⊕ W ′ and V ≅ W ⊕ W ′′ .
(d) So there are many complementary subspaces to W .
Lemma
Let (V , <, >) be an inner product space and let W be a subspace of V .
Let W ⊥ be the set of all vectors α ∈ V such that < α, β > = 0 for every
β ∈ W . Then W ⊥ is a subspace of V , and V = W ⊕ W ⊥ .
Proof.
It is immediate that W ⊥ is a subspace. To get W ⊥ , we start with a basis
BW = {α1 , · · · , αm } of W . Extend it to a basis
BV = {α1 , · · · , αm , αm+1 , · · · , αn }
of V .
Apply the Gram-Schmidt process on BV to get an orthonormal basis
{β1 , · · · , βm , βm+1 , · · · , βn }. As remarked earlier, by construction of this
basis, the span of {β1 , · · · , βm } equals W .
Let
W ⊥ =< βm+1 , · · · , βn > .
It is clear that any vector v ∈ V , can be written as a unique sum
v = w + w ′,
where w ∈ W and w ′ ∈ W ⊥ .
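The decomposition v = w + w ′ can be computed by projecting v onto an orthonormal basis of W; a small sketch in pure Python, assuming for simplicity that W is the xy-plane in R3 (an illustrative choice):

```python
def dot(u, v):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(u, v))

# W = span{b1, b2} (the xy-plane); W-perp = span{b3} (the z-axis).
b1 = [1.0, 0.0, 0.0]
b2 = [0.0, 1.0, 0.0]

v = [2.0, -3.0, 5.0]

# w = projection of v onto W; w_perp = the remainder, which lies in W-perp.
w = [dot(v, b1) * b1[k] + dot(v, b2) * b2[k] for k in range(3)]
w_perp = [v[k] - w[k] for k in range(3)]

assert w == [2.0, -3.0, 0.0]
assert w_perp == [0.0, 0.0, 5.0]
# w_perp is orthogonal to every basis vector of W.
assert dot(w_perp, b1) == 0.0 and dot(w_perp, b2) == 0.0
```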
Definition
The subspace W ⊥ thus obtained is called the orthogonal complement of
W with respect to the given inner product.
Theorem
Let W be a subspace of an inner product space V and let β ∈ V . Then α ∈ W is a best
approximation of β by vectors in W (i.e. ||β − α|| ≤ ||β − γ|| for every γ ∈ W ) if and
only if β − α is orthogonal to every vector in W .
Proof.
First assume that the vector α ∈ W is such that β − α is orthogonal to
every vector in W . For any arbitrary γ ∈ W , we have

||β − γ||² = ||(β − α) + (α − γ)||² = ||β − α||² + ||α − γ||² ≥ ||β − α||²,

since α − γ ∈ W is orthogonal to β − α. Hence α is a best approximation of β.
Conversely, suppose α ∈ W is a best approximation of β. Let
β = w + w ′ be the unique representation of β as a sum of vectors w ∈ W
and w ′ ∈ W ⊥ . It suffices to show that w = α. If not, then

||β − α||² = ||w ′ + (w − α)||² = ||w ′||² + ||w − α||² > ||w ′||² = ||β − w||²,

contradicting the assumption that α is a best approximation of β.
Lemma
Let A be any m × n matrix. Then
N(At A) = N(A).
Proof.
If X ∈ N(A) then At A(X ) = At (AX ) = 0. Thus N(A) ⊆ N(At A).
Conversely, suppose X ∈ N(At A), i.e. At AX = 0. Multiplying on the left
by X t , we get (AX )t (AX ) = 0, which (with the usual norm) gives
||AX || = 0. Hence AX = 0, i.e. X ∈ N(A).
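Both inclusions in the lemma, and the identity X t (At A)X = ||AX ||² used in the proof, can be checked on a concrete matrix; a sketch in pure Python (the matrix A and the helpers `matvec`, `matmul`, `transpose` are illustrative, not from the lecture):

```python
def matvec(A, x):
    """Multiply a matrix (stored as a list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmul(A, B):
    """Matrix product, with matrices stored as lists of rows."""
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# A 3x2 matrix with dependent columns, so N(A) is non-trivial.
A = [[1, 2], [2, 4], [3, 6]]
AtA = matmul(transpose(A), A)

# x lies in N(A); by the lemma it must also lie in N(A^t A).
x = [2, -1]
assert matvec(A, x) == [0, 0, 0]
assert matvec(AtA, x) == [0, 0]

# The key identity from the proof: X^t (A^t A) X = ||AX||^2.
y = [1, 1]
Ay = matvec(A, y)
assert sum(yi * zi for yi, zi in zip(y, matvec(AtA, y))) == sum(z * z for z in Ay)
```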
Lemma
For any matrices A and B such that AB can be defined, we have
rank(AB) ≤ rank(A).
Definition
Let A be an m × n matrix. The row space of A is the subspace of Rn
spanned by the rows of A. The row rank of A is the dimension of this
subspace.
Lemma
For any m × n matrix A, rank(A) = rank(At ).
Proof.
Since N(At A) = N(A) and both matrices have n columns, the rank–nullity
theorem gives rank(At A) = rank(A). By the previous lemma,

rank(A) = rank(At A) ≤ rank(At ).

Interchanging A and At , we get rank(At ) ≤ rank(A), and hence

rank(At ) = rank(A).
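The equality rank(A) = rank(At ) can be checked by row-reducing both matrices; a sketch in pure Python, using exact rational arithmetic so pivots are found without rounding error (the `rank` helper and the example matrix are ours):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0  # number of pivots found so far
    for c in range(len(A[0]) if A else 0):
        # Find a pivot in column c at or below row r.
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        # Clear column c in every other row.
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

A = [[1, 2, 0], [2, 4, 1]]
At = [list(col) for col in zip(*A)]
assert rank(A) == rank(At) == 2
```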