
Abstract Algebra


Krishna's
TEXT BOOK on
ABSTRACT ALGEBRA

(For B.A. and B.Sc. IIIrd year students of All Colleges affiliated to Lucknow University, Lucknow)

As per Latest Syllabus of Lucknow University


(w.e.f. 2014-2015)

By

A.R. Vasishtha
Retired Head, Dep't. of Mathematics
Meerut College, Meerut (U.P.)

Deepti Singh
M.Sc., Ph.D.
Head, Dep't. of Mathematics
Mahila Mahavidyalya Aminabad, Lucknow (U.P.)

Pratibha Shukla
M.Sc., Ph.D.
Dep't. of Mathematics
S.J.N. (P.G.) College, Lucknow (U.P.)

Uday Singh Rajput
M.Sc., Ph.D.
Asst. Professor, Dep't. of Mathematics
Lucknow University, Lucknow (U.P.)

(Lucknow Edition)

KRISHNA Prakashan Media (P) Ltd.


KRISHNA HOUSE, 11, Shivaji Road, Meerut-250 001 (U.P.), India
Jai Shri Radhey Shyam

Dedicated
to

Lord

Krishna
Authors & Publishers
Preface
This book on ABSTRACT ALGEBRA has been specially written according
to the Latest Unified Syllabus to meet the requirements of the B.A. and B.Sc.
Part-III Students of all colleges affiliated to Lucknow University, Lucknow.

The subject matter has been discussed in such a simple way that the students
will find no difficulty in understanding it. The proofs of various theorems and
examples have been given with minute details. Each chapter of this book
contains complete theory and a fairly large number of solved examples.
Sufficient problems have also been selected from various university examination
papers. At the end of each chapter an exercise containing objective questions has
been given.

We have tried our best to keep the book free from misprints. The authors
shall be grateful to the readers who point out errors and omissions which, in spite
of all care, might have been there.

The authors, in general, hope that the present book will be warmly received
by the students and teachers. We shall indeed be very thankful to our colleagues
for recommending this book to their students.

The authors wish to express their thanks to Mr. S.K. Rastogi, M.D.,
Mr. Sugam Rastogi, Executive Director, Mrs. Kanupriya Rastogi, Director and
entire team of KRISHNA Prakashan Media (P) Ltd., Meerut for bringing
out this book in the present nice form.

The authors will feel amply rewarded if the book serves the purpose for
which it is meant. Suggestions for the improvement of the book are always
welcome.

July, 2014 — Authors


Syllabus

Lucknow University, Lucknow


(w.e.f. 2014-15)

B.A./B.Sc. Paper-II

Unit-I
Automorphism, inner automorphism, automorphism groups and their
computations. Conjugacy relations, Normaliser, Counting principle and the
class equation of a finite group, Center of group of prime power order, Sylow's
theorems, Sylow p-subgroup.

Unit-II
Prime and maximal ideals, Euclidean Rings, Principal ideal rings, Polynomial
Rings, Polynomial over the Rational Field, The Eisenstein Criterion, Polynomial
Rings over Commutative Rings, unique factorization domain, R is unique
factorization domain implies so is R [x1, x2, …, xn].

Unit-III
Direct sum, Quotient space, Linear transformations and their representation as
matrices, The Algebra of linear transformations, rank nullity theorem, change of
basis, linear functional, Dual space, Bidual space and natural isomorphism,
transpose of a linear transformation, Characteristic values, annihilating
polynomials, diagonalisation, Cayley Hamilton Theorem, Invariant subspaces,
Primary decomposition theorem.

Unit-IV
Inner product spaces, Cauchy-Schwarz inequality, orthogonal vectors,
Orthogonal complements, Orthonormal sets and bases, Bessel's inequality for
finite dimensional spaces, Gram-Schmidt orthogonalization process, Bilinear,
Quadratic and Hermitian forms.
Brief Contents

Dedication
Preface
Syllabus (Lucknow University, w.e.f. 2014-15)
Brief Contents

ABSTRACT ALGEBRA

1. Group Automorphisms
2. Rings
3. Linear Transformations
4. Matrices and Linear Transformations
5. Linear Functionals
6. Characteristic Values and Annihilating Polynomials
7. Inner Product Spaces
8. Bilinear, Quadratic and Hermitian Forms


Krishna's

ABSTRACT ALGEBRA
Chapters

1. Group Automorphisms

2. Rings

3. Linear Transformations

4. Matrices and Linear Transformations

5. Linear Functionals
6. Characteristic Values and
Annihilating Polynomials

7. Inner Product Spaces

8. Bilinear, Quadratic and Hermitian Forms

Group Automorphisms

1.1 Automorphisms of a Group


Definition: An isomorphic mapping of a group G onto itself is called an automorphism of G.
Thus a one-one onto mapping f : G → G is an automorphism of G if
f (ab) = f (a) f (b) ∀ a, b ∈ G.
If G is any group, then the identity mapping I : G → G such that
I ( x) = x, ∀ x ∈ G is an automorphism of G. Obviously, I is one-one onto and
I ( xy) = xy = I ( x) I ( y), ∀ x, y ∈ G.
The identity mapping I of G is called a trivial automorphism of G.

Example 1: Show that the mapping f : I → I such that f ( x) = − x ∀ x ∈ I is an automorphism of the additive group of integers I.

Solution: Obviously the mapping f is one-one onto.


Let x1 , x2 be any two elements of I. Then
f ( x1 + x2 ) = − ( x1 + x2 ) = (− x1 ) + (− x2 ) = f ( x1 ) + f ( x2 ).
Hence f is an automorphism of I.
Example 2: Show that a → a −1 is an automorphism of a group G iff G is abelian.
Solution: Let f : G → G be such that f ( x) = x −1 V x ∈ G.
The function f is one-one because
f ( x) = f ( y) ⇒ x −1 = y −1 ⇒ ( x −1 ) −1 = ( y −1 ) −1 ⇒ x = y.
Also if x ∈ G, then x −1 ∈ G and we have f ( x −1 ) = ( x −1 ) −1 = x.
∴ f is onto.
Now suppose G is abelian. Let a, b be any two elements of G. Then
f (ab) = (ab) −1 [by def. of f ]
= b −1 a −1 = a −1 b −1 [∵ G is abelian]
= f (a) f (b) [by def. of f ]
∴ f is an automorphism of G.
Conversely suppose that f is an automorphism of G. Let a, b ∈ G.
We have f (ab) = (ab) −1 [by def. of f ]
= b −1 a −1 = f (b) f (a) [by def. of f ]
= f (ba). [∵ f is an automorphism]
Since f is one-one, therefore f (ab) = f (ba) ⇒ ab = ba ⇒ G is abelian.
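As an illustrative aside (not part of the original text), the dichotomy in Example 2 can be checked numerically with a short Python sketch; the helper names compose and inverse and the representation of permutations as tuples of images are our own assumptions, not the book's notation.

from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p[q[i]]; permutations as tuples of images
    return tuple(p[i] for i in q)

def inverse(p):                    # inverse permutation
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))  # a non-abelian group
Z5 = list(range(5))                # an abelian group under addition mod 5

# x -> x^{-1} preserves products exactly when the group is abelian
hom_on_S3 = all(inverse(compose(a, b)) == compose(inverse(a), inverse(b))
                for a in S3 for b in S3)
hom_on_Z5 = all((-(a + b)) % 5 == ((-a) + (-b)) % 5 for a in Z5 for b in Z5)
print(hom_on_S3, hom_on_Z5)        # False True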
Example 3: Let G be a non-abelian group. Show that the mapping f : G → G such that
f ( x) = x −1 , V x ∈ G is not an automorphism.
Solution: Let G be a non-abelian group.
Then to prove that the mapping f : G → G such that f ( x) = x −1 , V x ∈ G is not
an automorphism of G.
Suppose the mapping f is an automorphism of G.
Let x, y be any two elements of G. Then
f ( xy) = ( xy) −1. [By def. of the mapping f ]
Also f ( xy) = f ( x) f ( y) [∵ f is an automorphism]
= x −1 y −1 [By def. of the mapping f ]
= ( yx) −1.
Thus, f ( xy) = ( xy) −1 and f ( xy) = ( yx) −1
⇒ ( xy) −1 = ( yx) −1 ⇒ [( xy) −1 ] −1 = [( yx) −1 ] −1 ⇒ xy = yx.
Thus xy = yx, V x, y ∈ G i. e., G is abelian which is a contradiction.
Hence, the mapping f is not an automorphism of G.

Example 4: Let G be a finite abelian group of order n, where n is odd and > 1. Then show
that G has a non-trivial automorphism.
Solution: Define a mapping f : G → G such that f ( x) = x −1 , V x ∈ G.
Then f is an automorphism of G. [See Ex. 2]
We shall show that the mapping f is a non-trivial automorphism of G i. e., f ≠ I ,
where I is the identity mapping of G i. e., I : G → G such that I ( x) = x, V x ∈ G.
Suppose f = I .
Then f ( x) = I ( x), V x ∈ G
⇒ x −1 = x, V x ∈ G ⇒ x −1 x = x 2 , V x ∈ G
⇒ x 2 = e, V x ∈ G, where e is the identity of G
⇒ o ( x)| 2 , V x ∈ G ⇒ o ( x) = 1 or 2 , V x ∈ G.
Since o (G) > 1, therefore G must have an element x such that x ≠ e.
Then x ≠ e ⇒ o ( x) = 2 .
But in a group G, o ( x)| o (G), V x ∈ G.
∴ o ( x) = 2 ⇒ 2 | o (G) ⇒ o (G) is even,
which is a contradiction.
∴ our assumption that f = I is wrong.
Hence, f ≠ I i. e., f is a non-trivial automorphism of G.
Example 5: Let G be a group, H a subgroup of G, f an automorphism of G. Let
f ( H) = { f (h) : h ∈ H }. (Lucknow 2007)
Prove that f ( H) is a subgroup of G.
Solution: Let a, b be any two elements of f ( H). Then
a = f (h1 ) and b = f (h2 ) where h1 , h2 ∈ H.
Now h1 , h2 ∈ H ⇒ h1 h2 −1 ∈ H [∵ H is a subgroup]
⇒ f (h1 h2 −1) ∈ f ( H)
⇒ f (h1 ) f (h2 −1) ∈ f ( H) [∵ f is an automorphism]
⇒ f (h1 ) [ f (h2 )] −1 ∈ f ( H) ⇒ ab −1 ∈ f ( H).
∴ f ( H) is a subgroup of G.
Note: Some authors use the symbol hf in place of f (h) to denote the image of an
element.
Example 6:Let G be a group, f an automorphism of G, N a normal subgroup of G. Prove
that f (N ) is a normal subgroup of G. (Lucknow 2010)
Solution: First show as in Ex. 5 that f (N ) is a subgroup of G.
Now to show that f (N ) is a normal subgroup of G.
Let x ∈ G and k ∈ f (N ). Then x = f ( y) where y ∈ G because f is a function of
G onto G. Also k = f (n) where n ∈ N .

We have xk x −1 = f ( y) f (n) [ f ( y)] −1 = f ( y) f (n) f ( y −1 ) = f ( yny −1 ).


Since N is normal in G, therefore yny −1 ∈ N . Consequently
f ( yny −1 ) ∈ f (N ).
Thus xk x −1 ∈ f (N ).
∴ f (N ) is a normal subgroup of G.

1.2 Group of Automorphisms of a Group


Theorem 1: The set of all automorphisms of a group forms a group with respect to
composite of functions as the composition. (Lucknow 2010)
Proof: Let A (G) be the collection of all automorphisms of a group G. Then
A (G) = { f : f is an automorphism of G }.
We shall prove that A (G) is a group with respect to composite of functions as
composition.
Closure Property: Let f , g ∈ A (G). Then f , g are one-one mappings of G onto
itself. Therefore g f is also a one-one mapping of G onto itself. If a, b be any two
elements of G, we have
( g f ) (ab) = g [ f (ab)] = g [ f (a) f (b)]
= g[ f (a)] g[ f (b)] = [( g f ) (a)] [( g f ) (b)].
∴ g f is also an automorphism of G. Thus A (G) is closed with respect to
composite composition.
Associativity: We know that composite of arbitrary mappings is associative.
Therefore composite of automorphisms is also associative.
Existence of Identity: The identity function i on G is also an automorphism of G.
Obviously i is one-one onto and if a, b ∈ G, then i (ab) = ab = i (a) i (b). Thus
i ∈ A (G) and if f ∈ A (G), we have i f = f = f i .
Existence of Inverse: Let f ∈ A (G). Since f is a one-one mapping of G onto itself, therefore f −1 exists and is also a one-one mapping of G onto itself. We shall show that f −1 is also an automorphism of G. Let a, b ∈ G. Then there exist a′ , b′ ∈ G such that
f −1 (a) = a′ ⇔ f (a′ ) = a
and f −1 (b) = b′ ⇔ f (b′ ) = b.
We have f −1 (ab) = f −1 [ f (a′ ) f (b′ )]
= f −1 [ f (a′ b′ )] = a′ b′ = f −1 (a) f −1 (b).
∴ f −1 is an automorphism of G and thus
f ∈ A (G) ⇒ f −1 ∈ A (G).

Therefore each element of A (G) possesses inverse.


Therefore A (G) is a group with respect to composite composition.
Theorem 2: Let G be a group. Let A (G) denote the set of all automorphisms of G and
P (G) be the group of all permutations of G. Then A (G) is a subgroup of P (G).
Proof: We have
P (G) = { f : f is a permutation of G i. e., f : G → G is one-one onto}.
We know that P (G) is a group for the composite of two permutations as the binary
operation.
Also, A (G) = { f : f is an automorphism of G }.
The mapping f : G → G is said to be an automorphism of G if f is one-one onto
and f is a homomorphism i. e., f (ab) = f (a) f (b), V a, b ∈ G.
Obviously, f ∈ A (G) ⇒ f ∈ P (G).
Thus A (G ) ⊆ P (G).
Also, A (G) ≠ ∅, because I ∈ A (G),where I is the identity mapping of the group G.
In order to show that A (G) is a subgroup of P(G) it is sufficient to show that
f ∈ A (G), g ∈ A (G) ⇒ f g −1 ∈ A (G),
where g −1 is the inverse of g in the group P (G).
First we shall show that g ∈ A (G) ⇒ g −1 ∈ A (G).
Now g ∈ A (G) ⇒ g is an automorphism of G
⇒ g is one-one onto and g (ab) = g (a) g (b), V a, b ∈ G.
Since the mapping g : G → G is one-one onto, therefore its inverse mapping
g −1 : G → G exists and is also one-one onto.
Let a, b be any two elements of G. Then there exist elements a′ , b ′ ∈ G such that
g −1 (a) = a ′ , g −1 (b) = b ′ …(1)
and g (a ′ ) = a, g (b ′ ) = b. …(2)
We have g −1 (ab) = g −1 [ g (a′ ) g (b′ )] [From (2)]
= g −1 [ g (a′ b′ )] [∵ g is a homomorphism]
= a′ b′ [By def. of g −1 ]
= g −1 (a) g −1 (b) [From (1)]
∴ the mapping g −1 : G → G is a homomorphism of G.
Since g −1 : G → G is one-one onto and is a homomorphism of G, therefore g −1 is
an automorphism of G i. e., g −1 ∈ A (G).
Thus g ∈ A (G) ⇒ g −1 ∈ A (G).
Now we shall show that
f ∈ A (G), g ∈ A (G) ⇒ f g −1 ∈ A (G),


where f g −1 is the composite mapping of the mappings f and g −1 .
Since the mappings f : G → G and g −1 : G → G are one-one onto, therefore their
composite mapping f g −1 : G → G is also one-one onto.
Now we shall show that the mapping f g −1 : G → G is a homomorphism.
Let a, b be any two elements of G.
Then f g −1 (ab) = f [ g −1 (ab)] [By def. of the composite of two mappings]
= f [ g −1 (a) g −1 (b)] [∵ g −1 is a homomorphism of G]
= f [ g −1 (a)] f [ g −1 (b)] [∵ f is a homomorphism]
= f g −1 (a) f g −1 (b). [By def. of the composite of two mappings]
∴ the mapping f g −1 : G → G is a homomorphism of G.
∴ f g −1 is an automorphism of G i. e., f g −1 ∈ A (G).
Thus f ∈ A (G), g ∈ A (G) ⇒ f g −1 ∈ A (G).
Hence, A (G) is a subgroup of P (G).
Theorem 3: Let G be a group and f an automorphism of G. If a ∈ G is of order greater
than zero, then prove that o [ f (a)] = o (a).
Proof: Suppose the mapping f is an automorphism of a group G. Let a be any
element of G such that o (a) = n, where n > 0.
Let o [ f (a)] = m.
Let e be the identity of G. Then f (e) = e.
Now o (a) = n ⇒ a n = e ⇒ f (a n ) = f (e)
⇒ f (aaa … n times) = e ⇒ f (a) f (a) f (a) … n times = e
⇒ [ f (a)] n = e ⇒ o [ f (a)] ≤ n ⇒ m ≤ n.
Again o [ f (a)] = m ⇒ [ f (a)] m = e
⇒ f (a) f (a) f (a) … m times = e ⇒ f (aaa … m times) = e
⇒ f (a m ) = f (e) [ ∵ f (e) = e]
⇒ am = e [ ∵ f is one-one]
⇒ o (a) ≤ m ⇒ n ≤ m.
Thus, m ≤ n and n ≤ m ⇒ m = n.
Hence, o [ f (a)] = o (a).

1.3 Inner Automorphisms


We shall now study a special type of automorphisms known as inner
automorphisms. First we shall prove a preliminary theorem.

Theorem 1: Let a be a fixed element of a group G. Then the mapping f a : G → G defined


by f a ( x) = a −1 xa V x ∈ G is an automorphism of G.
Proof: The mapping f a is one-one: Let x, y be any two elements of G. Then
f a ( x) = f a ( y) ⇒ a −1 xa = a −1 ya ⇒ x = y,
by cancellation laws in G.
Therefore the mapping f a is one-one.
The mapping f a is also onto G : If y is any element of G, then aya −1 ∈ G and
we have f a (aya −1 ) = a −1 (aya −1 ) a = y.
∴ f a is onto G.
Finally if x, y ∈ G then
f a ( xy) = a −1 ( xy) a = (a −1 xa) (a −1 ya ) = f a ( x) f a ( y).
Hence f a is an automorphism of G.
Inner Automorphism:
Definition: If G is a group, the mapping
f a : G → G defined by f a ( x) = a −1 x a V x ∈ G
is an automorphism of G known as inner automorphism.
Also an automorphism which is not inner is called an outer automorphism.
Theorem 2: For an abelian group the only inner automorphism is the identity
mapping whereas for non-abelian groups there exist non-trivial inner
automorphisms.
Proof: Suppose G is an abelian group and f a is an inner automorphism of G. If
x ∈ G, we have
f a ( x) = a −1 xa = a −1 a x [∵ G is abelian]
= e x = x.
Thus f a ( x) = x V x ∈ G.
∴ f a is the identity mapping of G.
Let now G be non-abelian. Then there exist at least two elements say a, b ∈ G such
that
ba ≠ ab ⇒ a −1 ba ≠ b ⇒ f a (b) ≠ b.
Hence f a is not the identity mapping of G. Thus for non-abelian groups there
always exist non-trivial inner automorphisms.
Theorem 3: The set I (G ) of all inner automorphisms of a group G is a normal
subgroup of the group of its automorphisms isomorphic to the quotient group
G / Z of G where Z is the centre of G.
Proof: Let A (G ) denote the group of all automorphisms of G. Then
I (G ) ⊆ A (G ).

Let a, b ∈ G. We shall first prove the following two results :


(i) f_{a −1} = ( f a ) −1, i.e., the inner automorphism f_{a −1} is the inverse function of the inner automorphism f a .
(ii) f a f b = f ba .
Proof of (i): If x ∈ G, then we have
( f a f_{a −1} ) ( x) = f a [ f_{a −1} ( x)] = f a [(a −1 ) −1 x a −1 ] = f a [a x a −1 ]
= a −1 (a x a −1 ) a = x.
∴ f a f_{a −1} is the identity function on G.
∴ f_{a −1} = ( f a ) −1.

Proof of (ii): If x ∈ G, then we have


( f a f b ) ( x) = f a [ f b ( x)] = f a (b −1 xb) = a −1 (b −1 xb) a = (a −1 b −1 ) x (ba)
= (ba) −1 x (ba) = f ba ( x).
∴ f a f b = f ba .
Now we shall prove that I (G ) is a subgroup of A (G ). Let f a , f b be any two
elements of I (G ). Then
f a ( f b ) −1 = f a f_{b −1} = f_{b −1 a} ∈ I (G), since b −1 a ∈ G.

Thus f a , f b ∈ I (G) ⇒ f a ( f b ) −1 ∈ I (G).


∴ I (G) is a subgroup of A (G).
Now we shall prove that I (G) is a normal subgroup of A (G).
Let f ∈ A (G) and f a ∈ I (G). If x ∈ G, then we have
( f f a f −1 ) ( x) = ( f f a ) [ f −1 ( x)] = f [ f a ( f −1 ( x))]
= f [a −1 f −1 ( x) a]
= f (a −1 ) f [ f −1 ( x)] f (a) [∵ f is composition preserving]
= f (a −1 ) x f (a) [∵ f [ f −1 ( x)] = x]
= [ f (a)] −1 x f (a)
= c −1 x c, where f (a) = c ∈ G
= f c ( x).
∴ f f a f −1 = f c ∈ I (G), since c ∈ G.
∴ I (G) is a normal subgroup of A (G).
Now we shall show that I (G) is isomorphic to G / Z . For this we shall show that
I (G) is a homomorphic image of G and Z is the kernel of the corresponding
homomorphism.
Then by the fundamental theorem on homomorphism of groups we shall have
G / Z ≅ I (G ).
Consider the mapping φ : G → I (G) defined by
φ (a) = f_{a −1} ∀ a ∈ G.
Obviously φ is onto I (G) because f a ∈ I (G) ⇒ a ∈ G and this implies a −1 ∈ G.
Now φ (a −1 ) = f_{(a −1) −1} = f a .
∴ φ is onto I (G).
Now to prove that
φ (ab) = φ (a) φ (b) ∀ a, b ∈ G.
We have φ (ab) = f_{(ab) −1} = f_{b −1 a −1} = f_{a −1} f_{b −1} = φ (a) φ (b).
Now to show that Z is the kernel of φ.
The identity function i on G is the identity of the group I (G).
Let K be the kernel of φ.
Then we have z ∈ K ⇔ φ (z ) = i ⇔ f_{z −1} = i ⇔ f_{z −1} ( x) = i ( x) ∀ x ∈ G
⇔ (z −1 ) −1 x z −1 = x ∀ x ∈ G ⇔ z x z −1 = x ∀ x ∈ G
⇔ z x = x z ∀ x ∈ G ⇔ z ∈ Z.
∴ K = Z. Hence the theorem.
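As a concrete check of this theorem (an assumed sketch, not from the text), the following Python lines count the distinct inner automorphisms of S3 and compare the count with o (G) / o (Z); the helper names compose and inverse and the tuple representation of permutations are our own choices.

from itertools import permutations

def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
centre = [z for z in G if all(compose(z, x) == compose(x, z) for x in G)]

# f_a(x) = a^{-1} x a, stored as the tuple of images so that maps can be compared
inner = {tuple(compose(compose(inverse(a), x), a) for x in G) for a in G}
print(len(inner), len(G) // len(centre))   # 6 6, i.e. I(S3) has o(G)/o(Z) elements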

1.4 Group of Automorphisms of a Cyclic Group


Suppose G = {a} is a cyclic group generated by a. An automorphism f of G is
completely defined by a relation of the form
f (a) = a m , …(1)
where m is some suitable integer.
For if k is any integer, then for f to be an automorphism of G, we have
f (a k ) = [ f (a)] k = (a m ) k . …(2)
The relation (2) gives the f -image of each element of G and the mapping f is thus
completely defined.
Now let b be any element of G. Since f is a mapping of G onto itself, therefore
there must exist an element a k ∈ G such that
b = f (a k ) = (a m ) k . …(3)
From (3) we conclude that for the mapping f defined in (1) to be an
automorphism of G, a m must be a generator of G.
Now if G is infinite, the only generators of G are a and a −1 . So in this case the only
automorphisms of G are,
(i) the identity mapping I for which
I (a) = a ⇒ I (a k ) = a k V k ∈ I
and, (ii) the mapping f defined by f (a) = a −1 .

Therefore the group of automorphisms of an infinite cyclic group is of order 2.


On the other hand if G is of finite order n, then a m is a generator of G if and
only if m is prime to n and less than n. We shall show that for each such m, the
mapping f defined in (1) is an automorphism of G.
f is one-one: Let a k 1 and a k 2 be any two elements of G where
1 ≤ k1 ≤ n, 1 ≤ k2 ≤ n and k1 ≥ k2 .
Then f (a k 1) = f (a k 2) ⇒ (a m) k 1 = (a m) k 2 ⇒ a mk 1 = a mk 2
⇒ a m (k 1 − k 2) = e ⇒ n | m (k1 − k2 ).
But m is prime to n and 0 ≤ (k1 − k2 ) < n. Therefore
n | m (k1 − k2 ) ⇒ k1 − k2 = 0 ⇒ k1 = k2 ⇒ a k 1 = a k 2 .
Thus f (a k 1 ) = f (a k 2 ) ⇒ a k 1 = a k 2 .
Therefore the mapping f is one-one.
f is onto: Since G is finite and f is one-one, therefore f must be onto G.
Finally if a k 1 , a k 2 are any two elements of G, then
f (a k 1 a k 2) = f (a k 1 + k 2)
= f (a nu + k), where u is some integer and 0 ≤ k < n
[Note that we can write (k 1 + k 2)/n = u + k/n]
= f (a nu a k) = f [(a n) u a k] = f (a k) [∵ a n = e ⇒ (a n) u = e]
= a mk
= a m (k 1 + k 2 − nu) [∵ k = k 1 + k 2 − nu]
= a m (k 1 + k 2) a − mnu
= a m (k 1 + k 2) [∵ a − mnu = (a n) − mu = e − mu = e]
= a mk 1 a mk 2 = (a m) k 1 (a m) k 2 = f (a k 1) f (a k 2).
Therefore the mapping f denoted in (1) is an automorphism of G for each positive
integer m less than n and prime to n.
Hence the group of automorphisms of a finite cyclic group of order n is of order φ (n) where φ (n)
denotes the number of integers less than n and prime to n.
In the end we shall show that the group of automorphisms of a cyclic group is abelian.
Let f_{m1}, f_{m2} be two automorphisms defined by
f_{m1} (a) = a^m1, f_{m2} (a) = a^m2.
Then ( f_{m1} o f_{m2} ) (a) = f_{m1} [ f_{m2} (a)] = f_{m1} (a^m2)
= [ f_{m1} (a)]^m2 = (a^m1)^m2 = a^(m1 m2)
= (a^m2)^m1 = [ f_{m2} (a)]^m1 = f_{m2} (a^m1)
= f_{m2} [ f_{m1} (a)] = ( f_{m2} o f_{m1} ) (a).
Now two automorphisms of a cyclic group are equal if the image of a generator of
the group under each of them is the same.
Hence f_{m1} o f_{m2} = f_{m2} o f_{m1}.
Therefore the group of automorphisms of a cyclic group is abelian.
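The order φ (n) obtained above can be verified by brute force for small n. The sketch below is our own illustration, not part of the text, and models a cyclic group of order n by the additive group Z_n.

from math import gcd

def automorphism_count(n):
    # an endomorphism of Z_n is determined by the image m of the generator 1,
    # and x -> m*x (mod n) is bijective exactly when gcd(m, n) = 1
    count = 0
    for m in range(1, n + 1):
        f = [(m * x) % n for x in range(n)]
        additive = all(f[(x + y) % n] == (f[x] + f[y]) % n
                       for x in range(n) for y in range(n))
        if additive and len(set(f)) == n:
            count += 1
    return count

phi = lambda n: sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)
for n in (6, 10, 12):
    print(n, automorphism_count(n), phi(n))   # the two counts agree with phi(n)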

Example 7: Let G be a finite abelian additive group and n be a positive integer relatively
prime to o (G ). Prove that the mapping σ : G → G given by σ ( x) = nx is an automorphism
of G.
Solution: The mapping σ is one-one: Let x, y be any two elements of G. Then
σ ( x) = σ ( y) ⇒ nx = ny
⇒ n ( x − y) = 0 , where 0 is the identity of the group G
⇒ o ( x − y)| n
⇒ o ( x − y)| n and o ( x − y)| o (G ) [ ∵ o ( x − y)| o (G )]
⇒ o ( x − y) = 1 [∵ if o ( x − y) ≠ 1, then o ( x − y) > 1 ⇒ n
and o (G ) are not relatively prime]
⇒ x − y = 0, i.e., the identity of G
⇒ x = y.
Therefore the mapping σ is one-one.
The mapping σ is also onto G: Since G is finite and σ is one-one, therefore σ
must be onto G.
Finally if x, y ∈ G , then σ ( x + y) = n ( x + y) = nx + ny = σ ( x) + σ ( y).
Hence, σ is an automorphism of G.
Example 8: Let R + be the multiplicative group of all positive real numbers. Show that the
mapping f : R + → R + defined by f ( x) = x 2 , V x ∈ R + is an automorphism.
(Lucknow 2009)
+
Solution: We have R = { x : x ∈ R and x > 0 }.
The mapping f is one-one : Let x, y ∈ R + .
Then f ( x) = f ( y) ⇒ x2 = y2 ⇒ x = y. [∵ x > 0 and y > 0]
∴ f is one-one.
The mapping f is onto : Let x be any element of R + . Then there exists √x ∈ R +
such that
f (√x) = (√x) 2 = x.
∴ f is onto.
The mapping f is a homomorphism : Let x, y be any two elements of R + . Then
f ( xy) = ( xy)2 = x 2 y 2 = f ( x) f ( y).

∴ f is a homomorphism.
Hence, f is an automorphism of R + .
Example 9: Find the automorphism group of A3 , where A3 is the alternating group of degree
3 on three symbols.
Solution. Let A3 be the alternating group on three symbols a , b , c . Then
A3 = {e , f , g},
where e = the identity permutation, f = (a b c ) and g = (a c b).
Let I be the identity mapping of A3 i.e.,
I ( e) = e , I ( f ) = f and I ( g) = g.
Then obviously I is an automorphism of A3 .
Now consider the mapping T : A3 → A3 defined by
T (e) = e, T ( f ) = g, T ( g) = f .
Obviously T is a one-one mapping of A3 onto A3 .
We have T (e f ) = T ( f ) = g = e g = T (e) T ( f ),
T ( f e) = T ( f ) = g = g e = T ( f ) T (e),
T (e g) = T ( g) = f = e f = T (e) T ( g),
T ( ge) = T ( g) T (e).
Also, T ( f g) = T (e) = e [∵ f g = (abc )(acb) = the identity permutation = e]
and T ( f ) T ( g) = g f = (acb) (abc ) = e.
∴ T ( f g) = T ( f ) T ( g).
Similarly T ( g f ) = T ( g) T ( f ). [Note that A3 is an abelian group]
∴ the mapping T is a homomorphism. Thus the mapping T is also an
automorphism of A3 .
Hence, the automorphism group Aut ( A3 ) of A3 contains only two elements,
namely I and T.
Thus Aut ( A3 ) = {I , T }.
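For readers who wish to verify Example 9 mechanically, the following Python sketch (our own aside, with permutations written as tuples of images) enumerates every bijection of A3 onto itself and keeps those that are homomorphisms; exactly two survive, namely I and T.

from itertools import permutations

def compose(p, q):
    return tuple(p[i] for i in q)

e, f, g = (0, 1, 2), (1, 2, 0), (2, 0, 1)    # A3 = {e, (abc), (acb)}
A3 = [e, f, g]

autos = []
for images in permutations(A3):              # candidate bijections A3 -> A3
    phi = dict(zip(A3, images))
    if all(phi[compose(x, y)] == compose(phi[x], phi[y]) for x in A3 for y in A3):
        autos.append(phi)
print(len(autos))                            # 2 : the identity mapping I and T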
Example 10: If f : G → G such that f ( a) = a n , V a ∈ G is an automorphism of G,
show that a n − 1 ∈ Z for all a ∈ G, where Z is the centre of the group G.
Solution: Let x , a be any two elements of G. We have
f (a −n x a n) = (a −n x a n) n [By def. of the mapping f ]
= a −n x n a n [∵ if a, b are any elements of a group G, then (b −1 a b) n = b −1 a n b for any integer n]
= (a −1) n x n a n
= f (a −1 ) f ( x) f (a) [By def. of the mapping f ]
= f (a − 1 xa). [∵ the mapping f is a homomorphism]

Thus f (a − n xa n ) = f ( a − 1 xa).
Since the mapping f is one-one, therefore a − n xa n = a − 1 xa
⇒ xa n − 1 = a n − 1 x, V x , a ∈ G
⇒ a n − 1 ∈ Z , V a ∈ G.
Example 11: Let f : G → G be a homomorphism i.e., f is an endomorphism of G. Suppose
f commutes with every inner automorphism of G. Show that
(i) K = { x ∈ G : f 2 ( x) = f ( x)} is a normal subgroup of G.
(ii) G / K is abelian.
Solution: (i) Let e be the identity of the group G.
We have f 2 (e) = f ( f (e)) = f (e).
∴ e ∈ K and so K ≠ ∅.
Now let x, y be any two elements of K. Then f ( x) = f 2 ( x) and f ( y) = f 2 ( y). We
have
f 2 ( x y −1 ) = f ( f ( x y −1 ))
= f [ f ( x) f ( y −1 )] [∵ f is a homomorphism]
= f ( f ( x) [ f ( y)] −1 ) [∵ f is a homomorphism ⇒ f ( y −1 ) = [ f ( y)] −1 ]
= f ( f ( x)) f ([ f ( y)] −1 ) [∵ f is a homomorphism]
= f 2 ( x) [ f ( f ( y))] −1 [∵ f is a homomorphism
⇒ f ([ f ( y)] − 1 ) = [ f ( f ( y))] − 1 ]
= f 2 ( x) [ f 2 ( y)] −1
= f ( x) [ f ( y)] −1 [∵ x , y ∈ K ⇒ f 2 ( x) = f ( x) and
f 2 ( y) = f ( y) ]
= f ( x) f ( y − 1 ) {∵ f is a homomorphism
⇒ [ f ( y)] − 1 = f ( y − 1 ) }
= f ( xy − 1 ). [∵ f is a homomorphism]
∴ xy −1 ∈ K.
Thus K ≠ ∅ and x , y ∈ K ⇒ x y −1 ∈ K.
∴ K is a subgroup of G.
Now to show that K is normal in G. Let g ∈ G and x ∈ K. Then
f 2 ( g x g − 1 ) = f [ f ( g x g −1 )]
= f [ f f g ( x)], where f g is the inner automorphism
corresponding to g

= f [ f g f ( x)] [ ∵ By hypothesis f commutes with every


inner automorphism of G and so f f g = f g f ]
= f [ g f ( x) g − 1 ] [By def. of the inner automorphism f g ]

= f ( g) f ( f ( x)) f ( g −1 ) [∵ f is a homomorphism]
= f ( g) f 2 ( x) f ( g −1 )
= f ( g) f ( x) f ( g −1 ) [∵ x ∈ K ⇒ f 2 ( x) = f ( x) ]
= f ( g x g − 1 ). [∵ f is a homomorphism]
−1
∴ gx g ∈ K for all x ∈ K, g ∈ G.
∴ K is a normal subgroup of G.
(ii) To show that G / K is abelian.
By definition of a quotient group, we have G / K = { K x : x ∈ G}.
We have G / K is abelian ⇔ K x K y = K y K x , V x , y ∈ G
⇔ K x y = K y x, V x , y ∈ G
⇔ x y ( y x) −1 ∈ K, V x , y ∈ G
⇔ x y x −1 y −1 ∈ K, V x, y ∈ G. ...(1)
Now for all x , y ∈ G, we have
f 2 ( x y x −1 y −1 ) = f [ f ( x y x −1 y −1 )]
= f [ f ( x y x −1 ) f ( y −1 )] [∵ f is a homomorphism]
= f [ f ( f x ( y)) f ( y −1 )], where f x is the inner automorphism corresponding to x
= f [ f x ( f ( y)) f ( y −1 )] [∵ f f x = f x f ]
= f ( x f ( y) x −1 [ f ( y)] −1 )
= f [ x f_{f ( y)} ( x −1 )], where f_{f ( y)} is the inner automorphism corresponding to f ( y)
= f ( x) f [ f_{f ( y)} ( x −1 )] [∵ f is a homomorphism]
= f ( x) f_{f ( y)} ( f ( x −1 )) [∵ f f_{f ( y)} = f_{f ( y)} f ]
= f ( x) f ( y) f ( x −1 ) [ f ( y)] −1
= f ( x) f ( y) f ( x −1 ) f ( y −1 )
= f ( x y x −1 y −1 ).
∴ x y x −1 y −1 ∈ K.
Hence, by virtue of (1), G / K is abelian.
Example 12: Let G be a group and f an automorphism of G. If, for a ∈ G , we have
N (a) = { x ∈ G : xa = a x}, prove that N ( f (a)) = f (N (a)).

Solution: If a ∈ G, then by the definition of N (a), we have


N (a) = { x ∈ G : xa = a x}.
Let y be any arbitrary element of f (N (a)). Then
y ∈ f (N (a)) ⇒ ∃ x ∈ N (a) such that y = f ( x) and xa = a x
⇒ f ( xa) = f (ax) and y = f ( x) for some x ∈ N (a)
⇒ f ( x) f (a) = f (a) f ( x) and y = f ( x) for some x ∈ N (a)
[∵ f is a homomorphism]
⇒ y f (a) = f (a) y, V y ∈ f (N (a))
⇒ y ∈ N ( f (a)), V y ∈ f (N (a)).
∴ f (N (a)) ⊆ N ( f (a)). ...(1)
Again let y ′ be any arbitrary element of N ( f (a)). Then
y ′ ∈ N ( f (a)) ⇒ y ′ ∈ G and y ′ f (a) = f (a) y ′
⇒ ∃ x ′ ∈ G such that y ′ = f ( x ′ ) and y ′ f (a) = f (a) y ′
[ ∵ the mapping f : G → G is onto]
⇒ f ( x ′ ) f (a) = f (a) f ( x ′ ) for some x ′ ∈ G
⇒ f ( x ′ a) = f (ax ′ ) for some x ′ ∈ G [∵ f is a homomorphism]
⇒ x ′ a = ax ′ for some x ′ ∈ G [∵ f is one-one]
⇒ x ′ ∈ N (a)
⇒ f ( x ′ ) ∈ f (N (a))
⇒ y ′ ∈ f (N (a)), V y ′ ∈ N ( f (a)).
∴ N ( f (a)) ⊆ f (N (a)). ...(2)
From (1) and (2), we have
N ( f (a)) = f (N (a)).

Example 13: Let G be an infinite cyclic group. Determine Aut G, the group of all
automorphisms of G.
Solution: Let G = (a) be an infinite cyclic group generated by a.
Let f ∈ Aut G i.e., let f be an automorphism of G.
First we shall show that f (a) is also a generator of G i.e., G = ( f (a)), the cyclic
group generated by f (a).
Let x be any element of G.
Since the mapping f : G → G is onto, therefore there exists y ∈ G such that
x = f ( y).
But y ∈ G ⇒ y = a r for some integer r [ ∵ a is a generator of G ]
∴ x = f ( y) = f (a r ) = ( f (a)) r .
∴ f (a) is a generator of G i.e., G = ( f (a)) .
But the infinite cyclic group G = (a) has only two generators, namely a and a − 1 .

∴ f (a) = a or f (a) = a − 1 .
Thus f has only two choices and so
o (Aut G) ≤ 2. ...(1)
Define a mapping T : G → G such that
T ( x) = x − 1 , V x ∈ G.
Then T ∈ Aut G.
Also T ≠ I as T = I ⇒ T ( x) = x , V x ∈ G
⇒ x −1 = x , V x ∈ G
⇒ a −1 = a ⇒ a2 = e ⇒ o (a) is finite, which is a contradiction because the
generator a of an infinite cyclic group cannot be of finite order.
∴ T ≠ I.
Thus G has at least two automorphisms.
∴ o (Aut G) ≥ 2. ...(2)
From (1) and (2), we have o (Aut G ) = 2.
In fact, we have Aut G = { I , T : T ( x) = x − 1 , V x ∈ G }.
Since o (Aut G) = 2, therefore Aut G is a cyclic group of order 2.
We know that any cyclic group of order n is isomorphic to the group
Z n = {0, 1, 2, … , n − 1} under addition modulo n.
Hence, Aut G ≅ Z 2 , where Z 2 = {0, 1} is a group under addition modulo 2.
Example 14: Let G be a finite cyclic group of order n. Determine Aut G, the group of all
automorphisms of G.
Solution: Let G = (a) be a finite cyclic group of order n generated by a.
n
We have o (a) = o (G) = n so that a = e.
Let f ∈ Aut G i.e., let f be an automorphism of G.
First we shall show that f (a) is also a generator of G i.e., G = ( f (a)), the cyclic
group generated by a.
Let x be any element of G.
Since the mapping f : G → G is onto, therefore ∃ y ∈ G such that x = f ( y).
But y ∈ G ⇒ y = a r for some integer r. [∵ a is a generator of G ]
∴ x = f ( y) = f (a r ) = ( f (a)) r .
∴ f (a) is a generator of G i.e., G = f (a).
But the finite cyclic group G has only φ (n) generators, where φ (n) is Euler’s
φ -function i.e., φ (n) denotes the number of integers less than n and prime to n.
∴ if f is an automorphism of G, then f has only φ (n) choices.
∴ o (Aut G) ≤ φ (n). ...(1)
Define a mapping f m : G → G such that
f m ( x) = x m, (m , n) = 1, 1 ≤ m < n.
Here (m, n) denotes the H.C.F. of m and n and (m , n) = 1 means that m and n are
co-prime.
We claim that the mapping f m is an automorphism of G.
f m is one-one : Let a r , a s ∈ G, where 1≤ r ≤ n, 1≤ s ≤ n and r ≥ s.
Then f m (a r ) = f m (a s ) ⇒ (a r ) m = (a s ) m
⇒ a rm = a sm ⇒ a (r − s) m = e
⇒ n | (r − s) m.
But m is prime to n and 0 ≤ r − s < n.
∴ n |(r − s) m ⇒ r − s = 0 ⇒ r = s ⇒ a r = a s .
Thus f m (a r ) = f m (a s ) ⇒ a r = a s .
∴ f m is one-one.
f m is onto: Since G is finite and f m is one-one, therefore f m must be onto G.
f m is a homomorphism : Let a r , a s ∈ G, where 1 ≤ r ≤ n , 1 ≤ s ≤ n. Then
f m (a r a s ) = f m (a r + s ) = f m (a nu + k ),
where r + s = nu + k, u is some integer and 0 ≤ k < n
= f m (a nu a k ) = f m (a k ) [∵ a nu = (a n ) u = e u = e]
= (a k ) m = a mk = a m (r + s − nu)
= a m (r + s) a − mnu
= a m (r + s) [∵ a − mnu = (a n ) − mu = e − mu = e]
= a mr a ms = (a r ) m (a s ) m
= f m (a r ) f m (a s ).
∴ f m is a homomorphism.
∴ f m is an automorphism of G i.e., f m ∈ Aut G.
Thus f m ∈ Aut G, where (m, n) = 1, 1 ≤ m < n.
We now show that f r ≠ f s , for all r , s (r ≠ s), 1≤ r , s < n , where (r , n) = 1 and
(s , n) = 1.
Assume that r > s. Suppose, if possible, f r = f s . Then
f r = f s ⇒ f r (a) = f s (a)
⇒ a r = a s ⇒ a r − s = e
⇒ o (a) | (r − s) ⇒ n | (r − s)
⇒ n ≤ r − s < n.
This is a contradiction.
∴ f r ≠ f s , for all r, s (r ≠ s), 1 ≤ r, s < n, where r and s are relatively prime to n.
This shows that Aut G has at least φ (n) automorphisms of G.

∴ o (Aut G) ≥ φ (n). ...(2)


From (1) and (2), we have o (Aut G) = φ (n).
In fact, we have
Aut G = { f m : f m ( x) = x m, (m , n) = 1, 1 ≤ m < n }.
So far we have shown that o (Aut G) = φ (n).
Now we shall show that Aut (G) ≅ Gn , where
Gn = { x : x ∈ I , 1 ≤ x < n, ( x, n) = 1}
is the group of integers with respect to multiplication modulo n i.e., × n as the
composition.
Define a mapping ψ : Aut G → Gn such that
ψ ( f m ) = m, 1≤ m < n, (m , n) = 1.
ψ is one-one : Let f r , f s ∈ Aut G. Then
ψ ( fr ) = ψ ( fs) ⇒ r = s ⇒ fr = fs .
∴ ψ is one-one.
ψ is onto : Let m be any element of Gn , where 1≤ m < n, (m , n) = 1.
Then ∃ f m ∈ Aut (G) such that ψ ( f m ) = m.
∴ ψ is onto.
ψ is a homomorphism : Let f r , f s ∈ Aut G. Then 1 ≤ r, s < n, (r, n) = 1 and
(s , n) = 1.
For any x ∈ G, we have f r f s ( x) = f r ( x s ) = ( x s ) r = x rs
= x nu + k, where rs = nu + k, u is some integer and k is an integer such that 0 ≤ k < n
= x k [∵ x nu = ( x n ) u = e u = e]
= x r ×n s [∵ r ×n s = k, where k is the least non-negative remainder obtained on dividing rs by n]
= f_{r ×n s} ( x).
∴ f r f s = f_{r ×n s}
⇒ ψ ( f r f s ) = ψ ( f_{r ×n s} ) = r ×n s = ψ ( f r ) ×n ψ ( f s ).
∴ ψ is a homomorphism .
Hence ψ is an isomorphism of Aut G onto Gn .
∴ Aut G ≅ Gn .
This completely determines Aut G.
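The isomorphism ψ can also be illustrated numerically. The sketch below is our own aside and uses the additive model Z_n, so that f m becomes the map x → m x; it checks that composing f r and f s agrees with f_{r ×n s} for n = 12.

from math import gcd

n = 12
units = [m for m in range(1, n) if gcd(m, n) == 1]       # the group G_n

def f(m):                        # f_m on the additive group Z_n: x -> m*x (mod n)
    return tuple((m * x) % n for x in range(n))

for r in units:
    for s in units:
        fr, fs = f(r), f(s)
        composed = tuple(fr[fs[x]] for x in range(n))    # f_r o f_s
        assert composed == f((r * s) % n)                # equals f_{r x_n s}
print("f_r o f_s = f_{r x_n s} verified for n =", n)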
Example 15: Show that the group of automorphisms of a cyclic group is abelian.
Solution: Let G = (a) be a cyclic group generated by a. Then the following two cases arise.
Case I: If G is an infinite cyclic group, then its group of automorphisms is given by
Aut G = { I , T : T ( x) = x − 1 , V x ∈ G}.
Since o (Aut G) = 2, therefore Aut G is abelian.
Case II. If G is a cyclic group of finite order n, then
Aut G = { f m : f m ( x) = x m, 1 ≤ m < n, (m, n) = 1}.
Let f r , f s ∈ Aut G. Then f r (a) = a r, f s (a) = a s .
Now ( f r f s ) (a) = f r ( f s (a)) = f r (a s ) = (a s ) r = a sr = a r s
= (a r ) s = f s (a r ) = f s ( f r (a)) = ( f s f r ) (a).
Now two automorphisms of a cyclic group are equal if the image of a generator of
the group under each of them is the same.
Hence f r f s = f s f r , V f r , f s ∈ Aut G.
∴ in this case also Aut G is abelian.
Hence, the group of automorphisms of a cyclic group is abelian.

Example 16: Let φ (n) be the Euler φ -function. For any integers a > 1, n > 0, show that
n | φ (a n − 1).
Solution: Let G = (b) be the cyclic group generated by b, where
o (b) = o (G) = a^n − 1.
Consider the mapping f a : G → G defined by
f a ( x) = x^a, ∀ x ∈ G.
Since (a, a^n − 1) = 1, therefore f a ∈ Aut G.
If x ∈ G, then ( f a ) 2 ( x) = f a ( f a ( x)) = f a ( x^a) = ( x^a)^a = x^(a^2).
In general, ( f a ) r ( x) = x^(a^r), for every positive integer r.
∴ ( f a ) n ( x) = x^(a^n) = x x^(a^n − 1) = x x^(o (G)) = x e = x [∵ x^(o (G)) = e, ∀ x ∈ G]
= I ( x), ∀ x ∈ G.
∴ ( f a ) n = I. ...(1)
Again if ( f a ) m = I, then
( f a ) m ( x) = I ( x) = x, ∀ x ∈ G ⇒ ( f a ) m (b) = b [∵ b ∈ G]
⇒ b^(a^m) = b ⇒ b^(a^m − 1) = e ⇒ o (b) | (a^m − 1)
⇒ (a^n − 1) | (a^m − 1) ⇒ a^n − 1 ≤ a^m − 1
⇒ a^n ≤ a^m ⇒ n ≤ m ⇒ m ≥ n. ...(2)
From (1) and (2), we conclude that o ( f a ) = n.
Also, o (Aut G) = φ (a^n − 1).
∴ f a ∈ Aut G ⇒ o ( f a ) | o (Aut G) ⇒ n | φ (a^n − 1).
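A quick numerical check of this divisibility (an assumed sketch, not part of the text) for a few small values of a and n:

from math import gcd

def phi(k):                        # Euler's phi-function by direct count
    return sum(1 for m in range(1, k + 1) if gcd(m, k) == 1)

for a in (2, 3, 5):
    for n in (2, 3, 4, 5):
        assert phi(a**n - 1) % n == 0
print("n divides phi(a^n - 1) for all the cases tested")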

Comprehensive Exercise 1

1. Verify whether the following statement is true or false :
If G = (a) is a cyclic group of order 10 then the mapping σ : G → G such that
σ (a k ) = a 2k for all k, is an automorphism of G.
2. Give an example of a group in which (i) the inner automorphisms
corresponding to any two elements are the same, (ii) the inner
automorphisms corresponding to no two elements are the same.
3. If, for a group G , the mapping f : G → G given by f ( x) = x 3, x ∈ G, is an
automorphism, then prove that G is abelian.
4. Show that the group of all automorphisms of a cyclic group G of order r is
isomorphic to the group of integers less than and relatively prime to r under
multiplication modulo r. (Lucknow 2006)
5. Let f be an automorphism of a group G. If a ∈ G, then prove that
f (a r ) = [ f (a)] r , for all integers r.
6. Let f be an automorphism of a group G. Show that o ( f (a)) = o (a), for
every a ∈ G. Deduce that o (bab − 1 ) = o (a) for all a , b ∈ G.
7. Prove that the group of automorphisms of an infinite cyclic group is of order
2. (Lucknow 2010)
8. Let G be a cyclic group of order 4. Show that Aut G = { I , T } where
T ( x) = x 3 , for all x ∈ G.
9. If o (Aut G) > 1, then show that o (G ) > 2.
10. If G is any group in which g 2 ≠ e for some g ∈ G, then G has a non-trivial
automorphism.
11. If G = S3 , then show that Aut G ≅ S3 .

Answers 1
1. False.
2. (i) Every abelian group, (ii) the symmetric group P3 .

1.5 Conjugate Elements, Relation of Conjugacy and Classes of Conjugate Elements
Definition: If a, b be two elements of a group G, then b is said to be conjugate to a if there
exists an element x ∈ G such that b = x −1 a x.
If b = x −1 ax, then b is also called the transform of a by x.
If b is conjugate to a then symbolically we shall write b ~ a and this relation in G
will be called the relation of conjugacy. Thus b ~ a iff b = x −1 a x for some x ∈ G.
Theorem: The relation of conjugacy is an equivalence relation on G.
(Lucknow 2008,10)
Proof: Reflexivity: If a is any element of G, then we have
a = e −1 ae ⇒ a ~ a.
Thus a ~ a V a ∈ G. Therefore the relation is reflexive.
Symmetry: We have a ~ b ⇒ a = x −1 b x for some x ∈ G
⇒ xa x −1 = x ( x −1 b x) x −1 ⇒ xa x −1 = b
⇒ b = ( x −1 ) −1 a x −1 , where x −1 ∈ G
⇒ b ~ a.
Therefore the relation is symmetric.
Transitivity: Let a ~ b, b ~ c . Then a = x −1 b x, b = y −1 c y for some x, y ∈ G.
From this we get
a = x −1 ( y −1 c y) x [∵ b = y −1 c y ]
= ( y x) −1 c ( y x), where y x ∈ G.
∴ a ~ c and thus the relation is transitive.
Hence the relation of conjugacy in a group G is an equivalence relation. Therefore
it will partition G into disjoint equivalence classes called classes of conjugate
elements. These classes will be such that
(i) any two elements of the same class are conjugate.
(ii) no two elements of different classes are conjugate.
The collection of all elements conjugate to an element a ∈ G will be symbolically
denoted by C (a) or by a. Thus C (a) = { x ∈ G : x ~ a }.
C (a) will be called the conjugate class of a in G. We have ( y −1 a y) ~ a for all y ∈ G.
Also if b ~ a then b must be equal to y −1 a y for some y ∈ G. Therefore
C (a) = { y −1 a y : y ∈ G }.
If G is a finite group, then the number of distinct elements in C (a) will be
denoted by c a .

1.6 Self-Conjugate Elements


Definition: An element a ∈ G is said to be self-conjugate if a is the only member of
the class C (a) of elements conjugate to a i.e., if C ( a) = { a}.
Thus a is self-conjugate if and only if
a = x −1 a x V x ∈G or xa = a x V x ∈ G.
Thus a self-conjugate element is one which commutes with each element of the
group.
If a is a self-conjugate element, then we have a = x −1 a x V x ∈ G.
Thus the transform of a by every element of G remains equal to a. Therefore
sometimes a self-conjugate element is also called an invariant element.

1.7 Normalizer or Centralizer of an Element of a Group


Definition: If a ∈ G, then N (a) , the normalizer of a in G is the set of all those elements of
G which commute with a . Symbolically N (a) = { x ∈ G : a x = xa}.
Theorem: The normalizer N (a) of a ∈ G is a subgroup of G. (Lucknow 2009)
Proof: We have N (a) = { x ∈ G : a x = xa}.
Let x1 , x2 ∈ N (a). Then a x1 = x1 a , a x2 = x2 a.
First we show that x2−1 ∈ N (a).
We have a x2 = x2 a ⇒ x2−1 (a x2 ) x2−1 = x2−1 ( x2 a) x2−1
⇒ x2−1 a = a x2−1 ⇒ x2−1 ∈ N (a).
−1
Now we shall show that x1 x2 ∈ N (a).
We have a ( x1 x2−1 ) = (a x1 ) x2−1 = ( x1 a) x2−1 = x1 (a x2−1 ) = x1 ( x2−1 a) = ( x1 x2−1 ) a.
∴ x1 x2−1 ∈ N (a).
Thus x1 , x2 ∈ N (a) ⇒ x1 x2−1 ∈ N (a).
∴ N (a) is a subgroup of G.
Note 1: It should be noted that N (a) is not necessarily a normal subgroup of G.
Note 2: Since e x = xe V x ∈ G, therefore N (e) = G.
Note 3: If G is an abelian group and a ∈ G, then xa = a x V x ∈ G. Therefore
N (a) = G.

1.8 Counting Principle and the Class Equation of a Finite Group
Theorem: Let a be any element of a group G. Then two elements x, y ∈ G give rise to the
same conjugate of a if and only if they belong to the same right coset of the normalizer of a in G.
Hence show that if G is a finite group, then c a = o (G) / o [N (a)], i.e., the number of elements conjugate to a in G is the index of the normalizer of a in G.
Proof: We have x, y ∈ G are in the same right coset of
N (a) in G ⇔ N (a) x = N (a) y [∵ x ∈ N (a) x , y ∈ N (a) y.
Note that if H is a subgroup, then x ∈ Hx]
⇔ x y −1 ∈ N (a) [∵ if H is a subgroup, then Ha = Hb ⇔ ab −1 ∈ H ]
⇔ a x y −1 = x y −1 a [ by def. of N (a)]
⇔ x −1 (a x y −1 ) y = x −1 ( x y −1 a) y ⇔ x −1 a x = y −1 a y
⇔ x , y give rise to the same conjugate of a.
Hence the first result follows.
Now consider the right coset decomposition of G with respect to the subgroup
N (a). We have just proved that if x, y ∈ G are in the same right coset of N (a) in G,
then they give the same conjugate of a. Further if x, y are in different right cosets of
N (a) in G, then they give rise to different conjugates of a. The reason is that if x, y
give the same conjugate of a, then they must belong to the same right coset of N (a)
in G. Thus there is a one-to-one correspondence between the right cosets of
N (a) in G and the conjugates of a. So if G is a finite group, then
c a = the number of distinct elements in C (a)
= the number of distinct right cosets of N (a) in G
= the index of N (a) in G = o (G) / o [N (a)].
Corollary: If G is a finite group, then o (G) = Σ o (G) / o [N (a)], where this sum runs over one
element a in each conjugate class.
Proof: We know that the relation of conjugacy is an equivalence relation on G.
Therefore it partitions G into disjoint conjugate classes. The union of all disjoint
conjugate classes will be equal to G and two distinct conjugate classes of G will have
no common element. Since G is a finite group, therefore the number of distinct
conjugate classes of G will be finite, say equal to k. Suppose C(a) denotes the
conjugate class of a in G and c a denotes the number of elements in this class. If
C(a1 ), C(a2 ), … , C(a k ) are the k distinct conjugate classes of G, then
G = C (a1 ) ∪ C(a2 ) ∪ … ∪ C (a k )

⇒ the number of elements in G = the number of elements in C (a1 )


+ the number of elements in C (a2 ) + … + the number of elements in C (a k )
[ ∵ two distinct conjugate classes have no common element ]
⇒ o (G) = Σ c a, the sum running over one element a in each conjugate class
⇒ o (G) = Σ o (G) / o [N (a)], by the previous theorem.
Note: The equation in this corollary is often called the class equation of G.
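As an assumed computational illustration of the counting principle (not from the text), the following Python lines verify c a = o (G) / o [N (a)] for every element of S3, with permutations written as tuples of images and compose and inverse as hypothetical helper names.

from itertools import permutations

def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
for a in G:
    C = {compose(compose(inverse(x), a), x) for x in G}       # conjugate class of a
    N = [x for x in G if compose(a, x) == compose(x, a)]      # normalizer of a
    assert len(C) == len(G) // len(N)
print("c_a = o(G)/o(N(a)) holds for every a in S3")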

1.9 The Centre of a Group


Definition: The set Z of all self-conjugate elements of a group G is called the centre of G.
Symbolically
Z = { z ∈ G : z x = xz V x ∈ G }. (Lucknow 2007)
Theorem 1: The centre Z of a group G is a normal subgroup of G. (Lucknow 2007)
Proof: We have Z = { z ∈ G : z x = xz V x ∈ G }.
First we shall prove that Z is a subgroup of G.
Let z1 , z 2 ∈ Z . Then z1 x = xz1 and z 2 x = xz 2 for all x ∈ G.
We have z 2 x = xz 2 V x ∈ G
⇒ z 2−1 (z 2 x) z 2−1 = z 2−1 ( xz 2 ) z 2−1 ⇒ xz 2−1 = z 2−1 x V x ∈ G
⇒ z 2−1 ∈ Z .
Now (z1 z 2−1 ) x = z1 (z 2−1 x) = z1 ( xz 2−1 ) = (z1 x) z 2−1 = ( xz1 ) z 2−1
= x (z1 z 2−1 ) V x ∈ G.
∴ z1 z 2−1 ∈ Z .
Thus z1 , z 2 ∈ Z ⇒ z1 z 2−1 ∈ Z .
∴ Z is a subgroup of G.
Now we shall show that Z is a normal subgroup of G. Let x ∈ G and z ∈ Z . Then
xz x −1 = ( xz ) x −1 = (z x) x −1 = z ∈ Z .
Thus x ∈ G , z ∈ Z ⇒ xz x −1 ∈ Z .
∴ Z is a normal subgroup of G.
Theorem 2: a ∈ Z if and only if N (a) = G . If G is finite, a ∈ Z if and only if
o [N (a)] = o(G).
Proof: Let a ∈ Z . Then by def. of Z, we have
a x = x a V x ∈ G.
Also N (a) = { x ∈ G : a x = x a}.
Now a∈Z ⇔ a x = x a V x ∈G [ by def. of Z ]

⇔ x ∈ N (a) V x ∈ G [by def. of N (a) ]


⇔ N (a) = G [∵ N (a) ⊆ G and each element of G is in N (a) ]
If the group G is finite, then N (a) = G ⇔ o (G) = o [N (a)].
Therefore if the group G is finite, then a ∈ Z if and only if o [N (a)] = o (G).
Theorem 3: Let G be a finite group and Z be the centre of G. Then the class equation of G
can be written as
o (G) = o (Z ) + Σ_{a ∉ Z} o (G) / o [N (a)],

where the summation runs over one element a in each conjugate class containing more than one
element.
Proof: The class equation of G is
o (G) = o (Z ) + Σ_{a ∉ Z} o (G) / o [N (a)],

the summation being extended over one element a in each conjugate class.
Now a ∈ Z ⇔ o [N (a)] = o (G) ⇔ o (G) / o [N (a)] = 1 ⇔ the conjugate class of a in
G contains only one element. Thus the number of conjugate classes each having
only one element is equal to o (Z ). If a is an element of any one of these conjugate
classes, we have o (G) / o [N (a)] = 1.
Hence, the class equation of G takes the desired form
o (G) = o (Z ) + Σ_{a ∉ Z} o (G) / o [N (a)].

1.10 Centre for Group of Prime Power Order


Theorem 1: If o (G) = pn where p is a prime number, then the centre Z ≠ { e }.
(Lucknow 2009)
Proof: By the class equation of G, we have
o (G) = o (Z ) + Σ_{a ∉ Z} o (G) / o [N (a)], ...(1)

where the summation runs over one element a in each conjugate class containing
more than one element.
Now ∀ a ∈ G, N (a) is a subgroup of G. Therefore by Lagrange's theorem o [N (a)] is a divisor of o (G). Also a ∉ Z ⇒ N (a) ≠ G ⇒ o [N (a)] < o (G). Therefore if a ∉ Z, then o [N (a)] must be of the form p^(n_a) where n_a is some integer such that 1 ≤ n_a < n. Suppose there are exactly z elements in Z i.e., let o (Z ) = z. Then the class equation (1) gives
p^n = z + Σ p^n / p^(n_a), where each n_a is some integer such that 1 ≤ n_a < n.
∴ z = p^n − Σ p^n / p^(n_a), ...(2)
where the n_a's are some positive integers each less than n.


Now p | p^n. Also p divides each term in the Σ on the right hand side of (2) because each n_a < n. Thus we see that p is a divisor of the right hand side of (2).
Therefore p is a divisor of z. Now e ∈ Z . Therefore z ≠ 0. Therefore z is a positive
integer divisible by the prime p. Therefore z > 1. Hence Z must contain an
element besides e. Therefore Z ≠ { e }.
Corollary: If o(G) = p2 where p is a prime number, then G is abelian .
(Lucknow 2007)
Proof: We shall show that the centre Z of G is equal to G itself. Then obviously G
will be an abelian group.
Since p is a prime number, therefore by the previous theorem Z ≠ { e}. Therefore
o (Z ) > 1. But Z is a subgroup of G, therefore o (Z ) must be a divisor of o (G) i.e., o (Z )
must be a divisor of p2. Since p is prime, therefore either o (Z ) = p or p2.
If o (Z ) = p2 , then Z = G and our proof is complete.
Now suppose that o (Z ) = p. Then o (Z ) < o (G) because p < p2 . Therefore there
must be an element which is in G but which is not in Z. Let a ∈ G and a ∉ Z .
Now N (a) is a subgroup of G and a ∈ N (a). Also x ∈ Z ⇒ xa = a x and this implies
x ∈ N (a). Thus Z ⊆ N (a). Since a ∉ Z , therefore the number of elements in N (a)
is > p i. e., o [N (a)] > p. But order of N (a) must be a divisor of p2 . Therefore o [N (a)]
must be equal to p2 . Then N (a) = G. Therefore a ∈ Z and thus we get a
contradiction.
Therefore it is not possible that o(Z ) = p . Hence the only possibility is that
o (Z ) = p2 ⇒ Z = G ⇒ G is abelian.
Example: Is a group of order 121 abelian?
Ans: Yes .

1.11 Conjugate Subgroups


Definition: If A, B be two subgroups of a group G, then B is said to be conjugate to A if
there exists an element x ∈ G such that B = x −1 Ax.
If B = x −1 Ax , then B is also called the transform of A by x.
If B is conjugate to A, then symbolically we shall write B ~ A.

Theorem: The relation of being conjugate is an equivalence relation on the set of


subgroups of a group G.
Proof: Proceed as in theorem of article 1.5.
The relation of conjugacy in the family of subgroups of a group G will partition
the family into disjoint equivalence classes. The collection of all subgroups
conjugate to a subgroup A of G will be symbolically denoted by C ( A). Obviously
C ( A) = { x −1 Ax : x ∈ G}.

1.12 Normalizer of a Subgroup of a Group


Definition: If A is a subgroup of a group G, then N ( A), the normalizer of A in G is the set
of all those elements of G which commute with A. Symbolically
N ( A) = { x ∈ G : x A = Ax }.
Theorem 1: The normalizer N ( A) of a subgroup A of a group G is a subgroup of G.
Proof: Proceed as in Theorem of article 1.7.
Theorem 2: Suppose A is a subgroup of a group G. Then there is a one-to-one correspondence
between the right cosets of N ( A) in G and the conjugates of A.
Proof: Proceed as in Theorem of article 1.8.

1.13 Self-Conjugate Subgroup or an Invariant Subgroup


Definition: A subgroup A of a group G is said to be self-conjugate if A is the only member of
the class C ( A) of subgroups conjugate to A.
Thus A is self-conjugate iff
A = x −1 Ax V x ∈ G or x A = Ax V x ∈ G
or A is a normal subgroup of G.
If A is a self-conjugate subgroup of a group G, then we have A = x −1 Ax V x ∈ G.
Thus the transform of A by every element of G remains equal to A. Therefore
sometimes a self-conjugate subgroup is also called an invariant subgroup. It is
quite obvious that a subgroup of a group G is invariant if and only if it is normal.
Therefore sometimes a normal subgroup is also called an invariant subgroup.

Example 17: Write all the conjugate classes in S3 , find the c a ’s and verify the class
equation.
Solution: The symmetric group on 3 symbols 1, 2, 3 is given by
S3 = {(1), (1, 2), (2, 3), (3, 1), (1, 2, 3), (1, 3, 2)}.
The three conjugate classes of S3 are
C (a) = {(1)}, where a = (1) is the identity permutation;
C (b) = {(1 2 3), (1 3 2)}, where b = (1 2 3) is the cycle sending 1 → 2, 2 → 3, 3 → 1;
C (c) = {(1 2), (2 3), (3 1)}, where c = (1 2) is the transposition interchanging 1 and 2;
∴ c a = 1, c b = 2, c c = 3.
Also, we have Z (S3 ) = {(1)} = C (a).
Here o (Z (S3 )) = 1.
Hence, the class equation of S3 is
o (S3) = o (Z (S3)) + Σ o (S3) / o [N (a)],
where the summation is taken over a set of representatives for the distinct conjugacy classes having more than one member,
i.e., o (S3) = o (Z (S3)) + o (S3) / o [N (b)] + o (S3) / o [N (c)] = 1 + 6/3 + 6/2 = 1 + 2 + 3 = 6.
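The same class equation can be reproduced mechanically; the short Python sketch below (our own aside, not part of the text) partitions S3 into conjugate classes and checks that the class sizes 1, 2, 3 add up to o (S3).

from itertools import permutations

def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))                     # S3
classes, seen = [], set()
for a in G:
    if a in seen:
        continue
    cls = {compose(compose(inverse(x), a), x) for x in G}   # class of a
    classes.append(cls)
    seen |= cls

print(sorted(len(c) for c in classes))               # [1, 2, 3]
print(sum(len(c) for c in classes) == len(G))        # True: 1 + 2 + 3 = 6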
Example 18: Let Z be the centre of a group G. If a ∈ Z , then prove that the cyclic subgroup
{ a} of G generated by a is a normal subgroup of G.
Solution: We have
Z = { z ∈ G : z x = xz V x ∈ G }. Let a ∈ Z and let H = { a} be the cyclic
subgroup of G generated by a. Let h be any element of H. Then h = a n for some
integer n.
Let x be any element of G. We have
xh x −1 = xa n x −1 = a n x x −1 [∵ a ∈ Z ⇒ a n ∈ Z ⇒ a n x = xa n ]
= a n e = a n ∈ H.
∴ H is a normal subgroup of G.

Example 19: Let a be any element of G. Show that the cyclic subgroup of G generated by a is
a normal subgroup of the normalizer of a.
Solution: We have the normalizer of a = N (a) = { x ∈ G : xa = a x }.
Let H be the cyclic subgroup of G generated by a. Also let h be any element of H.
Then h = a n where n is some integer. We have
a n a = a n + 1 = aa n .
∴ a n = h ∈ N (a).
Now N (a) and H are subgroups of G. Also h ∈ H ⇒ h ∈ N (a).
Therefore H ⊆ N (a). Hence H is a subgroup of N (a).
Now to prove that H is a normal subgroup of N (a). Let x be any element of N (a) and
h = a n be any element of H. We have

xh x −1 = xa n x −1 = ( xa x −1 ) n = (a x x −1 ) n [∵ x ∈ N (a) ⇒ a x = xa]
n n
= (ae) = a ∈ H.
∴ H is a normal subgroup of N (a).
Example 20: Show that two elements are conjugate if and only if they can be put in the
form x y and y x respectively where x and y are suitable elements of G.
Solution: Let a, b be two conjugate elements of a group G.
Then a = c −1 bc for some c ∈ G.
Let c −1 b = x and c = y. Then a = xy.
Also y x = c (c −1 b) = (c c −1 ) b = eb = b.
Conversely suppose that a = x y and b = y x. We have
b = y x ⇒ y −1 b = y −1 y x ⇒ y −1 b = x .
Now a = x y ⇒ a = y −1 by ⇒ a and b are conjugate elements.
Example 21: Give an example to show that in a group G the normalizer of an element is not
necessarily a normal subgroup of G.
Solution: Consider the group S3 , the symmetric group of permutations on three
symbols a, b, c. We have S3 = { I , (ab), (bc ), (ca), (abc ), (acb)}. Let N (ab) denote the
normalizer of the element (ab) ∈ S3 . We shall show that N (ab) is not a normal
subgroup of S3 . Let us calculate the elements of N (ab). Obviously (ab) ∈ N (ab).
Also I ∈ N (ab) because I (ab) = (ab) I .
Now (bc ) (ab) = (abc ) and (ab) (bc ) = (acb). Thus (bc ) does not commute with (ab).
Therefore (bc ) ∉ N (ab).
Again (ca) (ab) = (acb) and (ab) (ca) = (abc ).
Thus (ca) (ab) ≠ (ab) (ca) and therefore (ca) ∉ N (ab). Similarly we can verify that
(abc ) ∉ N (ab) and (acb) ∉ N (ab).
Hence N (ab) = { I , (ab) }.
Now we shall show that N (ab) is not a normal subgroup of S3 . Take the element
(bc ) ∈ S3 and the element (ab) ∈ N (ab). We have
(bc ) (ab) (bc ) −1 = (bc ) (ab) (cb) = (abc ) (cb) = (ac ) ∉ N (ab).
Therefore N (ab) is not a normal subgroup of S3 .
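The computation of Example 21 can be repeated by machine. In the assumed sketch below (not part of the text) the symbols a, b, c are replaced by 0, 1, 2 and the transposition (a b) by the tuple (1, 0, 2).

from itertools import permutations

def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
t = (1, 0, 2)                                          # the transposition (a b)
N = [x for x in G if compose(x, t) == compose(t, x)]   # normalizer of (a b)
normal = all(compose(compose(g, h), inverse(g)) in N for g in G for h in N)
print(len(N), normal)                                  # 2 False : N(ab) is not normal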
Example 22:. Let Z denote the centre of a group G. If G / Z is cyclic prove that G is
abelian.
Solution: It is given that G / Z is cyclic. Let Zg be a generator of the cyclic group
G / Z where g is some element of G.
Let a, b ∈ G. Then to prove that ab = ba. Since a ∈ G, therefore Z a ∈ G / Z . But
G / Z is cyclic having Z g as a generator. Therefore there exists some integer m such
that Z a = (Z g) m = Z g m, because Z is a normal subgroup of G. Now a ∈ Za.
Therefore
Z a = Z g m ⇒ a ∈ Z g m ⇒ a = z1 g m for some z1 ∈ Z .
Similarly b = z 2 g n where z 2 ∈ Z and n is some integer.

Now ab = (z1 g m ) (z 2 g n ) = z1 g m z 2 g n
= z1 z 2 g m g n [∵ z 2 ∈ Z ⇒ z 2 g m = g m z 2 ]
= z1 z 2 g m + n .
Again ba = z 2 g n z1 g m = z 2 z1 g n g m = z 2 z1 g n + m = z1 z 2 g m + n .
[∵ z1 ∈ Z ⇒ z1 z 2 = z 2 z1 ]
∴ ab = ba.
Since ab = ba V a, b ∈ G, therefore G is abelian.

Example 23: If p is a prime number and G is a non-abelian group of order p3, show that the
centre of G has exactly p elements.
Solution: Let Z denote the centre of G. Since o (G) = p3 where p is a prime number,
therefore Z ≠ { e} i. e., o (Z ) > 1. But Z is a subgroup of G, therefore o (Z ) must be a
divisor of o (G) i. e., o (Z ) must be a divisor of p3 . Since p is prime, therefore either
o (Z ) = p or p2 or p3 .
If o (Z ) = p3 = o (G), then Z = G and so G is abelian which contradicts the
hypothesis that G is non-abelian. So o (Z ) cannot be p3 .
If o (Z ) = p2 , then o (G / Z ) = o (G) / o (Z ) = p3 / p2 = p i. e., G / Z is a group of
prime order p and so is cyclic. But if G / Z is cyclic, then G is abelian which again
contradicts the hypothesis. So o (Z ) cannot be p2 .
Hence the only possibility is that o (Z ) = p i. e., the centre of G has exactly p
elements.

1.14 p-Sylow Subgroup


Definition: Suppose G is a finite group and o (G) = p^m n, where p is a prime number and p is not a divisor of n. Then a subgroup H of G is said to be a p-Sylow subgroup of G iff o (H) = p^m.
Sylow's Theorem: Suppose G is a group of finite order and p is a prime number. If p^m | o (G) and p^(m + 1) is not a divisor of o (G), then G has a subgroup of order p^m.

Proof: We shall prove the theorem by induction on o (G ).

Assuming that the theorem is true for groups of order less than that of G, we shall
show that it is also true for G. To start the induction we see that the theorem is
obviously true if o (G ) = 1 .
Let o (G) = p^m n, where p is not a divisor of n. If m = 0, the theorem is obviously true.
If m = 1, the theorem is true by Cauchy’s theorem. So let m > 1. Then G is a group of

composite order and so G must possess a subgroup H such that H ≠ G.


If p is not a divisor of o (G) / o (H), then p^m | o (H) because
o (G) = p^m n = o (H) . o (G) / o (H).
Also p^(m+1) cannot be a divisor of o (H) because then p^(m+1) will be a divisor of o (G), of which o (H) is a divisor. Further o (H) < o (G). Therefore by our induction hypothesis, the theorem is true for H. Therefore H has a subgroup of order p^m and this will also be a subgroup of G. So let us assume that for every subgroup H of G where H ≠ G, p is a divisor of o (G) / o (H).

Consider the class equation,


o (G) = o (Z) + Σ_{a ∉ Z} o (G) / o [N (a)] …(1)
Since a ∉ Z ⇒ N (a) ≠ G, therefore according to our assumption p is a divisor of
Σ_{a ∉ Z} o (G) / o [N (a)].
Also p| o (G ).
Therefore from (1), we conclude that p is a divisor of o (Z ). Then by Cauchy’s
theorem, Z has an element b of order p (Z being the centre of G). Also N = { b }, the cyclic
subgroup of Z generated by b, has order p. Therefore N is a cyclic subgroup of G of order p. Since
b ∈ Z , therefore N is a normal subgroup of G of order p.
Now consider the quotient group G ′ = G / N .
We have o (G ′) = o (G) / o (N) = p^m n / p = p^(m−1) n.
Thus o (G ′) < o (G). Also p^(m−1) | o (G ′) but p^m is not a divisor of o (G ′). Therefore
by our induction hypothesis G ′ has a subgroup, say S ′, of order p^(m−1). We know that
the natural mapping φ : G → G / N defined by φ ( x) = Nx V x ∈ G is a
homomorphism of G onto G / N with kernel N. Let S = { x ∈ G : φ ( x) ∈ S ′ }.

Then S is a subgroup of G and S ′ ≅ S / N .


∴ o (S ′) = o (S / N) = o (S) / o (N).
Therefore o (S) = o (S ′) . o (N) = p^(m−1) p = p^m.
Thus S is a subgroup of G of order p^m.
This completes the proof of the theorem.
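The statement of the theorem (though not its proof) can be illustrated on the smallest interesting case. The following Python sketch is our own illustration with names of our own choosing: it lists the orders of the cyclic subgroups of S3, and since o (S3) = 6 = 2 . 3, the subgroups of order 2 and of order 3 obtained in this way are exactly the 2-Sylow and 3-Sylow subgroups of S3.

    from itertools import permutations

    elements = list(permutations(range(3)))     # the six permutations of {0, 1, 2}
    identity = (0, 1, 2)

    def compose(s, t):
        # (s o t)(i) = s(t(i))
        return tuple(s[t[i]] for i in range(3))

    def cyclic(g):
        # the cyclic subgroup generated by g
        H, x = [identity], g
        while x != identity:
            H.append(x)
            x = compose(g, x)
        return H

    print(sorted({len(cyclic(g)) for g in elements}))   # [1, 2, 3]

Subgroups of order 2 and of order 3 both occur, as the theorem (here with m = 1, i.e. Cauchy's theorem) guarantees.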

Example 24: If H is a p-Sylow subgroup of G and x ∈ G, then x −1 Hx is also a p-Sylow


subgroup of G .
Solution: Suppose G is a finite group and o (G) = pm n where p is a prime number
and p is not a divisor of n. If H is a p-Sylow subgroup of G, then o ( H) = pm .
Let x ∈ G be arbitrary. Then x −1 Hx will be a p-Sylow subgroup of G if x −1 Hx is a
subgroup of G and if o ( x −1 Hx) = pm .
First we shall prove that x −1 Hx is a subgroup of G. Let x −1 h1 x, x −1 h2 x be any two
elements of x −1 Hx. Then h1 , h2 ∈ H. Also we have
( x −1 h1 x) ( x −1 h2 x) −1 = x −1 h1 x x −1 h2−1 ( x −1 ) −1 = x −1 h1 eh2 −1 x
= x −1 h1 h2 −1 x
∈ x −1 Hx,
since h1 h2 −1 ∈ H, H being a subgroup of G.
∴ x −1 Hx is a subgroup of G.
Now let ψ be a mapping from H into x −1 Hx defined as
ψ (h) = x −1 h x V h ∈ H.
ψ is onto: Let x −1 h x be any element of x −1 H x. Then h ∈ H and we have
ψ (h) = x −1 h x.
Therefore ψ is onto.
ψ is one-one: Let h1 , h2 ∈ H. Then
ψ (h1 ) = ψ (h2 ) ⇒ x −1 h1 x = x −1 h2 x
⇒ h1 = h2 [by cancellation laws]
⇒ ψ is one-one.
Thus ψ is a one-to-one correspondence between the elements of H and the
elements of x −1 Hx. Therefore o ( x −1 Hx) = o ( H ) = pm . Hence x −1 Hx is a p -Sylow
subgroup of G.

Example 25: If a group G has only one p-Sylow subgroup H, then H is normal in G.
Solution: Suppose a group G has only one p-Sylow subgroup H. Let x be any
element of G. Then by previous example, x −1 Hx is also a p -Sylow subgroup of G.
But H is the only p -Sylow subgroup of G. Therefore
x −1 Hx = H V x ∈ G ⇒ H is a normal subgroup of G.

Comprehensive Exercise 2

1. Let G be a non-abelian group. Show that for all x ∈ G, Z (G) is a subgroup of


N ( x).
2. Let G be a finite group and an element a ∈ G has exactly two conjugates.
Prove that G has a normal subgroup N ≠ (e), G.
3. Let G be a group, and let a ∈ G has only two conjugates in G. Show that N (a)
is a normal subgroup of G.
4. Let N be a normal subgroup of G and a ∈ N . Show that every conjugate of a
in G is also in N.
5. Write down the elements of the symmetric group P3 and determine the
classes of conjugate elements.
6. Show that any two conjugate classes of a group are either disjoint or identical.
7. If the order of a group G is a power of a prime p, show that the centre of G has
at least p elements.
8. If o (G) = p^n, where p is a prime number, and H ≠ G is a subgroup of G, show
that there exists an x ∈ G, x ∉ H such that x − 1 H x = H.
9. If N be a normal subgroup of G, show that either C (a) ∩ N = ∅ or C (a) ⊆ N
for all a ∈ G.
10. Let G be a finite non-abelian group such that G / Z (G ) is abelian and k (G )
denotes the number of conjugate classes in G. Show that
k (G ) ≥ o (G / Z (G )) + o (Z (G )) − 1, where Z (G ) is the centre of G.

Answers 2
5. f1 = I , f 2 = (12), f 3 = (23), f 4 = (31), f 5 = (123), f 6 = (132);
C ( f1 ) = { f1 }, C ( f 2 ) = C ( f 3 ) = C ( f 4 ) = { f 2 , f 3 , f 4 },
C ( f 5 ) = C ( f 6 ) = { f 5 , f 6 }.

Objective Type Questions

Multiple Choice Questions


Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
1. If p is a prime number and G is a non-abelian group of order p3, then the
number of elements in the centre of G is
(a) p (b) p2
(c) p3 (d) none of these.

Fill in the Blank(s)


Fill in the blanks “……”, so that the following statements are complete and correct.
1. The set Z of all self-conjugate elements of a group G is called the ……of G.
2. If o (G) = pn where p is a prime number, then the centre Z ≠ …… .
3. If a ∈ G, then N (a), the set of all those elements of G which commute with a
is called the …… of a in G.

True or False
Write ‘T’ for true and ‘F’ for false statement.
1. The relation of conjugacy is an equivalence relation on the group G.
2. If a ∈ G, then the set N (a) = { x ∈ G : a x = xa } is always a normal subgroup
of G.
3. If o (G) = p n where p is a prime number, then the centre Z = { e }.
4. A group of order 121 is abelian.

Answers

Multiple Choice Questions


1. (a).
Fill in the Blank(s)
1. centre. 2. { e }. 3. normalizer.

True or False
1. T. 2. F. 3. F. 4. T.

¨

Rings

2.1 Principal Ideal Ring


Definition: A commutative ring R without zero divisors and with unity element is a
principal ideal ring if every ideal S in R is a principal ideal i.e., if every ideal S in R is of
the form S = (a) for some a ∈ S.
Theorem 1: The ring of integers is a principal ideal ring.
Proof: Let (I, + , .) be the ring of integers. Obviously I is a commutative ring with
unity and without zero divisors. Therefore I will be a principal ideal ring if every
ideal in I is a principal ideal.
Let S be any ideal of the ring of integers. If S is the null ideal then S = (0) so that S is
a principal ideal.
So let us suppose that S ≠ (0).
Now S contains at least one non-zero integer, say a. Since S is a subgroup of I
under addition, therefore a ∈ S ⇒ − a ∈ S. This shows that S contains at least one
positive integer, because if 0 ≠ a, then one of a and − a must be positive.

Let S + be the set of all positive integers in S. Since S + is not empty, therefore by the
well ordering principle S + must possess a least positive integer. Let s be this least
element. We will now show that S is the principal ideal generated by s i.e., S = (s).
Suppose now that n is any integer in S. Then by division algorithm, there exist
integers q and r such that n = qs + r with 0 ≤ r < s.
Now s ∈ S, q ∈ I ⇒ qs ∈ S [ ∵ S is an ideal ]
and n ∈ S, qs ∈ S ⇒ n − qs ∈ S
[ ∵ S is a subgroup of the additive group of I ]
⇒ r ∈S [∵ n − qs = r ]
But 0 ≤ r < s and s is the least positive integer such that s ∈ S. Hence r must be 0.
∴ n = qs.
Thus n ∈ S ⇒ n = qs for some q ∈ I.
Hence S is a principal ideal of I generated by s.
Since S was an arbitrary ideal in the ring of integers, therefore the ring of integers is
a principal ideal ring.
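The mechanism of this proof can also be seen numerically. In the sketch below (a finite illustration only; the integers 12 and 18 and the window of coefficients are our own choices) a sample of the ideal generated by 12 and 18 is formed, its least positive member s is picked out, and every sampled element turns out to be a multiple of s, exactly as in the argument above.

    a, b = 12, 18
    # a finite window of the ideal { a*x + b*y : x, y in I } generated by a and b
    ideal = {a * x + b * y for x in range(-30, 31) for y in range(-30, 31)}
    s = min(n for n in ideal if n > 0)          # the least positive element of the ideal
    print(s)                                    # 6
    print(all(n % s == 0 for n in ideal))       # True : every element is a multiple of s

Thus the ideal generated by 12 and 18 is the principal ideal (6).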
Theorem 2: Every field is a principal ideal ring.
Proof: A field has no proper ideals. The only ideals of a field are (i) the null ideal
which is a principal ideal generated by 0 and (ii) the field itself which is also a
principal ideal generated by 1. Thus a field is always a principal ideal ring.

2.2 Polynomial Rings


While studying algebra in high school classes we are introduced to polynomials.
We know that expressions of the type 3 x^2 − 4 x + 5, (2 / 3) x + 7, 9 x^3 − x^2 + 4 x + 5
etc. are called polynomials in the indeterminate x. In place of x we can use other
letters like y, z etc. to denote these polynomials. Now we shall define polynomials
over an arbitrary ring.
Definition: Let R be an arbitrary ring and let x, called an indeterminate, be any
symbol not an element of R. By a polynomial in x over R is meant an expression of the form
f ( x) = a0 x 0 + a1 x + a2 x 2 + ……
where a0 , a1 , a2 , …, are elements of R and only a finite number of them are not equal to 0 , the
zero element of R.
Here x is an indeterminate. We could have used any other letter, say, y in place of
x. Also a0 x 0 , a1 x, a2 x 2 etc. are called terms of the polynomial and a0 , a1 , a2 etc.
are called coefficients of these terms. All these coefficients are elements of R. The
number of terms in the polynomial f ( x) will be infinite but except a finite number
of terms, the coefficients of all the remaining terms will be equal to 0, the zero

element of the ring. The symbol ‘+’ connecting various terms in f ( x) has no
connection with the addition of the ring. This symbol has been used here only to
connect different terms. Also x is not an element of R. The powers of x have nothing
to do with the powers of an element of R. The different powers of x only tell us the
ordered place of different coefficients. There is no harm if we represent this
polynomial f ( x) by the infinite ordered set (a0 , a1 , a2 , ……), where
a0 , a1 , a2 ,……are elements of R and only a finite number of them are not equal to
zero. Since from high school classes we represent a polynomial with an
indeterminate x, therefore we have preferred this way to represent polynomials.
Remark: The polynomial a0 x 0 + a1 x + a2 x 2 + ……over a ring R can also be
written as
a0 + a1 x + a2 x 2 + …
Set of all polynomials over a ring : Let R be an arbitrary ring and x an
indeterminate. The set of all polynomials f ( x) ,

f ( x) = Σ_{n = 0}^{∞} a_n x^n = a0 x^0 + a1 x + a2 x^2 + …

where the a's are elements of the ring R and only a finite number of them are not equal to zero,
is called R [ x ] .
We shall make a ring out of R [ x ]. Then R [ x ] will be called the ring of all
polynomials over the ring R. For this we shall define equality, addition and
multiplication of two elements of R [ x ] .
Definition:
Suppose R is an arbitrary ring and f ( x) = a0 x 0 + a1 x + a2 x 2 + a3 x 3 + … and
g ( x) = b0 x 0 + b1 x + b2 x 2 + b3 x 3 + … are any elements of R [ x]. Then
(a) f ( x) = g ( x) if and only if a n = b n V non-negative integer n. Thus two
polynomials are equal iff their corresponding coefficients are equal.
(b) f ( x) + g ( x) = c 0 x 0 + c1 x + c 2 x 2 + c 3 x 3 + …
where c n = a n + b n for every non-negative integer n. Thus in order to add two
polynomials we should add the coefficients of like powers of x.
Since c n ∈ R and only a finite number of the c's can be not equal to zero, therefore
f ( x) + g ( x) is an element of R [ x]. Thus R [ x] is closed with respect to
addition of polynomials as defined above.
(c) f ( x) g ( x) = d0 x 0 + d1 x + d2 x 2 + d3 x 3 + …
where dn = a0 b n + a1 b n − 1 + a2 b n − 2 + … + a n b0
for every non-negative integer n.

We can write dn = Σ_{i + j = n} a_i b_j , where by this summation we mean the sum of all
the products of the type a_i b_j with i and j non-negative integers whose sum is n.
Since dn ∈ R and only a finite number of the d's can be not equal to zero, therefore
f ( x) g ( x) is an element of R [ x]. Thus R [ x] is closed with respect to multiplication
of polynomials as defined above.
We have d0 = a0 b0 , d1 = a0 b1 + a1 b0 , d2 = a0 b2 + a1 b1 + a2 b0 ,
d3 = a0 b3 + a1 b2 + a2 b1 + a3 b0 and so on.
Therefore in order to multiply two polynomials f ( x) and g ( x), we should first write
f ( x) g ( x) = (a0 x 0 + a1 x + a2 x 2 + …) (b0 x 0 + b1 x + b2 x 2 + …).
Now we should multiply different powers of the indeterminate x and using the
relation x i x j = x i + j we should collect coefficients of different powers of x.
Zero Polynomial: The polynomial
f ( x) = Σ a n x n = a0 x 0 + a1 x + a2 x 2 + a3 x 3 + …
in which all the coefficients a0 , a1 , a2 ,…… are equal to 0 is called the zero
polynomial over the ring R.
Degree of a polynomial:
Let f ( x) = a0 x 0 + a1 x + a2 x 2 + a3 x 3 + … + a n x n
+…
be a polynomial over an arbitrary ring R. We say that n is the degree of the polynomial f ( x) if
and only if a n ≠ 0 and a m = 0 for all m > n. We shall write deg f ( x) to denote the
degree of f ( x). Thus the degree of f ( x) is the largest non-negative integer i for
which the ith coefficient of f ( x) is not 0. If in the polynomial f ( x), a0 (i.e., the
coefficient of x 0 ) is not 0 and all the other coefficients are 0, then according to our
definition, the degree of f ( x) will be zero. Also according to our definition, if there
is no non-zero coefficient in f ( x), then its degree will remain undefined. Thus we
do not define the degree of the zero polynomial. Also it is obvious that every
non-zero polynomial will possess a unique degree.
Note: If f ( x) = a0 x 0 + a1 x + a2 x 2 + … + a n x n + … is a polynomial of degree
n i. e., if a n ≠ 0 and a m = 0 for all m > n, then it is convenient to write
f ( x) = Σ_{i = 0}^{n} a_i x^i = a0 x^0 + a1 x + a2 x^2 + … + a_n x^n .

It will remain understood that all the terms in f ( x) which follow the term a n x n ,
have zero coefficients. Also we shall call a n x n as the leading term and a n as the
leading coefficient of the polynomial. The term a0 x 0 is called the constant term
and a0 is called the zero th coefficient of f ( x).
For example f ( x) = 2 x 0 + 3 x − 4 x 2 + 4 x 3 − 8 x 4 is a polynomial of degree 4 over
the ring of integers. Here − 8 is the leading coefficient and 2 is the zero th
coefficient. The coefficients of all terms which contain powers of x greater than 4
will be regarded as zero. Similarly g ( x) = 3 x 0 is a polynomial of degree zero over
the ring of integers. In this polynomial the coefficients of x, x 2 , x 3 ,…are all equal to

zero. The zero polynomial over an arbitrary ring R will be represented by 0 x 0 .


Set of constant polynomials over a ring: Let R be an arbitrary ring and R [ x] the
set of all polynomials over R. Let R′ denote the set of all polynomials over R whose coefficients
are all zero except for the constant term, which may be either zero or non-zero. That is,
R ′ = {ax 0 : a ∈ R} .
Then R′ will be called as the set of constant polynomials in R [ x] .
Thus all the polynomials of degree 0 as well as the zero polynomial will be called as
constant polynomials.
Illustration 1: Add and multiply the following polynomials over the ring of integers:
f ( x) = 2 x 0 + 5 x + 3 x 2 − 4 x 3 , g ( x) = 3 x 0 + 4 x − x 3 + 5 x 4 .
Solution: By our definition of the sum of two polynomials, we have
f ( x) + g ( x) = (2 + 3) x 0 + (5 + 4) x + (3 + 0) x 2
+ (− 4 − 1) x 3 + (0 + 5) x 4
= 5 x0 + 9 x + 3 x2 − 5 x3 + 5 x4 .
Also f ( x) g ( x) = (2 x 0 + 5 x + 3 x 2 − 4 x 3 ) (3 x 0 + 4 x − x 3 + 5 x 4 )
= 6 x 0 + (8 + 15) x + (20 + 9) x 2 + (− 2 + 12 − 12) x 3
+ (10 − 5 − 16) x 4 + (25 − 3) x 5 + (15 + 4) x 6 − 20 x 7
= 6 x 0 + 23 x + 29 x 2 − 2 x 3 − 11x 4 + 22 x 5 + 19 x 6 − 20 x 7
Illustration 2: Add and multiply the following polynomials over the ring (I6 , + 6 , × 6 ) :
f ( x) = 2 x 0 + 5 x + 3 x 2 , g ( x) = 1 x 0 + 4 x + 2 x 3 .
Solution: f ( x) + g ( x) = (2 + 6 1) x 0 + (5 + 6 4) x + (3 + 6 0) x 2 + (0 + 6 2) x 3
= 3 x0 + 3 x + 3 x2 + 2 x3 .
Also f ( x) g ( x) = (2 x 0 + 5 x + 3 x 2 ) (1x 0 + 4 x + 2 x 3 )
= (2 × 6 1) x 0 + [(2 × 6 4) + 6 (5 × 6 1)] x + [(5 × 6 4) + 6 (3 × 6 1)] x 2
+ [(2 × 6 2) + 6 (3 × 6 4)] x 3 + (5 × 6 2) x 4 + (3 × 6 2) x 5
= 2 x 0 + (2 + 6 5) x + (2 + 6 3) x 2 + (4 + 6 0) x 3 + 4 x 4 + 0 x 5
= 2 x 0 + 1x + 5 x 2 + 4 x 3 + 4 x 4 .
Note: Here degree of f ( x) = 2, degree of g ( x) = 3 and degree of f ( x) g ( x) = 4.
The point to note is that degree f ( x) g ( x) may be less than the sum of the degrees
of f ( x) and g ( x).
Illustration 3: Add and multiply the following polynomials over the ring (I5 , + 5 , × 5 ) :
f ( x) = 3 + 4 x + 2 x 2 , g ( x) = 1 + 3 x + 4 x 2 + 2 x 3 .
Solution : f ( x) + g ( x) = 4 + 2 x + x 2 + 2 x 3 ,
f ( x) g ( x) = 3 + 3 x + x 2 + 3 x 3 + x 4 + 4 x 5 .
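These computations are easy to mechanise. The following Python sketch (the representation of a polynomial by the list of its coefficients [a0, a1, a2, …] and the names poly_add, poly_mul are our own choices, not the book's) implements the definitions of this article for the ring (I_n, +_n, ×_n) and reproduces Illustration 2.

    def poly_add(f, g, n):
        # add the coefficients of like powers of x, reducing mod n
        m = max(len(f), len(g))
        f, g = f + [0] * (m - len(f)), g + [0] * (m - len(g))
        return [(a + b) % n for a, b in zip(f, g)]

    def poly_mul(f, g, n):
        # d_k is the sum of all products a_i b_j with i + j = k, reduced mod n
        d = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                d[i + j] = (d[i + j] + a * b) % n
        return d

    f, g = [2, 5, 3], [1, 4, 0, 2]    # f(x) = 2 + 5x + 3x^2 and g(x) = 1 + 4x + 2x^3 over I_6
    print(poly_add(f, g, 6))          # [3, 3, 3, 2]        i.e. 3 + 3x + 3x^2 + 2x^3
    print(poly_mul(f, g, 6))          # [2, 1, 5, 4, 4, 0]  i.e. 2 + x + 5x^2 + 4x^3 + 4x^4

The trailing zero coefficient in the product shows why the degree of f ( x) g ( x) here is 4 and not 2 + 3 = 5.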

2.3 Degree of the Sum and the Product of Two Polynomials


Theorem: Let f ( x) and g ( x) be two non-zero polynomials over an arbitrary ring R .
Then
(i) deg [ f ( x) + g ( x)] ≤ Max [deg f ( x), deg g ( x)], if f ( x) + g ( x) ≠ 0.
(ii) deg [ f ( x) g ( x)] ≤ deg f ( x) + deg g ( x), if f ( x) g ( x) ≠ 0.
Proof: Let f ( x) = a0 x 0 + a1 x + a2 x 2 + .... + a n x n , a n ≠ 0
and g ( x) = b0 x 0 + b1 x + b2 x 2 + .... + b m x m , b m ≠ 0
be two elements of R [ x].
Here deg f ( x) = n and deg g ( x) = m.
From our definition of the sum of two polynomials, it is obvious that if
f ( x) + g ( x) ≠ 0, then
deg [ f ( x) + g ( x)] = max (n, m) if n ≠ m,
deg [ f ( x) + g ( x)] = n if n = m and a_n + b_m ≠ 0,
deg [ f ( x) + g ( x)] < n if n = m and a_n + b_m = 0.
Again f ( x) g ( x) = (a0 b0 ) x 0 + (a0 b1 + a1 b0 ) x + .... + a n b m x n + m .
Suppose f ( x) g ( x) ≠ 0. Then f ( x) g ( x) has a unique degree.
If a n b m ≠ 0, then deg [ f ( x) g ( x)] = n + m = deg f ( x) + deg g ( x).
Also if a n b m = 0, then deg [ f ( x) g ( x)] < n + m.
Corollary 1: Suppose D is an integral domain and f ( x), g ( x) are two non-zero elements
of D [ x]. Then
deg [ f ( x) g ( x)] = deg f ( x) + deg g ( x).
Proof: Since a n ≠ 0, b m ≠ 0, therefore a n b m ≠ 0 because in an integral domain the
product of two non-zero elements cannot be zero. Hence deg [ f ( x) g ( x)] = m + n.
Corollary 2: If F is a field and f ( x), g ( x) are two non-zero elements of F [ x] , then
deg [ f ( x) g ( x)] = deg f ( x) + deg g ( x).
Proof: Since a field is also free from zero divisors, therefore a n b m ≠ 0 when a n ≠ 0
and b m ≠ 0. Hence the result.

2.4 Ring of Polynomials


Theorem: The set R [ x] of all polynomials over an arbitrary ring R is a ring with respect to
addition and multiplication of polynomials.
Proof: Let f ( x), g ( x) ∈ R [ x]. Then f ( x) + g ( x) and f ( x) g ( x) are also
polynomials over R. Therefore R [ x] is closed with respect to addition and
multiplication of polynomials.

Now let
f ( x) = Σ a i x i = a0 x 0 + a1 x + a2 x 2 + …,
g ( x) = b0 x 0 + b1 x + b2 x 2 + …,
h ( x) = c 0 x 0 + c1 x + c 2 x 2 + … be any arbitrary elements of R [ x].
Commutativity of addition: We have
f ( x) + g ( x) = (a0 + b0) x^0 + (a1 + b1) x + (a2 + b2) x^2 + …
= (b0 + a0) x^0 + (b1 + a1) x + (b2 + a2) x^2 + … = g ( x) + f ( x).
Associativity of addition: We have
[ f ( x) + g ( x)] + h ( x) = Σ (a i + b i ) x i + Σ c i x i = Σ [(a i + b i ) + c i ] x i
= Σ [a i + (b i + c i )] x i = Σ a i x i + Σ (b i + c i ) x i
= f ( x) + [ g ( x) + h ( x)].
Existence of additive identity: Let 0 ( x) be the zero polynomial over R i.e.,
0 ( x) = 0 x 0 + 0 x + 0 x 2 + …
Then f ( x) + 0 ( x) = (a0 + 0) x 0 + (a1 + 0) x + (a2 + 0) x 2 + …
= a0 x 0 + a1 x + a2 x 2 + … = f ( x).
∴ the zero polynomial 0 ( x) is the additive identity.
Existence of additive inverse: Let − f ( x) be the polynomial over R defined as
− f ( x) = (− a0 ) x 0 + (− a1 ) x + (− a2 ) x 2 + …
Then − f ( x) + f ( x) = (− a0 + a0 ) x 0 + (− a1 + a1 ) x + (− a2 + a2 ) x 2 + …
= 0 x 0 + 0 x + 0 x 2 + … = 0 ( x) = the additive identity.
∴ each member of R [ x] possesses additive inverse.
Associativity of Multiplication: We have
f ( x) g ( x) = (a0 x^0 + a1 x + a2 x^2 + …) (b0 x^0 + b1 x + b2 x^2 + …)
= d0 x^0 + d1 x + d2 x^2 + … + d_l x^l + … ,
where d_l = Σ_{i + j = l} a_i b_j .
Now [ f ( x) g ( x)] h ( x) = (d0 x^0 + d1 x + d2 x^2 + …) (c0 x^0 + c1 x + c2 x^2 + …)
= e0 x^0 + e1 x + e2 x^2 + … + e_n x^n + …,
where e_n = the coeff. of x^n in [ f ( x) g ( x)] h ( x)
= Σ_{l + k = n} d_l c_k = Σ_{l + k = n} [(Σ_{i + j = l} a_i b_j) c_k] = Σ_{i + j + k = n} a_i b_j c_k .

Similarly we can show that the coeff. of x^n in f ( x) [ g ( x) h ( x)] = Σ_{i + j + k = n} a_i b_j c_k .

Thus [ f ( x) g ( x)] h ( x) = f ( x) [ g ( x) h ( x)] since corresponding coefficients in


these two polynomials are equal.
Distributivity of Multiplication with respect to addition: We have
f ( x) [ g ( x) + h ( x)]
= (a0 x 0 + a1 x + a2 x 2 + …) [(b0 + c 0 ) x 0 + (b1 + c1 ) x + (b2 + c 2 ) x 2 + …]
If n is any non-negative integer, then the coefficient of x n in f ( x) [ g ( x) + h ( x)]
= Σ_{i + j = n} a_i (b_j + c_j) = Σ_{i + j = n} (a_i b_j + a_i c_j) = Σ_{i + j = n} a_i b_j + Σ_{i + j = n} a_i c_j
= Coeff. of x n in f ( x) g ( x) + coeff. of x n in f ( x) h ( x)
= Coeff. of x n in [ f ( x) g ( x) + f ( x) h ( x)].
∴ f ( x) [ g ( x) + h ( x)] = f ( x) g ( x) + f ( x) h ( x).
Similarly we can prove the right distributive law.
Hence R [ x] is a ring. This is called the ring of all polynomials over R. The zero
element of this ring is the zero polynomial
0 x0 + 0 x + 0 x2 + 0 x3 + …

2.5 R as a Subset of R [ x ] or Imbedding of R into R [ x ]


Theorem: If R is an arbitrary ring and R′ is the set of constant polynomials in R [ x], then
R′ is isomorphic to R.
Proof: We have R ′ = {ax 0 + 0 x + 0 x 2 + 0 x 3 + .... such that a ∈ R}.
Let φ : R → R ′ such that
φ (a) = ax 0 + 0 x + 0 x 2 + 0 x 3 + .... V a ∈ R.
φ is one-one since
φ (a) = φ (b) ⇒ ax 0 + 0 x + 0 x 2 + .... = bx 0 + 0 x + 0 x 2 + … ⇒ a = b.
Also φ is obviously onto R′.
Again φ (a + b) = (a + b) x 0 + 0 x + 0 x 2 + 0 x 3 + ....
= [ax 0 + 0 x + 0 x 2 + ....] + [bx 0 + 0 x + 0 x 2 + ...] = φ (a) + φ (b).
Also φ (ab) = abx 0 + 0 x + 0 x 2 + 0 x 3 + ....
= (ax 0 + 0 x + 0 x 2 + ...) (bx 0 + 0 x + 0 x 2 + ....) = φ (a) φ (b).
∴ φ is an isomorphism of R onto R′. Hence R ≅ R ′.
Since R is isomorphic to R′, therefore in R [ x] we can identify R′ by R i.e., all the
constant polynomials in R [ x] can be replaced by the corresponding elements of R.
This replacement will not affect the addition and multiplication of polynomials.
Hence in future we shall write 0 in place of the zero polynomial. If
ax 0 + 0 x + 0 x 2 + .... is any constant polynomial in R [ x], then we shall simply
write a in place of this polynomial.

Also in place of a0 x 0 + a1 x + a2 x 2 + … we shall write a0 + a1 x + a2 x 2 + .... . If


we are to multiply f ( x) by a constant polynomial ax 0 + 0 x + 0 x 2 + ..., then we
shall write af ( x) in place of (ax 0 + 0 x + ....) f ( x).

2.6 Polynomials over an Integral Domain


Theorem 1: If D is an integral domain, then the polynomial ring D [ x] is also an integral
domain.
Proof: Let D be a commutative ring without zero divisors and with unity element
1. As proved in article 2.4, D [ x] is also a ring. To prove that D [ x] is an integral
domain we should prove that (i) D [ x] is commutative, (ii) is without zero divisors
and (iii) possesses the unity element.
D [ x ] is commutative: Let f ( x) = a0 + a1 x + a2 x 2 + ……..
and g ( x) = b0 + b1 x + b2 x 2 + …… be any two elements of D [ x].
If n is any non-negative integer, then the coefficient of x n in f ( x) g ( x) is
= Σ_{i + j = n} a_i b_j = Σ_{i + j = n} b_j a_i , since D is commutative
= coefficient of x^n in g ( x) f ( x).
∴ f ( x) g ( x) = g ( x) f ( x). Hence D [ x] is a commutative ring.
If 1 is the unity element of D, then the constant polynomial
1 + 0 x + 0 x2 + 0 x3 + …
is the unity element of D [ x]. We have
[a0 + a1 x + a2 x 2 + …] [1 + 0 x + 0 x 2 + 0 x 3 + …]
= (a0 1) + (a11) x + (a2 1) x 2 + … = a0 + a1 x + a2 x 2 + …
∴ the polynomial 1 + 0 x + 0 x 2 + …or simply 1 is the unity element of D [ x].
D [ x ] is without zero divisors: Let
f ( x) = a0 + a1 x + a2 x 2 + … + a m x m , a m ≠ 0
g ( x) = b0 + b1 x + b2 x 2 + … + b n x n , b n ≠ 0
be two non-zero elements of D [ x].
Then f ( x) g ( x) cannot be a zero polynomial i.e., the zero element of D [ x]. The
reason is that at least one coefficient of f ( x) g ( x) namely a m b n of x m + n is ≠ 0
because a m , b n are non-zero elements of D and D is without zero divisors.
Hence D [ x] is an integral domain.
Theorem 2: If R is an integral domain with unity element, then any unit in R [x] must
already be a unit in R .

Proof: If R is an integral domain with unity element 1, then R [ x] is also an integral


domain with unity element. Further the constant polynomial 1 is the unity
element of R [ x]. Let f ( x) be a unit in R [ x] i.e., let f ( x) be an inversible element of
R [ x]. Let g ( x) be the inverse of f ( x) in R [ x]. Then
f ( x) g ( x) = 1
⇒ deg [ f ( x) g ( x)] = 0 [∵ degree of the constant polynomial 1 is 0]
⇒ deg f ( x) + deg g ( x) = 0
⇒ deg f ( x) = 0, deg g ( x) = 0
⇒ both f ( x) and g ( x) are constant polynomials in R [ x].
Let f ( x) = a ∈ R and g ( x) = b ∈ R . Then ab = 1 ⇒ a is a unit in R. Thus any unit in
R [ x] must already be a unit in R.

Note: If a ∈ R is a unit in R, then a ∈ R [ x] is also a unit in R [ x]. If b is the inverse of a


in R, then the constant polynomial b is the inverse of a in R [ x].

2.7 Polynomials over a Field


Theorem 1: If F is a field, then the set F [x] of all polynomials over F is an integral domain.
Proof: Every field is an integral domain, so the result follows by the same proof as
given in article 2.4 and then in article 2.6.
We shall call the set F [ x] as the polynomial domain over the field F.

Theorem 2: The polynomial domain F [ x] over a field F is not a field.


Proof: In order to show that F [ x] is not a field, we should show that there exists a
non-zero element of F [ x] which has no multiplicative inverse. Let f ( x) be any
member of F [ x] such that deg f ( x) is greater than zero. The inverse of f ( x) cannot
be the zero polynomial because the product of f ( x) and the zero polynomial will be
equal to the zero polynomial and not equal to the unity element of F [ x] which is
the polynomial 1 + 0 x + 0 x 2 + .... . Suppose now g ( x) is any non-zero
polynomial. Then F being a field, we have
deg [ f ( x) g ( x)] = deg f ( x) + deg g ( x) > 0
because deg f ( x) > 0 and deg g ( x) ≥ 0.
The degree of the unity element of F [ x] is 0. Hence f ( x) g ( x) cannot be equal to
the unity element of F [ x]. Thus f ( x) does not possess multiplicative inverse.
∴ F [ x] is not a field.

Important: The only inversible elements of F [ x] are constant polynomials


excluding the zero polynomial. No member of F [ x] whose degree is greater than 0
is inversible.

Example 1: Show that if a ring R has no zero divisors, then the ring R [ x] has also no zero
divisors.
Solution: It is given that a ring R has no zero divisors and we have to show that the
ring R [ x] has also no zero divisors.
Let f ( x) = a0 + a1 x + a2 x 2 + .... + a m x m , a m ≠ 0
and g ( x) = b0 + b1 x + b2 x 2 + … + b n x n , b n ≠ 0
be two non-zero elements of R [ x].
Then f ( x) g ( x) cannot be the zero polynomial i.e., the zero element of R [ x]. The
reason is that at least one coefficient of f ( x) g ( x) namely a m b n of x m + n is ≠ 0
because a m , b n are non-zero elements of R and R is without zero divisors.
Thus in R [ x] the product of no two non-zero elements can be the zero element.
Hence the ring R [ x] has no zero divisors.
Example 2: Consider the following polynomials over the ring (I 8 , + 8 , × 8 ) :
f ( x) = 2 + 6 x + 4 x 2, g ( x) = 2 x + 4 x 2, h ( x) = 2 + 4 x
and find (i) deg [ f ( x) + g ( x)]
(ii) deg [ f ( x) g ( x)]
(iii) deg [ h ( x) h ( x)].
Solution: (i) We have f ( x) + g ( x) = (2 + 6 x + 4 x 2 ) + (0 + 2 x + 4 x 2 )
= (2 + 8 0) + (6 + 8 2) x + (4 + 8 4) x 2 = 2 + 0 x + 0 x 2 = 2.
Thus f ( x) + g ( x) is a non-zero constant polynomial and so
deg [ f ( x) + g ( x)] = 0.
(ii) We have f ( x) g ( x) = (2 + 6 x + 4 x 2 ) (2 x + 4 x 2 )
= (2 × 8 2) x + [(2 × 8 4) + 8 (6 × 8 2)] x 2
+ [(6 × 8 4) + 8 (4 × 8 2)] x 3 + (4 × 8 4) x 4
= 4 x + (0 + 8 4) x 2 + (0 + 8 0) x 3 + 0 x 4
= 4 x + 4 x2 + 0 x3 + 0 x4 = 4 x + 4 x2
∴ deg [ f ( x) g ( x)] = 2.
(iii) We have h ( x) h ( x) = (2 + 4 x) (2 + 4 x)
= (2 × 8 2) + [(2 × 8 4) + 8 (4 × 8 2)] x + (4 × 8 4) x 2
= 4 + (0 + 8 0) x + 0 x 2 = 4 + 0 x + 0 x 2 = 4.
Thus h ( x) h ( x) is a non-zero constant polynomial and so deg [h ( x) h ( x)] = 0.

2.8 Ring of Polynomials in n Variables over an Integral Domain
Definition: Let R be an integral domain. Then the ring of polynomials in the n-variables
x1 , ...., x n over R is denoted by R [ x1 , ..., x n ] and is defined as follows :
Let R1 = R [ x1 ], the polynomial ring in x1 over R,
R 2 = R1 [ x2 ], the polynomial ring in x2 over R1 ,
R 3 = R 2 [ x3 ], the polynomial ring in x3 over R 2 ,
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
R n = R n − 1 [ x n ], the polynomial ring in x n over R n − 1 .
Then R n is called the ring of polynomials in x1 , ..., x n over R and we write
R n = R [ x1 , ..., x n ].
Theorem 1: If R is an integral domain, then so is R [ x1 , ..., x n ].
Proof: If R is an integral domain then R1 = R [ x1 ] is also an integral domain. Now
R1 is an integral domain implies that R 2 = R1 [ x2 ] = R [ x1 , x2 ] is also an integral
domain. Continuing this process a finite number of times we see that R [ x1 , ..., x n ]
is an integral domain.
Theorem 2: If F is a field, then F [ x1 , ..., x n ] is an integral domain.
Proof: If F is a field, then F 1 = F [ x1 ] is an integral domain.
Now F 1 is an integral domain implies that F 2 = F 1 [ x2 ] = F [ x1 , x2 ] is also an
integral domain. Continuing this process a finite number of times we see that
F [ x1 , ..., x n ] is an integral domain.

Note: If F is a field, then F [ x1 ,..., x n ] is an integral domain.


Now we can construct the field of quotients of the integral domain F [ x1 , ..., x n ].
This field is called the field of rational functions in x1 , ..., x n over F and is denoted by
F ( x1 , ..., x n ).

2.9 Divisibility of Polynomials over a Field


Suppose F is a field. Then F [ x ] is an integral domain. If a( x) ≠ 0 and f ( x) are
elements of F [ x ], then a ( x) is a divisor (or factor) of f ( x) if and only if there is a
polynomial b( x) in F [ x ] such that f ( x) = a( x) b( x). Symbolically we write
a( x)| f ( x) .
A unit is an element of F [ x ] which has multiplicative inverse. All the polynomials
of zero degree belonging to F [ x ] are units of F [ x ] . Thus the non-zero elements
of F are the only units of F [ x ].

If f ( x) and g( x) are polynomials in F [ x ], then we call f ( x) and g( x) associates if


f ( x) = c g( x) for some 0 ≠ c ∈ F. It can be easily proved that two non-zero
polynomials f ( x) and g( x) in F [ x ] are associates if and only if f ( x)| g( x) and
g( x)| f ( x).
If f ( x) is any non-zero polynomial in F [ x ], then f ( x) is always divisible by its
associates and by all units of F [ x ]. These divisors of f ( x) are called its improper
divisors. All other divisors of f ( x) , if there are any, are called its proper divisors.
Definition of an irreducible polynomial over a field: Let F be a field and f ( x) be a
non-zero and non-unit polynomial in F [ x ] i. e., f ( x) be a polynomial of positive degree.
Then f ( x) is said to be irreducible over F (or prime ) if it has no proper divisors in F [ x ]; f ( x)
is reducible over F if it has a proper divisor in F [ x ] .
Thus a positive degree polynomial f ( x) in F [ x ] is irreducible over F if whenever
f ( x) = a( x) b( x) with a( x), b( x) ∈ F [ x ] then one of a( x) or b( x) is a unit in F [ x ] i. e.,
has degree 0. Also f ( x) is reducible over F if and only if we can find two
polynomials a( x) and b( x) in F [ x ] such that f ( x) = a( x) b( x) and none of a( x) and
b( x) is a unit in F [ x ] i. e., has degree 0.
Irreducibility depends on the field. The polynomial x 2 − 2 is irreducible over the
field of rational numbers while it is reducible over the field of real numbers, since
x^2 − 2 = ( x + √2) ( x − √2).
The polynomial x 2 + 1 is irreducible over the field of real numbers while it is
reducible over the field of complex numbers since x 2 + 1 = ( x + i) ( x − i).

Monic polynomials: Definition:


Let f ( x) = a0 + a1 x + … + a n x n , with a n ≠ 0, be a polynomial in F [ x ] over an
arbitrary field F. If the leading coefficient a n of f ( x) is equal to 1, the unity element of F, then
the polynomial f ( x) will be called monic.
The polynomial 2 x − 3 x 3 over the field of real numbers is not monic since its
leading coefficient is – 3. But the polynomial x 3 − 3 x + 4 over the field of real
numbers is monic since its leading coefficient is 1.
Greatest common divisor of two polynomials over a field:
Definition: Suppose F is any field. Let f ( x) and g( x) be two elements of F [ x ]. A greatest
common divisor of f ( x) and g( x) is a non-zero polynomial d( x) such that
(i) d( x)| f ( x) and d( x)| g( x)
(ii) If c ( x) is a polynomial such that c ( x)| f ( x) and c ( x)| g( x) then c ( x)| d( x).
Relatively prime polynomials: Two polynomials f ( x) and g( x) ∈ F [ x ] are said to be
relatively prime if their greatest common divisor is 1, the unity element of F .

2.10 Division Algorithm for Polynomials over a Field


Theorem: Let f ( x), g( x) ≠ 0 be any two polynomials of the polynomial domain F [ x ] ,
over the field F . Then there exist uniquely two polynomials q( x) and r( x) in F [ x ] such that
f ( x) = q( x) g( x) + r( x) where either r( x) = 0 or deg r( x) < deg g( x).
Proof: Suppose
f ( x) = a0 + a1 x + a2 x 2 + … + a m x m , a m ≠ 0
and g( x) = b0 + b1 x + b2 x 2 + … + b n x n , b n ≠ 0.
If the degree m of f ( x) is smaller than the degree n of g ( x) or if f ( x) = 0, then we have
nothing to prove, because we can always write f ( x) = 0 . g ( x) + f ( x). So in this
case q ( x) = 0, r ( x) = f ( x) and we have either f ( x) = 0 or deg f ( x) < deg g( x) .
Now let us assume that m ≥ n . In this case we shall prove the theorem by induction
on m i. e., degree of f ( x).
If m = 0, then m ≥ n ⇒ n = 0. Therefore f ( x) and g( x) are both non-zero constant
polynomials, f ( x) = a0 , a0 ≠ 0, and g ( x) = b0 , b0 ≠ 0.
We have in this case
f ( x) = a0 = (a0 b0^(−1)) b0 + 0 = (a0 b0^(−1)) g ( x) + 0.
Thus the theorem is true when m = 0 or when the degree of f ( x) is less than 1.
We shall now assume that the theorem is true when f ( x) is a polynomial of degree
less than m and then we shall show that it is also true if f ( x) is of degree m and then
the proof will be complete by induction.
Let f1 ( x) = f ( x) − (a_m b_n^(−1)) x^(m − n) g ( x) ...(1)
Obviously deg f1 ( x) < m. Therefore by our assumed hypothesis, there exist
polynomials s( x) and r( x) such that
f1 ( x) = s ( x) g ( x) + r ( x),
where r ( x) = 0 or deg r ( x) < deg g ( x).
Now putting the value of f1 ( x) in (1), we get
s ( x) g ( x) + r ( x) = f ( x) − (a_m b_n^(−1)) x^(m − n) g ( x)
or f ( x) = [(a_m b_n^(−1)) x^(m − n) + s ( x)] g ( x) + r ( x).
If we write q ( x) in place of (a_m b_n^(−1)) x^(m − n) + s ( x), we get
f ( x) = q ( x) g ( x) + r ( x)
where r ( x) = 0 or deg r ( x) < deg g ( x).
This proves the existence of polynomials q ( x) and r ( x). Now to show that q ( x) and
r ( x) are unique. Let us assume that
f ( x) = q1 ( x) g ( x) + r 1( x) = q2 ( x) g ( x) + r2 ( x).
Then q1 ( x) g ( x) + r 1( x) = q2 ( x) g ( x) + r2 ( x)
or [q1 ( x) − q2 ( x)] g ( x) = r2 ( x) − r1 ( x). ...(2)

If [q1 ( x) − q2 ( x)] ≠ 0, then [q1 ( x) − q2 ( x)] g( x) cannot be equal to the zero


polynomial because g( x) ≠ 0 and F [ x ] is without zero divisors. Also then the
degree of [q1 ( x) − q2 ( x)] g( x) is at least n, the degree of g( x). But r2 ( x) − r1 ( x) is
either equal to the zero polynomial or else its degree is less than n because the
degrees of r2 ( x) and r1 ( x) are both less than n. Hence the equality (2) among two
polynomials holds only if q1 ( x) − q2 ( x) = 0 and r2 ( x) − r1 ( x) = 0
i. e., only if q1 ( x) = q2 ( x) and r1 ( x) = r2 ( x).
∴ the polynomials q( x) and r( x) are unique.
Definition: In the division algorithm, the polynomial q( x) is called the quotient on
dividing f ( x) by g( x) and the polynomial r( x) is called the remainder.
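The step used in the proof, namely subtracting (a_m b_n^(−1)) x^(m − n) g ( x) again and again, is just polynomial long division, and over a field such as Z_p it can be carried out mechanically. The Python sketch below is only an illustration; the coefficient-list representation, the prime p = 7 and the sample polynomials are our own choices, not the book's.

    p = 7                                       # the field Z_7

    def trim(f):
        # drop trailing zero coefficients (keeping at least one coefficient)
        while len(f) > 1 and f[-1] == 0:
            f = f[:-1]
        return f

    def polydiv(f, g):
        # returns (q, r) with f = q * g + r, where r = 0 or deg r < deg g
        f, g = trim(f[:]), trim(g)
        q = [0] * max(len(f) - len(g) + 1, 1)
        inv = pow(g[-1], p - 2, p)              # inverse of the leading coefficient of g in Z_p
        while len(f) >= len(g) and any(f):
            k = len(f) - len(g)                 # current power of x in the quotient
            c = (f[-1] * inv) % p
            q[k] = c
            for i, b in enumerate(g):           # subtract c * x^k * g(x) from f(x)
                f[i + k] = (f[i + k] - c * b) % p
            f = trim(f)
        return trim(q), f

    # divide x^3 + 2x + 3 by x + 4 in Z_7 [x]
    q, r = polydiv([3, 2, 0, 1], [4, 1])
    print(q, r)                                 # [4, 3, 1] [1] : q(x) = x^2 + 3x + 4, r(x) = 1

One may check directly that (x^2 + 3x + 4)(x + 4) + 1 = x^3 + 2x + 3 in Z_7 [x].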
Theorem: A polynomial domain F [ x ] over a field F is a principal ideal ring.
Proof: Obviously F [ x ] is a commutative ring with unity and without zero
divisors. Therefore F [ x ] is a principal ideal ring if every ideal in F [ x ] is a
principal ideal.
Let S be an arbitrary ideal of F [ x ]. If S is the null ideal, then S = (0) i. e., the ideal of
F [ x ] generated by 0. Therefore S is a principal ideal, so let us suppose that S is not
a null ideal. Then there exist non-zero polynomials f ( x) in S. Let g ( x) be a
polynomial of lowest degree m belonging to S. We shall show that S is the principal
ideal generated by g ( x).
Let f ( x) be any arbitrary member of S. By division algorithm there exist two
polynomials q ( x) ∈ F [ x ], r ( x) ∈ F [ x ], such that
f ( x) = q ( x) g ( x) + r ( x), where r ( x) = 0 or deg r ( x) < deg g ( x).
Since S is an ideal, therefore
q ( x) ∈ F [ x ], g ( x) ∈ S ⇒ q ( x) g ( x) ∈ S.
Also f ( x) ∈ S, q ( x) g ( x) ∈ S ⇒ f ( x) − q ( x) g ( x) ∈ S.
But f ( x) − q ( x) g ( x) = r ( x). Therefore r ( x) ∈ S.
Now either r ( x) = 0 or deg r ( x) < deg g ( x). But we have assumed that g ( x) is a
polynomial of lowest degree belonging to S. Hence deg r ( x) cannot be less than
deg g ( x). Therefore we must have r ( x) = 0. Then f ( x) = q ( x) g ( x). Thus g ( x) ∈ S is
such that f ( x) ∈ S ⇒ f ( x) = q ( x) g ( x) for some q ( x) ∈ F [ x ]. Therefore S is a
principal ideal of F [ x ] generated by g ( x). Hence F [ x ] is a principal ideal ring.
Important: A polynomial ring over an arbitrary field is a principal ideal ring. But a
polynomial ring over an arbitrary ring is not a principal ideal ring as is obvious from
the following example.
Example: Show that the polynomial ring I [ x ] over the ring of integers is not a principal
ideal ring.
Solution: To prove this statement we shall show that the ideal (2, x) of the ring
I [ x ] generated by two elements 2 and x of I [ x ] is not a principal ideal. Let (2, x) be

a principal ideal in I [ x ]. Then there will exist a non-zero element g ( x) ∈ I [ x ] such


that (2, x) = ( g ( x)).
Since 2 ∈ ( g( x)) and x ∈ ( g( x)) therefore there will exist elements φ( x) and ψ( x)
belonging to I [ x ] such that
2 = φ( x) g ( x), ...(1)
x = ψ ( x) g ( x). ...(2)
From (1), we get 2x = [φ ( x) g ( x)] x and from (2), we get
2 x = 2 ψ ( x) g ( x).
∴ 2 ψ ( x) g ( x) = x φ ( x) g ( x). [ ∵ I [ x ] is a commutative ring]
∴ 2 ψ ( x) = x φ ( x), since g( x) ≠ 0, and I [ x ] is without zero divisors.
Now 2 ψ ( x) = x φ ( x) implies that the coefficients of φ( x) must be even integers.
Therefore φ ( x) = 2h ( x) where h ( x) is some polynomial in I [ x ]. Putting this value
of φ( x) in (1) we get
2 = 2h( x) g( x) or 1 = h( x) g( x).
Now 1 = h( x) g( x) ⇒ 1 ∈ ( g( x)). Therefore each element of I [ x ] will belong to
( g ( x)). Thus we have I [ x ] = ( g ( x)) = (2, x). Therefore each element of I [ x ] will
belong to (2, x). We shall show that 1 ∉(2, x) and this contradiction will mean that
(2, x) is not a principal ideal in I [ x ].
Now 1 ∈ (2, x) ⇒ we can write
1 = 2 p( x) + x q ( x)
where p( x) and q ( x) are some elements in I [ x ].
Let p( x) = a0 + a1 x + a2 x 2 + … and q ( x) = b0 + b1 x + b2 x 2 + …
Then 1 = 2(a0 + a1 x + a2 x 2 + ...) + x (b0 + b1 x + b2 x 2 + ...)
or 1 = 2a0 + (2a1 + b0 ) x + (2a2 + b1 ) x 2 + …
This equality implies 1 = 2a0 where a0 ∈ I.
But for no integer a0 we can have 1 = 2a0 . Hence 1 ∉(2, x).
∴ (2, x) is not a principal ideal in I [ x ].

2.11 Euclidean Algorithm for Polynomials over a Field


Theorem: Let F be a field and f ( x) and g ( x) be any two polynomials in F [ x ] , not both of
which are zero. Then f ( x) and g( x) have a greatest common divisor d( x) which can be expressed
in the form
d ( x) = m ( x) f ( x) + n ( x) g ( x)
for polynomials m ( x) and n ( x) in F [ x ].
Proof: Consider the set
S = { s ( x) f ( x) + t ( x) g ( x) : s ( x), t ( x) ∈ F [ x ] }. ...(1)

We claim that S is an ideal of F [ x ]. The proof is as follows :


Let s1 ( x) f ( x) + t1 ( x) g ( x) and s2 ( x) f ( x) + t2 ( x) g ( x) be any two elements of S.
Then [s1 ( x) f ( x) + t1 ( x) g ( x)] − [s2 ( x) f ( x) + t2 ( x) g ( x)]
= [s1 ( x) − s2 ( x)] f ( x) + [t1 ( x) − t2 ( x)] g ( x) ∈ S
since s1 ( x) − s2 ( x) and t1 ( x) − t2 ( x) are both members of F [ x ].
Also if α ( x) be any member of F [ x ], then
α ( x) [s1 ( x) f ( x) + t1 ( x) g ( x)]
= [α ( x) s1 ( x)] f ( x) + [α ( x) t1 ( x)] g ( x) ∈ S.
Therefore S is an ideal of F [ x ]. Now every ideal in F [ x ] is a principal ideal.
Therefore there exists an element d ( x) in S such that every element in S is a
multiple of d ( x).
Since d ( x) ∈ S, therefore from (1) we see that there exist elements
m( x), n( x) ∈ F [ x ] such that
d ( x) = m( x) f ( x) + n( x) g( x).
Now F [ x ] is a ring with unity element 1.
∴ Putting s ( x) = 1, t ( x) = 0 in (1), we see that f ( x) ∈ S. Also putting
s ( x) = 0, t ( x) = 1 in (1), we see that g ( x) ∈ S.
Now f ( x), g( x) are elements of S.Therefore they are both multiples of d ( x). Hence
d ( x)| f ( x) and d ( x)| g ( x).
Now suppose c ( x)| f ( x) and c ( x)| g ( x).
Then c ( x)|[m ( x) f ( x)] and c ( x)|[n( x) g( x)]. Therefore c ( x) is also a divisor of
m ( x) f ( x) + n ( x) g ( x) i. e., c ( x) is a divisor of d ( x).
Thus d( x) is a greatest common divisor of f ( x) and g( x).
Note: If d( x) is a greatest common divisor of f ( x) and g( x) then any associate of
d( x) i. e., k d( x) where 0 ≠ k ∈ F will also be a greatest common divisor of f ( x) and
g ( x). In particular if 0 ≠ b is the leading coefficient of the polynomial d ( x), then the
monic polynomial b −1 d ( x) will also be a greatest common divisor of f ( x) and g ( x).
Often while defining greatest common divisor of two polynomials over a field we include one
more condition in our definition that the greatest common divisor should be a monic
polynomial. The advantage of this extra condition is that now we shall get a unique
greatest common divisor as shown below:
Suppose d1 ( x) and d2 ( x) are two monic polynomials and each is a greatest common
divisor of f ( x) and g ( x). Then d1 ( x)| d2 ( x) and d2 ( x)| d1 ( x). Therefore d1 ( x) and
d2 ( x) are associates and we have d1 ( x) = ud2 ( x) for some 0 ≠ u ∈ F. Since d1 ( x) and
d2 ( x) are both monic, therefore u = 1.
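The proof above is not constructive (it appeals to the principal ideal property of F [ x ]), but a greatest common divisor together with the polynomials m ( x) and n ( x) can also be computed by repeated division, exactly as for integers. The self-contained Python sketch below is our own illustration over Z_5, with names of our own choosing; it is not a procedure given in the book.

    p = 5                                        # the field Z_5

    def trim(f):
        while len(f) > 1 and f[-1] == 0:
            f = f[:-1]
        return f

    def add(f, g, scale=1):
        # f + scale * g, coefficientwise mod p
        m = max(len(f), len(g))
        f, g = f + [0] * (m - len(f)), g + [0] * (m - len(g))
        return trim([(a + scale * b) % p for a, b in zip(f, g)])

    def mul(f, g):
        d = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                d[i + j] = (d[i + j] + a * b) % p
        return trim(d)

    def divmod_poly(f, g):
        f, g = trim(f[:]), trim(g)
        q, inv = [0] * max(len(f) - len(g) + 1, 1), pow(g[-1], p - 2, p)
        while len(f) >= len(g) and any(f):
            k, c = len(f) - len(g), (f[-1] * inv) % p
            q[k] = c
            for i, b in enumerate(g):
                f[i + k] = (f[i + k] - c * b) % p
            f = trim(f)
        return trim(q), f

    def gcd_bezout(f, g):
        # invariants: r0 = m0*f + n0*g and r1 = m1*f + n1*g throughout the loop
        r0, r1, m0, m1, n0, n1 = trim(f), trim(g), [1], [0], [0], [1]
        while any(r1):
            q, r = divmod_poly(r0, r1)
            r0, r1 = r1, r
            m0, m1 = m1, add(m0, mul(q, m1), scale=-1)
            n0, n1 = n1, add(n0, mul(q, n1), scale=-1)
        c = pow(r0[-1], p - 2, p)                # normalise to a monic g.c.d.
        return mul([c], r0), mul([c], m0), mul([c], n0)

    f, g = [4, 0, 1], [1, 2, 1]                  # f = x^2 - 1 and g = x^2 + 2x + 1 in Z_5 [x]
    d, m, n = gcd_bezout(f, g)
    print(d)                                     # [1, 1] : d(x) = x + 1
    print(add(mul(m, f), mul(n, g)))             # [1, 1] : m(x) f(x) + n(x) g(x) = d(x)

Here d ( x) = x + 1 is the monic greatest common divisor of f ( x) and g ( x), and the last line verifies the identity d ( x) = m ( x) f ( x) + n ( x) g ( x).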

2.12 Unique Factorization Domain


Definition: An integral domain, R, with unity element 1 is a unique factorization domain
if

(a) any non-zero element in R is either a unit or can be written as the product of a finite
number of irreducible (prime) elements of R;
(b) the decomposition in part (a) is unique upto the order and associates of the irreducible
elements.
Thus if R is a unique factorization domain and if a ≠ 0 is a non-unit in R, then a can
be expressed as a product of a finite number of prime elements of R. Also if
a = p1 p2 p3 … pn = p1 ' p2 ' p3 ' … pm '
where the pi and p j ' are prime elements of R, then m = n and each pi , 1≤ i ≤ n is an
associate of some p j ' , 1≤ j ≤ m and conversely each pk ' is an associate of some pl .

2.13 The Unique Factorization Theorem for Polynomials over a Field
We shall now prove that every polynomial over a field can be factored uniquely
into irreducible factors. Before stating the main factorization theorem, we shall
give two preliminary theorems that are needed for its proof.
Theorem 1: Let f ( x), g( x) and h( x) be polynomials in F [ x ] for a field F. If
f ( x)| g ( x) h ( x) and the greatest common divisor of f ( x) and g ( x) is1, then f ( x)| h ( x).
Proof: If the greatest common divisor of f ( x) and g ( x) is 1, then by theorem of
article 2.11 there exist polynomials m ( x) and n ( x) ∈ F [ x ] such that
1 = m ( x) f ( x) + n ( x) g ( x).
Multiplying both members of this equation by h ( x), we get
h ( x) = m ( x) f ( x) h ( x) + n ( x) g ( x) h ( x) ...(1)
But f ( x)| g ( x) h ( x), so there exists a polynomial q ( x) ∈ F [ x ] such that
g ( x) h ( x) = q ( x) f ( x).
Substituting this value of g ( x) h ( x) in (1), we get
h ( x) = m ( x) f ( x) h ( x) + n ( x) q ( x) f ( x)
= f ( x) [m ( x) h ( x) + n ( x) q ( x)],
which shows that f ( x) is a divisor of h ( x).
Hence the theorem.
Theorem 2: If f ( x) is an irreducible polynomial in F [ x ] for a field F and
f ( x)| g ( x) h ( x) where g ( x), h ( x) ∈ F [ x ] then f ( x) divides at least one of g ( x) or h ( x).
Proof: Suppose that f ( x) does not divide g ( x). Since f ( x) is prime therefore f ( x) does not
divide g( x) implies that f ( x) and g( x) are relatively prime. Therefore the greatest
common divisor of f ( x) and g( x) is 1. Hence by theorem 1, we get that f ( x)| h( x).
Corollary: If f ( x) is an irreducible polynomial in F [ x ] for a field F, and if f ( x) divides
the product g1 ( x) g2 ( x) … g n ( x) of polynomials in F [ x ] , then f ( x) divides g i ( x) for some
i, 1≤ i ≤ n.

This result follows immediately by repeated application of theorem 2.


The Unique Factorization Theorem for polynomials over a field:
Let f ( x) be a non-zero polynomial in F [ x ] , where F is a field. Then either f ( x) is a unit in
F [ x ] or f ( x) = a p1 ( x) p2 ( x) … pm ( x), where each pi ( x), 1≤ i ≤ m, is an irreducible
monic polynomial in F [ x ] and a ∈ F is the leading coefficient of f ( x). Further the factors
p1 ( x), p2 ( x), …, pm ( x) are unique except for the order in which they appear.
Proof: We shall prove the theorem in two parts. First we shall prove that f ( x) can
be factored as required, and then we shall show that the factors are unique.
Let f ( x) be a non-zero element of F [ x ]. Then either f ( x) is a unit in F [ x ] i. e.,
deg f ( x) is 0 or deg f ( x) > 0. If deg f ( x) > 0, and the leading coefficient of f ( x) is a
we are to prove that f ( x) can be expressed as a product of a and a finite number of
irreducible monic polynomials in F [ x ]. The proof will be by induction on the
degree of f ( x).
Suppose f ( x) is of degree one. Let f ( x) = b + ax for a, b ∈ F and a ≠ 0. We can write
f ( x) = a (a^(−1) b + x). Therefore the theorem holds in the case where f ( x) has degree
one since a^(−1) b + x is irreducible and monic.
Now assume, as the induction hypothesis, that every polynomial of degree less
than n can be factored as stated in the theorem. Consider an arbitrary polynomial
f ( x) of degree n having a as its leading coefficient. We can write f ( x) = a f1 ( x),
where f1 ( x) = a^(−1) f ( x) and f1 ( x) is monic. If f ( x) is irreducible, then f1 ( x) is also
irreducible and the theorem holds. If f ( x) is reducible, then it can be factored as
f ( x) = g ( x) h ( x) where neither g ( x) nor h ( x) is a unit in F [ x ]. Now the degree of
f ( x) is equal to the sum of the degrees of g ( x) and h ( x). Also g ( x) and h ( x) are not
units in F [ x ], so each of them must be of degree one or larger. Hence both g ( x) and
h ( x) have degrees less than n. Therefore by our induction hypothesis we can write
g ( x) = cα1 ( x) α 2 ( x) … α s ( x), h ( x) = d β1 ( x) β 2 ( x) … β t ( x)
where each α i ( x) and each β j ( x) is monic and irreducible and where c and d are
leading coefficients of g ( x) and h ( x) respectively.
Thus f ( x) = cdα1 ( x) α 2 ( x) α 3 ( x) … α s ( x) β1 ( x) β 2 ( x) … β t ( x).
Since the leading coefficient of f ( x) is a, therefore we must have a = cd because
each α ( x) and each β ( x) is monic. Therefore
f ( x) = a α1 ( x) α 2 ( x) … α s ( x) β1 ( x) β 2 ( x) … β t ( x).
The factorization of f ( x) satisfies the requirements of the theorem. Hence the
theorem holds for all polynomials of degree n, and by the principle of induction, for
all polynomials of arbitrary degree.
In order to prove that the factors are unique, let us suppose that
f ( x) = a p1 ( x) p2 ( x) … pm ( x) = a q1 ( x) q2 ( x) … q n ( x) where each p( x) and each q ( x)
is irreducible and monic. Then we shall prove that n = m and each p( x) is equal to
some q ( x) and each q ( x) is equal to some p( x). From these two decompositions of
f ( x), we have

p1 ( x) p2 ( x) … pm ( x) = q1 ( x) q2 ( x) … q n ( x).
Now p1 ( x)| p1 ( x) p2 ( x) … pm ( x).
Therefore p1 ( x)| q1 ( x) q2 ( x) … q n ( x).
By corollary to theorem 2 of this article p1 ( x) must divide at least one of
q1 ( x), q2 ( x), … , q n ( x). Since F [ x ] is a commutative ring, therefore without loss of
generality we may suppose that p1 ( x) divides q1 ( x). But p1 ( x) and q1 ( x) are both
irreducible polynomials in F [ x ] and p1 ( x)| q1 ( x). Therefore p1 ( x) and q1 ( x) must
be associates and we have q1 ( x) = u p1 ( x) where u is a unit in F [ x ] i. e., u is a
non-zero element of F. Since q1 ( x) and p1 ( x) are monic therefore u must be equal to
1 and we have p1 ( x) = q1 ( x). Thus we have
p1 ( x) p2 ( x) … pm ( x) = p1 ( x) q2 ( x) … q n ( x).
Cancelling 0 ≠ p1 ( x) from both sides, we get
p2 ( x) p3 ( x) … pm ( x) = q2 ( x) q3 ( x) … q n ( x) ...(1)
Now we can repeat the above argument on the relation (1) with p2 ( x). If n > m, then
after m steps the left hand side becomes 1 while the right hand side reduces to a
product of a certain number of q( x) (the excess of n over m). But the q( x) are
irreducible polynomials so they are not units of F [ x ] i. e., they are not
polynomials of zero degree.
So their product will be a polynomial of degree ≥ 1. So it cannot be equal to 1.
Therefore n cannot be greater than m. Then n ≤ m. Similarly interchanging the roles
of p( x) and q ( x), we get m ≤ n. Hence m = n.
Also in the above process we have shown that every p( x) is equal to some q ( x) and
conversely every q ( x) is equal to some p( x). Hence the theorem has been completely
established.
Thus we can say that the ring of polynomials over a field is a unique factorization domain.
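As a small concrete instance (our own example, not one taken from the book): over the field Z_3 the polynomial x^2 + x + 1 has 1 as its only zero, and its factorisation into monic irreducible factors is (x + 2)(x + 2). The following Python check (names ours) confirms both facts.

    p = 3

    def mul(f, g):
        # multiply two coefficient lists mod p
        d = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                d[i + j] = (d[i + j] + a * b) % p
        return d

    print(mul([2, 1], [2, 1]))                                 # [1, 1, 1] : x^2 + x + 1
    print([c for c in range(p) if (c * c + c + 1) % p == 0])   # [1] : the only zero in Z_3

By the theorem this decomposition (leading coefficient 1 and the single monic irreducible factor x + 2 repeated twice) is unique apart from the order of the factors.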

2.14 Value of a Polynomial at x = c


Definition: Let f ( x) = a0 + a1 x + a2 x 2 + … + a n x n be a polynomial in F [ x ] for
an arbitrary field F and let c be an element of F. Then
f (c ) = a0 + a1 c + a2 c 2 + … + a n c n ,
where the indicated addition and multiplication are the operations in F, is called the value of
f ( x) at x = c .
Obviously f (c ) is an element of F.

Zeros of a polynomial: Definition: If f ( x) is a polynomial in F [ x ] for an arbitrary


field F, and f (c ) = 0 for an element c ∈ F, then c is called a zero of f ( x).

Polynomial equations and their roots:


Definition: Let f ( x) be a polynomial of degree n over a field F. We say that f ( x) = 0 is an
equation over the field F and n is the degree of the equation.
If c is a zero of the polynomial f ( x), then c is a root of the equation f ( x) = 0. A root of an
equation is also called a solution of the equation.

Remainder Theorem: If f ( x) ∈ F [ x ] and a ∈ F, for any field F, then f (a) is the


remainder when f ( x) is divided by ( x − a).
Proof: By division algorithm there exist polynomials q ( x) and r ( x) such that
f ( x) = q ( x) ( x − a) + r ( x), where either r ( x) = 0 or deg r ( x) is less than the degree
of x − a. But the degree of ( x − a) is 1. Therefore r ( x) has degree 0 or no degree.
Hence r ( x) is a constant polynomial i. e., r ( x) is simply an element, say, r in F. Thus
f ( x) = q ( x) ( x − a) + r. Putting x = a in this relation, we get
f (a) = q (a) (a − a) + r ⇒ f (a) = r.

Corollary: Factor Theorem: If f ( x) ∈ F [ x ] and a ∈ F, for a field F, then x − a


divides f ( x) if and only if f (a) = 0.
Proof: By remainder theorem, f (a) is the remainder when f ( x) is divided by ( x − a).
Therefore if f (a) = 0, then ( x − a) divides f ( x).
Conversely, if f ( x) is divisible by ( x − a) we get
f ( x) = ( x − a) q( x).
Putting x = a, we get f (a) = (a − a) q (a) = 0 q (a) = 0.

Illustration: Show that the polynomial x 2 + x + 4 is irreducible over F, the field of integers
modulo 11.
Solution: The field F is ({0, 1, ..., 10 }, +11 , ×11 ).
Let f ( x) = x 2 + x + 4.
If a ∈ F, then by a^n we shall mean the product a ×11 a ×11 … ×11 a with n factors.
Now f (0) = 0^2 +11 0 +11 4 = 4, f (1) = 1^2 +11 1 +11 4 = 6,
f (2) = 2^2 +11 2 +11 4 = 10, f (3) = 3^2 +11 3 +11 4 = 5, f (4) = 2,
f (5) = 1, f (6) = 6^2 +11 6 +11 4 = 2, f (7) = 5, f (8) = 10, f (9) = 6,
f (10) = 4.
Since f (a) ≠ 0 V a ∈ F, therefore by the factor theorem x − a does not divide
f ( x) V a ∈ F. Since deg f ( x) = 2, any proper divisor of f ( x) would be of degree 1 and so
would yield a zero of f ( x) in F. Therefore f ( x) has no proper divisors in F [ x ]. Hence f ( x) is
irreducible over F.
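The same verification can be done by brute force. The two lines below (an illustrative one-off computation; the variable name is ours) evaluate f at every element of Z_11:

    values = [(a * a + a + 4) % 11 for a in range(11)]
    print(values)        # [4, 6, 10, 5, 2, 1, 2, 5, 10, 6, 4] -- the value 0 never occurs
    print(0 in values)   # False

The absence of 0 among these values is exactly the statement f (a) ≠ 0 V a ∈ F used above.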

Comprehensive Exercise 1

1. Resolve x 4 + 4 into factors over the field ({0, 1, 2, 3, 4 }, + 5 , × 5 ).


2. Resolve x 2 + 1 into factors over the field Z 5 .
3. Find the solution of the equation 3 x = 2 in the field (Z 7 , + 7 , × 7 ).
4. Show that f ( x) = x 2 + 8 x − 2 is irreducible over the field of rational
numbers Q. Is it irreducible over reals ? Give reasons for your answer.
5. Let f ( x) = 2 x 4 + 3 x 3 + 2 and g ( x) = 3 x 5 + 4 x 3 + 2 x 2 + 3 be two
polynomials over the field Z 5 = ({0, 1, 2, 3, 4 }, + 5 , × 5 ).
Determine (i) (d / dx) f ( x), (ii) f ( x) . g ( x).
6. If f ( x) = 3 x 7 + 2 x + 3, g ( x) = 5 x 3 + 2 x + 6 be two polynomials over the
field Z 7 = ({0, 1, 2, 3, 4, 5, 6 }, + 7 , × 7 ), determine
(i) (d / dx) f ( x), (ii) f ( x) . g ( x), and (iii) f ( x) + g ( x).
7. Let f ( x) = x 6 + 3 x 5 + 4 x 2 − 3 x + 2 and g ( x) = x 2 + 2 x − 3 be in Z 7 [ x ].
Find
(i) Sum and product of f ( x) and g ( x) in Z 7 [ x ].
(ii) Two polynomials q ( x) and r ( x) in Z 7 [ x ] such that
f ( x) = q ( x) g ( x) + r ( x) with degree r ( x) < 2.

Answers 1
1. x 4 + 4 = ( x + 1) ( x + 2) ( x + 3) ( x + 4).
2. x 2 + 1 = ( x + 2) ( x + 3).
3. x = 3 because 3 × 7 3 = 2.
4. not irreducible over reals.
5. (i) 3 x 3 + 4 x 2 .
(ii) 1 + 4 x 2 + 2 x 3 + x 4 + 2 x 5 + x 6 + 3 x 7 + 4 x 8 + x 9 .
6. (i) 2
(ii) 4 + 4 x + 4 x 2 + x 3 + 3 x 4
+ 4 x 7 + 6 x 8 + x10 .
(iii) 3 x 7 + 5 x 3 + 4 x + 2.
7. (i) f ( x) + g ( x) = x 6 + 3 x 5 + 5 x 2 + 6 x + 6.

f ( x) . g ( x) = x 8 + 5 x 7 + 3 x 6 + 5 x 5 + 4 x 4 + 5 x 3 + 5 x 2 + 6 x + 1.
Note that in Z 7 , we have − 3 = 4, − 1 = 6 etc.
(ii) q ( x) = x 4 + x 3 + x 2 + x + 5, r ( x) = 4 x + 3.

2.15 Maximal Ideal


An ideal S ≠ R in a ring R is said to be a maximal ideal of R if whenever U is an ideal of R
such that S ⊆ U ⊆ R , then either R = U or S = U.
In other words an ideal S of a ring R is said to be maximal ideal if there exists no ideal
properly contained in R which itself properly contains S i. e., if it is impossible to
find an ideal which lies between S and the full ring R. For example, in the ring of
integers I, the ideal (6) is not maximal since it is properly contained in the ideal (3),
which in turn is properly contained in I. On the other hand, (5) is a maximal ideal
since the only ideal properly containing (5) is I itself.
Theorem: An ideal S of the ring of integers I is maximal if and only if S is generated by some
prime integer.
Proof: We know that every ideal of the ring of integers I is a principal ideal.
Suppose S is an ideal of I generated by p so that S = ( p). Since p and − p generate
the same ideal, therefore we can take p as positive.
Now we are to prove that
(i) S is maximal if p is prime.
(ii) p is prime if S is maximal.
First we shall prove (i). Let pbe a prime integer such that ( p) = S. Let T be an ideal of
I such that S ⊆ T ⊆ I. Since T is also a principal ideal of I, let T = (q) where q is
some positive integer.
Now S ⊆T ⇒ ( p) ⊆ T ⇒ p ∈T
⇒ p ∈ { xq : x ∈ I } ⇒ p = rq for some positive integer r.
Since p is prime, therefore q = 1 or q = p.
If q = 1, we have T = (q) = (1) = I
and if q = p, we have T = (q) = ( p) = S.
Thus either T = I or T = S.
Hence ( p) is a maximal ideal of I.
Now we shall prove (ii). Let ( p) = S be a maximal ideal. We are to show that p is
prime. Let us suppose that p is a composite integer.
Let p = mn, m ≠ 1, n ≠ 1.
It is obvious that ( p) ⊆ (m) ⊆ I.
But since ( p) is a maximal ideal, therefore we have
either (m) = ( p) or (m) = I.

If (m) = I, then m = 1, which is a contradiction.


If (m) = ( p), then m must be equal to lp for some integer l since each element of ( p)
is a multiple of p.
Therefore p = mn = lpn = pln. But p ≠ 0, therefore ln = 1. This gives n = 1 which is
again a contradiction.
Hence p must be a prime integer.
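A finite brute-force check of this theorem is easy to write down (our own illustration): an ideal (m) of I satisfies (n) ⊆ (m) ⊆ I precisely when m divides n, so (n) is maximal exactly when n has no divisor m with 1 < m < n.

    def is_maximal(n):
        # (n) is maximal in I iff no ideal (m), with m | n and 1 < m < n, lies strictly between (n) and I
        return n > 1 and not any(n % m == 0 for m in range(2, n))

    print([n for n in range(2, 20) if is_maximal(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]

The ideals (n) found to be maximal are exactly those generated by primes, as the theorem asserts.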

2.16 More Results on Ideals


Theorem 1: Let S1 , S2 be ideals of a ring R and let
S1 + S2 = { s1 + s2 : s1 ∈ S1 , s2 ∈ S2 }.
Then S1 + S2 is an ideal of R generated by S1 ∪ S2 .
Proof: Let a1 + a2 ∈ S1 + S2 , b1 + b2 ∈ S1 + S2 . Then
a1 , b1 ∈ S1 and a2 , b2 ∈ S2 .
We have (a1 + a2 ) − (b1 + b2 ) = (a1 − b1 ) + (a2 − b2 ).
Since S1 is an ideal, therefore a1 , b1 ∈ S1 ⇒ a1 − b1 ∈ S1 .
Similarly a2 − b2 ∈ S2 .
∴ (a1 − b1 ) + (a2 − b2 ) ∈ S1 + S2 .
∴ (a1 + a2 ) − (b1 + b2 ) ∈ S1 + S2 .
∴ S1 + S2 is a subgroup of the additive group of R.
Let r be any element of R. Then
r (a1 + a2 ) = ra1 + ra2 ∈ S1 + S2 ; since r ∈ R, a1 ∈ S1 ⇒ ra1 ∈ S1
and similarly ra2 ∈ S2 .
Similarly (a1 + a2 )r = a1 r + a2 r ∈ S1 + S2 since a1 r ∈ S1 , a2 r ∈ S2 .
Hence S1 + S2 is an ideal of R .
Since 0 ∈ S1 and also 0 ∈ S2 , therefore obviously
S1 ⊆ S1 + S2 and S2 ⊆ S1 + S2 .
∴ S1 ∪ S2 ⊆ S1 + S2 .
Thus S1 + S2 is an ideal of R containing S1 ∪ S2 .
Also if S is any ideal of R containing S1 ∪ S2 , then S must contain S1 + S2 . Thus
S1 + S2 is the smallest ideal of R containing S1 ∪ S2 i. e., S1 + S2 = (S1 ∪ S2 ).
Theorem 2: If an ideal U of a ring R contains a unit of R then U = R.
Proof: Let R be a ring with unity element 1. Let u be a unit of R. Then u is an
inversible element of R i. e., u −1 exists. Let u ∈ U.
Since U is an ideal, therefore
u ∈ U, u −1 ∈ R ⇒ uu −1 ∈ U ⇒ 1 ∈ U.

Now let x be any element of R. Then


x ∈ R, 1 ∈ U ⇒ x 1 ∈ U ⇒ x ∈ U.
∴ R ⊆ U.
Also U ⊆ R as U is an ideal of R. Hence R = U.
Theorem 3: Let R be a commutative ring with unity and a, b be two non-zero elements of R.
Then (a) = (b) iff a | b and b | a.
Proof: Here (a) = the principal ideal of R generated by a
= { ax : x ∈ R }.
Similarly (b) = the principal ideal of R generated by b.
Let (a) = (b).
Then (a) ⊆ (b) ⇒ a ∈ (b)
⇒ a = rb for some r ∈ R
⇒ b | a i. e., b is a divisor of a.
Similarly (a) = (b)
⇒ (b) ⊆ (a) ⇒ b ∈ (a)
⇒ b = sa for some s ∈ R ⇒ a | b.
Thus (a) = (b) ⇒ a | b and b | a.
Conversely let a | b and b | a.
Now a | b ⇒ b = pa for some p ∈ R. Let y be any element of (b).
Then y = ub for some u ∈ R
= u ( pa) = (up) a ∈ (a) since up ∈ R.
Thus y ∈ (b) ⇒ y ∈ (a).
∴ (b) ⊆ (a).
Thus a| b ⇒ (b) ⊆ (a). Similarly b | a ⇒ (a) ⊆ (b).
Consequently a | b, b | a ⇒ (a) = (b).
Corollary: Let R be an integral domain with unity and a, b be two non-zero elements of R.
Then (a) = (b) iff a and b are associates.
Theorem 4: An ideal S of a commutative ring R with unity is maximal if and only if the
residue class ring R / S is a field.
Proof: Since R is a commutative ring with unity, therefore R / S is also a
commutative ring with unity. The zero element of the ring R / S is S and the unity
element is the coset S + 1 where 1 is the unity element of R.
Let the ideal S be maximal. Then to prove that R / S is a field.
Let S + b be any non-zero element of R / S. Then S + b ≠ S i. e., b ∉ S. To prove
that S + b is inversible.
If (b) is the principal ideal of R generated by b, then S + (b) is also an ideal of R.
Since b ∉ S, therefore the ideal S is properly contained in S + (b). But S is a
maximal ideal of R. Hence we must have S + (b) = R.

Since 1 ∈ R, therefore we must obtain 1 on adding an element of S to an element of


(b). Therefore there exists an element a ∈ S and α ∈ R such that
a + αb = 1 [Note that (b) = {α b : α ∈ R }]
∴ 1 − αb = a ∈ S.
Consequently S + 1 = S + αb = (S + α) (S + b).
∴ S + α = (S + b) −1 . Thus S + b is inversible.
∴ R / S is a field.
Conversely, let S be an ideal of R such that R / S is a field. We shall prove that S is a
maximal ideal of R.
Let S ' be an ideal of R properly containing S i. e., S ⊆ S ' and S ≠ S '. Then S will
be maximal if S ' = R. The elements of R contained in S already belong to S ' since
S ⊆ S '. Therefore R will be a subset of S ' if every element α of R not contained in
S also belongs to S '. If α ∈ R is such that α ∉ S, then S + α ≠ S i. e., S + α is a
non-zero element of R / S. Also S ' properly contains S. Therefore there exists an
element β of S ' not contained in S so that S + β is also a non-zero element of
R / S. Now the non-zero elements of R / S form a group with respect to
multiplication because R / S is a field. Therefore there exists a non-zero element
S + y of R / S such that
(S + y) (S + β) = S + α.
[We may take S + y = (S + α) (S + β) −1 ].
Now (S + y) (S + β) = S + α
⇒ S + y β = S + α ⇒ y β − α ∈ S ⇒ y β − α ∈ S'. [∵ S ⊆ S ' ]
Now S ' is an ideal. Therefore y ∈ R, β ∈ S ' ⇒ y β ∈ S ' .
Again y β ∈ S ' , y β − α ∈ S ' ⇒ y β − ( y β − α) ∈ S ' i. e., α ∈ S ' .
Thus R ⊆ S '.
Also S ' ⊆ R as S ' is an ideal of R.
∴ S ' = R.
Hence the theorem.
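For instance, taking R = I and S = (n), the theorem says that I / (n) is a field precisely when (n) is maximal, i. e., when n is prime. The following Python fragment (an illustrative sketch added editorially; the function names are ours) confirms this for small n by testing whether every non-zero residue has a multiplicative inverse mod n.

```python
# Sketch: I/(n) is a field iff every non-zero residue class has an inverse mod n.

def residue_ring_is_field(n):
    return all(any((a * b) % n == 1 for b in range(1, n)) for a in range(1, n))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(2, 25):
    assert residue_ring_is_field(n) == is_prime(n)
print("I/(n) is a field exactly when n is prime, checked for n = 2, ..., 24")
```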

2.17 Prime Ideals


Prime Ideal: Definition: Let R be a ring and S an ideal in R. Then S is said to be a prime
ideal of R if ab ∈ S, a, b ∈ R implies that either a or b is in S.
For example, in the ring of integers I, the principal ideal (7) is prime. Obviously if
ab is in (7), then a or b must be a multiple of 7. On the other hand, (6) is not a prime
ideal in I since, in particular,12 = 3 × 4 is in (6), yet neither 3 nor 4 is an element of
(6).
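The same distinction can be tested mechanically. The sketch below (editorial, not from the text; ideal_is_prime and its bound parameter are hypothetical helpers) checks the defining condition ab ∈ (n) ⇒ a ∈ (n) or b ∈ (n) over a range of products and recovers exactly the prime values of n.

```python
# Sketch: test whether (n) is a prime ideal of I by checking the defining
# condition on all products a*b below a chosen bound.

def ideal_is_prime(n, bound=50):
    return all(a % n == 0 or b % n == 0
               for a in range(1, bound) for b in range(1, bound)
               if (a * b) % n == 0)

print([n for n in range(2, 20) if ideal_is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```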

Theorem 1: Let R be a commutative ring and S an ideal of R. Then the ring of residue
classes R / S is an integral domain if and only if S is a prime ideal.
Proof: Let R be a commutative ring and S an ideal of R. Then
R / S = { S + a : a ∈ R }.
Let S + a, S + b be any two elements of R / S. Then a, b ∈ R.
We have (S + a) (S + b) = S + ab
= S + ba [∵ R is a commutative ring ]
= (S + b) (S + a).
∴ R / S is a commutative ring.
Now let S be a prime ideal of R. Then we are to prove that R / S is an integral
domain. For this we are to show that R / S is without zero divisors. The zero
element of the ring R / S is the residue class S itself. Let S + a, S + b be any two
elements of R / S.
Then (S + a) (S + b) = S ( the zero element of R / S)
⇒ S + ab = S ⇒ ab ∈ S
⇒ either a or b is in S, since S is a prime ideal
⇒ either S + a = S or S + b = S [Note that a ∈ S ⇔ S + a = S]
⇒ either S + a or S + b is the zero element of R / S.
∴ R / S is without zero divisors.
Since R / S is a commutative ring without zero divisors, therefore R / S is an
integral domain.
Conversely, let R / S be an integral domain. Then we are to prove that S is a prime
ideal of R. Let a, b be any two elements in R such that ab ∈ S. We have
ab ∈ S ⇒ S + ab = S ⇒ (S + a) (S + b) = S.
Since R / S is an integral domain, therefore it is without zero divisors. Therefore
(S + a) (S + b) = S (the zero element of R / S)
⇒ either S + a or S + b is zero ⇒ either S + a = S or S + b = S
⇒ either a ∈ S or b ∈ S ⇒ S is a prime ideal.
This completes the proof of the theorem.
Note: If R is a ring with unity, then R / S is also a ring with unity. The residue class
S + 1 is the unity element of R / S. Therefore if we define an integral domain as a
commutative ring with unity and without zero divisors, even then the above
theorem will be true. But in that case R must be a commutative ring with unity.
Theorem 2: Let R be a commutative ring with unity. Then every maximal ideal of R is a
prime ideal.
Proof: R is a commutative ring with unit element. Let S be a maximal ideal of
R. Then R / S is a field.
Now every field is an integral domain. Therefore R / S is also an integral domain.
Hence by theorem 1, S is a prime ideal of R. This completes the proof of the
theorem.

But it should be noted that the converse of the above theorem is not true i. e., every
prime ideal is not necessarily a maximal ideal.

Example 3: Let R be the field of real numbers and S the set of all those polynomials
f ( x) ∈ R [ x ] such that f (0) = 0 = f (1). Prove that S is an ideal of R [ x ]. Is the
residue class ring R [ x ] / S an integral domain? Give reasons for your answer.
Solution: Let f ( x), g( x) be any elements of S. Then
f (0) = 0 = f (1) and g (0) = 0 = g (1).
Let h ( x) = f ( x) − g ( x).
Then h (0) = f (0) − g (0) = 0 − 0 = 0 and h (1) = f (1) − g (1) = 0 − 0 = 0.
Thus h (0) = 0 = h (1). Therefore h( x) ∈ S.
Thus f ( x), g ( x) ∈ S ⇒ h ( x) = f ( x) − g ( x) ∈ S.
Further let f ( x) be any element of S and r ( x) be any element of R [ x ].
Then f (0) = 0 = f (1), by definition of S.
Let t ( x) = r ( x) f ( x) = f ( x) r ( x). [∵ R [ x ] is a commutative ring ]
Then t (0) = r (0) f (0) = r (0) . 0 = 0
and t (1) = r (1) f (1) = r (1) . 0 = 0.
∴ t ( x) ∈ S.
Thus r ( x) ∈ R [ x ], f ( x) ∈ S ⇒ r ( x) f ( x) ∈ S.
Hence S is an ideal of R [ x ].
Now we claim that S is not a prime ideal of R [ x ]. Let f ( x) = x ( x − 1). Then
f (0) = 0 (0 − 1) = 0, and f (1) = 1(1 − 1) = 0.
Thus f ( x) = x ( x − 1) is an element of S.
Now let p( x) = x, q ( x) = x − 1.
We have p(1) = 1 ≠ 0. Therefore p( x) ∉ S. Also q (0) = 0 − 1 = − 1 ≠ 0.
Therefore q ( x) ∉ S. Thus x ( x − 1) ∈ S while neither x ∈ S nor x − 1∈ S. Hence S is
not a prime ideal of R [ x ].
Since S is not a prime ideal of R [ x ], therefore the residue class ring R [ x ] / S is
not an integral domain.

Example 4: Let R be the ring of all real valued continuous functions defined on the closed
interval [ 0, 1 ]. Let M = { f ( x) ∈ R : f (1/3) = 0 }.
Show that M is a maximal ideal of R.
Solution: First of all we observe that M is non-empty because the real valued
function e ( x) on [0, 1] defined by

e ( x) = 0 ∀ x ∈ [0, 1]
belongs to M.
Now let f ( x), g( x) be any two elements of M. Then
f (1/3) = 0, g (1/3) = 0, by definition of M.
Let h( x) = f ( x) − g( x).
Then h (1/3) = f (1/3) − g (1/3) = 0 − 0 = 0.
Therefore h( x) ∈ M.
Thus f ( x), g ( x) ∈ M ⇒ h ( x) = f ( x) − g ( x) ∈ M.
Further let f ( x) be any element of M and r( x) be any element of R. Then
f (1/3) = 0, by definition of M.
Let t ( x) = r ( x) f ( x) = f ( x) r ( x). [∵ R is a commutative ring.]

Then t (1/3) = r (1/3) f (1/3) = r (1/3) . 0 = 0. Therefore t ( x) ∈ M.
Thus r ( x) ∈ R, f ( x) ∈ M ⇒ r ( x) f ( x) ∈ M.
Hence M is an ideal of R.
Clearly M ≠ R because i( x) ∈ R given by i( x) = 1 ∀ x ∈ [0, 1] does not belong to M.
The ring R is with unity and the element i( x) is its unity element.
Let N be an ideal of R properly containing M i. e., M ⊆ N and M ≠ N .Then M
will be a maximal ideal of R if N = R , which will be so if the unity i( x) of R
belongs to N. Since M is a proper subset of N, therefore there exists λ ( x) ∈ N such
that λ ( x) ∉ M. This means λ (1/3) ≠ 0. Put λ (1/3) = c where c ≠ 0.
Let us define β ( x) ∈ R by β ( x) = c ∀ x ∈ [0, 1]. Now consider µ ( x) ∈ R given by
µ ( x) = λ ( x) − β ( x).
We have µ (1/3) = λ (1/3) − β (1/3) = c − c = 0.
Therefore µ( x) ∈ M and so µ ( x) also belongs to N because N is a super-set of M.
Now N is an ideal of R and λ ( x), µ ( x) are in N. Therefore λ ( x) − µ ( x) = β ( x) is also
an element of N.
Now define γ ( x) ∈ R by γ ( x) = 1 / c ∀ x ∈ [0, 1]. Since N is an ideal of R, therefore
γ ( x) ∈ R and β ( x) ∈ N ⇒ γ ( x) β( x) ∈ N . We shall show that γ ( x) β ( x) = i( x).
For every x ∈ [0, 1], we have γ ( x) β ( x) = (1 / c ) c = 1.
Therefore γ ( x) β ( x) = i( x), by definition of i( x).
Thus the unity element i( x) of R belongs to N and consequently N = R.
Hence M is a maximal ideal of R.

Example 5: If R is a finite commutative ring (i. e., has only a finite number of elements) with
unit element, prove that every prime ideal of R is a maximal ideal of R.
Solution: Let R be a finite commutative ring with unit element. Let S be a prime
ideal of R. Then to prove that S is a maximal ideal of R.
Since S is a prime ideal of R, therefore the residue class ring R / S is an integral
domain. Now
R / S = { S + a : a ∈ R }.
Since R is a finite ring, therefore R / S is a finite integral domain. But every finite
integral domain is a field. Therefore R / S is a field. Since R is a commutative ring
with unity and R / S is a field, therefore S is a maximal ideal of R.
Example 6: Give an example of a ring in which some prime ideal is not a maximal ideal.
Solution: Let I [ x ] be the ring of polynomials over the ring of integers I. Let S be the
principal ideal of I [ x ] generated by x i. e., let S = ( x).We shall show that ( x) is prime
but not maximal.
We have S = ( x) = { x f ( x) : f ( x) ∈ I [ x ] } .
First we shall prove that S is prime.
Let a ( x), b ( x) ∈ I [ x ] be such that a ( x) b ( x) ∈ S. Then there exists a polynomial
c ( x) ∈ I [ x ] such that
x c ( x) = a ( x) b ( x). ...(1)
Let a( x) = a0 + a1 x + a2 x 2 + … , b( x) = b0 + b1 x + b2 x 2 + …,
c ( x) = c 0 + c1 x + c 2 x 2 + … .
Then (1) becomes
x (c 0 + c1 x + …) = (a0 + a1 x + …) (b0 + b1 x + … ).
Equating the constant term on both sides, we get
a0 b0 = 0 ⇒ a0 = 0 or b0 = 0. [ ∵ I is without zero divisors]
Now a0 = 0 ⇒ a( x) = a1 x + a2 x 2 + …
⇒ a ( x) = x (a1 + a2 x + ...) ⇒ a ( x) ∈ ( x).
Similarly b0 = 0 ⇒ b ( x) = b1 x + b2 x 2 + …
⇒ b ( x) = x (b1 + b2 x + ...) ⇒ b ( x) ∈ ( x).
Thus a ( x) b ( x) ∈ ( x) ⇒ either a ( x) ∈ ( x) or b ( x) ∈ ( x).
Hence ( x) is a prime ideal.
Now we shall show that ( x) is not a maximal ideal of I [ x ]. For this we must show an
ideal N of I [ x ] such that ( x) is properly contained in N, while N itself is properly
contained in I [ x ]. The ideal N = ( x, 2) serves this purpose.
Obviously ( x) ⊆ ( x, 2). In order to show that ( x) is properly contained in ( x, 2) we
must show an element of ( x, 2) which is not in ( x). Clearly 2 ∈ ( x, 2). We shall show
that 2 ∉( x). Let 2 ∈ ( x). Then we can write,

2 = x f ( x) for some f ( x) ∈ I [ x ].
Let f ( x) = a0 + a1 x + …
Then 2 = x f ( x) ⇒ 2 = x (a0 + a1 x + ...)
⇒ 2 = a0 x + a1 x 2 + … ⇒ 2 = 0 + a0 x + a1 x 2 + …
⇒ 2=0 [by equality of two polynomials]
But 2 ≠ 0 in the ring of integers. Hence 2 ∉( x). Thus ( x) is properly contained in
( x, 2).
Now obviously ( x, 2) ⊆ I [ x ]. In order to show that ( x, 2) is properly contained in
I [ x ] we must show an element of I [ x ] which is not in ( x, 2). Clearly1∈ I [ x ]. We
shall show that 1 ∉( x, 2). Let 1 ∈ ( x, 2). Then we have a relation of the form
1 = x f ( x) + 2 g( x), where f ( x), g( x) ∈ I [ x ].
Let f ( x) = a0 + a1 x + … , g( x) = b0 + b1 x + …
Then 1 = x (a0 + a1 x + …) + 2 (b0 + b1 x + ...)
⇒ 1 = 2 b0 [Equating constant term on both sides]
But there is no integer b0 such that 1 = 2 b0 .
Hence 1 ∉( x, 2). Thus ( x, 2) is properly contained in I [ x ].
Therefore ( x) is not a maximal ideal of I [ x ].

2.18 Euclidean Rings or Euclidean Domains


Definition: Let R be an integral domain i.e., let R be a commutative ring without zero
divisors. Then R is said to be a Euclidean ring if to every non-zero element a ∈ R we can assign
a non-negative integer d(a) such that:
(i) For all a, b ∈ R, both non-zero, d (ab) ≥ d (a).
(ii) For any a, b ∈ R and b ≠ 0, there exist q, r ∈ R such that a = qb + r where either
r = 0 or d (r) < d (b). (Lucknow 2007)
The second part of the above definition is known as division algorithm. Also we
do not assign a value to d(0). Thus d (a) will remain undefined when a = 0. Also d (a)
will be called d-value of a and d (a) must be some non-negative integer for every
non-zero element a ∈ R.
Illustration 1: The ring of integers is a Euclidean ring.
Solution: Let ( I, +, .) be the ring of integers where
I = {…, − 3, − 2, − 1, 0, 1, 2, 3, …}.
Let the d function on the non-zero elements of I be defined as
d (a) = | a | ∀ 0 ≠ a ∈ I.
Now if 0 ≠ a ∈ I, then | a | is a non-negative integer. Thus we have assigned a
non-negative integer to every non-zero element a ∈ I.

[d (− 5) = | − 5 | = 5, d (− 1) = | − 1| = 1, d (4) = | 4 | = 4 etc.]
Further if a, b ∈ I and are both non-zero, then
| ab | = | a || b | ⇒ | ab | ≥ | a | [ ∵ | b | ≥ 1 if 0 ≠ b ∈ I ]
⇒ d (ab) ≥ d (a).
Finally we know that if a ∈ I and 0 ≠ b ∈ I, then there exist two integers q and r
such that
a = qb + r where 0 ≤ r < | b |
i. e., where either r = 0 or 1≤ r < | b |
i. e., where either r = 0 or d (r) < d (b).
It should be noted that d (b) = | b | and if r is a positive integer then r = | r | = d (r).
Therefore the ring of integers is a Euclidean ring.
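A small Python sketch (editorial illustration; euclid_divide is our own name) exhibits this division algorithm concretely: for each pair it produces q and r with a = qb + r and either r = 0 or |r| < |b|.

```python
# Sketch: the division algorithm in I with d(a) = |a|.
# Python's divmod gives a remainder r with 0 <= r < b for b > 0 and
# b < r <= 0 for b < 0, so in every case r = 0 or |r| < |b|.

def euclid_divide(a, b):
    assert b != 0
    return divmod(a, b)

for a, b in [(17, 5), (-17, 5), (17, -5), (40, 8)]:
    q, r = euclid_divide(a, b)
    assert a == q * b + r and (r == 0 or abs(r) < abs(b))
    print(f"{a} = ({q})*({b}) + {r}")
```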
Illustration 2: The ring of polynomials over a field is a Euclidean ring.
Solution: Let F [ x ] be the ring of polynomials over a field F. Let the d function on
the non-zero polynomials in F [ x ] be defined as
d [ f ( x)] = deg f ( x), ∀ 0 ≠ f ( x) ∈ F [ x ].
Now if 0 ≠ f ( x) ∈ F [ x ], then deg f ( x) is a non-negative integer.
Thus we have assigned a non-negative integer to every non-zero element f ( x) in
F [ x ].
Further if f ( x), g( x) ∈ F [ x ] and are both non-zero polynomials, then
deg [ f ( x) g ( x)] = deg f ( x) + deg g ( x)
⇒ deg [ f ( x) g ( x)] ≥ deg f ( x) [∵ deg g ( x) ≥ 0 ]
⇒ d [ f ( x) g ( x)] ≥ d [ f ( x)].
Finally we know that if f ( x) ∈ F [ x ] and 0 ≠ g( x) ∈ F [ x ], then there exist two
polynomials q ( x) and r ( x) in F [ x ] such that
f ( x) = q ( x) g ( x) + r ( x)
where either r ( x) = 0 or deg r ( x) < deg g ( x)
i. e., where either r ( x) = 0 or d [r ( x)] < d [ g ( x)].
Hence the ring of polynomials over a field is a Euclidean ring .
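The division algorithm in F [ x ] is effective: the sketch below (editorial, not from the text) carries it out over the field Z 7 , echoing the computation quoted at the start of this chapter; polynomials are coefficient lists with the constant term first, and the helper names are ours.

```python
# Sketch: division algorithm in F[x], d(f) = deg f, for F = Z_7.

P = 7  # a prime, so Z_7 is a field

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def poly_divmod(f, g):
    f, g = trim([c % P for c in f]), trim([c % P for c in g])
    assert g != [0], "division by the zero polynomial"
    inv = pow(g[-1], P - 2, P)            # inverse of the leading coefficient of g
    q, r = [0] * max(len(f) - len(g) + 1, 1), f[:]
    while r != [0] and len(r) >= len(g):
        shift = len(r) - len(g)
        coef = (r[-1] * inv) % P
        q[shift] = coef
        r = trim([(r[i] - coef * g[i - shift]) % P if i >= shift else r[i]
                  for i in range(len(r))])
    return trim(q), r

# divide x^5 + 3x^3 + 1 by x^2 + 4 over Z_7
q, r = poly_divmod([1, 0, 0, 3, 0, 1], [4, 0, 1])
print(q, r)   # [0, 6, 0, 1] [1, 4], i.e. q = x^3 + 6x and r = 4x + 3 - 2 ... r = 4x + 1, deg r < deg g
```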
Illustration 3: Every field is a Euclidean ring.
Solution: Let F be any field. Let the d function on the non-zero elements of F be
defined as
d (a) = 0 ∀ 0 ≠ a ∈ F.
Thus we have assigned the integer zero to every non-zero element in F.
If a and b are non-zero elements in F then ab is also a non-zero element in F. We
have therefore
d (ab) = 0 = d (a).
Thus we have d (ab) ≥ d (a).
Finally if a ∈ F and 0 ≠ b ∈ F, then we can write

a = (ab −1 ) b + 0
i. e., a = qb + r, where q = ab −1 and r = 0.
Hence every field is a Euclidean ring.
Illustration 4: The ring of Gaussian integers is a Euclidean ring.
Solution: Let (G, +, .) be the ring of Gaussian integers where
G = { x + iy : x, y ∈ I }.
Let the d function on the non-zero elements of G be defined as
d ( x + iy) = x 2 + y 2 ∀ 0 + i0 ≠ x + iy ∈ G.
Now if x + iy is a non-zero element of G, then ( x 2 + y 2 ) is a non-negative integer.
Thus we have assigned a non-negative integer to every non-zero element of G.
If x + iy and m + in are two non-zero elements of G, then
d [ ( x + iy ) (m + in ) ] = d [ ( xm − ny ) + i (my + xn ) ]
= ( xm − ny)2 + (my + xn)2 = x 2 m2 + n2 y 2 + m2 y 2 + x 2 n2
= ( x 2 + y 2 ) (m2 + n2 )
≥ x2 + y2 . [∵ m2 + n2 ≥ 1]
Thus d [ ( x + iy) (m + in)] ≥ d ( x + iy).
Now to show the existence of division algorithm in G.
Let α ∈ G and let β be a non-zero element of G. Let α = x + iy and β = m + in.
Define a complex number λ by the equation
λ = α / β = ( x + iy) / (m + in) = ( x + iy) (m − in) / (m2 + n2 ) = p + iq ,
where p, q are rational numbers.
Here λ is not necessarily a Gaussian integer.
Also division by β is possible since β ≠ 0.
Let p′ and q ′ be the nearest integers to p and q respectively.
Then obviously | p − p′ | ≤ 1/2 , | q − q ′ | ≤ 1/2 .
Let λ ′ = p′ + iq ′. Then λ ′ is a Gaussian integer.
Now λ = α / β ⇒ α = λ β ⇒ α = λ ′ β + λ β − λ ′ β.
Thus α = λ ′ β + (λ − λ ′) β. ...(1)
Since α, β, λ ′ are Gaussian integers, therefore from (1) it follows that (λ − λ ′) β is
also a Gaussian integer.
Now if p and q are integers then p = p′, q = q ′.
So λ − λ ′ = ( p − p′) + i(q − q ′) = 0 + i0. Thus (λ − λ ′) β = 0 + i0.
If p and q are not both integers, then (λ − λ ′ ) β is a non-zero Gaussian integer and
we have

d [(λ − λ ′ ) β] = d [ {( p − p′ ) + i(q − q ′ )} (m + in)]
= [( p − p′ )2 + (q − q ′ )2 ] (m2 + n2 ) = [( p − p′ )2 + (q − q ′ )2 ] d (β)
≤ [1/4 + 1/4] d (β) [ ∵ ( p − p′ )2 ≤ 1/4 , (q − q ′ )2 ≤ 1/4 ]
= (1/2) d (β) < d (β).
Thus α = λ ′ β + (λ − λ ′ ) β, where λ ′ and (λ − λ ′ ) β are Gaussian
integers and either (λ − λ ′ ) β = 0
or d [ (λ − λ ′ ) β ] < d (β).
Hence the ring of Gaussian integers is a Euclidean ring.
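The rounding construction used in the proof is easy to carry out numerically. The sketch below (editorial; gauss_divmod and d are our own names, and Gaussian integers are represented by Python complex numbers with integer parts) divides one Gaussian integer by another and checks that the remainder satisfies d(r) < d(β).

```python
# Sketch: division algorithm in the Gaussian integers, following the proof above:
# round alpha/beta to the nearest Gaussian integer lambda', take r = alpha - lambda'*beta.

def gauss_divmod(alpha, beta):
    lam = alpha / beta
    lam_prime = complex(round(lam.real), round(lam.imag))
    return lam_prime, alpha - lam_prime * beta

def d(z):                                   # d(x + iy) = x^2 + y^2
    return int(round(z.real)) ** 2 + int(round(z.imag)) ** 2

alpha, beta = complex(27, 23), complex(8, 1)
q, r = gauss_divmod(alpha, beta)
assert alpha == q * beta + r
assert r == 0 or d(r) < d(beta)
print(q, r, d(r), d(beta))    # (4+2j) (-3+3j) 18 65
```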

2.19 Properties of Euclidean Rings


Theorem 1: Every Euclidean ring is a principal ideal ring.
Proof: Let R be a Euclidean ring. Let S be an arbitrary ideal of R. If S is the null
ideal , then S = (0) i. e.,the ideal of R generated by 0. Therefore S is a principal ideal.
So let us suppose that S is not a null ideal. Then there exist elements in S not equal
to zero. Let b be a non-zero element in S such that d (b) is minimum i. e., there exists
no element c in S such that d (c ) < d (b). We shall show that S = (b) i. e., S is
nothing but the ideal generated by b.
Let a be any element of S. Then by definition of Euclidean ring there exist elements
q and r in R such that
a = qb + r where either r = 0 or d (r) < d (b).
Now q ∈ R, b ∈ S ⇒ qb ∈ S because S is an ideal.
Further a ∈ S, qb ∈ S ⇒ a − qb = r ∈ S.
Thus r ∈ S and we have either r = 0 or d (r) < d (b).
If r ≠ 0, then d (r) < d (b) which contradicts our assumption that no element in S
has d-value smaller than d(b). Therefore we must have r = 0.
Then a = qb.
Thus every element a in S is a multiple of the generating element b. Thus
a ∈ S ⇒ a ∈ (b). Therefore S ⊆ (b).
Again if xb is any element of (b), then x ∈ R.
Now x ∈ R, b ∈ S ⇒ xb ∈ S. Therefore (b) ⊆ S.
Hence S = (b).
Thus every ideal S in R is a principal ideal. Therefore R is a principal ideal ring.
Theorem 2: Every Euclidean ring possesses unity element.
Proof: Let R be a Euclidean ring. Obviously R is an ideal of R . Therefore there exists
an element u0 ∈ R such that R = (u0 ) i. e., there exists an element u0 ∈ R such that

every element in R is a multiple of u0 . Since, in particular, u0 ∈ R therefore there


exists an element c ∈ R such that u0 = u0 c . We shall show that c is the required
unity element. Let now a be any element of R . Since a ∈ R, therefore there exists
some x ∈ R such that a = u0 x.
Now ac = (u0 x) c [ ∵ a = u0 x ]
= (u0 c ) x [∵ R is a commutative ring]
= u0 x [ ∵ u0 = u0 c ]
= a. [ ∵ a = u0 x ]
Thus we have ac = a = ca ∀ a ∈ R .
Hence c is the unity element.
Theorem3: Let R be a Euclidean ring and a and b be any two elements in R, not both of
which are zero. Then a and b have a greatest common divisor d which can be expressed in the
form
d = λ a + µ b for some λ , µ ∈ R .
Proof: Consider the set
S = { sa + tb : s, t ∈ R }. ...(1)
We claim that S is an ideal of R. The proof is as follows :
Let x = s1 a + t1 b, and y = s2 a + t2 b be any two elements of S.
Then s1 , t1 , s2 , t2 ∈ R . We have
x − y = (s1 a + t1 b) − (s2 a + t2 b) = (s1 − s2 ) a + (t1 − t2 ) b ∈ S
since s1 − s2 and t1 − t2 are both elements of R.
Thus S is a subgroup of R with respect to addition.
Also if u be any element of R, then
xu = ux = u (s1 a + t1 b) = (us1 ) a + (ut1 ) b ∈ S since us1 , ut1 ∈ R .
Therefore S is an ideal of R. Now every ideal in R is a principal ideal. Therefore
there exists an element d in S such that every element in S is a multiple of d.
Since d ∈ S, therefore from (1), we see that there exist elements λ , µ ∈ R such that
d = λa + µb.
Now R is a ring with unity element 1.
∴ Putting s = 1, t = 0 in (1), we see that a ∈ S. Also putting s = 0, t = 1in (1), we
see that b ∈ S.
Now a, b are elements of S. Therefore they are both multiples of d. Hence d | a
and d | b.
Now suppose c | a and c | b.
Then c | λ a and c |µ b. Therefore c is also a divisor of λ a + µ b i. e., c is a divisor of d.
Thus d is a greatest common divisor of a and b.
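In the Euclidean ring I this theorem is effective: the extended Euclidean algorithm produces the generator d of the ideal { sa + tb } together with the coefficients λ and µ. A short sketch (editorial; extended_gcd is our own name):

```python
# Sketch: extended Euclidean algorithm in I, returning (d, lam, mu)
# with d = gcd(a, b) = lam*a + mu*b.

def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    d, lam, mu = extended_gcd(b, a % b)
    return d, mu, lam - (a // b) * mu

a, b = 240, 46
d, lam, mu = extended_gcd(a, b)
assert d == lam * a + mu * b
print(d, lam, mu)    # 2 -9 47, and indeed 2 = (-9)*240 + 47*46
```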
Theorem 4: Let a, b and c be any elements of a Euclidean ring R. Let (a, b) = 1 i. e., let the
greatest common divisor of a and b be 1. If a | bc , then a | c .

Proof: If the greatest common divisor of a and b is 1, then by theorem 3 there exist
elements λ and µ in R such that
1 = λ a + µ b. ...(1)
Multiplying both members of (1) by c, we get
c = λ a c + µ b c. ...(2)
But a | bc , so there exists an element q ∈ R such that bc = qa.
Substituting this value of bc in (2), we get
c = λ a c + µ q a = ( λ c + µ q ) a,
which shows that a is a divisor of c. Hence the theorem.
Theorem 5: If p is a prime element in the Euclidean ring R and p| ab where a, b ∈ R then p
divides at least one of a or b.
Proof: If p divides a, there is nothing to prove. So suppose that p does not
divide a. Since p is prime and p does not divide a, therefore p and a are
relatively prime i. e., the greatest common divisor of p and a is 1. Hence by
theorem 4, we get that p| b.
Corollary: If p is a prime element in the Euclidean ring R and p divides the product
a1 a2 … a n of elements in R , then p divides at least one of a1 , a2 , ..., a n .
The result follows immediately by repeated application of theorem 5.
Theorem 6: Let R be a Euclidean ring. Let a and b be two non-zero elements in R. Then
(i) if b is a unit in R, d (ab) = d (a).
(ii) if b is not a unit in R, d (ab) > d (a).
Proof: (i) By the definition of Euclidean ring, we have
d (ab) ≥ d (a). ...(1)
Now suppose that b is a unit in R.Then b is inversible and b −1 exists. We can write
a = (ab) b −1 .
∴ d (a) = d [(ab) b −1 ].
But by the definition of Euclidean ring, we have
d [(ab) b −1 ] ≥ d (ab).
∴ d (a) ≥ d (ab). ...(2)
From (1) and (2), we conclude that
d (ab) = d (a).
(ii) Suppose now that b is not a unit in R. Since a and b are non-zero elements
of the Euclidean ring R, therefore ab is also a non-zero element of R. Now a ∈ R
and 0 ≠ ab ∈ R, therefore by definition of Euclidean ring there exist
elements q and r in R such that
a = q(ab) + r ...(3)
where either r = 0 or d (r) < d (ab).

If r = 0, then
a = qab ⇒ a − qab = 0 ⇒ a(1 − qb) = 0
⇒ 1 − qb = 0 [∵ a ≠ 0 and R is free of zero divisors]
⇒ qb = 1 ⇒ b is inversible ⇒ b is a unit in R.
Thus we get a contradiction. Hence r cannot be zero. Therefore we must have
d (r) < d (ab) i. e., d (ab) > d (r). ...(4)
Also from (3), we have r = a − qab = a (1 − qb).
∴ d (r) = d [a(1 − qb)].
But d [a(1 − qb)] ≥ d (a).
∴ d (r) ≥ d (a). ...(5)
From (4) and (5), we conclude that d (ab) > d (a).

Theorem 7: The necessary and sufficient condition that the non-zero element a in the
Euclidean ring R is a unit is that
d (a) = d (1). (Lucknow 2011)
Proof: Let a be a unit in R. Then to prove that d (a) = d (1).
By the definition of Euclidean ring
d (1 a ) ≥ d (1) ⇒ d ( a ) ≥ d (1). ...(1)
Since a is a unit in R, therefore a −1 exists and we have
1 = aa −1 ⇒ d (1) = d (aa −1 ).
But d ( aa −1 ) ≥ d ( a ).
∴ d (1) ≥ d ( a ). ...(2)
From (1) and (2), we conclude that d ( a ) = d (1).
Conversely let d ( a ) = d (1). Then to prove that a is a unit in R. If a is not a unit in R,
then by theorem 6, we have
d (1a ) > d (1) ⇒ d ( a ) > d (1).
Thus we get a contradiction. Hence a must be a unit in R.
Theorem 8: Let R be a Euclidean ring . Then every non-zero element in R is either a unit in
R or can be written as a product of a finite number of prime elements of R .
Proof: Let a be a non-zero element of R . We are to prove that either a is a unit in R
or it can be written as a product of a finite number of prime elements of R . We shall
prove the result by induction on d ( a ) i. e., by induction on the d- value of a.
Let us first start the induction. We have a = 1 a. Therefore d ( a ) ≥ d (1). Thus 1 is
an element in R which has the minimal d - value. If d ( a ) = d (1),then a is a unit in
R. [See theorem 7]. Thus the result of the theorem is true if d ( a ) = d (1) and so we
have started the induction.
Now assume as our induction hypothesis that the theorem is true for all non-zero
elements x ∈ R such that d ( x ) < d ( a ). Then we shall show that the theorem is

true for a also. If a is a prime element of R, the theorem is obviously true. So


suppose that a is not prime. Then we can write a = bc where neither b nor c is a unit
in R. Since both b and c are not units in R, therefore d ( bc ) > d ( b ) and
d ( bc ) > d ( c ). But d ( a ) = d ( bc ). Therefore we have d (b) < d (a) and d (c ) < d (a).
So by our induction hypothesis each of b and c can be written as a product of a
finite number of prime elements of R.
Let b = p1 p2 … pn , c = q1 q2 … q m where the p’s and q’s are prime elements of R.
Then a = bc = p1 p2 … pn q1 q2 … q m .
Thus we have written a as a product of a finite number of prime elements of R.
This completes the induction and so the theorem has been proved.
Theorem 9: Unique Factorization theorem: Let R be a Euclidean ring and a be a
non-zero non-unit element in R. Suppose that
a = p1 p2 … pm = q1 q2 … q n
where the p’s and q’s are prime elements of R. Then m = n and each p is an associate of some q
and each q is an associate of some p.
Proof: We have p1 p2 … pm = q1 q2 … q n . Now p1 is a divisor of p1 p2 … pm .
Therefore p1 is also a divisor of q1 q2 … q n . By Corollary to Theorem 5, p1 must
divide at least one of q1 , q2 , … , q n . Since R is a commutative ring, therefore
without loss of generality we may suppose that p1 divides q 1 . But p1 and q1 are
both prime elements in R. Therefore p1 and q 1 must be associates and we have
q 1 = up1 where u is a unit in R. Thus we have
p1 p2 … pm = up1 q2 … q n .
Cancelling 0 ≠ p1 from both sides, we get
p2 p3 … pm = uq2 q3 … q n . ...(1)
Now we can repeat the above argument on the relation (1) with p2 . If n > m, then
after m steps the left hand side becomes 1 while the right hand side reduces to a
product of some units in R and certain number of q’s (the excess of n over m). But
the q’s are prime elements of R and so they are not units in R. So the product of
some units in R and certain number of q’s cannot be equal to 1. Therefore n cannot
be greater than m.
Thus n ≤ m.
Similarly interchanging the roles of p’s and q’s we get m ≤ n.
Hence m = n.
Also in the above process we have shown that every p is an associate of some q and
conversely every q is an associate of some p. Hence the theorem has been
completely established.
Note: Combining theorems 8 and 9, we can say that every non-zero element in a
Euclidean ring R can be uniquely written (upto associates) as a product of prime elements or is
a unit in R . Therefore a Euclidean ring is a Unique Factorization Domain.

Theorem 10: An ideal S of the Euclidean ring R is maximal iff S is generated by some prime
element of R .
Proof: We know that every ideal of a Euclidean ring R is a principal ideal. Suppose
S is an ideal of R generated by p so that S = ( p). Now we are to prove that
(i) S is maximal if p is a prime element of R.
(ii) p is prime if S is maximal.
First we shall prove (i). Let p be a prime element of R such that ( p) = S . Let T be an
ideal of R such that S ⊆ T ⊆ R. Since T is also a principal ideal of R, so let T = (q)
where q ∈ R.
Now S ⊆ T ⇒ ( p) ⊆ (q) ⇒ p ∈ (q)
⇒ p = x q for some x ∈ R ⇒ q | p.
Since p is prime, therefore either q should be a unit in R or q should be an
associate of p.
If q is a unit in R, then T = (q) = R.
If q is an associate of p, then T = (q) = ( p) = S .
Thus either T = R or T = S .
Now we shall prove (ii). Let ( p) = S be a maximal ideal.We are to show that p is
prime. Let us suppose that p is composite i. e., p is not prime.
Let p = mn where neither m nor n is a unit in R.
Now p = mn ⇒ m | p ⇒ ( p) ⊆ ( m ).
But ( m ) ⊆ R. Therefore we have ( p) ⊆ ( m ) ⊆ R.
But ( p) is a maximal ideal, therefore we should have either
( m ) = ( p) or ( m ) = R.
If R = ( m ), then R ⊆ (m).
∴ 1 ∈ R ⇒ 1 ∈ (m) ⇒ 1 = ym for some y ∈ R
⇒ m is inversible ⇒ m is a unit in R.
Thus we get a contradiction.
If (m) = ( p), then m ∈ ( p). Therefore m = l p for some l ∈ R.
∴ p = mn = l pn = pln .
∴ p (1 − ln) = 0 ⇒ 1 − ln = 0 [∵ p ≠ 0 and R is without zero divisors]
⇒ ln = 1 ⇒ n is inversible ⇒ n is a unit in R.
This is again a contradiction. Hence p must be a prime element of R.

2.20 Polynomial Rings over Unique Factorization Domains


We have already defined a unique factorization domain. For the sake of
convenience we repeat the definition here.

Unique Factorization Domain: Definition: An integral domain R, with unity


element 1 is a unique factorization domain if
(a) any non-zero element in R is either a unit or can be written as the product of a finite
number of irreducible (prime) elements of R.
(b) the decomposition in part (a) is unique upto the order and associates of the irreducible
elements.
In general commutative rings we have defined the greatest common divisors of
elements. But the difficulty is that in an arbitrary commutative ring these might
not exist. However, in unique factorization domain their existence is assured.
Further we know that in an integral domain with unity in case a greatest common
divisor of some elements exists, it is unique apart from the distinction between
associates.
Theorem 1: Let R be a unique factorization domain and a and b be any two elements in R,
not both of which are zero. Then a and b have a greatest common divisor (a, b) in R. Moreover,
if a and b are relatively prime (i.e., (a, b) is a unit in R), then a | bc ⇒ a | c .
Proof: Suppose a and b are any two elements, not both of which are zero, of a
unique factorization domain R. If one of a and b, say, b is 0, then obviously a is the
greatest common divisor of a and b. If any of a and b, say a , is a unit in R, then
obviously a is the greatest common divisor of a and b. So let us suppose that neither
a = 0 nor b = 0 and none of these is a unit in R.Then each of a and b can be uniquely
expressed as the product of a finite number of irreducible elements of R.
Let a = p1^m1 p2^m2 … pr^mr , ...(1)
and b = p1^n1 p2^n2 … pr^nr , ...(2)
where we have arranged the expressions in such a way that the same irreducible
factors p1, p 2 , ..., p r appear in both. Note that we can definitely do so because the
integer 0 can be used as power in any case, if necessary. The elements p1, p 2 , ..., p r
are all different primes and m1, m 2 , … , m r , n1, n 2 , … , n r are all integers ≥ 0 .
Let g i = minimum ( m i , n i ), where i = 1, 2, … , r. Then obviously
p1^g1 p2^g2 … pr^gr

is the greatest common divisor of a and b.


This proves the existence of greatest common divisor.
Now suppose that a and b are relatively prime i.e., the greatest common divisor of a
and b is a unit in R. Also suppose that
a | bc .
If a is a unit in R, then obviously a is a divisor of c . So let a be not a unit in R. Then a
can be uniquely expressed as the product of a finite number of prime elements of R.
Let a = q1 q 2 … q s,
where q1 , q 2 , … , q s are prime elements of R.
We have a | bc ⇒ bc = k a for some k ∈ R

⇒ bc = k q 1 q 2 … q s . ...(3)
Since each element of R can be uniquely expressed as the product of a finite number
of prime elements of R, therefore each of the prime elements q 1, q 2 , … , q s must
occur as a factor of either b or c . But none of q 1, q 2 , … , q s can be a factor of b
because otherwise a and b will not remain relatively prime. Therefore each of
q1 , q 2 , … , q s must be a factor of c. Hence
q 1 q 2 … q s is a divisor of c ⇒ a | c .
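Specialised to the familiar unique factorization domain I, the construction in the proof is just "factor both numbers and take the minimum exponent of each prime". A brief sketch (editorial; the helper names are ours):

```python
# Sketch: gcd in the UFD I computed from prime factorizations, exponent by exponent.

from collections import Counter

def prime_factors(n):
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def gcd_via_factorization(a, b):
    fa, fb = prime_factors(a), prime_factors(b)
    g = 1
    for p in set(fa) | set(fb):
        g *= p ** min(fa[p], fb[p])
    return g

print(gcd_via_factorization(360, 84))   # 360 = 2^3*3^2*5, 84 = 2^2*3*7, gcd = 2^2*3 = 12
```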
Note: In a similar manner we can prove that if a1 , … , a n are any n elements of a
unique factorization domain, they possess a greatest common divisor which will be
unique apart from the distinction between associates. Thus if g1 , g 2 are two
greatest common divisors of these n elements, then by the definition of greatest
common divisor, we have
g1 | g 2 and g2 | g1
⇒ g1 and g 2 are associates
⇒ g1 = u g 2 where u is a unit in R.
Thus the greatest common divisor of some elements is unique within units of R.
Theorem 2: If a is a prime element of a unique factorization domain R and b, c are
any elements of R, then
a | bc ⇒ a | b or a | c .
Proof: If a | b, then obviously the theorem is proved. So let a be not a divisor of b.
Since a is a prime element of R and a is not a divisor of b, therefore we claim that a
and b are relatively prime. Since a is a prime element of R, therefore the only
divisors of a are the associates of a or the units of R. Now an associate of a cannot be
a divisor of b otherwise a itself will be a divisor of b while we have assumed that a is
not a divisor of b. Thus the units of R are the only divisors of a which also divide b.
Therefore the greatest common divisor of a and b is a unit of R.
Since a and b are relatively prime, therefore by theorem 1, we have
a | bc ⇒ a | c .
This completes the proof of the theorem.
Polynomial rings over unique factorization domains: Let R be a unique
factorization domain. Since R is an integral domain with unity, therefore R [ x] is
also an integral domain with unity. Also any unit, (inversible element) in R [ x]
must already be a unit in R . Thus the only units in R [ x] are the units of R . A
polynomial p ( x) ∈ R [ x] is irreducible over R i.e., irreducible as an element of R [ x] if
whenever p( x) = a ( x) b ( x) with a ( x), b ( x) ∈ R [ x] , then one of a ( x) or b ( x) is a
unit in R [ x] i.e., a unit in R. For example, if I is the ring of integers, then I is a
unique factorization domain. The polynomial 2 x 2 + 4 ∈ I [ x] is a reducible
element of I [ x]. We have 2 x 2 + 4 = 2 ( x 2 + 2) . Neither 2 nor x 2 + 2 is a unit in

I [ x] . On the other hand the polynomial x 2 + 1 ∈ I [ x] is an irreducible element of


I [ x] .
Content of a polynomial:
Definition: Let f ( x) = a 0 + a 1 x + a 2 x 2 + .... + a n x n be a polynomial over a unique
factorization domain R . Then the content of f ( x) denoted by c ( f ) , is defined as the greatest
common divisor of the coefficients a0 , a1 , … , a n of f ( x) . Obviously the content of f ( x) is
unique within units of R. Thus if c 1 and c 2 are two contents of f ( x) , then we must have
c1 = u c 2 where u is some unit in R.
Primitive polynomial: Definition: Let R be a unique factorization domain. Then a
polynomial f ( x) = a0 + a1 x + .... + a n x n ∈ R [ x] is said to be primitive if the greatest
common divisor of its coefficients a0 , a1 , … , a n is a unit in R. Thus a polynomial f ( x) is
primitive if its content is 1 (that is a unit in R). If
f ( x) = x n + a1 x n − 1 + … + a n − 1 x + a n
is a monic polynomial over R, then obviously f ( x) is primitive.
If I is the ring of integers, then 3 x 3 − 5 x 2 + 7 is a primitive member of I [ x] while
2 x 2 − 4 x + 8 is not a primitive member of I [ x].
Every irreducible polynomial of positive degree belonging to R [ x ] is necessarily
primitive. But an irreducible polynomial of zero degree may not be primitive. For
example 3 ∈ I [ x ] is irreducible but it is not primitive. Further a primitive
polynomial may not be irreducible. For example x 2 + 5 x + 6 ∈ I [ x] is primitive
and it is not irreducible. We have x 2 + 5 x + 6 = ( x + 2) ( x + 3).
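For polynomials over I the content and the primitive part are easy to compute; the sketch below (editorial, not from the text; content and primitive_part are our own names, and coefficient lists have the constant term first) reproduces the two examples just given.

```python
# Sketch: content c(f) and primitive part of a polynomial in I[x].

from math import gcd
from functools import reduce

def content(f):
    return reduce(gcd, (abs(c) for c in f))

def primitive_part(f):
    c = content(f)
    return c, [a // c for a in f]

print(content([8, -4, 2]))          # 2x^2 - 4x + 8 has content 2, so it is not primitive
print(primitive_part([8, -4, 2]))   # (2, [4, -2, 1]): 2x^2 - 4x + 8 = 2*(x^2 - 2x + 4)
print(content([7, 0, -5, 3]))       # 3x^3 - 5x^2 + 7 has content 1, so it is primitive
```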
Theorem 3: Let R be a unique factorization domain. Prove that every non-zero member
f ( x ) of R [ x ] can be written as f ( x) = g f 1 ( x) where g = c ( f ) and where f1 ( x) is
primitive. Also prove that this decomposition of f ( x) as an element of R by a primitive
polynomial in R [ x] is unique apart from the distinction between associates.
Proof: Let R be a unique factorization domain and let
f ( x) = a0 + a1 x + .... + a n x n ∈ R [ x].
Since R is a unique factorization domain, therefore the elements a0 , a1 , … , a n ∈ R
must possess a greatest common divisor. Let g ∈ R be the greatest common divisor
of these elements. Then g = c ( f ) . Let
a i = gb i where i = 0, 1, ..., n.
Then f ( x) = gb0 + gb 1 x + .... + gb n x n = g [b0 + b 1 x + … + b n x n ].
Since g is the greatest common divisor of a 0 , a1 , … , a n , therefore the elements
b 0 , b1 , … , b n can have no common factor other than units of R. Consequently the
polynomial
f1 ( x) = b0 + b1 x + … + b n x n
is a primitive member of R [ x]. Thus we have f ( x) = g f1 ( x), where g ∈ R and
f1 ( x) ∈ R [ x] is primitive.
Now we come to the uniqueness part of the theorem.

If possible, let
f ( x) = h f 2 ( x) where h ∈ R and f 2 ( x) ∈ R [ x] is primitive.
Then g f1 ( x) = h f 2 ( x) ...(1)
Since f1 ( x) and f 2 ( x) are both primitive, therefore the content of the polynomial
on the left hand side of (1) is g and the content of the polynomial on the right hand
side of (1) is h. But the content of a polynomial is unique upto associates.
Therefore g and h are associates
⇒ g = hu where u is some unit in R
⇒ hu f1 ( x) = h f 2 ( x)
⇒ u f1 ( x) = f 2 ( x) [by left cancellation law in the integral
domain R [ x] , since h ≠ 0]
⇒ f1 ( x) and f 2 ( x) are associates.
Hence the theorem.
Theorem 4: If R is a unique factorization domain, then the product of two primitive
polynomials in R [ x] is again a primitive polynomial in R [ x ].
Proof: Let
f ( x) = a0 + a1 x + … + a n x n and g( x) = b0 + b1 x + … + b m x m
be two primitive polynomials in R [ x].
Let h ( x) = f ( x) g( x) = c 0 + c 1 x + … + c m + n x m + n .
Suppose h ( x) is not primitive. Then all the coefficients of h ( x) must be divisible by
some prime element p of R. Since f ( x) is primitive, therefore the prime element p
must not divide some coefficient of f ( x) . Let a i be the first coefficient of f ( x)
which p does not divide. Similarly let b j be the first coefficient of g ( x) which p
does not divide. In f ( x) g( x), the coefficient of x i + j is
ci + j = a i b j + (a i − 1 b j + 1 + a i − 2 b j + 2 + .... + a0 b i + j )
+ (a i + 1 b j − 1 + a i + 2 b j − 2 + .... + a i + j b0 )
From this relation, we get
a i b j = c i + j − (a i − 1 b j + 1 + a i − 2 b j + 2 + .... + a0 b i + j )
− (a i + 1 b j − 1 + a i + 2 b j − 2 + .... + a i + j b0 ) ...(1)
Now by our choice of a i , p is a divisor of each of the elements a0 , a1 , … , a i − 1 .
Therefore p|(a i − 1 b j + 1 + a i − 2 b j + 2 + … + a0 b i + j ) .
Similarly, by our choice of b j , p is a divisor of each of the elements
b0 , b1 , ..., b j − 1 . Therefore
p|(a i + 1 b j − 1 + a i + 2 b j − 2 + .... + a i + j b0 ).
Also by assumption p| c i + j .
Hence from (1), we get
p| a i b j ⇒ p| a i or p| b j , since p is a prime element of R.
But this is nonsense because according to our assumption p is not a divisor of a i
and also p is not a divisor of b j .
Hence h ( x) must be primitive. This proves the theorem.

Theorem 5: If R is a unique factorization domain and if f ( x), g ( x) are in R [ x] , then


c ( fg) = c ( f ) c ( g ) (upto units).
Proof: The polynomial f ( x) in R [ x] can be written as f ( x) = a f 1 ( x) , where
a = c ( f ) and f 1 ( x) is primitive. Similarly the polynomial g ( x) can be written as
g ( x) = b g1 ( x) , where b = c ( g) and g1 ( x) is primitive. Then
f ( x) g ( x) = ab f1 ( x) g1 ( x). ...(1)
Since f 1 ( x) and g1 ( x) are both primitive, therefore f 1 ( x) g1 ( x) is also primitive.
[Refer theorem 4].
Therefore from (1), we see that the content of f ( x) g ( x) is either ab or some
associate of ab. Thus the content of f ( x) g( x) is ab (upto units). Therefore
c ( f g) = ab = c ( f ) c ( g) .
This proves the theorem.
Field of Quotients of a unique factorization domain: If R is a unique
factorization domain, then R is necessarily an integral domain. Therefore R has a
field of quotients. Throughout this section we shall denote the field of quotients of
R by F . We can consider R [ x] to be a subring of F [ x].
Theorem 6: If R is an integral domain (not necessarily a unique factorization domain) and
F is its field of quotients, then any element f ( x) in F [ x] can be written as
f ( x) = f 0 ( x) / a ,
where f 0 ( x) ∈ R [ x] and where a ∈ R.
Proof: Let F be the field of quotients of an integral domain R. Then
F = { p / q : p ∈ R, 0 ≠ q ∈ R }.
Let f ( x) be an element of F [ x]. Let
f ( x) = a0 / b0 + (a1 / b1 ) x + .... + (a n / b n ) x n , where a0 , a1 , … , a n ∈ R

and b 0 , b1 , … , b n are non-zero elements of R .


Now b 0 , b 1 , ..., b n are also non-zero elements of F. So each of them must be
inversible. Further b 0 b 1 … b n is also a non-zero element of F and so it is also
inversible. Then we can write
f ( x) = [(b0 b1 … b n ) / (b0 b1 … b n )] [a0 / b0 + (a1 / b1 ) x + .... + (a n / b n ) x n ]
= [(a0 b1 b2 … b n ) + (b0 a1 b2 … b n ) x + … + (b0 b1 … b n − 1 a n ) x n ] / (b0 b1 … b n )
= f 0 ( x) / a ,

where obviously f 0 ( x) = (a0 b1 b 2 … b n ) + (b 0 a1 b 2 … b n ) x + …


+ (b0 b1 … b n − 1 a n ) x n is in R [ x] and a = b 0 b1 … b n is in R.

Theorem 7: (Gauss’ Lemma): Let F be the field of quotients of a unique factorization


domain R . If the primitive polynomial f ( x) ∈ R [ x] can be factored as the product of two
polynomials having coefficients in F, then it can be factored as the product of two polynomials
having coefficients in R.
Proof: Let R be a unique factorization domain and F be its field of quotients. Let
f ( x) ∈ R [ x] be primitive.
Let f ( x) = g( x) h ( x) where g ( x) and h ( x) have coefficients in F.
Since g ( x), h ( x) ∈ F [ x], therefore we can write
g ( x) = g0 ( x) / a , h ( x) = h0 ( x) / b ,
where a, b ∈ R and where g0 ( x), h0 ( x) ∈ R [ x].
Also g0 ( x) = α g1 ( x) , h0 ( x) = β h1 ( x),
where α = c ( g0 ) , β = c (h0 ) and where g1 ( x), h1 ( x) are primitive members of
R [ x]. [See theorem 3].
Thus f ( x) = (αβ / ab) g1 ( x) h1 ( x)
⇒ ab f ( x) = α β g1 ( x) h1 ( x). ...(1)
Since g1 ( x) and h1 ( x) are both primitive members of R [ x], therefore g1 ( x) h1 ( x) is
also a primitive member of R [ x]. Therefore from (1), we conclude that f ( x) and
g1 ( x) h1 ( x) are associates in R [ x]. (See theorem 3). Thus
f ( x) = ug1 ( x) h1 ( x) where u is a unit in R [ x] and so a unit of R.
Now u ∈ R, g1 ( x) ∈ R [ x] ⇒ ug1 ( x) is a member of R [ x]. Also h1 ( x) is a
member of R [ x]. Hence f ( x) can be factored as the product of two polynomials
with coefficients in R.
Note: Let I be the ring of integers. Then I is a Euclidean ring and so a unique
factorization domain. The field of quotients of I is the field of rational numbers. If
in the above theorem we take I in place of R, then the statement of the theorem is
as follows :
If the primitive polynomial f ( x) ∈ I [ x] can be factored as the product of two polynomials
having rational coefficients it can be factored as the product of two polynomials having integer
coefficients.
For its proof simply replace R by I or say, ‘let R be the ring of integers’.
Theorem 8: Let F be the field of quotients of a unique factorization domain R. If
f ( x) ∈ R [ x] is both primitive and irreducible as an element of R [ x] , then it is irreducible as
an element of F [ x]. Conversely, if the primitive element f ( x) in R [ x] is irreducible as an
element of F [ x] , it is also irreducible as an element of R [ x].

Proof: Let f ( x) be a primitive member of R [ x]. Suppose f ( x) is irreducible in R [ x]


but is reducible in F [ x]. Since F is a field and f ( x) is reducible in F [ x], therefore we
must have f ( x) = g ( x) h ( x), where g ( x), h ( x) are in F [ x] and are of positive degree.
Now we can write
g ( x) = g0 ( x) / a , h ( x) = h0 ( x) / b ,
where a, b ∈ R, and where g0 ( x), h0 ( x) ∈ R [ x].
Also g0 ( x) = α g1 ( x), h0 ( x) = β h1 ( x)
where α = c ( g0 ), β = c ( h0 ), and where g1 ( x), h1 ( x) are primitive members of
R [ x]. [See theorem 3].
Thus f ( x) = (αβ / ab) g1 ( x) h1 ( x) ⇒ ab f ( x) = αβ g1 ( x) h1 ( x). ...(1)
Since g1 ( x) and h1 ( x) are both primitive members of R [ x], therefore g1 ( x) h1 ( x) is
also a primitive member of R [ x]. Therefore from (1), we conclude that f ( x) and
g1 ( x) h1 ( x) are associates in R [ x]. [See theorem 3].
Thus f ( x) = ug1 ( x) h1 ( x) where u is a unit in R [ x] and so a unit in R.
Let ug1 ( x) = g2 ( x). Then
f ( x) = g2 ( x) h1 ( x), where g2 ( x), h1 ( x) ∈ R [ x].
We have deg g2 ( x) = deg g( x), and deg h1 ( x) = deg h ( x).
Thus deg g2 ( x) > 0, deg h1 ( x) > 0.
Therefore neither g2 ( x) nor h1 ( x) is a unit in R [ x].
Thus f ( x) = g2 ( x) h1 ( x) is a proper factorization of f ( x) in R [ x]. This contradicts
the given statement that f ( x) is irreducible in R [ x]. Hence f ( x) must be irreducible
in F [ x].
Converse: Suppose f ( x) is a primitive member of R [ x] and is irreducible as an
element of F [ x]. Then to prove that f ( x) is also irreducible as an element of R [ x].
Let f ( x) = g ( x) h ( x), where g ( x), h ( x) ∈ R [ x].
Then f ( x) will be irreducible in R [ x] if one of g ( x) or h ( x) is a unit in R [ x] i.e., a
unit in R.
Now g ( x), h ( x) ∈ R [ x] can also be treated as g ( x), h ( x) ∈ F [ x].
Since f ( x) is irreducible as an element of F [ x], therefore one of g ( x) or h ( x) must
be of degree 0. Suppose deg g ( x) = 0. Then g ( x) is a constant polynomial.
Let g ( x) = k ∈ R. Then
f ( x) = kh ( x).
Now f ( x) is a primitive member of R [ x]. Therefore c ( f ) is a unit in R. If k is not a
unit in R, then content of kh ( x) cannot be a unit in R and so it cannot be equal to
c ( f ). Hence k must be a unit in R. Consequently f ( x) is irreducible as an element
of R [ x].
This completes the proof of the theorem.

Theorem 9: Let F be the field of quotients of a unique factorization domain R. If


f1 ( x), f 2 ( x) are two primitive members of R [ x] and are associates in F [ x], then they are
also associates in R [ x].
Proof: Since f1 ( x), f 2 ( x) are associates in F [ x], therefore we have
f1 ( x) = k f 2 ( x) where 0 ≠ k ∈ F
i.e., k is a unit in F [ x].
Note that the only units of F [ x] are the non-zero elements of F.
Now F is the field of quotients of R. Therefore
0 ≠ k ∈ F ⇒ k = g / h where g, 0 ≠ h ∈ R.
∴ f1 ( x) = ( g / h) f 2 ( x) ⇒ h f1 ( x) = g f 2 ( x).
Since h, g ∈ R and f1 ( x), f 2 ( x) are primitive members of R [ x], therefore by
theorem 3, f1 ( x) and f 2 ( x) are associates in R [ x].
Theorem 10: If R is a unique factorization domain and if p ( x) is a primitive polynomial
in R [ x] , then it can be factored in a unique way as the product of irreducible elements in R [ x].
Hence show that the polynomial ring R [ x] over a unique factorization domain R is itself a
unique factorization domain.
Proof: Let F be the field of quotients of a unique factorization domain R. Let p ( x)
be a primitive member of R [ x]. We can regard p ( x) as a member of F [ x]. Since F
is a field, therefore F [ x] is a unique factorization domain. Recall that the ring of
polynomials over a field is a unique factorization domain. Therefore p ( x) ∈ F [ x]
can be factored as
p( x) = p1 ( x) p2 ( x) … pk ( x), where
p1 ( x), p2 ( x), … , pk ( x) are irreducible polynomials in F [ x]. Now each pi ( x),1≤ i ≤ k,
can be written as pi ( x) = f i ( x) / a i , where a i ∈ R and f i ( x) ∈ R [ x]. Further f i ( x) can be
written as
f i ( x) = c i q i ( x),
where c i ∈ R and q i ( x) is a primitive member of R [ x]. Thus each pi ( x) = (c i / a i ) q i ( x),
where c i , a i ∈ R and q i ( x) is a primitive member of R [ x]. Since pi ( x) is irreducible
in F [ x], therefore q i ( x) must also be irreducible in F [ x]. Now q i ( x) is a primitive
member of R [ x] and q i ( x) is irreducible in F [ x]. Therefore, by converse part of
theorem 8, q i ( x) is irreducible in R [ x].
Now p( x) = p1 ( x) p2 ( x) … pk ( x)
= [(c1 c 2 … c k ) / (a1 a2 … a k )] q1 ( x) q2 ( x) … q k ( x).
∴ a1 a2 … a k p( x) = c1 c 2 … c k q1 ( x) q2 ( x) … q k ( x). ...(1)

Since q1 ( x), … , q k ( x) are all primitive members of R [ x], therefore


q1 ( x) q2 ( x) … q k ( x)
is also a primitive member of R [ x]. Further p ( x) is primitive.
Therefore from the relation (1), we conclude with the help of theorem 3 that p ( x)
and q1 ( x) q2 ( x) … q k ( x) are associates in R [ x]. Therefore
p( x) = u q1 ( x) q2 ( x)… q k ( x)
where u is some unit in R [ x] and hence in R.
If q1 ( x) is irreducible in R [ x] then u q1 ( x) is also irreducible in R [ x]. If we simply
replace u q1 ( x) by q1 ( x), then we get
p( x) = q1 ( x) q2 ( x) … q k ( x).
Thus we have factored p( x) in R [ x] as a product of irreducible elements.
Now to show that the above factorization of p( x) is unique upto the order and
associates of irreducible elements.
Let p( x) = r1 ( x) r2 ( x) … rm ( x), where the ri ( x) are irreducible in R [ x]. Since p( x) is
primitive, therefore each ri ( x) must be primitive. Consequently by theorem 8,
each ri ( x) must be irreducible in F [ x]. But F [ x] is a unique factorization domain.
Therefore p( x) ∈ F [ x] can be uniquely expressed as the product of irreducible
elements of F [ x]. Hence the ri ( x) and the q i ( x) regarded as the elements of F [ x] are
equal (upto associates) in some order.
Since ri ( x) and q i ( x) are primitive members of R [ x] and are associates in F [ x],
therefore by theorem 9, they are also associates in R [ x]. Thus p ( x) has a unique
factorization as a product of irreducible elements of R [ x].
Now we are in a position to prove that if R is a unique factorization domain, then
so is R [ x].
Let f ( x) ∈ R [ x] be arbitrary. Then we can write f ( x) in a unique way as f ( x) = c g( x)
where c ∈ R and g( x) is a primitive member of R [ x].
Now by the above discussion g ( x) can be uniquely expressed as the product of
irreducible elements of R [ x]. What about c ?
Let c = h1 ( x) h2 ( x) … hs ( x), where h1 ( x), ..., hs ( x) ∈ R [ x].
We have 0 = deg c = deg h1 ( x) + deg h2 ( x) + … + deg hs ( x)
⇒ each hi ( x) must be of degree 0
⇒ each hi ( x) is an element of R.
Thus the only factorization of c as an element of R [ x] are those it had as an element
of R. In particular if α ∈ R is irreducible, then α ∈ R [ x] is also irreducible. But R is
a unique factorization domain. Therefore c ∈ R can be uniquely expressed as the
product of irreducible elements of R and hence of R [ x].
Finally, we conclude that f ( x) = c g( x) can be uniquely expressed as the product of
irreducible elements of R [ x]. Hence R [ x] is a unique factorization domain.
Corollary 1: If R is a unique factorization domain then so is R [ x1 , … , x n ].

Proof: If R is a unique factorization domain, then we know that R1 = R [ x1 ] is a


unique factorization domain. Now R1 is a unique factorization domain implies
that
R2 = R1 [ x2 ] = R [ x1 , x2 ]
is a unique factorization domain. Continuing this process a finite number of times
we conclude that R [ x1 , … , x n ] is a unique factorization domain.
Corollary 2: If F is a field, then F [ x1 , x2 , … , x n ] is a unique factorization domain.
Proof: If F is a field, then we know that F1 = F [ x1 ] is a unique factorization
domain. Now F1 is a unique factorization domain implies that F2 = F1 [ x2 ]
= F [ x1 , x2 ] is a unique factorization domain. Continuing this process a finite
number of times, we conclude that F [ x1 , … , x n ] is a unique factorization domain.
Eisenstein’s Criterion of Irreducibility:
Theorem 11: Let F be the field of quotients of a unique factorization domain R. If
f ( x) = a0 + a1 x + a2 x 2 + … + a n x n ∈ R [ x]
and p is a prime element of R such that
p| a0 , p| a1 , p| a2 , … , p| a n − 1
whereas p is not a divisor of a n and p2 is not a divisor of a0 , then f ( x) is irreducible in F [ x].
Proof: Without loss of generality we may take f ( x) to be primitive, for taking out
the greatest common factor of its coefficients does not disturb the hypothesis,
since p is not a divisor of a n . Now suppose that f ( x) is reducible in F [ x]. Then
f ( x) can be factored as the product of two polynomials of positive degree in F [ x].
Therefore by Gauss lemma f ( x) can be factored as the product of two polynomials
of positive degree in R [ x]. Thus if we assume that f ( x) is reducible in F [ x], then
f ( x) = a0 + a1 x +... + a n x n
= (b0 + b1 x + … + b r x r ) (c 0 + c1 x + … + c s x s ) ...(1)
where the b’s and c’s are elements of R and where r > 0 and s > 0.
We have from (1), a0 = b0 c 0 .
Since p is a prime element of R , therefore
p| a0 ⇒ p| b0 or p| c 0 .
Since p2 is not a divisor of a0 , therefore p cannot divide both b0 and c 0 . Suppose
that
p| b0 and p is not a divisor of c 0 .
If p is a divisor of all the coefficients b0 , b1 , … , b r , then from (1) we see that p is a
divisor of all the coefficients of f ( x). But pis not a divisor of a n . Therefore not all the
coefficients b0 , b1 , … , b r can be divisible by p. Let b k , where k ≤ r, be the first b
which is not divisible by p. Then each of b0 , b1 , … , b k − 1 is divisible by p and b k is
not divisible by p.

Also k < n, since r < n.


Now from (1), we have
a k = b k c 0 + b k − 1 c1 + b k − 2 c 2 + … + b0 c k
⇒ b k c 0 = a k − b k − 1 c1 − b k − 2 c 2 − … − b0 c k . ...(2)
Now k < n. Therefore p| a k . Also p| b k − 1 , b k − 2 , ... , b0 .
Therefore from (2), we have
p| b k c 0 ⇒ p| b k or p| c 0 , since p is a prime element of R.
But this is nonsense because according to our initial assumptions p is neither a
divisor of b k nor a divisor of c 0 . Hence f ( x) must be irreducible in F [ x]. This
completes the proof of the theorem.

Note: In the above theorem if we take the ring of integers I in place of the unique
factorization domain R, then the field of quotients of I is the field of rational
numbers. The statement of the theorem will be as follows :
Let f ( x) = a0 + a1 x + … + a n x n be a polynomial with integer coefficients. If p is a prime
number such that
p| a0 , p| a1 , ..., p| a n − 1
whereas p is not a divisor of a n and p2 is not a divisor of a0 , then f ( x) is irreducible over the
field of rational numbers.
There will be no difference in proof.
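The criterion is purely arithmetic and easy to check mechanically. The sketch below (editorial; eisenstein is our own name, and coefficients are listed a0, ..., a n with the constant term first) tests the three divisibility conditions for a given prime p.

```python
# Sketch: test the Eisenstein criterion for f(x) in I[x] at a prime p.

def eisenstein(coeffs, p):
    *lower, an = coeffs              # lower = a0, ..., a_{n-1}
    a0 = coeffs[0]
    return (all(a % p == 0 for a in lower)
            and an % p != 0
            and a0 % (p * p) != 0)

print(eisenstein([-2, 0, 0, 0, 1], 2))   # True:  x^4 - 2 satisfies the criterion at p = 2
print(eisenstein([-4, 0, 1], 2))         # False: p^2 = 4 divides a0 for x^2 - 4
```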

Comprehensive Exercise 2

1. If p is a prime number, prove that the polynomial x n − p is irreducible over


the field of rational numbers.
2. Show that the polynomial x 2 − 3 is irreducible over the field of rational
numbers.
3. Prove that the polynomial 1 + x + … + x p − 1 , where p is a prime number, is
irreducible over the field of rational numbers.
4. Show that the polynomial x 4 + x 3 + x 2 + x + 1 is irreducible over the field
of rational numbers.
5. Let R be a unique factorization domain. Then show that every prime element
in R generates a prime ideal.

O bjective T ype Q uestions

Multiple Choice Questions


Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
1. The polynomial x 2 + 1 is reducible over the field of
(a) rational numbers (b) real numbers
(c) complex numbers (d) none of these.
2. If f ( x) ∈ F [ x] and a ∈ F, for a field F, then ( x − a) divides f ( x) if and
only if
(a) f (a) = 0 (b) f (a) ≠ 0
(c) f ′ (a) = 0 (d) f ′ (a) ≠ 0.
3. Let f ( x) = 2 + 6 x + 4 x 2 , g ( x) = 2 x + 4 x 2 be two polynomials over the
ring (I 8 , + 8 , × 8 ). Then deg [ f ( x) + g ( x)] is
(a) 0 (b) 1
(c) 2 (d) 4.
4. Let R be a Euclidean ring. Let a and b be two non-zero elements in R.
Then if b is a unit in R,
(a) d (ab) > d (a) (b) d (ab) < d (a)
(c) d (ab) = d (a) (d) none of these.
5. The necessary and sufficient condition that the non-zero element a in the
Euclidean ring R is a unit is that
(a) d (a) > d (1) (b) d (a) < d (1)
(c) d (a) = d (1) (d) none of these.

Fill in the Blank(s)


Fill in the blanks “……”, so that the following statements are complete and correct.
1. If R is an arbitrary ring and R′ is the set of constant polynomials in R [ x],
then R′ is …… to R.
2. Two polynomials f ( x) and g( x) ∈ F [ x] are said to be …… if their greatest
common divisor is 1, the unity element of the field F.
3. If f ( x) ∈ F [ x] and a ∈ F, for any field F, then …… is the remainder when
f ( x) is divided by ( x − a).
4. An ideal S of the ring of integers I is maximal if and only if S is generated
by some …… integer.
5. Every Euclidean ring is a …… ring.
6. Every Euclidean ring possesses …… element.

True or False
Write ‘T’ for true and ‘F’ for false statement.
1. Every field is a principal ideal ring.
2. The polynomial domain F [ x] over a field F is a field.
3. The polynomial ring I[ x] over the ring of integers is not a principal ideal
ring.
4. The field of real numbers is a prime field.
5. Every prime integer is a prime Gaussian integer.
6. A Euclidean ring is a unique Factorization Domain.
7. The ring of Gaussian integers is not a Euclidean ring.
8. Every field is a Euclidean ring.

Answers

Multiple Choice Questions


1. (c). 2. (a). 3. (a). 4. (c). 5. (c).

Fill in the Blank(s)


1. isomorphic. 2. relatively prime. 3. f (a).
4. prime. 5. principal ideal. 6. unity.

True or False
1. T. 2. F. 3. T. 4. F. 5. F.
6. T. 7. F. 8. T.


3
Linear Transformations

3.1 Quotient Space


Right and Left Cosets: Let W be any subspace of a vector space V ( F ). Let α be any element of V. Then the set
W + α = {γ + α : γ ∈ W }
is called a right coset of W in V generated by α. Similarly the set
α + W = {α + γ : γ ∈ W }
is called a left coset of W in V generated by α.
Obviously W + α and α + W are both subsets of V. Since addition in V is
commutative, therefore we have W + α = α + W. Hence we shall call W + α as
simply a coset of W in V generated by α.
The following results about cosets are both to be remembered :
We have 0 ∈ V and W + 0 = W. Therefore W itself is a coset of W in V.
(i) α ∈ W ⇒ W + α = W.

Proof:(i) First we shall prove that W + α ⊆ W.


Let γ + α be any arbitrary element of W + α. Then γ ∈ W.
Now W is a subspace of V. Therefore
γ ∈ W, α ∈ W ⇒ γ + α ∈ W.
Thus every element of W + α is also an element of W. Hence
W + α ⊆ W.
Now we shall prove that W ⊆ W + α.
Let β ∈ W. Since W is a subspace, therefore
α ∈ W ⇒ − α ∈ W.
Consequently β ∈ W, − α ∈ W ⇒ β − α ∈ W. Now we can write
β = ( β − α) + α ∈ W + α since β − α ∈ W.
Thus β ∈ W ⇒ β ∈ W + α. Therefore W ⊆ W + α.
Hence W = W + α.
(ii) If W + α and W + β are two cosets of W in V, then
W + α = W + β ⇔ α − β ∈ W.
Proof: Since 0 ∈ W, therefore 0 + α ∈ W + α. Thus
α ∈ W + α.
Now W + α = W + β ⇒ α ∈W + β
⇒ α − β ∈ W + ( β − β)
⇒ α − β ∈ W + 0 ⇒ α − β ∈ W.
Conversely,
α − β ∈ W ⇒ W + (α − β) = W
⇒ W + [(α − β) + β] = W + β
⇒ W + α = W + β.
Let V / W denote the set of all cosets of W in V i. e., let
V / W = {W + α : α ∈ V }.
We have just seen that if α − β ∈ W, then W + α = W + β. Thus a coset of W in V
can have more than one representation.
Now if V ( F ) is a vector space, then we shall give a vector space structure to the set
V / W over the same field F. For this we shall have to define addition in V / W i. e.,
addition of cosets of W in V and multiplication of a coset by an element of F i. e.,
scalar multiplication.
Theorem: If W is any subspace of a vector space V ( F ), then the set V / W of all cosets
W + α where α is any arbitrary element of V, is a vector space over F for the addition and
scalar multiplication compositions defined as follows :
(W + α) + (W + β) = W + (α + β) V α, β ∈ V
and a (W + α) = W + aα ; a ∈ F, α ∈ V .

Proof: We have α, β ∈ V ⇒ α + β ∈ V .
Also a ∈ F, α ∈ V ⇒ aα ∈ V .
Therefore W + (α + β) ∈ V / W and also W + aα ∈ V / W. Thus V / W is closed
with respect to addition of cosets and scalar multiplication as defined above. Now
first of all we shall show that these two compositions are well defined i. e., are
independent of the particular representation chosen to denote a coset.
Let W + α = W + α ′ , α, α ′ ∈ V
and W + β = W + β ′ , β, β ′ ∈ V .
We have W + α = W + α′ ⇒ α − α′ ∈ W
and W + β = W + β ′ ⇒ β − β ′ ∈ W.
Now W is a subspace, therefore
α − α ′ ∈ W, β − β ′ ∈ W
⇒ (α − α ′ ) + ( β − β ′ ) ∈ W
⇒ (α + β) − (α ′ + β ′ ) ∈ W
⇒ W + (α + β) = W + (α ′ + β ′ )
⇒ (W + α) + (W + β) = (W + α ′ ) + (W + β ′ ).
Therefore addition in V /W is well defined.
Again a ∈ F, α − α ′ ∈ W ⇒ a (α − α ′ ) ∈ W
⇒ aα − aα ′ ∈ W ⇒ W + aα = W + aα ′ .
∴ scalar multiplication in V /W is also well defined.
Commutativity of addition: Let W + α , W + β be any two elements of V / W.
Then
(W + α) + (W + β) = W + (α + β) = W + ( β + α)
= (W + β) + (W + α).
Associativity of addition: Let W + α , W + β, W + γ be any three elements of
V / W. Then
(W + α) + [(W + β) + (W + γ )] = (W + α) + [W + ( β + γ )]
= W + [α + ( β + γ )]
= W + [(α + β) + γ ]
= [W + (α + β)] + (W + γ )
= [(W + α) + (W + β)] + (W + γ ).
Existence of additive identity: If 0 is the zero vector of V, then
W + 0 = W ∈ V / W.
If W + α is any element of V / W, then
(W + 0) + (W + α) = W + (0 + α) = W + α .
∴ W + 0 = W is the additive identity.
Existence of additive inverse: If W + α is any element of V / W, then
W + (− α) = W − α ∈ V / W.

Also we have
(W + α) + (W − α) = W + (α − α) = W + 0 = W.
∴ W − α is the additive inverse of W + α.
Thus V / W is an abelian group with respect to addition composition. Further we
observe that if
a, b ∈ F and W + α, W + β ∈ V / W, then
1. a [(W + α) + (W + β)] = a [W + (α + β)]
= W + a (α + β) = W + (aα + aβ)
= (W + aα) + (W + aβ)
= a (W + α) + a (W + β).
2. (a + b) (W + α) = W + (a + b) α
= W + (aα + bα)
= (W + aα) + (W + bα)
= a (W + α) + b (W + α).
3. (ab) (W + α) = W + (ab) α = W + a (bα)
= a (W + bα) = a [b (W + α)].
4. 1 (W + α) = W + 1α = W + α.
∴ V / W is a vector space over F for these two compositions. The vector space V / W
is called the Quotient Space of V relative to W. The coset W is the zero vector of
this vector space.

3.2 Dimension of a Quotient Space


Theorem: If W be a subspace of a finite dimensional vector space V ( F ), then
dim V / W = dim V − dim W.
Proof: Let m be the dimension of the subspace W of the vector space V ( F ).
Let S = {α1 , α 2 , … , α m }
be a basis of W. Since S is a linearly independent subset of V, therefore it can be
extended to form a basis of V. Let
S ′ = {α1 , α 2 , … , α m , β1 , β 2 , … , β l }
be a basis of V. Then dim V = m + l.
∴ dim V − dim W = (m + l ) − m = l.
So we should prove that dim V / W = l.
We claim that the set of l cosets
S1 = {W + β1 , W + β 2 , … , W + β l}
is a basis of V / W.
First we show that S1 is linearly independent. The zero vector of V / W is W.

Let a1 (W + β1 ) + a2 (W + β 2 ) + … + a l (W + β l ) = W
⇒ (W + a1 β1 ) + (W + a2 β 2 ) + … + (W + a l β l ) = W
⇒ W + (a1 β1 + a2 β 2 + … + a l β l ) = W + 0
⇒ a1 β1 + a2 β 2 + … + a l β l ∈ W
⇒ a1 β1 + a2 β 2 + … + a l β l = b1 α1 + b2 α 2 + … + b m α m
[∵ any vector in W can be expressed as a linear
combination of its basis vectors]
⇒ a1 β1 + a2 β 2 + … + a l β l − b1 α1 − b2 α 2 − … − b m α m = 0
⇒ a1 = 0, a2 = 0, …, a l = 0 since the vectors
β1 , β 2 , … β l , α1 , α 2 , … , α m are linearly independent.
∴ The set S1 is linearly independent.
Now to show that L (S1 ) = V / W. Let W + α be any element of V / W. The vector
α ∈ V can be expressed as
α = c1 α1 + c 2 α 2 + … + c m α m + d1 β1 + d2 β 2 +… + dl β l
= γ + d1 β1 + d2 β 2 + … + dl β l where
γ = c1 α1 + c 2 α 2 + … + c m α m ∈ W.
So W + α = W + (γ + d1 β1 + d2 β 2 + … + dl β l )
= (W + γ ) + d1 β1 + d2 β 2 + … + dl β l
= W + (d1 β1 + d2 β 2 + … + dl β l ) [ ∵ γ ∈ W ⇒ W + γ = W ]
= (W + d1 β1 ) + (W + d2 β 2 ) + … + (W + dl β l )
= d1 (W + β1 ) + d2 (W + β 2 ) + … + dl (W + β l ).
Thus any element W + α of V / W can be expressed as a linear combination of
elements of S1 .
∴ V / W = L (S1 ). ∴ S1 is a basis of V / W.
∴ dim V / W = l. Hence the theorem.
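When W is described by a spanning set, the dimensions occurring in this theorem can be obtained from row ranks. The following is a minimal sketch assuming the NumPy library, with an illustrative subspace W of R4 :

    import numpy as np

    # W = subspace of R^4 spanned by the rows below (the third row is the
    # sum of the first two, so it contributes nothing new).
    spanning_rows = np.array([[1, 0, 2, 0],
                              [0, 1, 1, 0],
                              [1, 1, 3, 0]])
    dim_V = 4
    dim_W = np.linalg.matrix_rank(spanning_rows)    # 2
    print(dim_V - dim_W)                            # dim V/W = 2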

3.3 Direct Sum of Spaces


Vector space as a direct sum of subspaces.
Definition: Let V ( F ) be a vector space and let W1 , W2 , … , Wm be subspaces of V.
Then V is said to be the direct sum of W1 , W2 , … , Wm if every element α ∈ V can be written
in one and only one way as α = α1 + α 2 + … + α m where
α1 ∈ W1 , α 2 ∈ W2 , … , α m ∈ Wm .
If a vector space V ( F ) is a direct sum of its two subspaces W1 and W2 then we
should have not only V = W1 + W2 but also that each vector of V can be uniquely
expressed as sum of an element of W1 and an element of W2 . Symbolically the
direct sum is represented by the notation V = W1 ⊕ W2 .
Illustration 1: Let V2 ( F ) be the vector space of all ordered pairs of elements of F. Then
W1 = {(a, 0) : a ∈ F } and W2 = {(0, b) : b ∈ F } are two subspaces of V2 ( F ). Obviously

any element ( x, y) ∈ V2 ( F ) can be uniquely expressed as the sum of an element of W1 and


an element of W2 . The unique expression is ( x, y) = ( x, 0) + (0, y). Thus V2 ( F ) is the
direct sum of W1 and W2 . Also we observe that the only element common to both W1 and W2
is the zero vector (0, 0).
Illustration 2: Let W1 = {(a, b, 0) : a, b ∈ R} and W2 = {(0, 0, c ) : c ∈ R} be two
subspaces of R3 .
Now any vector (a, b, c ) ∈ R3 can be uniquely expressed as the sum of a vector in W1
and a vector in W2 : (a, b, c ) = (a, b, 0) + (0, 0, c ).
Thus R3 is the direct sum of W1 and W2 .
Also we observe that the only element common to both W1 and W2 is the zero
vector (0, 0, 0).
Illustration 3: Let W1 = {(a, b, 0) : a, b ∈ R} and W2 = {(0, b, c ) : b, c ∈ R} be two
subspaces of R3 . Then R3 = W1 + W2 since every vector in R3 is the sum of a vector in W1
and a vector in W2 . However, R3 is not the direct sum of W1 and W2 since such sums are not
unique.
For example, (4, 6, 8) = (4, 5, 0) + (0, 1, 8).
Also (4, 6, 8) = (4, − 1, 0) + (0, 7, 8).
Here we observe that W1 ∩ W2 ≠ { 0}.

3.4 Disjoint Subspaces


Definition. Two subspaces W1 and W2 of the vector space V ( F ) are said to be disjoint if
their intersection is the zero subspace i. e., if W1 ∩ W2 = { 0}.
Theorem. The necessary and sufficient conditions for a vector space V ( F ) to be a direct
sum of its two subspaces W1 and W2 are that
(i) V = W1 + W2
(ii) W1 ∩ W2 = { 0} i. e., W1 and W2 are disjoint.
Proof: The conditions are necessary.
Let V be direct sum of its two subspaces W1 and W2 . Then each element of V is
expressible uniquely as sum of an element of W1 and an element of W2 . Therefore
we have
V = W1 + W2 .
Let, if possible 0 ≠ α ∈ W1 ∩ W2 . Then α ∈ W1 , α ∈ W2 . Also α ∈ V and we can
write
α = 0 + α where 0 ∈ W1 , α ∈ W2
and α = α + 0 where α ∈ W1 , 0 ∈ W2 .

Thus α ∈ V can be expressed in at least two different ways as sum of an element of


W1 and an element of W2 . This contradicts the fact that V is direct sum of W1 and
W2 . Hence 0 is the only vector common to both W1 and W2 i. e., W1 ∩ W2 = { 0}.
Thus the conditions are necessary.
The conditions are sufficient:
Let V = W1 + W2 and W1 ∩ W2 = { 0}.Then to show that V is direct sum of W1 and
W2 .
V = W1 + W2 ⇒ that each element of V can be expressed as sum of an element of
W1 and an element of W2 . Now to show that this expression is unique.
Let, if possible,
α = α1 + α 2 , α ∈ V , α1 ∈ W1 , α 2 ∈ W2 ,
and α = β1 + β 2 , β1 ∈ W1 , β 2 ∈ W2 .
Then to show that α1 = β1 and α 2 = β 2 .
We have α1 + α 2 = β1 + β 2
⇒ α1 − β1 = β 2 − α 2 .
Since W1 is a subspace, therefore
α1 ∈ W1 , β1 ∈ W1 ⇒ α1 − β1 ∈ W1 .
Similarly, β 2 − α 2 ∈ W2 .
∴ α1 − β1 = β 2 − α 2 ∈ W1 ∩ W2 .
But 0 is the only vector which belongs to W1 ∩ W2 . Therefore
α1 − β1 = 0 ⇒ α1 = β1 .
Also β2 − α2 = 0 ⇒ α2 = β2 .
Thus each vector α ∈ V is uniquely expressible as sum of an element of W1 and an
element of W2 . Hence V = W1 ⊕ W2 .

3.5 Dimension of a Direct Sum


Theorem: If a finite dimensional vector space V ( F ) is a direct sum of two subspaces W1 and
W2 , then
dim V = dim W1 + dim W2 .
Proof: Let dim W1 = m and dim W2 = l. Also let the sets of vectors
S1 = {α1 , α 2 , … , α m } and S2 = { β1 , β 2 , … , β l}
be the bases of W1 and W2 respectively.
We have dim W1 + dim W2 = m + l.
In order to prove that dim V = dim W1 + dim W2 , we should therefore prove that
dim V = m + l. We claim that the set
S = S1 ∪ S2 = {α1 , α 2 , … , α m , β1 , β 2 , … , β l }
is a basis of V.

First we show that the set S is linearly independent. Let


a1α1 + a2 α 2 + … + a m α m + b1 β1 + b2 β 2 + … + b l β l = 0.
⇒ a1α1 + a2 α 2 + … + a m α m = − (b1 β1 + b2 β 2 + … + b l β l ).
Now a1α1 + a2 α 2 + … + a m α m ∈ W1
and − (b1 β1 + b2 β 2 + … + b l β l ) ∈ W2 .
Therefore a1α1 + a2 α 2 + … + a m α m ∈ W1 ∩ W2
and − (b1 β1 + b2 β 2 + … + b l β l ) ∈ W1 ∩ W2 .
But V is the direct sum of W1 and W2 . Therefore 0 is the only vector belonging to
W1 ∩ W2 . Then we have
a1α1 + a2 α 2 + … + a m α m = 0, b1 β1 + b2 β 2 + … + b l β l = 0.
Since both the sets {α1 , α 2 , … , α m } and { β1 , β 2 , … , β l} are linearly independent,
therefore we have
a1 = 0, a2 = 0, … , a m = 0, b1 = 0, b2 = 0, … , b l = 0.
Therefore S is linearly independent.
Now we shall show that L (S ) = V . Let α be any element of V. Then
α = an element of W1 + an element of W2 .
= a linear combination of S1 + a linear combination of S2
= a linear combination of elements of S.
∴ L (S ) = V
∴ S is a basis of V. Therefore dim V = m + l.
Hence the theorem.
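For explicit subspaces given by spanning vectors, this dimension count is easy to verify numerically. The sketch below (assuming NumPy; the subspaces are an illustrative choice) takes W1 as the plane z = 0 and W2 as the z-axis in R3 :

    import numpy as np

    W1 = np.array([[1, 0, 0], [0, 1, 0]])     # spans the plane z = 0
    W2 = np.array([[0, 0, 1]])                # spans the z-axis
    dim_sum = np.linalg.matrix_rank(np.vstack([W1, W2]))                       # dim (W1 + W2) = 3
    dim_int = np.linalg.matrix_rank(W1) + np.linalg.matrix_rank(W2) - dim_sum  # dim (W1 ∩ W2) = 0
    print(dim_sum, dim_int)

Since dim (W1 + W2) = dim W1 + dim W2 and W1 ∩ W2 = {0}, the sum here is direct.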
A sort of converse of this theorem is true. It has been proved in the following
theorem.
Theorem:Let V be a finite dimensional vector space and let W1 , W2 be subspaces of V such
that V = W1 + W2 and dim V = dim W1 + dim W2 .
Then V = W1 ⊕ W2 .
Proof: Let dim W1 = l and dim W2 = m. Then
dim V = l + m.
Let S1 = {α1 , α 2 , … , α l}
be a basis of W1 and
S2 = { β1 , β 2 , … , β m }
be a basis of W2 . We shall show that S1 ∪ S2 is a basis of V.
Let α ∈ V. Since V = W1 + W2 , therefore we can write
α = γ + δ where γ ∈ W1 , δ ∈ W2 .
Now γ ∈ W1 can be expressed as a linear combination of the elements of S1 and
δ ∈ W2 can be expressed as a linear combination of the elements of S2 . Therefore
α ∈ V can be expressed as a linear combination of the elements of S1 ∪ S2 .
Therefore V = L (S1 ∪ S2 ). Since dim V = l + m and L (S1 ∪ S2 ) = V , therefore

the number of distinct elements in S1 ∪ S2 cannot be less than l + m. Thus


S1 ∪ S2 has l + m distinct elements and therefore S1 ∪ S2 is a basis of V.Therefore
the set
{α1 , α 2 , … , α l , β1 , β 2 , … , β m }
is linearly independent.
Now we shall show that
W1 ∩ W2 = { 0}.
Let α ∈ W1 ∩ W2 .
Then α ∈ W1 , α ∈ W2 .
Therefore α = a1α1 + a2 α 2 + … + a l α l
and α = b1 β1 + b2 β 2 + … + b m β m
for some a’s and b’s ∈ F.
∴ a1α1 + a2 α 2 + … + a l α l = b1 β1 + b2 β 2 + … + b m β m
⇒ a1α1 + a2 α 2 + … + a l α l − b1 β1 − b2 β 2 − … − b m β m = 0
⇒ a1 = 0, a2 = 0, … , a l = 0, b1 = 0, b2 = 0, … , b m = 0
⇒ α = 0.
∴ W1 ∩ W2 = { 0}.

3.6 Complementary Subspaces


Definition: Let V ( F ) be a vector space and W1 , W2 be two subspaces of V. Then the
subspace W2 is called the complement of W1 in V if V is the direct sum of W1 and W2 .
Existence of complementary subspaces:
Theorem: Corresponding to each subspace W1 of a finite dimensional vector space V ( F ),
there exists a subspace W2 such that V is the direct sum of W1 and W2 .
Proof: Let dim W1 = m. Let the set
S1 = {α1 , α 2 , … , α m }
be a basis of W1 .
Since S1 is a linearly independent subset of V, therefore S1 can be extended to form
a basis of V. Let the set
S = {α1 , α 2 , … , α m , β1 , β 2 , … , β l }
be a basis of V.
Let W2 be the subspace of V generated by the set
S2 = { β1 , β 2 , … , β l }.
We shall prove that V is the direct sum of W1 and W2 . For this we shall prove that
V = W1 + W2 and W1 ∩ W2 = { 0}.
Let α be any element of V. Then we can express
α = a linear combination of elements of S

= a linear combination of S1 + a linear combination of S2


= an element of W1 + an element of W2 .
∴ V = W1 + W2 .
Again let β ∈ W1 ∩ W2 . Then β can be expressed as a linear combination of S1 and
also as a linear combination of S2 . So we have
β = a1α1 + a2 α 2 + … + a m α m = b1 β1 + b2 β 2 + … + b l β l .
⇒ a1α1 + a2 α 2 + … + a m α m − b1 β1 − b2 β 2 − … − b l β l = 0
⇒ a1 = 0, a2 = 0, … , a m = 0, b1 = 0, b2 = 0, … , b l = 0
since α1 , α 2 , … , α m , β1 , β 2 , … , β l are linearly independent.
∴ β = 0 (zero vector).
Thus W1 ∩ W2 = {0}.
Hence V is the direct sum of W1 and W2 .
Dimension of a Quotient space: Alternative method:
Theorem: If W1 and W2 are complementary subspaces of a vector space V, then the
mapping f which assigns to each vector β in W2 the coset W1 + β is an isomorphism between
W2 and V / W1 .
Proof:It is given that
V = W1 ⊕ W2
and f : W2 → V / W1 such that
f ( β) = W1 + β V β ∈ W2 .
We shall show that f is an isomorphism of W2 onto V / W1 .
(i) f is one-one:
If β1 , β 2 ∈ W2 , then
f ( β1 ) = f ( β 2 ) ⇒ W1 + β1 = W1 + β 2 [by def. of f ]
⇒ β1 − β 2 ∈ W1
⇒ β1 − β 2 ∈ W1 ∩ W2 [ ∵ β1 − β 2 ∈ W2 because W2 is a subspace]
⇒ β1 − β 2 = 0 [ ∵ W1 ∩ W2 = { 0}]
⇒ β1 = β 2 .
∴ f is one-one.
(ii) f is onto:
Let W1 + α be any coset in V / W1 , where α ∈ V. Since V is direct sum of W1 and W2 ,
therefore we can write
α = γ + β where γ ∈ W1 , β ∈ W2 .
This gives γ = α − β ∈ W1 .
Since α − β ∈ W1 , therefore W1 + α = W1 + β.
Now f ( β) = W1 + β [by def. of f ]
= W1 + α.

Thus W1 + α ∈ V / W1 ⇒ ∃ β ∈ W2 such that


f ( β) = W1 + α .
Therefore f is onto.
(iii) f is a linear transformation:
Let a, b ∈ F and β1 , β 2 ∈ W2 .
Then f (aβ1 + bβ 2 ) = W1 + (aβ1 + bβ 2 )
= (W1 + aβ1 ) + (W1 + bβ 2 )
= a (W1 + β1 ) + b (W1 + β 2 )
= af ( β1 ) + bf ( β 2 ).
Therefore f is a linear transformation.
Hence f is an isomorphism between W2 and V / W1 .
Corollary: Dimension of a quotient space: If W is an m-dimensional subspace of
an n-dimensional vector space V, then the dimension of the quotient space V /W is n − m.
Proof: Since W is a subspace of a finite dimensional vector space V, therefore
there exists a subspace W1 of V such that
V = W ⊕ W1 .
Also dim V = dim W + dim W1
or dim W1 = dim V − dim W = n − m.
Now by the above theorem , we have
V / W ≅ W1 .
∴ dim V / W = dim W1 = n − m.

3.7 Direct Sum of Several Subspaces


Now we shall have some discussion on the direct sum of several subspaces. For this
purpose we shall first define the concept of independence of subspaces analogous
to the disjointness condition on two subspaces.
Definition: Suppose W1 , W2 , … , Wk are subspaces of the vector space V. We shall say
that W1 , W2 , … , Wk are independent, if
α1 + α 2 + … + α k = 0, α i ∈ Wi implies that each α i = 0.
Theorem 1: Suppose V ( F ) is a vector space. Let W1 , W2 , … , Wk be subspaces of V
and let W = W1 + W2 + … + Wk . The following are equivalent :
(i) W1 , W2 , … , Wk are independent.
(ii) Each vector α in W can be uniquely expressed in the form
α = α1 + α 2 + … + α k with α i in Wi , i = 1, … , k.
(iii) For each j, 2 ≤ j ≤ k, the subspace W j is disjoint from the sum
(W1 + … + W j − 1 ).

Proof: In order to prove the equivalence of the three statements we shall prove
that (i) ⇒ (ii), (ii) ⇒ (iii) and (iii) ⇒ (i).
(i) ⇒ (ii). Suppose W1 , … , Wk are independent. Let α ∈ W.
Since W = W1 + … + Wk , therefore we can write
α = α1 + … + α k with α i in Wi .
Suppose that also α = β1 + … + β k with β i in Wi .
Then α1 + … + α k = β1 + … + β k
⇒ (α1 − β1 ) + … + (α k − β k ) = 0 with α i − β i in Wi as Wi is a subspace
⇒ αi − βi = 0 [ ∵ W1 , … , Wk are independent]
⇒ α i = β i , i = 1, … , k.
Therefore the α i ’s are uniquely determined by α.
(ii) ⇒ (iii). Let α ∈ W j ∩ (W1 + … + W j−1 ).
Then α ∈ W j and α ∈ W1 + … + W j − 1 .
Now α ∈ W1 + … + W j − 1 implies that there exist vectors α1 , … , α j − 1 with α i in Wi
such that
α = α1 + … + α j − 1 .
Also α ∈ Wj .
Therefore we get two expressions for α as a sum of vectors, one in each Wi . These
are
α = α1 + … + α j − 1 + 0 + … + 0
in which the vector belonging to W j is 0
and α = 0 +…+ 0 + α +…+ 0
in which the vector belonging to W j is α.
Since the expression for α is given to be unique, therefore we must have
α1 = … = α j − 1 = 0 = α.
Thus W j ∩ (W1 + … + W j − 1 ) = {0}.
(iii) ⇒ (i).
Let α1 + … + α k = 0 where α i ∈ Wi , i = 1, … , k. …(1)
Then we are to prove that each α i = 0.
Suppose that for some i we have α i ≠ 0.
Let j be the largest integer i between 1 and k such that α i ≠ 0. Obviously j must be ≥ 2
and at the most j can be equal to k. Then (1) reduces to
α1 + … + α j = 0, α j ≠ 0
⇒ α j = − α1 − … − α j − 1
⇒ α j ∈ W1 + … + W j − 1 [∵ − α1 − … − α j − 1 ∈ W1 + … + W j − 1 ]
⇒ α j ∈ W j ∩ (W1 + … + W j − 1 ) [ ∵ α j ∈ W j ]
⇒ α j = 0.
Thus we get a contradiction. Hence each α i = 0.

Note: If any (and hence all) of the three conditions of theorem 1 hold for
W1 , … , Wk ,then we shall say that W is the direct sum of W1 , … , Wk and we write
W = W1 ⊕ … ⊕ Wk .
Theorem 2: Let V ( F ) be a vector space. Let W1 , … , Wn be subspaces of V.Suppose that
V = W1 + … + Wn
and that Wi ∩ (W1 + … + Wi − 1 + Wi + 1 + … + Wn ) = {0}
for every i = 1, 2, … , n. Prove that V is the direct sum of W1 , … , Wn .
Proof: In order to prove that V is the direct sum of W1 , … , Wn , we should prove
that each vector α ∈ V can be uniquely expressed as
α = α1 + … + α n where α i ∈ Wi , i = 1, … , n.
Since V = W1 + … + Wn , therefore any vector α in V can be written as
α = α1 + … + α n where α i ∈ Wi . …(1)
To show that α1 , … , α n are unique.
Let α = β1 + … + β n where β i ∈ Wi . …(2)
From (1) and (2), we get
α1 + … + α n = β1 + … + β n
(α1 − β1 ) + … + (α i − 1 − β i − 1 ) + (α i − β i )
+ (α i + 1 − β i + 1 ) + … + (α n − β n ) = 0. …(3)
Now each Wi is a subspace of V. Therefore α i − β i and also its additive inverse
β i − α i ∈ Wi , i = 1, … , n.
From (3), we get
(α i − β i ) = ( β1 − α1 ) + … + ( β i − 1 − α i − 1 )
+ ( β i + 1 − α i + 1 ) + … + ( β n − α n ). …(4)
Now the vector on the right hand side of (4) and consequently the vector α i − β i is
in W1 + … + Wi − 1 + Wi + 1 + … + Wn .
Also α i − β i ∈ Wi .
∴ α i − β i ∈ Wi ∩ (W1 + … + Wi − 1 + Wi + 1 + … + Wn ).
But for every i = 1, … , n, it is given that
Wi ∩ (W1 + … + Wi − 1 + Wi + 1 + … + Wn ) = {0}.
Therefore α i − β i = 0, i = 1, … , n
⇒ α i = β i , i = 1, … , n
⇒ the expression (1) for α is unique.
Hence V is the direct sum of W1 , … , Wn .
Theorem 3: Let V ( F ) be a finite dimensional vector space and let W1 , … , Wk be
subspaces of V. Then the following two statements are equivalent.
(i) V is the direct sum of W1 , … , Wk .
(ii) If Bi is a basis of Wi , i = 1, … , k, then the union
B = ∪ (i = 1 to k) Bi is also a basis for V.

Proof:Let Bi = {α1 i , α 2 i , … , α ni i } be a basis for Wi .


Here ni = dim Wi = number of vectors in Bi . Also let B be the union of the bases
Bi .
(i) ⇒ (ii). It is given that V is the direct sum of W1 , … , Wk , therefore for any α ∈ V,
we can write
α = α1 + … + α k for α i ∈ Wi , i = 1, … , k.
Now α i can be expressed as a linear combination of the vectors in Bi which is a basis
of Wi . Therefore α can be expressed as a linear combination of the elements of
B = ∪ (i = 1 to k) Bi . Therefore L ( B) = V i. e., B spans V.

Now to show that B is linearly independent. Let


Σ (i = 1 to k) (a1 i α1 i + a2 i α 2 i + … + a ni i α ni i ) = 0. …(1)

Since V is the direct sum of W1 , … , Wk , therefore 0 ∈ V can be uniquely expressed


as a sum of vectors one in each Wi . This unique expression is
0 = 0 + … + 0 where 0 ∈ Wi , i = 1, … , k.
Now a1 i α1 i + … + a ni i α ni i ∈ Wi . Therefore from (1), which is an expression for

0 ∈ V as a sum of vectors one in each Wi , we get


a1 i α1 i + … + a ni i α ni i = 0, i = 1, … , k
⇒ a1 i = … = a ni i = 0
since {α1 i , … , α ni i} is linearly independent being a basis for Wi .
Therefore B = ∪ (i = 1 to k) Bi is linearly independent. Therefore B is a basis of V.
(ii) ⇒ (i). It is given that B = ∪ (i = 1 to k) Bi is a basis of V.

Therefore for any α ∈ V, we can write


α = Σ (i = 1 to k) (a1 i α1 i + a2 i α 2 i + ... + a ni i α ni i )
= α1 + α 2 + … + α k …(2)
where α i = a1 i α1 i + … + a ni i α ni i ∈ Wi .
Thus each vector in V can be expressed as a sum of vectors one in each Wi .
Now V will be the direct sum of W1 , … , Wk if the expression (2) for α is unique. Let
α = β1 + β 2 + … + β k …(3)
where β i = b1 i α1 i + … + b ni i α ni i ∈ Wi .
From (2) and (3), we get
α1 + … + α k = β1 + …+ β k
⇒ (α1 − β1 ) + … + (α i − β i ) + … + (α k − β k ) = 0

⇒ Σ (i = 1 to k) (α i − β i ) = 0
⇒ Σ (i = 1 to k) [(a1 i − b1 i ) α1 i + … + (a ni i − b ni i ) α ni i ] = 0
⇒ a1 i − b1 i = … = a ni i − b ni i = 0, i = 1, … , k
[ ∵ ∪ (i = 1 to k) Bi is linearly independent being a basis of V ]
⇒ a1 i = b1 i , … , a ni i = b ni i , i = 1, … , k

⇒ α i = β i , i = 1, … , k
⇒ the expression (2) for α is unique.
Hence V is the direct sum of W1 , … , Wk .
Note: While proving this theorem we have proved that if a finite dimensional vector
space V ( F ) is the direct sum of its subspaces W1 , … , Wk , then
dim V = dim W1 + … + dim Wk .

Example 1: Find dimension of V / W where V = R3 , W = {(a, 0, 0) : a ∈ R}.


Solution: Here V = R3 . dim V = 3.
W = {(a, 0, 0) : a ∈ R}. {(1, 0, 0)} is a basis of W so dim W = 1.
Now dim V / W = dim V − dim W = 3 − 1 = 2.
Example 2: If W1 = {( x, 0, z ) : x, z ∈ R}, W2 = {(0, y, z ) : y, z ∈ R} be two subspaces
of R3 , then show that R3 = W1 + W2 but R3 ≠ W1 ⊕ W2 .

Solution: Any ( x, y, z ) ∈ R3 can be written as a sum of a vector of W1 and a vector


of W2 as shown below.
We have ( x, y, z ) = ( x, 0, (1/2) z ) + (0, y, (1/2) z ), where ( x, 0, (1/2) z ) ∈ W1 and (0, y, (1/2) z ) ∈ W2 .
Thus R3 = W1 + W2 .
Also we have ( x, y, z ) = ( x, 0, (1/4) z ) + (0, y, (3/4) z ), where ( x, 0, (1/4) z ) ∈ W1 and (0, y, (3/4) z ) ∈ W2 .

It follows that representation of ( x, y, z ) as a sum of a vector of W1 and a vector of


W2 is not unique.
Hence R3 ≠ W1 ⊕ W2 .
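The failure of uniqueness can also be exhibited with a short routine (a minimal Python sketch; the parameter t below is only a device for producing different splittings):

    def split(v, t):
        # Writes v = (x, y, z) as (x, 0, t*z) + (0, y, (1 - t)*z),
        # a W1-vector plus a W2-vector, for any choice of t.
        x, y, z = v
        return (x, 0, t * z), (0, y, (1 - t) * z)

    print(split((1, 2, 4), 0.5))    # ((1, 0, 2.0), (0, 2, 2.0))
    print(split((1, 2, 4), 0.25))   # ((1, 0, 1.0), (0, 2, 3.0)) -- a different decomposition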

Example 3: If W1 = {(a, b, c ) : a = b = c}, W2 = {(a, b, c ) : a = 0} are two subspaces of


R3 , show that R3 = W1 ⊕ W2 .

Solution:We claim that R3 = W1 + W2 .


For if α = (a, b, c ) ∈ R3 , then α = (a, a, a) + (0, b − a, c − a) where (a, a, a) ∈ W1 and
(0, b − a, c − a) ∈ W2 .
Also we have W1 ∩ W2 = {0}.
For α = (a, b, c ) ∈ W1 ∩ W2 ⇒ a = b = c and a = 0
⇒ a = 0, b = 0, c = 0 i. e., α = 0.
Thus R3 = W1 + W2 and W1 ∩ W2 = {0}.
Hence R3 = W1 ⊕ W2 .
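The decomposition used in the solution can be written as a small routine (a minimal Python sketch, following the splitting α = (a, a, a) + (0, b − a, c − a) of the solution):

    def decompose(v):
        # Split v = (a, b, c) into a vector of W1 = {(t, t, t)} and a vector of W2 = {(0, s, u)}.
        a, b, c = v
        return (a, a, a), (0, b - a, c - a)

    print(decompose((2, 5, -1)))    # ((2, 2, 2), (0, 3, -3)); the two parts add back to (2, 5, -1)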

Example 4: Construct three subspaces W1 , W2 , W3 of a vector space V so that


V = W1 ⊕ W2 = W1 ⊕ W3 but W2 ≠ W3 .
Solution: Take the vector space V = R2 .
Obviously W1 = {(a, 0) : a ∈ R},
W2 = {(0, a) : a ∈ R}, and W3 = {(a, a) : a ∈ R},
are three subspaces of R2 .
We have V = W1 + W2 and W1 ∩ W2 = {(0, 0)}. ∴ V = W1 ⊕ W2 .
Also it can be easily shown that
V = W1 + W3 and W1 ∩ W3 = {(0, 0)}. ∴ V = W1 ⊕ W3 .
Thus V = W1 ⊕ W2 = W1 ⊕ W3 but W2 ≠ W3 .

Comprehensive Exercise 1

1. Let V be the vector space of square matrices of order n over the field R. Let W1
and W2 be the subspaces of symmetric and antisymmetric matrices
respectively. Show that V = W1 ⊕ W2 .
2. Let V be the vector space of all functions from the real field R into R. Let U be
the subspace of even functions and W the subspace of odd functions. Show
that V = U ⊕ W.
3. Let W1 , W2 and W3 be the following subspaces of R 3 :
W1 = {(a, b, c ) : a + b + c = 0}, W2 = {(a, b, c ) : a = c },
W3 = {(0, 0, c ) : c ∈ R }.
Show that (i) R 3 = W1 + W2 ; (ii) R 3 = W1 + W3 ; (iii) R 3 = W2 + W3 .
When is the sum direct ?

4. Let W be a subspace of a vector space V over a field F. Show that α ∈ β + W iff


α − β ∈ W.
5. Let W1 and W2 be two subspaces of a finite dimensional vector space V. If
dim V = dim W1 + dim W2 and W1 ∩ W2 = {0}, prove that V = W1 ⊕ W2 .

Answers 1

3. The sum is direct in (ii) and (iii)

3.8 Linear Transformation or Vector Space Homomorphism


Definition: Let U ( F ) and V ( F ) be two vector spaces over the same field F. A linear
transformation from U into V is a function T from U into V such that
T (aα + bβ) = aT (α) + bT ( β) …(1)
for all α, β in U and for all a, b in F.
The condition (1) is also called linearity property. It can be easily seen that the
condition (1) is equivalent to the condition
T (aα + β) = aT (α) + T ( β)
for all α, β in U and all scalars a in F.
Linear operator: Definition: Let V ( F ) be a vector space. A linear operator on V is
a function T from V into V such that
T (aα + bβ) = aT (α) + bT ( β)
for all α, β in V and for all a, b in F.
Thus T is a linear operator on V if T is a linear transformation from V into V itself.
Illustration 1: The function
T : V3 (R) → V2 (R)
defined by T (a, b, c ) = (a, b) V a, b, c ∈ R is a linear transformation from V3 (R) into
V2 (R).
Let α = (a1 , b1 , c1 ), β = (a2 , b2 , c 2 ) ∈ V3 (R).
If a, b ∈ R , then
T (aα + bβ) = T [a (a1 , b1 , c1 ) + b (a2 , b2 , c 2 )]
= T (aa1 + ba2 , ab1 + bb2 , ac1 + bc 2 )
= (aa1 + ba2 , ab1 + bb2 ) [by def. of T ]
= (aa1 , ab1 ) + (ba2 , bb2 )
= a (a1 , b1 ) + b (a2 , b2 )

= aT (a1 , b1 , c1 ) + bT (a2 , b2 , c 2 )
= aT (α) + bT ( β).
∴ T is a linear transformation from V3 (R) into V2 (R).
Illustration 2: Let V ( F ) be the vector space of all m × n matrices over the field F. Let P
be a fixed m × m matrix over F, and let Q be a fixed n × n matrix over F.The correspondence T
from V into V defined by
T ( A) = PAQ V A ∈ V
is a linear operator on V.
If A is an m × n matrix over the field F, then PAQ is also an m × n matrix over the
field F.Therefore T is a function from V into V.Now let A, B ∈ V and a, b ∈ F.Then
T (aA + bB) = P (aA + bB ) Q [by def. of T ]
= (aPA + bPB) Q = a PAQ + b PBQ = aT ( A) + bT ( B ).
∴ T is a linear transformation from V into V. Thus T is a linear operator on V.
Illustration 3: Let V ( F ) be the vector space of all polynomials over the field F. Let
f ( x) = a0 + a1 x + a2 x 2 + … + a n x n
∈ V be a polynomial of degree n in the
indeterminate x. Let us define
Df ( x) = a1 + 2a2 x + … + n a n x n − 1 if n ≥ 1
and Df ( x) = 0 if f ( x) is a constant polynomial.
Then the correspondence D from V into V is a linear operator on V.
If f ( x) is a polynomial over the field F, then Df ( x) as defined above is also a
polynomial over the field F. Thus if f ( x) ∈ V , then Df ( x) ∈ V . Therefore D is a
function from V into V.
Also if f ( x), g ( x) ∈ V and a, b ∈ F, then
D [a f ( x) + bg ( x)] = a Df ( x) + b Dg ( x).
∴ D is a linear transformation from V into V.
The operator D on V is called the differentiation operator. It should be noted that
for polynomials the definition of differentiation can be given purely algebraically,
and does not require the usual theory of limiting processes.
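The linearity of D can be checked on particular polynomials (a minimal sketch assuming the SymPy library; the polynomials and scalars are an illustrative choice):

    from sympy import symbols, diff, expand

    x = symbols('x')
    f = 3 + 2*x + 5*x**2
    g = 1 - x**3
    a, b = 4, -2
    lhs = diff(a*f + b*g, x)                 # D(a f + b g)
    rhs = a*diff(f, x) + b*diff(g, x)        # a D(f) + b D(g)
    print(expand(lhs - rhs) == 0)            # True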
Illustration 4: Let V (R) be the vector space of all continuous functions from R into R. If
f ∈ V and we define T by
(Tf ) ( x) = ∫₀ˣ f (t) dt V x ∈ R ,

then T is a linear transformation from V into V.


If f is real valued continuous function, then Tf , as defined above, is also a real
valued continuous function. Thus
f ∈ V ⇒ Tf ∈ V .
Also the operation of integration satisfies the linearity property. Therefore T is a
linear transformation from V into V.

3.9 Some Particular Transformations


1. Zero Transformation: Let U ( F ) and V ( F ) be two vector spaces. The function T,
from U into V defined by
T (α) = 0 (zero vector of V ) V α ∈ U
is a linear transformation from U into V.
Let α, β ∈ U and a, b ∈ F. Then aα + bβ ∈ U.
We have T (aα + bβ) = 0 [by def. of T ]
= a0 + b 0 = aT (α) + bT ( β).
∴ T is a linear transformation from U into V. It is called zero transformation
and we shall in future denote it by ^0.
2. Identity operator: Let V ( F ) be a vector space. The function I from V into V defined
by I (α) = α V α ∈ V is a linear transformation from V into V.
If α, β ∈ V and a, b ∈ F, then aα + bβ ∈ V and we have
I (aα + bβ) = aα + bβ [by def. of I ]
= aI (α) + bI ( β).
∴ I is a linear transformation from V into V. The transformation I is called
identity operator on V and we shall always denote it by I.
3. Negative of a linear transformation: Let U ( F ) and V ( F ) be two vector spaces.
Let T be a linear transformation from U into V. The correspondence − T defined by
(− T ) (α) = − [T (α)] V α ∈ U
is a linear transformation from U into V.
Since T (α) ∈ V ⇒ − T (α) ∈ V , therefore − T is a function from U into V.
Let α, β ∈ U and a, b ∈ F. Then aα + bβ ∈ U and we have
(− T ) (aα + bβ) = − [T (aα + bβ)] [by def. of − T ]
= − [aT (α) + bT ( β)] [ ∵ T is a linear transformation]
= a [− T (α)] + b [− T ( β)] = a [(− T ) α] + b [(− T ) β].
∴ − T is a linear transformation from U into V. The linear transformation − T is
called the negative of the linear transformation T.

3.10 Properties of Linear Transformations.


Theorem: Let T be a linear transformation from a vector space U ( F ) into a vector space
V ( F ). Then
(i) T (0) = 0 where 0 on the left hand side is zero vector of U and 0 on the right hand side
is zero vector of V.
(ii) T (− α) = − T (α) V α ∈ U.

(iii) T (α − β) = T (α) − T ( β) V α, β ∈ U.
(iv) T (a1α1 + a2 α 2 + … + a n α n ) = a1T (α1 ) + a2 T (α 2 ) + … + a nT (α n )
where α1 , α 2 ,… α n ∈ U and a1 , a2 , … , a n ∈ F.
Proof: (i) Let α ∈ U. Then T (α) ∈ V . We have
T (α) + 0 = T (α) [∵ 0 is zero vector of V and T (α) ∈ V ]
= T (α + 0) [∵ 0 is zero vector of U ]
= T (α) + T (0) [ ∵ T is a linear transformation]
Now in the vector space V, we have
T (α) + 0 = T (α) + T (0)
⇒ 0 = T (0), by left cancellation law for addition in V.
Note: When we write T (0) = 0, there should be no confusion about the vector 0.
Here T is a function from U into V. Therefore if 0 ∈ U, then its image under
T i. e., T (0) ∈ V . Thus in T (0) = 0, the zero on the right hand side is zero vector of
V.
(ii) We have T [α + (− α)] = T (α) + T (− α)
[ ∵ T is a linear transformation]
But T [α + (− α)] = T (0) = 0 ∈ V . [by (i)]
Thus in V, we have
T (α) + T (− α) = 0
⇒ T (− α) = − T (α).
(iii) T (α − β) = T [α + (− β)]
= T (α) + T (− β) [ ∵ T is linear]
= T (α) + [− T ( β)] [by (ii)]
= T (α) − T ( β).
(iv) We shall prove the result by induction on n, the number of vectors in the
linear combination a1α1 + a2 α 2 + … + a n α n . Suppose
T (a1α1 + a2 α 2 + … + a n − 1 α n − 1 ) = a1 T (α1 ) + a2 T (α 2 )
+ … + a n − 1 T (α n − 1 ). …(1)
Then T (a1α1 + a2 α 2 + … + a n α n )
= T [(a1α1 + a2 α 2 + … + a n − 1 α n − 1 ) + a n α n ]
= T (a1α1 + a2 α 2 + … + a n − 1 α n − 1 ) + a n T (α n )
= [a1 T (α1 ) + a2 T (α 2 ) + … + a n −1 T (α n −1 )] + a n T (α n ) [by (1)]
= a1 T (α1 ) + a2 T (α 2 ) + … + a n − 1 T (α n − 1 ) + a n T (α n ).
Now the proof is complete by induction since the result is true when the number of
vectors in the linear combination is 1.
Note: On account of this property sometimes we say that a linear transformation
preserves linear combinations.

3.11 Range and Null Space of a Linear Transformation


Range of a linear transformation:
Definition: Let U ( F ) and V ( F ) be two vector spaces and let T be a linear
transformation from U into V. Then the range of T written as R (T ) is the set of all vectors β in
V such that β = T (α) for some α in U.
Thus the range of T is the image set of U under T i. e.,
Range ( T ) = {T (α) ∈ V : α ∈ U }.
Theorem 1: If U ( F ) and V ( F ) are two vector spaces and T is a linear transformation
from U into V, then range of T is a subspace of V.
Proof: Obviously R ( T ) is a non-empty subset of V.
Let β1 , β 2 ∈ R ( T ). Then there exist vectors α1 , α 2 in U such that
T (α1 ) = β1 ,T (α 2 ) = β 2 .
Let a, b be any elements of the field F. We have
aβ1 + bβ 2 = a T (α1 ) + b T (α 2 )
= T (aα1 + bα 2 ) [∵ T is a linear transformation]
Now U is a vector space. Therefore α1 , α 2 ∈ U and
a, b ∈ F ⇒ aα1 + bα 2 ∈ U.
Consequently T (aα1 + bα 2 ) = aβ1 + bβ 2 ∈ R (T ).
Thus a, b ∈ F and β1 , β 2 ∈ R (T ) ⇒ aβ1 + bβ 2 ∈ R (T ).
Therefore R (T ) is a subspace of V.
Null space of linear transformation:
Definition: Let U ( F ) and V ( F ) be two vector spaces and let T be a linear
transformation from U into V. Then the null space of T written as N ( T ) is the set of all
vectors α in U such that T (α) = 0 (zero vector of V). Thus
N ( T ) = {α ∈ U : T (α) = 0 ∈ V }.
If we regard the linear transformation T from U into V as a vector space
homomorphism of U into V, then the null space of T is also called the kernel of T.
Theorem 2: If U ( F ) and V ( F ) are two vector spaces and T is a linear transformation
from U into V, then the kernel of T or the null space of T is a subspace of U.
Proof: Let N ( T ) = {α ∈ U : T (α) = 0 ∈ V }.
Since T (0) = 0 ∈ V , therefore at least 0 ∈ N ( T ).
Thus N ( T ) is a non-empty subset of U.
Let α1 , α 2 ∈ N ( T ). Then T (α1 ) = 0 and T (α 2 ) = 0.
Let a, b ∈ F. Then aα1 + bα 2 ∈ U and
T (aα1 + bα 2 ) = a T (α1 ) + b T (α 2 ) [∵ T is a linear transformation]

= a0 + b 0 = 0 + 0 = 0 ∈ V .
∴ aα1 + bα 2 ∈ N (T ).
Thus a, b ∈ F and α1 , α 2 ∈ N ( T ) ⇒ aα1 + bα 2 ∈ N ( T ). Therefore N ( T ) is a
subspace of U.

3.12 Rank and Nullity of a Linear Transformation


Theorem 1: Let T be a linear transformation from a vector space U ( F ) into a vector
space V ( F ). If U is finite dimensional, then the range of T is a finite dimensional subspace
of V.
Proof: Since U is finite dimensional, therefore there exists a finite subset of U,
say {α1 , α 2 , … , α n } which spans U.
Let β ∈ range of T. Then there exists α in U such that T (α) = β.
Now α ∈ U ⇒ ∃ a1 , a2 , … , a n ∈ F such that
α = a1α1 + a2 α 2 + … + a n α n
⇒ T (α) = T (a1α1 + a2 α 2 + … + a n α n )
⇒ β = a1 T (α1 ) + a2 T (α 2 ) + … + a n T (α n ). …(1)
Now the vectors T (α1 ), T (α 2 ), … , T (α n ) are in the range of T. If β is any vector in
the range of T, then from (1), we see that β can be expressed as a linear
combination of T (α1 ), T (α 2 ), … , T (α n ).
Therefore range of T is spanned by the vectors
T (α1 ), T (α 2 ), … , T (α n ).
Hence range of T is finite dimensional.
Now we are in a position to define rank and nullity of a linear transformation.
Rank and nullity of a linear transformation:
Definition: Let T be a linear transformation from a vector space U ( F ) into a vector space
V ( F ) with U as finite dimensional. The rank of T denoted by ρ ( T ) is the dimension of the
range of T i. e.,
ρ ( T ) = dim R ( T ).
The nullity of T denoted by ν ( T ) is the dimension of the null space of T i. e.,
ν ( T ) = dim N ( T ).
Theorem 2: Let U and V be vector spaces over the field F and let T be a linear
transformation from U into V. Suppose that U is finite dimensional. Then
rank ( T ) + nullity ( T ) = dim U.
Proof: Let N be the null space of T. Then N is a subspace of U. Since U is finite
dimensional, therefore N is finite dimensional. Let dim N = nullity ( T ) = k and let
{α1 , α 2 , … , α k } be a basis for N.

Since {α1 , α 2 , … , α k } is a linearly independent subset of U, therefore we can


extend it to form a basis of U. Let dim U = n and let
{α1 , α 2 , … , α k , α k + 1 , … , α n }
be a basis for U.
The vectors T (α1 ), T (α 2 ), … , T (α k ), T (α k + 1 ), … , T (α n ) are in the range of T.
We claim that {T (α k + 1 ), T (α k + 2 ), … , T (α n )} is a basis for the range of T.
(i) First we shall prove that the vectors
T (α k + 1 ), T (α k + 2 ), … , T (α n ) span the range of T.
Let β ∈ range of T. Then there exists α ∈ U such that
T (α) = β.
Now α ∈ U ⇒ ∃ a1 , a2 , … , a n ∈ F such that
α = a1α1 + a2 α 2 + … + a n α n
⇒ T (α) = T (a1α1 + a2 α 2 + … + a n α n )
⇒ β = a1T (α1 ) + … + a k T (α k ) + a k + 1 T (α k + 1 ) + … + a n T (α n )
⇒ β = a k + 1 T (α k + 1 ) + a k +2 T (α k + 2 ) + … + a n T (α n )
[ ∵ α1 , α 2 , … , α k ∈ N ⇒ T (α1 ) = 0, … , T (α k ) = 0]
∴ the vectors T (α k + 1 ), … , T (α n ) span the range of T.
(ii) Now we shall show that the vectors
T (α k + 1 ), … , T (α n )
are linearly independent.
Let c k + 1 , … , c n ∈ F be such that
c k + 1 T (α k + 1 ) + … + c n T (α n ) = 0
⇒ T (c k + 1 α k + 1 + … + c n α n ) = 0
⇒ c k + 1 α k + 1 + … + c n α n ∈ null space of T i. e., N
⇒ c k + 1 α k + 1 + … + c n α n = b1α1 + b2 α 2 + … + b k α k
for some b1 , b2 , … , b k ∈ F
[∵ each vector in N can be expressed as a linear combination
of the vectors α1 , … , α k forming a basis of N ]
⇒ b1α1 + … + b k α k − c k + 1 α k + 1 − … − c n α n = 0
⇒ b1 = … = b k = c k +1 = … = c n = 0
[ ∵ α1 , α 2 , … , α k , α k + 1 , … , α n are linearly
independent, being a basis for U ]
⇒ the vectors T (α k + 1 ), … , T (α n ) are linearly independent.
∴ the vectors T (α k + 1 ), … , T (α n ) form a basis of range of T.
∴ rank T = dim of range of T = n − k.
∴ rank (T ) + nullity (T ) = (n − k) + k = n = dim U.

Note:If in place of the vector space V, we take the vector space U i. e., if T is a linear
transformation on an n-dimensional vector space U, even then as a special case of
the above theorem,
ρ ( T ) + ν ( T ) = n.
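When T is given by a matrix A acting on column vectors, the rank of T is the rank of A and the nullity of T is the dimension of the solution space of AX = 0; the following minimal sketch (assuming the SymPy library, with an illustrative matrix) verifies the count for a map from R4 into R3 :

    from sympy import Matrix

    A = Matrix([[1, -1, 2, 0],
                [2,  1, -1, 1],
                [3,  0,  1, 1]])       # third row = first row + second row
    rank_T = A.rank()                  # dim R(T) = 2
    nullity_T = len(A.nullspace())     # dim N(T) = 2
    print(rank_T + nullity_T, A.shape[1])   # 4 4, i.e. rank + nullity = dim of the domain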

Example 5: Show that the mapping T : V3 (R) → V2 (R) defined as


T (a1 , a2 , a3 ) = (3a1 − 2a2 + a3 , a1 − 3a2 − 2a3 )
is a linear transformation from V3 (R) into V2 (R).
Solution: Let α = (a1 , a2 , a3 ), β = (b1 , b2 , b3 ) ∈ V3 (R).
Then T (α) = T (a1 , a2 , a3 ) = (3a1 − 2a2 + a3 , a1 − 3a2 − 2a3 )
and T ( β) = (3b1 − 2b2 + b3 , b1 − 3b2 − 2b3 ).
Also let a, b ∈ R. Then aα + bβ ∈ V3 (R). We have
T (aα + bβ) = T [a (a1 , a2 , a3 ) + b (b1 , b2 , b3 )]
= T (aa1 + bb1 , aa2 + bb2 , aa3 + bb3 )
= (3 (aa1 + bb1 ) − 2 (aa2 + bb2 ) + aa3 + bb3 ,
aa1 + bb1 − 3 (aa2 + bb2 ) − 2 (aa3 + bb3 ))
= (a (3a1 − 2a2 + a3 ) + b (3b1 − 2b2 + b3 ),
a (a1 − 3a2 − 2a3 ) + b (b1 − 3b2 − 2b3 ))
= a (3a1 − 2a2 + a3 , a1 − 3a2 − 2a3 )
+ b (3b1 − 2b2 + b3 , b1 − 3b2 − 2b3 )
= aT (α) + bT (β).
Hence T is a linear transformation from V3 (R) into V2 (R).
Example 6: Show that the mapping T : V2 (R) → V3 (R) defined as
T (a, b) = (a + b, a − b, b)
is a linear transformation from V2 (R) into V3 (R). Find the range, rank, null-space and
nullity of T.
Solution: Let α = (a1 , b1 ), β = (a2 , b2 ) ∈ V2 (R).
Then T (α) = T (a1 , b1 ) = (a1 + b1 , a1 − b1 , b1 )
and T ( β) = (a2 + b2 , a2 − b2 , b2 ).
Also let a, b ∈ R.
Then aα + bβ ∈ V2 (R) and T (aα + bβ) = T [a (a1 , b1 ) + b (a2 , b2 )]
= T (aa1 + ba2 , ab1 + bb2 )
= (aa1 + ba2 + ab1 + bb2 , aa1 + ba2 − ab1 − bb2 , ab1 + bb2 )
= (a [a1 + b1 ] + b [a2 + b2 ], a [a1 − b1 ] + b [a2 − b2 ], ab1 + bb2 )

= a (a1 + b1 , a1 − b1 , b1 ) + b (a2 + b2 , a2 − b2 , b2 )
= aT (α) + bT ( β).
∴ T is a linear transformation from V2 (R) into V3 (R).
Now {(1, 0), (0, 1)} is a basis for V2 (R).
We have T (1, 0) = (1 + 0, 1 − 0, 0) = (1, 1, 0)
and T (0, 1) = (0 + 1, 0 − 1, 1) = (1, − 1, 1).
The vectors T (1, 0), T (0, 1) span the range of T.
Thus the range of T is the subspace of V3 (R) spanned by the vectors
(1, 1, 0), (1, − 1, 1).
Now the vectors (1, 1, 0), (1, − 1, 1) ∈ V3 (R) are linearly independent because if
x, y ∈ R, then
x (1, 1, 0) + y (1, − 1, 1) = (0, 0, 0)
⇒ ( x + y, x − y, y) = (0, 0, 0)
⇒ x + y = 0, x − y = 0, y = 0 ⇒ x = 0, y = 0.
∴ the vectors (1, 1, 0), (1, − 1, 1) form a basis for the range of T.
Hence rank T = dim of range of T = 2.
Nullity of T = dim of V2 (R) − rank T = 2 − 2 = 0.
∴ null space of T must be the zero subspace of V2 (R).
Alternatively, (a, b) ∈ null space of T
⇒ T (a, b) = (0, 0, 0)
⇒ (a + b, a − b, b) = (0, 0, 0)
⇒ a + b = 0, a − b = 0, b = 0
⇒ a = 0, b = 0.
∴ (0, 0) is the only element of V2 (R) which belongs to the null space of T.
∴ null space of T is the zero subspace of V2 (R).
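The rank found above can also be confirmed from the matrix of T relative to the standard bases (a minimal sketch assuming NumPy; the columns below are T (1, 0) and T (0, 1)):

    import numpy as np

    A = np.array([[1,  1],
                  [1, -1],
                  [0,  1]])                 # columns: T(1, 0) = (1, 1, 0) and T(0, 1) = (1, -1, 1)
    print(np.linalg.matrix_rank(A))         # 2, so rank T = 2 and nullity T = 2 - 2 = 0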
Example 7: Let V be the vector space of all n × n matrices over the field F and let B be a
fixed n × n matrix. If
T ( A) = AB − BA V A ∈ V ,
verify that T is a linear transformation from V into V.
Solution: If A ∈ V , then T ( A) = AB − BA ∈ V because AB − BA is also an n × n
matrix over the field F. Thus T is a function from V into V.
Let A1 , A2 ∈ V and a, b ∈ F.
Then aA1 + bA2 ∈ V
and T (aA1 + bA2 ) = (aA1 + bA2 ) B − B (aA1 + bA2 )
= aA1 B + bA2 B − aBA1 − bBA2
= a ( A1 B − BA1 ) + b ( A2 B − BA2 )
= aT ( A1 ) + bT ( A2 ).
∴ T is a linear transformation from V into V.

Example 8: Let V be an n-dimensional vector space over the field F and let T be a linear
transformation from V into V such that the range and null space of T are identical. Prove
that n is even. Give an example of such a linear transformation.
Solution: Let N be the null space of T. Then N is also the range of T.
Now ρ ( T ) + ν ( T ) = dim V
i. e., dim of range of T + dim of null space of T = dim V = n
i. e., 2 dim N = n [∵ range of T = null space of T = N ]
i. e., n is even.
Example of such a transformation:
Let T : V2 (R) → V2 (R) be defined by
T (a, b) = (b, 0) V a, b ∈ R.
Let α = (a1 , b1 ), β = (a2 , b2 ) ∈ V2 (R) and let x, y ∈ R.
Then T ( xα + yβ) = T ( x (a1 , b1 ) + y (a2 , b2 )) = T ( xa1 + ya2 , xb1 + yb2 )
= ( xb1 + yb2 , 0) = ( xb1 , 0) + ( yb2 , 0) = x (b1 , 0) + y (b2 , 0)
= xT (a1 , b1 ) + yT (a2 , b2 ) = xT (α) + yT ( β).
∴ T is a linear transformation from V2 (R) into V2 (R).
Now {(1, 0), (0, 1)} is a basis of V2 (R).
We have T (1, 0) = (0, 0) and T (0, 1) = (1, 0).
Thus the range of T is the subspace of V2 (R) spanned by the vectors (0, 0) and (1, 0).
The vector (0, 0) can be omitted from this spanning set because it is zero vector.
Therefore the range of T is the subspace of V2 (R) spanned by the vector (1, 0). Thus
the range of T = { a (1, 0) : a ∈ R} = {(a, 0) : a ∈ R}.
Now let (a, b) ∈ N (the null space of T ).
Then (a, b) ∈ N ⇒ T (a, b) = (0, 0) ⇒ (b, 0) = (0, 0) ⇒ b = 0.
∴ null space of T = {(a, 0) : a ∈ R}.
Thus range of T = null space of T.
Also we observe that dim V2 (R) = 2 which is even.
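In matrix terms (a minimal sketch assuming NumPy), this operator is represented with respect to the standard basis by a matrix A satisfying A² = 0, which reflects the fact that T carries its whole range into its null space:

    import numpy as np

    A = np.array([[0, 1],
                  [0, 0]])                  # T(a, b) = (b, 0) written as A @ (a, b)
    print(np.linalg.matrix_rank(A))         # 1 = dim of the range = dim of the null space
    print(A @ A)                            # the zero matrix, since T(T(α)) = 0 for every α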
Example 9: Let V be a vector space and T a linear transformation from V into V. Prove
that the following two statements about T are equivalent :
(i) The intersection of the range of T and the null space of T is the zero subspace of
V i. e., R ( T ) ∩ N ( T ) = { 0}.
(ii) T [T (α)] = 0 ⇒ T (α) = 0.
Solution: First we shall show that (i) ⇒ (ii).
We have T [T (α)] = 0 ⇒ T (α) ∈ N (T )
⇒ T (α) ∈ R ( T ) ∩ N ( T ) [ ∵ α ∈ V ⇒ T (α) ∈ R (T )]
⇒ T (α) = 0 because R (T ) ∩ N (T ) = { 0}.
Now we shall show that (ii) ⇒ (i).

Let α ≠ 0 and α ∈ R ( T ) ∩ N ( T ).
Then α ∈ R ( T ) and α ∈ N ( T ).
Since α ∈ N ( T ), therefore T (α) = 0. …(1)
Also α ∈ R ( T ) ⇒ ∃ β ∈ V such that T ( β) = α.
Now T ( β) = α
⇒ T [T ( β)] = T (α) = 0 [From (1)]
Thus ∃ β ∈ V such that T [T ( β)] = 0 but T ( β) = α ≠ 0.
This contradicts the given hypothesis (ii).
Therefore there exists no α ∈ R ( T ) ∩ N ( T ) such that α ≠ 0.
Hence R ( T ) ∩ N ( T ) = {0}.
Example 10: Consider the basis S = {α1 , α 2 , α 3 } of R3 where
α1 = (1, 1, 1), α 2 = (1, 1, 0), α 3 = (1, 0, 0).
Express (2 , − 3, 5) in terms of the basis α1 , α 2 , α 3 .
Let T : R 3 → R 2 be defined as
T (α1 ) = (1, 0), T (α 2 ) = (2 , − 1), T (α 3 ) = (4, 3).
Find T (2 , − 3, 5).
Solution: Let (2 , − 3, 5) = aα1 + bα 2 + cα 3
= a (1, 1, 1) + b (1, 1, 0) + c (1, 0, 0).
Then a + b + c = 2 , a + b = − 3, a = 5.
Solving these equations, we get
a = 5, b = − 8, c = 5.
∴ (2 , − 3, 5) = 5α1 − 8α 2 + 5α 3 .
Now T (2 , − 3, 5) = T (5α1 − 8α 2 + 5α 3 )
= 5T (α1 ) − 8T (α 2 ) + 5T (α 3 )
[ ∵ T is a linear transformation]
= 5 (1, 0) − 8 (2 , − 1) + 5 (4, 3)
= (5, 0) − (16, − 8) + (20, 15)
= (9, 23).
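The same computation can be carried out numerically (a minimal sketch assuming NumPy): solve for the coordinates of (2, − 3, 5) in the basis S and then combine the prescribed images with those coordinates.

    import numpy as np

    B = np.array([[1, 1, 1],
                  [1, 1, 0],
                  [1, 0, 0]], dtype=float)              # columns: α1, α2, α3
    coords = np.linalg.solve(B, np.array([2, -3, 5], dtype=float))
    print(coords)                                       # [ 5. -8.  5.]
    images = np.array([[1, 0], [2, -1], [4, 3]], dtype=float)   # rows: T(α1), T(α2), T(α3)
    print(coords @ images)                              # [ 9. 23.]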

Comprehensive Exercise 2

1. Show that the mapping T : V3 (R) → V2 (R) defined as


T (a1 , a2 , a3 ) = (a1 − a2 , a1 − a3 )
is a linear transformation.
2. Show that the mapping T : R 3 → R 2 defined as T ( x, y, z ) = (z , x + y) is
linear.

3. Show that the following functions are linear :


(i) T : R 2 → R 2 defined by T (a, b) = (b, a)
(ii) T : R 2 → R 2 defined by T (a, b) = (a + b, a)
(iii) T : R 3 → R defined by T (a, b, c ) = 2a − 3b + 4c.
4. Show that the following mappings T are not linear :
(i) T : R 2 → R defined by T ( x, y) = xy ;
(ii) T : R 2 → R 2 defined by T ( x, y) = (1 + x, y);
(iii) T : R 3 → R 2 defined by T ( x, y, z ) = (| x |, 0) ;
(iv) T : R 2 → R defined by T ( x, y) = | x − y |.
5. Show that the mapping T : R 2 → R 3 defined as
T (a, b) = (a − b, b − a, − a)
is a linear transformation from R 2 into R 3 . Find the range, rank, null space
and nullity of T.
6. Let F be the field of complex numbers and let T be the function from F 3 into
F 3 defined by
T ( x1 , x2 , x3 ) = ( x1 − x2 + 2 x3 , 2 x1 + x2 − x3 , − x1 − 2 x2 ).
Verify that T is a linear transformation. Describe the null space of T.
7. Let F be the field of complex numbers and let T be the function from F 3 into
F 3 defined by
T (a, b, c ) = (a − b + 2c , 2a + b, − a − 2b + 2c ).
Show that T is a linear transformation. Find also the rank and the nullity of
T.
8. Let T : R 3 → R 3 be the linear transformation defined by :
T ( x, y, z ) = ( x + 2 y − z , y + z , x + y − 2z ).
Find a basis and the dimension of (i) the range of T (ii) the null space of T.
9. Let T : V4 (R) → V3 (R) be a linear transformation defined by
T (a, b, c , d) = (a − b + c + d, a + 2c − d, a + b + 3c − 3d).
Then obtain the basis and dimension of the range space of T and null space of
T.
10. Let V be the vector space of polynomials in x over R. Let D : V → V be the
differential operator : D ( f ) = df / dx. Find the image (i. e. range) and kernel of D.
11. Let V be the vector space of n × n matrices over the field F and let E be an
arbitrary matrix in V. Let T : V → V be defined as T ( A) = AE + EA, A ∈ V .
Show that T is linear.

 1 − 1
12. Let V be the vector space of 2 × 2 matrices over R and let
        M = [  1  −1 ]
            [ −2   2 ] .
Let T : V → V be the linear function defined by T ( A) = MA for A ∈ V .
Find a basis and the dimension of (i) the kernel of T and (ii) the range of T.
13. Let V be the vector space of 2 × 2 matrices over R and let
        M = [ 1  2 ]
            [ 0  3 ] .
Let T : V → V be the linear transformation defined by T ( A) = AM − MA. Find
a basis and the dimension of the kernel of T.
14. Let V be the space of n × 1 matrices over a field F and let W be the space of
m × 1 matrices over F. Let A be a fixed m × n matrix over F and let T be the
linear transformation from V into W defined by T ( X ) = AX .
Prove that T is the zero transformation if and only if A is the zero matrix.
15. Let U ( F) and V ( F) be two vector spaces and let T1 , T2 be two linear
transformations from U to V. Let x, y be two given elements of F. Then the
mapping T defined as T (α) = x T1 (α) + y T2 (α) V α ∈ U is a linear
transformation from U into V.

Answers 2

5. Null space of T = {0} ; nullity of T = 0, rank T = 2


The set {(1, − 1, − 1), (− 1, 1, 0)} is a basis set for R (T )
6. Null space of T = {0} 7. Rank T = 2 ; nullity of T = 1
8. (i) {(1, 0, 1), (2, 1, 1)} is a basis of R (T ) and dim. R (T ) = 2
(ii) {(3, − 1, 1)} is a basis of N (T ) and dim N (T ) = 1
9. {(1, 1, 1), (0, 1, 2)} is a basis of R (T ) and dim. R (T ) = 2
{(1, 2, 0, 1), (2, 1, − 1, 0)} is a basis of N (T ) and dim. N (T ) = 2
10. The image of D is the whole space V
The kernel of D is the set of constant polynomials

  1 0   0 1
12. (i) The matrices
        [ 1  0 ]        [ 0  1 ]
        [ 1  0 ]   and  [ 0  1 ]
    form a basis of the kernel of T and dim (kernel T ) = 2
    (ii) The matrices
        [  1  0 ]        [ 0   1 ]
        [ −2  0 ]   and  [ 0  −2 ]
    form a basis for R (T ) and dim R (T ) = 2
13. The matrices
        [ 1  −1 ]        [ 1  0 ]
        [ 0   0 ]   and  [ 0  1 ]
    form a basis of the kernel of T and dim (kernel T ) = 2

3.13 Linear Transformations as Vectors


Let L (U, V ) be the set of all linear transformations from a vector space U ( F ) into
a vector space V ( F ). Sometimes we denote this set by Hom (U, V ). Now we want
to impose a vector space structure on the set L (U, V ) over the same field F. For
this purpose we shall have to suitably define addition in L (U, V ) and scalar
multiplication in L (U, V ) over F.
Theorem 1: Let U and V be vector spaces over the field F. Let T1 and T2 be linear
transformations from U into V. The function T1 + T2 defined by
(T1 + T2 ) (α) = T1 (α) + T2 (α) V α ∈ U
is a linear transformation from U into V. If c is any element of F, the function (cT ) defined by
(cT ) (α) = cT (α) V α ∈ U
is a linear transformation from U into V. The set L (U, V ) of all linear transformations from
U into V, together with the addition and scalar multiplication defined above is a vector space
over the field F.
Proof: Suppose T1 and T2 are linear transformations from U into V and we define
T1 + T2 as follows :
(T1 + T2 ) (α) = T1 (α) + T2 (α) V α ∈ U. …(1)
Since T1 (α) + T2 (α) ∈ V , therefore T1 + T2 is a function from U into V.
Let a, b ∈ F and α, β ∈ U. Then
(T1 + T2 ) (aα + bβ) = T1 (aα + bβ) + T2 (aα + bβ) [by (1)]
= [aT1 (α) + bT1 ( β)] + [aT2 (α) + bT2 ( β)]
[ ∵ T1 and T2 are linear transformations]
= a [T1 (α) + T2 (α)] + b [T1 ( β) + T2 ( β)] [ ∵ V is a vector space]
= a (T1 + T2 ) (α) + b (T1 + T2 ) ( β) [by (1)]
∴ T1 + T2 is a linear transformation from U into V. Thus
T1 , T2 ∈ L (U, V ) ⇒ T1 + T2 ∈ L (U, V ).
Therefore L (U, V ) is closed with respect to addition defined in it.
Again let T ∈ L (U, V ) and c ∈ F. Let us define cT as follows :
(cT ) (α) = cT (α) V α ∈ U. …(2)
Since cT (α) ∈ V , therefore cT is a function from U into V.
Let a, b ∈ F and α, β ∈ U. Then
(cT ) (aα + bβ) = cT (aα + bβ) [by (2)]
= c [aT (α) + bT ( β)] [ ∵ T is a linear transformation]
= c [aT (α)] + c [bT ( β)] = (ca) T (α) + (cb) T ( β)
= (ac ) T (α) + (bc ) T ( β) = a [cT (α)] + b [cT ( β)]
= a [(cT ) (α)] + b [(cT ) ( β)].
∴ cT is a linear transformation from U into V. Thus

T ∈ L (U, V ) and c ∈ F ⇒ cT ∈ L (U, V ).


Therefore L (U, V ) is closed with respect to scalar multiplication defined in it.
Associativity of addition in L (U , V ):
Let T1 , T2 , T3 ∈ L (U, V ). If α ∈ U, then
[T1 + (T2 + T3 )] (α) = T1 (α) + (T2 + T3 ) (α)
[by (1) i. e., by def. of addition in L (U, V )]
= T1 (α) + [T2 (α) + T3 (α)] [by (1)]
= [T1 (α) + T2 (α)] + T3 (α) [ ∵ addition in V is associative]
= (T1 + T2 ) (α) + T3 (α) [by (1)]
= [(T1 + T2 ) + T3 ] (α) [by (1)]
∴ T1 + (T2 + T3 ) = (T1 + T2 ) + T3
[by def. of equality of two functions]
Commutativity of addition in L (U , V ): Let T1 , T2 ∈ L (U, V ). If α is any
element of U, then
(T1 + T2 ) (α) = T1 (α) + T2 (α) [by (1)]
= T2 (α) + T1 (α) [ ∵ addition in V is commutative]
= (T2 + T1 ) (α) [by (1)]
∴ T1 + T2 = T2 + T1 [by def. of equality of two functions]
Existence of additive identity in L (U , V ): Let ^0 be the zero function from U into V i. e., ^0 (α) = 0 ∈ V V α ∈ U.
Then ^0 ∈ L (U, V ). If T ∈ L (U, V ) and α ∈ U, we have
(^0 + T ) (α) = ^0 (α) + T (α) [by (1)]
= 0 + T (α) [by def. of ^0]
= T (α) [0 being additive identity in V ]
∴ ^0 + T = T V T ∈ L (U, V ).
∴ ^0 is the additive identity in L (U, V ).
Existence of additive inverse of each element in L (U , V ):
Let T ∈ L (U, V ). Let us define − T as follows :
(− T ) (α) = − T (α) V α ∈ U.
Then − T ∈ L (U, V ). If α ∈ U, we have
(− T + T ) (α) = (− T ) (α) + T (α)
[by def. of addition in L (U, V )]
= − T (α) + T (α) [by def. of − T ]
= 0 ∈V
= ^0 (α) [by def. of ^0]
∴ − T + T = ^0 for every T ∈ L (U, V ).
Thus each element in L (U, V ) possesses additive inverse.
Therefore L (U, V ) is an abelian group with respect to addition defined in it.
Further we make the following observations :
(i) Let c ∈ F and T1 , T2 ∈ L (U, V ). If α is any element in U, we have
[c (T1 + T2 )] (α) = c [(T1 + T2 ) (α)] [by (2) i. e., by def. of scalar
multiplication in L (U, V )]
= c [T1 (α) + T2 (α)] [by (1)]
= cT1 (α) + cT2 (α)
[ ∵ c ∈ F and T1 (α), T2 (α) ∈ V
which is a vector space]
= (cT1 ) (α) + (cT2 ) (α) [by (2)]
= (cT1 + cT2 ) (α) [by (1)]
∴ c (T1 + T2 ) = cT1 + cT2 .
(ii) Let a, b ∈ F and T ∈ L (U, V ). If α ∈ U, we have
[(a + b) T ] (α) = (a + b) T (α) [by (2)]
= aT (α) + bT (α) [ ∵ V is a vector space]
= (aT ) (α) + (bT ) (α) [by (2)]
= (aT + bT ) (α) [by (1)]
∴ (a + b) T = aT + bT .
(iii) Let a, b ∈ F and T ∈ L (U, V ). If α ∈ U, we have
[(ab) T ] (α) = (ab) T (α) [by (2)]
= a [bT (α)] [ ∵ V is a vector space]
= a [(bT ) (α)] [by (2)]
= [a (bT )] (α) [by (2)]
∴ (ab) T = a (bT ).
(iv) Let 1∈ F and T ∈ L (U, V ). If α ∈ U, we have
(1T ) (α) = 1T (α) [by (2)]
= T (α) [ ∵ V is a vector space]
∴ 1T = T .
Hence L (U, V ) is a vector space over the field F.
Note: If in place of the vector space V,we take U,then we observe that the set of all
linear operators on U forms a vector space with respect to addition and scalar
multiplication defined as above.
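When U and V are finite dimensional these operations correspond, with respect to fixed bases, to addition and scalar multiplication of the representing matrices. The following minimal sketch (assuming NumPy, with two illustrative operators on R2) shows the pointwise agreement:

    import numpy as np

    T1 = np.array([[1, 2], [0, 1]])          # matrix of a linear operator on R^2
    T2 = np.array([[0, -1], [3, 0]])         # matrix of another linear operator on R^2
    v = np.array([4, 5])
    print((T1 + T2) @ v, T1 @ v + T2 @ v)    # both give [ 9 17]
    print((3 * T1) @ v, 3 * (T1 @ v))        # both give [42 15]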
Dimension of L (U , V ): Now we shall prove that if U ( F ) and V ( F ) are finite
dimensional, then the vector space of linear transformations from U into V is also
finite dimensional. For this purpose we shall require an important result which
we prove in the following theorem :

Theorem 2: Let U be a finite dimensional vector space over the field F and let
B = {α1 , α 2 , … , α n }be an ordered basis for U. Let V be a vector space over the same field F
and let β1 , … , β n be any vectors in V. Then there exists a unique linear transformation T from
U into V such that
T (α i ) = β i , i = 1, 2 , … , n.
Proof: Existence of T:
Let α ∈ U.
Since B = {α1 , α 2 , … , α n} is a basis for U, therefore there exist unique scalars
x1 , x2 , … , x n such that
α = x1α1 + x2 α 2 + … + x n α n .
For this vector α, let us define
T (α) = x1β1 + x2 β 2 + … + x n β n .
Obviously T (α) as defined above is a unique element of V. Therefore T is a
well-defined rule for associating with each vector α in U a unique vector T (α) in V.
Thus T is a function from U into V.
The unique representation of α i ∈ U as a linear combination of the vectors
belonging to the basis B is
α i = 0α1 + 0α 2 + … + 1α i + 0α i + 1 + … + 0α n .
Therefore according to our definition of T, we have
T (α i ) = 0β1 + 0β 2 + … + 1β i + 0β i + 1 + … + 0β n
i. e., T (α i ) = β i , i = 1, 2 , … , n.
Now to show that T is a linear transformation.
Let a, b ∈ F and α, β ∈ U. Let
α = x1α1 + … + x n α n and β = y1α1 + … + y n α n .
Then T (aα + bβ) = T [a ( x1α1 + … + x n α n ) + b ( y1α1 + … + y n α n )]
= T [(ax1 + by1 ) α1 + … + (ax n + by n ) α n ]
= (ax1 + by1 ) β1 + … + (ax n + by n ) β n [by def. of T ]
= a ( x1 β1 + … + x n β n ) + b ( y1 β1 + … + y n β n )
= aT (α) + bT ( β) [by def. of T ]
∴ T is a linear transformation from U into V. Thus there exists a linear
transformation T from U into V such that
T (α i ) = β i , i = 1, 2 , … , n.
Uniqueness of T: Let T ′ be a linear transformation from U into V such that
T ′ (α i ) = β i , i = 1, 2 , … , n.
For the vector α = x1α1 + … + x n α n ∈ U, we have
T ′ (α) = T ′ ( x1α1 + … + x n α n )
= x1T ′ (α1 ) + … + x n T ′ (α n )
[ ∵ T ′ is a linear transformation]
= x1 β1 + … + x n β n [by def. of T ′]
= T (α). [by def. of T ]
Thus T ′ (α) = T (α) V α ∈ U.
∴ T ′ = T.
This shows the uniqueness of T.
Note: From this theorem we conclude that if T is a linear transformation from a
finite dimensional vector space U ( F ) into a vector space V ( F ), then T is
completely defined if we mention under T the images of the elements of a basis set
of U. If S and T are two linear transformations from U into V such that
S (α i ) = T (α i ) V α i belonging to a basis of U, then
S (α) = T (α) V α ∈ U, i. e., S = T .
Thus two linear transformations from U into V are equal if they agree on a basis of U.
Theorem 3: Let U be an n-dimensional vector space over the field F, and let V be an
m-dimensional vector space over F. Then the vector space L (U, V ) of all linear
transformations from U into V is finite dimensional and is of dimension mn.
Proof: Let B = { α1 , α 2 , … , α n }and B ′ = { β1 , β 2 , … , β m }be ordered bases for
U and V respectively. By theorem 2, there exists a unique linear transformation T11
from U into V such that
T11 (α1 ) = β1 , T11 (α 2 ) = 0, … , T11 (α n ) = 0
where β1 , 0, … , 0 are vectors in V.
In fact, for each pair of integers ( p, q) with 1≤ p ≤ m and 1≤ q ≤ n, there exists a
unique linear transformation Tpq from U into V such that
Tpq (α i ) = 0 if i ≠ q, and Tpq (α i ) = β p if i = q,
i. e., Tpq (α i ) = δ iq β p , …(1)
where δ iq ∈ F is Kronecker delta i. e., δ iq = 1 if i = q and δ iq = 0 if i ≠ q.
Since p can be any of 1, 2 , … , m and q any of 1, 2 , … , n, there are mn such Tpq ’s. Let
B1 denote the set of these mn transformations Tpq ’s. We shall show that B1 is a basis
for L (U, V ).
(i) First we shall show that L (U, V ) is a linear span of B1 .
Let T ∈ L (U, V ). Since T (α1 ) ∈ V and any element in V is a linear combination of
β1 , β 2 , … , β m , therefore
T (α1 ) = a11 β1 + a21 β 2 + … + a m1 β m ,
for some a11 , a21 , … , a m1 ∈ F. In fact for each i, 1≤ i ≤ n,
m
T (α i ) = a1i β1 + a2 i β 2 + … + a mi β m = Σ a pi β p …(2)
p =1
m n
Now consider S = Σ Σ a pq Tpq .
p =1 q =1

Obviously S is a linear combination of elements of B1 which is a subset of L (U, V ).


Since L (U, V ) is a vector space, therefore S ∈ L (U, V ) i. e., S is also a linear


transformation from U into V. We shall show that S = T .
Let us compute S (α i ) where α i is any vector in the basis B of U. We have
 m n  m n
S (α i ) =  Σ Σ a pq Tpq  (α i ) = Σ Σ a pq Tpq (α i )
p = 1 q = 1  p =1 q =1

m n
= Σ Σ a pq δ iq β p [From (1)]
p =1 q =1
m
= Σ a pi β p
p =1 [On summing with respect to q. Remember
that δ iq = 1 when q = i and δ iq = 0 when q ≠ i]
= T (α i ). [From (2)]
Thus S (α i ) = T (α i ) V α i ∈ B. Therefore S and T agree on a basis of U. So we
must have S = T . Thus T is also a linear combination of the elements of B1 .
Therefore L (U, V ) is a linear span of B1 .
(ii) Now we shall show that B1 is linearly independent. For b pq ’s ∈ F, let
m n
Σ Σ b pq Tpq = ^0 i. e., zero vector of L (U, V )
p =1 q =1
m n
⇒ [ Σ Σ b pq Tpq ] (α i ) = ^0 (α i ) V α i ∈ B
p =1 q =1
m n
⇒ Σ Σ b pq Tpq (α i ) = 0 ∈ V [ ∵ ^0 is zero transformation]
p =1 q =1
m n m
⇒ Σ Σ b pq δ iq β p = 0 ⇒ Σ b pi β p = 0
p =1 q =1 p =1

⇒ b1i β1 + b2 i β 2 + … + b mi β m = 0, 1 ≤ i ≤ n
⇒ b1i = 0, b2 i = 0, … , b mi = 0, 1 ≤ i ≤ n
[∵ β1 , β 2 , … , β m are linearly independent]
⇒ b pq = 0 where 1≤ p ≤ m and 1≤ q ≤ n
⇒ B1 is linearly independent. Therefore B1 is a basis of L (U, V ).
∴ dim L (U, V ) = number of elements in B1 = mn.
Corollary: The vector space L (U, U ) of all linear operators on an n-dimensional vector
space U is of dimension n2 .
Note: Suppose U ( F ) is an n-dimensional vector space and V ( F ) is an
m-dimensional vector space. If U ≠ {0} and V ≠ {0}, then n ≥ 1 and m ≥ 1. Therefore
L (U, V ) does not just consist of the element ^0, because dimension of L (U, V ) is
mn ≥ 1.
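Computational illustration (a Python sketch, not part of the text; the names m, n and matrix_unit are ours): taking U = R^n and V = R^m with their standard bases, every linear transformation is modelled by an m × n matrix, and the transformations Tpq of the theorem correspond to the matrix units having 1 in position ( p, q) and 0 elsewhere. The sketch checks that these mn matrices span the space of all such matrices and are linearly independent.

    import numpy as np

    m, n = 2, 3                                   # dim V = 2, dim U = 3, so dim L(U, V) should be mn = 6

    def matrix_unit(p, q):
        # matrix of Tpq : it sends the q-th basis vector of U to the p-th basis vector of V
        E = np.zeros((m, n))
        E[p, q] = 1.0
        return E

    units = [matrix_unit(p, q) for p in range(m) for q in range(n)]

    # any T in L(U, V), given by a matrix A = [a_pq], equals the combination sum over (p, q) of a_pq Tpq
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
    S = sum(A[p, q] * matrix_unit(p, q) for p in range(m) for q in range(n))
    print(np.allclose(S, A))                      # True, so the Tpq span L(U, V)

    # the mn matrix units are linearly independent: flattened, they form a matrix of rank mn
    flat = np.array([E.flatten() for E in units])
    print(np.linalg.matrix_rank(flat))            # 6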
3.14 Product of Linear Transformations


Theorem 1: Let U, V and W be vector spaces over the field F. Let T be a linear
transformation from U into V and S a linear transformation from V into W. Then the
composite function ST (called product of linear transformations) defined by
(ST ) (α) = S [T (α)] V α ∈ U
is a linear transformation from U into W.
Proof:T is a function from U into V and S is a function from V into W.
So α ∈ U ⇒ T (α) ∈ V . Further
T (α) ∈ V ⇒ S [T (α)] ∈ W. Thus (ST ) (α) ∈ W.
Therefore ST is a function from U into W. Now to show that ST is a linear
transformation from U into W.
Let a , b ∈ F and α , β ∈ U. Then
(ST ) (aα + bβ) = S [T (aα + bβ)] [by def. of product of two functions]
= S [aT (α) + bT ( β)] [ ∵ T is a linear transformation]
= aS [T (α)] + bS [T ( β)][ ∵ S is a linear transformation]
= a (ST ) (α) + b (ST ) ( β).
Hence ST is a linear transformation from U into W.
Note: If T and S are linear operators on a vector space V ( F ), then both the
products ST as well as TS exist and each is a linear operator on V. However, in
general TS ≠ ST as is obvious from the following examples.

Example 11: Let T1 and T2 be linear operators on R2 defined as follows :


T1 ( x1 , x2 ) = ( x2 , x1 ) and T2 ( x1 , x2 ) = ( x1 , 0).
Show that T1T2 ≠ T2 T1 .
Solution: We have
(T1T2 ) ( x1 , x2 ) = T1 [T2 ( x1 , x2 )]
= T1 ( x1 , 0), [by def. of T2 ]
= (0, x1 ), [by def. of T1 ].
Also (T2 T1 ) ( x1 , x2 ) = T2 [T1 ( x1 , x2 )], [by def. of T2 T1 ]
= T2 ( x2 , x1 ), [by def. of T1 ]
= ( x2 , 0), [by def. of T2 ].
Thus, for example, (T1T2 ) (1, 0) = (0, 1) while (T2 T1 ) (1, 0) = (0, 0), so that (T1T2 ) ( x1 , x2 )
and (T2 T1 ) ( x1 , x2 ) do not agree for all ( x1 , x2 ) ∈ R2 .
Hence by the definition of equality of two mappings, we have
T1T2 ≠ T2 T1 .
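A quick numerical check of Example 11 (a Python sketch, not from the text): the two compositions give different images for the same vector.

    def T1(v):
        x1, x2 = v
        return (x2, x1)

    def T2(v):
        x1, x2 = v
        return (x1, 0.0)

    v = (1.0, 2.0)
    print(T1(T2(v)))    # (0.0, 1.0) : this is (T1 T2)(v)
    print(T2(T1(v)))    # (2.0, 0.0) : this is (T2 T1)(v), different from the above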
Example 12: Let V (R) be the vector space of all polynomial functions in x with coefficients as
elements of the field R of real numbers. Let D and T be two linear operators on V defined by
D ( f ( x)) = (d / dx) f ( x) …(1)
and T ( f ( x)) = ∫0x f ( x) dx …(2)

for every f ( x) ∈ V .
Then show that DT = I (identity operator) and TD ≠ I .
Solution: Let f ( x) = a0 + a1 x + a2 x 2 + … ∈ V .
We have ( DT ) ( f ( x)) = D [T ( f ( x))]
= D [ ∫0x f ( x) dx ] = D [ ∫0x (a0 + a1 x + a2 x 2 + ...) dx ]
= D [ a0 x + (a1 / 2) x 2 + (a2 / 3) x 3 + … ] = (d / dx) [ a0 x + (a1 / 2) x 2 + (a2 / 3) x 3 + … ]
= a0 + a1 x + a2 x 2 + … = f ( x) = I [ f ( x)].
Thus we have ( DT ) [ f ( x)] = I [ f ( x)] V f ( x) ∈ V . Therefore DT = I .
Now (TD ) ( f ( x)) = T [ D ( f ( x))]
= T [ (d / dx) (a0 + a1 x + a2 x 2 + …) ] = T (a1 + 2a2 x + …)
= ∫0x (a1 + 2a2 x + ...) dx = [a1 x + a2 x 2 + …]0x
= a1 x + a2 x 2 + …
≠ f ( x) unless a0 = 0.
Thus ∃ f ( x) ∈ V such that (TD ) [ f ( x)] ≠ I [ f ( x)].
∴ TD ≠ I .
Hence TD ≠ DT ,
showing that product of linear operators is not in general commutative.
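The following Python sketch (not part of the text) models a polynomial by its list of coefficients [a0 , a1 , a2 , …] and implements D as differentiation and T as integration from 0 to x; it reproduces the conclusion that DT = I while TD loses the constant term.

    def D(coeffs):
        # derivative of a0 + a1 x + a2 x^2 + ... ; the constant term disappears
        return [k * coeffs[k] for k in range(1, len(coeffs))]

    def T(coeffs):
        # integral from 0 to x, so the new constant term is 0
        return [0.0] + [coeffs[k] / (k + 1) for k in range(len(coeffs))]

    f = [3.0, 1.0, 4.0]          # f(x) = 3 + x + 4 x^2
    print(D(T(f)))               # [3.0, 1.0, 4.0] : (DT) f = f
    print(T(D(f)))               # [0.0, 1.0, 4.0] : the constant 3 is lost, so TD != I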
Example 13: Let V (R) be the vector space of all polynomials in x with coefficients in the
field R. Let D and T be two linear transformations on V defined as
D [ f ( x)] = (d / dx) f ( x) V f ( x) ∈ V and T [ f ( x)] = x f ( x) V f ( x) ∈ V .
Then show that DT ≠ TD.
Solution: We have
( DT ) [ f ( x)] = D [T ( f ( x))] = D [ x f ( x)] = (d / dx) [ x f ( x)]
= f ( x) + x (d / dx) f ( x). …(1)
Also (TD ) [ f ( x)] = T [ D ( f ( x))] = T [ (d / dx) f ( x) ]
= x (d / dx) f ( x). …(2)
From (1) and (2), we see that ∃ f ( x) ∈ V such that
( DT ) ( f ( x)) ≠ (TD ) ( f ( x)) ⇒ DT ≠ TD.
Also we see that
( DT − TD ) ( f ( x)) = ( DT ) ( f ( x)) − (TD ) ( f ( x))
= f ( x) = I ( f ( x)).
∴ DT − TD = I .
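A companion sketch for Example 13 (again an illustration of ours, using the same coefficient-list model as above): with T now the operator of multiplication by x, it verifies that ( DT − TD ) f = f.

    def D(coeffs):
        return [k * coeffs[k] for k in range(1, len(coeffs))]

    def mul_x(coeffs):
        # multiplication by x shifts every coefficient one place to the right
        return [0.0] + list(coeffs)

    def sub(a, b):
        size = max(len(a), len(b))
        a = a + [0.0] * (size - len(a))
        b = b + [0.0] * (size - len(b))
        return [x - y for x, y in zip(a, b)]

    f = [5.0, 2.0, 7.0]                       # f(x) = 5 + 2x + 7x^2
    print(sub(D(mul_x(f)), mul_x(D(f))))      # [5.0, 2.0, 7.0] : (DT - TD) f = f, i.e. DT - TD = I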
Theorem 2: Let V ( F ) be a vector space and A, B, C be linear transformations on V.
Then
(i) A ^0 = ^0 = ^0 A (ii) AI = A = IA
(iii) A ( BC ) = ( AB ) C (iv) A ( B + C ) = AB + AC
(v) ( A + B) C = AC + BC
(vi) c ( AB ) = (cA ) B = A (cB ) where c is any element of F.
Proof: Just for the sake of convenience we first mention here our definitions of
addition, scalar multiplication and product of linear transformations :
( A + B ) (α) = A (α) + B (α) …(1)
(cA ) (α) = cA (α) …(2)
( AB ) (α) = A [ B (α)] …(3)
V α ∈ V and V c ∈ F.
Now we shall prove the above results.
(i) We have V α ∈ V, ( A ^0) (α) = A [ ^0 (α)] [by (3)]
= A (0) [ ∵ ^0 is zero transformation]
= 0 = ^0 (α).
∴ A ^0 = ^0. [by def. of equality of two functions]
Similarly we can show that ^0 A = ^0.
(ii) We have V α ∈ V,
( AI ) (α) = A [I (α)]
= A (α) [∵ I is identity transformation]
∴ AI = A.
Similarly we can show that IA = A.
(iii) We have V α ∈ V
[ A ( BC )] (α) = A [( BC ) (α)] [by (3)]
= A [ B (C (α))] [by (3)]
= ( AB ) [C (α)] [by (3)]
= [( AB ) C ] (α). [by (3)]
∴ A ( BC ) = ( AB ) C.
(iv) We have V α ∈ V,
[ A ( B + C )] (α) = A [( B + C ) (α)] [by (3)]
= A [ B (α) + C (α)] [by (1)]
= A [ B (α)] + A [C (α)] [ ∵ A is a linear
transformation and B (α), C (α) ∈ V ]
= ( AB ) (α) + ( AC ) (α) [by (3)]
= ( AB + AC ) (α) [by (1)]
∴ A ( B + C ) = AB + AC.
(v) We have V α ∈ V,
[( A + B) C ] (α) = ( A + B ) [C (α)] [by (3)]
= A [C (α)] + B [C (α)] [by (1) since C (α) ∈ V ]
= ( AC ) (α) + ( BC ) (α) [by (3)]
= ( AC + BC ) (α) [by (1)]
∴ ( A + B ) C = AC + BC.
(vi) We have V α ∈ V,
[c ( AB )] (α) = c [( AB ) (α)] [by (2)]
= c [ A ( B (α))] [by (3)]
= (cA ) [ B (α)] [by (2) since B (α) ∈ V ]
= [(cA) B ] (α) [by (3)]
∴ c ( AB ) = (cA ) B.
Again [c ( AB )] (α) = c [( AB ) (α)] [by (2)]
= c [ A ( B (α))] [by (3)]
= A [cB (α)]
[ ∵ A is a linear transformation and B (α) ∈ V ]
= A [(cB ) (α)] [by (2)]
= [ A (cB )] (α). [by (3)]
∴ c ( AB ) = A (cB ).

3.15 Ring of Linear Operators on a Vector Space


Ring: Definition: A non-empty set R with two binary operations, to be denoted
additively and multiplicatively, is called a ring if the following postulates are satisfied :
R1 . R is closed with respect to addition i. e.,
a + b ∈ R V a, b ∈ R.
R2 . (a + b) + c = a + (b + c ) V a, b, c ∈ R.
R3 . a + b = b + a V a, b ∈ R.
R4 . ∃ an element 0 (called zero element) in R such that
0 + a = a V a ∈ R.
R5 . a ∈ R ⇒ ∃ − a ∈ R such that
(− a) + a = 0.
R6 . R is closed with respect to multiplication i. e.,
ab ∈ R , V a, b ∈ R
R7 . (ab) c = a (bc ) V a, b, c ∈ R.
R8 . Multiplication is distributive with respect to addition, i. e.,
a (b + c ) = ab + ac and (a + b) c = ac + bc V a, b, c ∈ R.
Ring with unity element: Definition:
If in a ring R there exists an element 1∈ R such that
1a = a = a1 V a ∈ R,
then R is called a ring with unity element. The element 1 is called the unity element of
the ring.
Theorem: The set L (V , V ) of all linear transformations from a vector space V ( F ) into
itself is a ring with unity element with respect to addition and multiplication of linear
transformations defined as below :
(S + T ) (α) = S (α) + T (α)
and (ST ) (α) = S [T (α)] V S, T ∈ L (V , V ) and V α ∈ V.
Proof: The students should themselves write the complete proof of this theorem.
We have proved all the steps here and there. They should show here that all the ring
postulates are satisfied in the set L (V , V ). The transformation ^0 will act as the zero
element and the identity transformation I will act as the unity element of this ring.

3.16 Algebra or Linear Algebra


Definition: Let F be a field. A vector space V over F is called a linear algebra over F if there
is defined an additional operation in V called multiplication of vectors and satisfying
the following postulates :
1. αβ ∈ V V α, β ∈ V
2. α ( β γ ) = (αβ) γ V α, β, γ ∈ V
3. α ( β + γ ) = αβ + α γ and (α + β) γ = α γ + β γ V α, β, γ ∈ V .
4. c (αβ) = (cα) β = α (cβ) V α, β ∈ V and c ∈ F.
If there is an element 1 in V such that
1α = α = α1 V α ∈ V,
then we call V a linear algebra with identity over F. Also 1 is then called the
identity of V. The algebra V is Commutative if
αβ = βα V α , β ∈ V .
Theorem: Let V ( F ) be a vector space. The vector space L (V , V ) over F of all linear
transformations from V into V is a linear algebra with identity with respect to the product of
linear transformations as the multiplication composition in L (V , V ).
Proof: The students should write the complete proof here. All the necessary steps
have been proved here and there.

3.17 Polynomials
Let T be a linear transformation on a vector space V ( F ). Then T T is also a linear
transformation on V. We shall write T 1 = T and T 2 = T T . Since the product of
linear transformations is an associative operation, therefore if m is a positive
integer, we shall define
T m = T T T … upto m times.
Obviously T m is a linear transformation on V.
Also we define T 0 = I (identity transformation).
If m and n are non-negative integers, it can be easily seen that
T m T n = T m + n and (T m ) n = T mn .
The set L (V , V ) of all linear transformations on V is a vector space over the field F.
If a0 , a1 , … , a n ∈ F, then
p (T ) = a0 I + a1T + a2 T 2 + … + a n T n ∈ L (V , V )
i. e., p (T ) is also a linear transformation on V because it is a linear combination over
F of elements of L (V , V ).We call p (T ) as a polynomial in linear transformation T.
The polynomials in a linear transformation behave like ordinary polynomials.
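Illustration (a Python sketch, not from the text): when T is represented by a matrix, the polynomial p (T ) = a0 I + a1 T + a2 T 2 + … can be evaluated by accumulating powers of that matrix.

    import numpy as np

    def poly_of_operator(coeffs, T):
        # returns a0 I + a1 T + a2 T^2 + ... for the square matrix T
        result = np.zeros_like(T, dtype=float)
        power = np.eye(T.shape[0])            # T^0 = I
        for a in coeffs:                      # coeffs = [a0, a1, a2, ...]
            result = result + a * power
            power = power @ T
        return result

    T = np.array([[0.0, 1.0],
                  [0.0, 0.0]])                # here T^2 = 0
    print(poly_of_operator([1.0, 2.0, 3.0], T))   # I + 2T + 3T^2 = [[1. 2.] [0. 1.]]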

3.18 Invertible Linear Transformations


Definition: Let U and V be vector spaces over the field F. Let T be a linear transformation
from U into V such that T is one-one onto. Then T is called invertible.
If T is a function from U into V, then T is said to be 1-1 if
α1 , α 2 ∈ U and α1 ≠ α 2 ⇒ T (α1 ) ≠ T (α 2 ).
In other words T is said to be 1-1 if
α1 , α 2 ∈ U and T (α1 ) = T (α 2 ) ⇒ α1 = α 2 .
Further T is said to be onto if
β ∈V ⇒ ∃ α ∈ U such that T (α) = β.
If T is one-one and onto, then we define a function from V into U, called the inverse
of T and denoted by T −1 as follows :
Let β be any vector in V. Since T is onto, therefore
β ∈ V ⇒ ∃ α ∈ U such that T (α) = β.
Also α determined in this way is a unique element of U because T is one-one and
therefore
α 0 , α ∈ U and α 0 ≠ α ⇒ β = T (α) ≠ T (α 0 ).
We define T −1 ( β) to be α . Thus
T −1 : V → U such that
T −1 ( β) = α ⇔ T (α) = β.
The function T −1 is itself one-one and onto. In the following theorem, we shall
prove that T −1 is a linear transformation from V into U.
Theorem 1:Let U and V be vector spaces over the field F and let T be a linear transformation
from U into V. If T is one-one and onto, then the inverse function T −1 is a linear
transformation from V into U.
Proof: Let β1 , β 2 ∈ V and a, b ∈ F.
Since T is one-one and onto, therefore there exist unique vectors α1 , α 2 ∈ U such
that T (α1 ) = β1 , T (α 2 ) = β 2 .
By definition of T −1 , we have
T −1 ( β1 ) = α1 , T −1 ( β 2 ) = α 2 .
Now aα1 + bα 2 ∈ U and we have by linearity of T,
T (aα1 + bα 2 ) = aT (α1 ) + bT (α 2 )
= aβ1 + bβ 2 ∈ V .
−1
∴ by def. of T , we have
−1
T (aβ1 + bβ 2 ) = aα1 + bα 2
= aT −1 ( β1 ) + bT −1 ( β 2 ).
∴ T −1 is a linear transformation from V into U.
Theorem 2: Let T be an invertible linear transformation on a vector space V ( F ).Then
T −1 T = I = T T −1 .
Proof: Let α be any element of V and let T (α) = β. Then
T −1 ( β) = α.
We have T (α) = β
⇒ T −1 [T (α)] = T −1 ( β) ⇒ (T −1 T ) (α) = α
⇒ (T −1 T ) (α) = I (α) ⇒ T −1 T = I .
Let β be any element of V. Since T is onto, therefore β ∈ V ⇒ ∃ α ∈ V such that
T (α) = β. Then T −1 ( β) = α.
Now T −1 ( β) = α
⇒ T [T −1 ( β)] = T (α)
⇒ (T T −1 ) ( β) = β
⇒ (T T −1 ) ( β) = β = I ( β)
⇒ T T −1 = I .
Theorem 3: If A, B and C are linear transformations on a vector space V ( F ) such that


AB = CA = I ,
then A is invertible and A −1 = B = C.
Proof: In order to show that A is invertible, we are to show that A is one-one and
onto.
(i) A is one-one:
Let α1 , α 2 ∈ V . Then
A (α1 ) = A (α 2 )
⇒ C [ A (α1 )] = C [ A (α 2 )]
⇒ (CA ) (α1 ) = (CA ) (α 2 )
⇒ I (α1 ) = I (α 2 )
⇒ α1 = α 2 .
∴ A is one-one.
(ii) A is onto:
Let β be any element of V. Since B is a linear transformation on V, therefore
B ( β) ∈ V . Let B ( β) = α. Then
B ( β) = α
⇒ A [ B ( β)] = A (α)
⇒ ( AB ) ( β) = A (α)
⇒ I ( β) = A (α) [ ∵ AB = I ]
⇒ β = A (α).
Thus β ∈ V ⇒ ∃ α ∈ V such that A (α) = β.
∴ A is onto.
Since A is one-one and onto therefore A is invertible i. e., A −1 exists.
(iii) Now we shall show that A −1 = B = C.
We have AB = I
⇒ A −1 ( AB ) = A −1 I ⇒ ( A −1 A ) B = A −1
⇒ IB = A −1 ⇒ B = A −1 .
Again CA = I
⇒ (CA) A −1 = IA −1 ⇒ C ( AA −1 ) = A −1
⇒ CI = A −1 ⇒ C = A −1 .
Hence the theorem.
Theorem 4: The necessary and sufficient condition for a linear transformation A on a
vector space V ( F ) to be invertible is that there exists a linear transformation B on V such that
AB = I = BA.
Proof: The condition is necessary. For proof see theorem 2.
The condition is sufficient: For proof see theorem 3. Take B in place of C.


Also we note that B = A −1 and A = B −1 .
Theorem 5: Uniqueness of inverse: Let A be an invertible linear transformation
on a vector space V ( F ). Then A possesses unique inverse.
Proof: Let B and C be two inverses of A. Then
AB = I = BA and AC = I = CA.
We have C ( AB) = CI = C. …(1)
Also (CA) B = IB = B. …(2)
Since product of linear transformations is associative, therefore from (1) and (2),
we get
C ( AB) = (CA) B
⇒ C = B.
Hence the inverse of A is unique.
Theorem 6: Let V ( F ) be a vector space and let A, B be linear transformations on V.
Then show that
(i) If A and B are invertible, then AB is invertible and
( AB) −1 = B −1 A −1 .
(ii) If A is invertible and a ≠ 0 ∈ F, then aA is invertible and
(aA) −1 = (1 / a) A −1 .
(iii) If A is invertible, then A −1 is invertible and ( A −1 ) −1 = A.
Proof: (i) We have
( B −1 A −1 ) ( AB) = B −1 [ A −1 ( AB)] = B −1 [( A −1 A) B]
= B −1 (IB) = B −1 B = I .
Also ( AB) ( B −1 A −1 ) = A [ B ( B −1 A −1 )] = A [( BB −1 ) A −1 ]
= A (IA −1 ) = AA −1 = I .
Thus ( AB) ( B −1 A −1 ) = I = ( B −1 A −1 ) ( AB).
∴ By theorem 3, AB is invertible and ( AB) −1 = B −1 A −1 .

(ii) We have (aA) [(1 / a) A −1 ] = a [ A ((1 / a) A −1 )]
= a [(1 / a) ( AA −1 )] = [a (1 / a)] ( AA −1 ) = 1I = I .
Also [(1 / a) A −1 ] (aA) = (1 / a) [ A −1 (aA)] = (1 / a) [a ( A −1 A)]
= [(1 / a) a] ( A −1 A) = 1I = I .
Thus (aA) [(1 / a) A −1 ] = I = [(1 / a) A −1 ] (aA).
∴ by theorem 3, aA is invertible and
(aA) −1 = (1 / a) A −1 .
(iii) Since A is invertible, therefore
AA −1 = I = A −1 A.
∴ by theorem 3, A −1 is invertible and
A = ( A −1 ) −1 .
Singular and Non-singular transformations:
Definition: Let T be a linear transformation from a vector space U ( F ) into a vector space
V ( F ). Then T is said to be non-singular if the null space of T (i. e., ker T ) consists of the
zero vector alone i. e., if
α ∈ U and T (α) = 0 ⇒ α = 0.
If there exists a vector 0 ≠ α ∈ U such that T (α) = 0, then T is said to be singular.
Theorem 7: Let T be a linear transformation from a vector space U ( F ) into a vector
space V ( F ). Then T is non-singular if and only if T is one-one.
Proof: Given that T is non-singular. Then to prove that T is one-one.
Let α1 , α 2 ∈ U. Then
T (α1 ) = T (α 2 )
⇒ T (α1 ) − T (α 2 ) = 0 ⇒ T (α1 − α 2 ) = 0
⇒ α1 − α 2 = 0 [ ∵ T is non-singular]
⇒ α1 = α 2 .
∴ T is one-one.
Conversely let T be one-one. We know that T (0) = 0. Since T is one-one, therefore
α ∈ U and T (α) = 0 = T (0) ⇒ α = 0. Thus the null space of T consists of zero
vector alone. Therefore T is non-singular.
Theorem 8: Let T be a linear transformation from U into V. Then T is non-singular if
and only if T carries each linearly independent subset of U onto a linearly independent subset
of V.
Proof: First suppose that T is non-singular.
Let B = {α1 , α 2 , … , α n}
be a linearly independent subset of U. Then image of B under T is the subset B ′ of V
given by
B ′ = {T (α1 ), T (α 2 ), … , T (α n )}.
To prove that B ′ is linearly independent.
Let a1 , a2 , … , a n ∈ F and let
a1T (α1 ) + … + a n T (α n ) = 0
⇒ T (a1α1 + … + a n α n ) = 0 [ ∵ T is linear]
⇒ a1α1 + … + a n α n = 0 [ ∵ T is non-singular]
⇒ a i = 0, i = 1, 2 , … , n [ ∵ α1 , … , α n are linearly independent]
Thus the image of B under T is linearly independent.
Conversely suppose that T carries independent subsets onto independent subsets.
Then to prove that T is non-singular.
Let α ≠ 0 ∈ U. Then the set S = {α} consisting of the one non-zero vector α is
linearly independent. The image of S under T is the set
S ′ = {T (α)}.
It is given that S ′ is also linearly independent. Therefore T (α) ≠ 0 because the set
consisting of zero vector alone is linearly dependent. Thus
0 ≠ α ∈ U ⇒ T (α) ≠ 0.
This shows that the null space of T consists of the zero vector alone. Therefore T is
non-singular.
Theorem 9: Let U and V be finite dimensional vector spaces over the field F such that dim
U = dim V. If T is a linear transformation from U into V, the following are equivalent.
(i) T is invertible.
(ii) T is non-singular.
(iii) The range of T is V.
(iv) If {α1 , … , α n} is any basis for U, then
{T (α1 ), … , T (α n )} is a basis for V.
(v) There is some basis {α1 , … , α n} for U such that
{T (α1 ), … , T (α n )} is a basis for V.
Proof: (i) ⇒ (ii).
If T is invertible, then T is one-one. Therefore T is non-singular.
(ii) ⇒ (iii).
Let T be non-singular. Let {α1 , … , α n } be a basis for U. Then {α1 , … , α n } is a
linearly independent subset of U. Since T is non-singular therefore
{T (α1 ), … , T (α n )} is a linearly independent subset of V and it contains n vectors.
Since dim V is also n, therefore this set of vectors is a basis for V. Now let β be any
vector in V. Then there exist scalars a1 , … , a n ∈ F such that
β = a1T (α1 ) + … + a n T (α n ) = T (a1 α1 + … + a n α n )
which shows that β is in the range of T because
a1α1 + … + a n α n ∈ U.
Thus every vector in V is in the range of T. Hence range of T is V.
(iii) ⇒ (iv).
Now suppose that range of T is V i. e., T is onto. If {α1 , … , α n} is any basis for U,
then the vectors T (α1 ), … , T (α n ) span the range of T which is equal to V. Thus
the vectors T (α1 ), … , T (α n ) which are n in number span V whose dimension is
also n. Therefore {T (α1 ), … , T (α n )} must be a basis set for V.
(iv) ⇒ (v).
Since U is finite dimensional, therefore there exists a basis for U. Let {α1 , … , α n} be
a basis for U. Then {T (α1 ), … , T (α n )} is a basis for V as it is given in (iv).
(v) ⇒ (i).
Suppose there is some basis {α1 , … , α n } for U such that {T (α1 ), … , T (α n ) } is a
basis for V. The vectors {T (α1 ), … , T (α n ) } span the range of T. Also they span V.
Therefore the range of T must be all of V i. e., T is onto.
If α = c1 α1 + … + c n α n is in the null space of T, then
T (c1 α1 + … + c n α n ) = 0
⇒ c1T (α1 ) + … + c n T (α n ) = 0
⇒ c i = 0, 1 ≤ i ≤ n because T (α1 ), … , T (α n ) are linearly independent
⇒ α = 0.
∴ T is non-singular and consequently T is one-one. Hence T is invertible.

Example 14: Describe explicitly the linear transformation T : R 2 → R 2 such that


T (2, 3) = (4, 5) and T (1, 0) = (0, 0).
Solution: First we shall show that the set {(2, 3), (1, 0)} is a basis of R2 . For linear
independence of this set let
a (2, 3) + b (1, 0) = (0, 0), where a, b ∈ R.
Then (2a + b, 3a) = (0, 0)
⇒ 2a + b = 0, 3a = 0
⇒ a = 0, b = 0.
Hence the set {(2, 3), (1, 0)} is linearly independent.
Now we shall show that the set {(2, 3), (1, 0)} spans R2 . Let ( x1 , x2 ) ∈ R2 and let
( x1 , x2 ) = a (2, 3) + b (1, 0) = (2a + b, 3a).
Then 2a + b = x1 , 3a = x2 . Therefore
a = x2 / 3 , b = (3 x1 − 2 x2 ) / 3 .
∴ ( x1 , x2 ) = ( x2 / 3) (2, 3) + ((3 x1 − 2 x2 ) / 3) (1, 0). …(1)
From the relation (1) we see that the set {(2, 3), (1, 0)} spans R2 . Hence this set is a
basis for R2 .
Now let ( x1 , x2 ) be any member of R2 . Then we are to find a formula for T ( x1 , x2 )


under the conditions that T (2, 3) = (4, 5), T (1, 0) = (0, 0).
We have T ( x1 , x2 ) = T [ ( x2 / 3) (2, 3) + ((3 x1 − 2 x2 ) / 3) (1, 0) ], by (1)
= ( x2 / 3) T (2, 3) + ((3 x1 − 2 x2 ) / 3) T (1, 0), by linearity of T
= ( x2 / 3) (4, 5) + ((3 x1 − 2 x2 ) / 3) (0, 0) = (4 x2 / 3 , 5 x2 / 3).
Example 15: Describe explicitly a linear transformation from V3 (R) into V3 (R) which
has its range the subspace spanned by (1, 0, − 1) and (1, 2, 2).
Solution: The set B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis for V3 (R).
Also {(1, 0, − 1), (1, 2, 2), (0, 0, 0)} is a subset of V3 (R). It should be noted that in this
subset the number of vectors has been taken the same as is the number of vectors in
the set B.
There exists a unique linear transformation T from V3 (R) into V3 (R) such that
T (1, 0, 0) = (1, 0, − 1),
T (0, 1, 0) = (1, 2, 2),   …(1)
and T (0, 0, 1) = (0, 0, 0).
Now the vectors T (1, 0, 0), T (0, 1, 0), T (0, 0, 1) span the range of T. In other words
the vectors
(1, 0, − 1), (1, 2, 2), (0, 0, 0)
span the range of T. Thus the range of T is the subspace of V3 (R) spanned by the set
{(1, 0, − 1), (1, 2, 2)} because the zero vector (0, 0, 0) can be omitted from the
spanning set. Therefore T defined in (1) is the required transformation.
Now let us find an explicit expression for T. Let (a, b, c ) be any element of V3 (R).
Then we can write
(a, b, c ) = a (1, 0, 0) + b (0, 1, 0) + c (0, 0, 1).
∴ T (a, b, c ) = aT (1, 0, 0) + bT (0, 1, 0) + cT (0, 0, 1)
= a (1, 0, − 1) + b (1, 2, 2) + c (0, 0, 0) [from (1)]
= (a + b, 2b, 2b − a).
Example 16:Let T be a linear operator on V3 (R) defined by
T (a, b, c ) = (3a, a − b, 2a + b + c ) V (a, b, c ) ∈ V3 (R).
Is T invertible ? If so, find a rule for T −1 like the one which defines T.
Solution: Let us see that T is one-one or not.
Let α = (a1 , b1 , c1 ), β = (a2 , b2 , c 2 ) ∈ V3 (R).
Then T (α) = T ( β)
⇒ T (a1 , b1 , c1 ) = T (a2 , b2 , c 2 )
⇒ (3a1 , a1 − b1 , 2a1 + b1 + c1 ) = (3a2 , a2 − b2 , 2a2 + b2 + c 2 )


⇒ 3a1 = 3a2 , a1 − b1 = a2 − b2 , 2a1 + b1 + c1 = 2a2 + b2 + c 2
⇒ a1 = a2 , b1 = b2 , c1 = c 2 .
∴ T is one-one.
Now T is a linear transformation on a finite dimensional vector space V3 (R)
whose dimension is 3. Since T is one-one, therefore T must be onto also and thus
T is invertible.
If T (a, b, c ) = ( p, q, r), then T −1 ( p, q, r) = (a, b, c ).
Now T (a, b, c ) = ( p, q, r)
⇒ (3a, a − b, 2a + b + c ) = ( p, q, r)
⇒ p = 3a, q = a − b, r = 2a + b + c
⇒ a = p / 3 , b = p / 3 − q, c = r − 2a − b = r − 2 p / 3 − p / 3 + q = r − p + q.
∴ T −1 ( p, q, r) = ( p / 3 , p / 3 − q, r − p + q) V ( p, q, r) ∈ V3 (R)
is the rule which defines T −1 .
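A numerical cross-check of Example 16 (a sketch of ours, assuming the standard basis of V3 (R)): the matrix of T is inverted with numpy and applied to a particular ( p, q, r), and the result agrees with the rule obtained for T −1 .

    import numpy as np

    M = np.array([[3.0,  0.0, 0.0],           # matrix of T(a, b, c) = (3a, a - b, 2a + b + c)
                  [1.0, -1.0, 0.0],
                  [2.0,  1.0, 1.0]])
    M_inv = np.linalg.inv(M)

    p, q, r = 6.0, 1.0, 5.0
    print(M_inv @ np.array([p, q, r]))        # [2. 1. 0.]
    print((p / 3, p / 3 - q, r - p + q))      # (2.0, 1.0, 0.0), the same triple from the rule above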
Example 17: A linear transformation T is defined on V2 ( C) by

T (a, b) = (αa + βb, γa + δb),


where α, β, γ , δ are fixed elements of C.Prove that T is invertible if and only if αδ − β γ ≠ 0.
Solution: The vector space V2 (C) is of dimension 2. Therefore T is a linear
transformation on a finite-dimensional vector space. T will be invertible if and only
if the null space of T consists of zero vector alone. The zero vector of the space
V2 (C) is the ordered pair (0, 0). Thus T is invertible
iff T ( x, y) = (0, 0) ⇒ x = 0, y = 0
i. e., iff (α x + βy, γ x + δy) = (0, 0) ⇒ x = 0, y = 0
i. e., iff α x + βy = 0, γ x + δy = 0 ⇒ x = 0, y = 0.
Now the necessary and sufficient condition for the equations αx + βy = 0,
γ x + δy = 0 to have the only solution x = 0, y = 0 is that
| α  β |
| γ  δ | ≠ 0 , i. e., αδ − βγ ≠ 0.
Hence T is invertible iff αδ − β γ ≠ 0.
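Illustration (a Python sketch, not from the text): the operator of Example 17 is represented by the 2 × 2 matrix with rows (α, β) and (γ , δ), so the criterion amounts to the non-vanishing of its determinant.

    def is_invertible(alpha, beta, gamma, delta, tol=1e-12):
        # T(a, b) = (alpha a + beta b, gamma a + delta b) is invertible iff its determinant is non-zero
        return abs(alpha * delta - beta * gamma) > tol

    print(is_invertible(1, 2, 3, 4))   # True : 1*4 - 2*3 = -2, non-zero
    print(is_invertible(1, 2, 2, 4))   # False: determinant 0; indeed T(2, -1) = (0, 0) here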
Example 18: Find two linear operators T and S on V2 (R) such that

TS = ^0 but ST ≠ ^0.
Solution: Consider the linear transformations T and S on V2 (R) defined by
T (a, b) = (a, 0) V (a, b) ∈ V2 (R)
and S (a, b) = (0, a) V (a, b) ∈ V2 (R).


We have (TS ) (a, b) = T [S (a, b)] = T (0, a) = (0, 0)
= ^0 (a, b) V (a, b) ∈ V2 (R).
∴ TS = ^0.
Again (ST ) (a, b) = S [T (a, b)] = S (a, 0) = (0, a), so that, for example,
(ST ) (1, 0) = (0, 1) ≠ (0, 0) = ^0 (1, 0).
Thus ST ≠ ^0.
Example 19: Let V be a vector space over the field F and T a linear operator on V. If
T 2 = ^0, what can you say about the relation of the range of T to the null space of T ? Give an
example of a linear operator T on V2 (R) such that T 2 = ^0 but T ≠ ^0.
Solution: We have T 2 = ^0
⇒ T 2 (α) = ^0 (α) V α ∈ V
⇒ T [T (α)] = 0 V α ∈ V
⇒ T (α) ∈ null space of T V α ∈ V .
But T (α) ∈ range of T V α ∈ V .
∴ T 2 = ^0 ⇒ range of T ⊆ null space of T.
For the second part of the question, consider the linear transformation T on V2 (R)
defined by
T (a, b) = (0, a) V (a, b) ∈ V2 (R).
Then obviously T ≠ ^0.
We have T 2 (a, b) = T [T (a, b)] = T (0, a) = (0, 0)
= ^0 (a, b) V (a, b) ∈ V2 (R).
∴ T 2 = ^0.
Example 20: If T : U → V is a linear transformation and U is finite dimensional, show
that U and range of T have the same dimension iff T is non-singular. Determine all
non-singular linear transformations
T : V4 (R) → V3 (R).
Solution: We know that
dim U = rank (T ) + nullity (T )
= dim of range of T + dim of null space of T.
∴ dim U = dim of range of T
iff dim of null space of T is zero
i. e., iff null space of T consists of zero vector alone


i. e., iff T is non-singular.
Let T be a linear transformation from V4 (R) into V3 (R). Then T will be
non-singular iff
dim of V4 (R) = dim of range of T.
Now dim V4 (R) = 4 and dim of range of T ≤ 3 because range of T ⊆ V3 (R).
∴ dim V4 (R) cannot be equal to dim of range of T.
Hence T cannot be non-singular. Thus there can be no non-singular linear
transformation from V4 (R) into V3 (R).
Example 21: If A and B are linear transformations (on the same vector space), then a
necessary and sufficient condition that both A and B be invertible is that both AB and BA be
invertible.
Solution: Let A and B be two invertible linear transformations on a vector
space V.
We have ( AB) ( B −1 A −1 ) = I = ( B −1 A −1 ) ( AB).
∴ AB is invertible.
Also we have ( BA) ( A −1 B −1 ) = I = ( A −1 B −1 ) ( BA).
∴ BA is also invertible.
Thus the condition is necessary.
Conversely, let AB and BA be both invertible. Then AB and BA are both one-one
and onto.
First we shall show that A is invertible.
A is one-one: Let α1 , α 2 ∈ V . Then
A (α1 ) = A (α 2 ) ⇒ B [ A (α1 )] = B [ A (α 2 )]
⇒ ( BA) (α1 ) = ( BA) (α 2 ) ⇒ α1 = α 2 . [∵ BA is one-one]
∴ A is one-one.
A is onto:
Let β ∈ V. Since AB is onto, therefore there exists α ∈ V such that
( AB) (α) = β ⇒ A [ B (α)] = β.
Thus β ∈ V ⇒ ∃ B (α) ∈ V such that A [ B (α)] = β.
∴ A is onto.
∴ A is invertible.
Interchanging the roles played by AB and BA in the above proof, we can prove
that B is invertible.
Example 22: If A is a linear transformation on a vector space V such that
A2 − A + I = ^0,
then A is invertible.
Solution: If A2 − A + I = ^0, then A2 − A = − I .
First we shall prove that A is one-one. Let α1 , α 2 ∈ V .
Then A (α1 ) = A (α 2 ) …(1)
⇒ A [ A (α1 )] = A [ A (α 2 )]
⇒ A2 (α1 ) = A2 (α 2 ) …(2)
2 2
⇒ A (α1 ) − A (α1 ) = A (α 2 ) − A (α 2 ) [From (2) and (1)]
⇒ ( A2 − A) (α1 ) = ( A2 − A) (α 2 )
⇒ (− I ) (α1 ) = (− I ) (α 2 ) ⇒ − [I (α1 )] = − [I (α 2 )]
⇒ − α1 = − α 2 ⇒ α1 = α 2 .
∴ A is one-one.
Now to prove that A is onto.
Let α ∈ V. Then α − A (α) ∈ V .
We have A [α − A (α)] = A (α) − A2 (α) = ( A − A2 ) (α)
= I (α) [ ∵ A2 − A = − I ⇒ A − A2 = I ]
=α.
Thus α ∈ V ⇒ ∃ α − A (α) ∈ V such that A [α − A (α)] = α.
∴ A is onto.
Hence A is invertible.
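A concrete instance of Example 22 (an illustration of ours, not in the text): rotation of the plane through 60 degrees satisfies A2 − A + I = ^0, since this is its characteristic equation, and the relation A2 − A = − I also shows that I − A is a two-sided inverse of A.

    import numpy as np

    c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)   # rotation through 60 degrees
    A = np.array([[c, -s],
                  [s,  c]])
    I = np.eye(2)

    print(np.allclose(A @ A - A + I, 0))          # True : A^2 - A + I = 0
    print(np.allclose(A @ (I - A), I))            # True : A is invertible with inverse I - A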
Example 23: Let V be a finite dimensional vector space and T be a linear operator on V.
Suppose that rank (T 2 ) = rank ( T ). Prove that the range and null space of T are disjoint
i. e., have only the zero vector in common.
Solution: We have
dim V = rank ( T ) + nullity ( T ) and dim V = rank ( T 2 ) + nullity (T 2 ).
Since rank ( T ) = rank ( T 2 ), therefore we get nullity (T ) = nullity (T 2 )
i. e., dim of null space of T = dim of null space of T 2 .
Now T (α) = 0
⇒ T [T (α)] = T (0) ⇒ T 2 (α) = 0.
∴ α ∈ null space of T ⇒ α ∈ null space of T 2 .
∴ null space of T ⊆ null space of T 2 .
But null space of T and null space of T 2 are both subspaces of V and have the same
dimension.
∴ null space of T = null space of T 2 .
∴ null space of T 2 ⊆ null space of T
i. e., T 2 (α) = 0 ⇒ T (α) = 0.
∴ range and null space of T are disjoint. [See Example 5 after 3.12]
Example 24:Let V be a finite dimensional vector space over the field F.


Let {α1 , α 2 , … , α n} and { β1 , β 2 , … , β n} be two ordered bases for V. Show that there
exists a unique invertible linear transformation T on V such that
T (α i ) = β i , 1 ≤ i ≤ n.
Solution: We have proved in one of the previous theorems that there exists a
unique linear transformation T on V such that T (α i ) = β i , 1 ≤ i ≤ n.
Here we are to show that T is invertible. Since V is finite dimensional therefore in
order to prove that T is invertible, it is sufficient to prove that T is non-singular.
Let α ∈ V and T (α) = 0.
Let α = a1α1 + … + a n α n where a1 , … , a n ∈ F.
We have T (α) = 0
⇒ T (a1α1 + … + a n α n ) = 0 ⇒ a1T (α1 ) + … + a n T (α n ) = 0
⇒ a1 β1 + … + a n β n = 0
⇒ a i = 0 for each 1≤ i ≤ n [ ∵ β1 , … , β n are linearly independent]
⇒ α = 0.
∴ T is non-singular because null space of T consists of zero vector alone.
Hence T is invertible.

Comprehensive Exercise 3

1. Describe explicitly the linear transformation T from F 2 to F 2 such that


T (e1 ) = (a, b), T (e2 ) = (c , d) where e1 = (1, 0), e2 = (0, 1).
2. Find a linear transformation T : R 2 → R 2 such that T (1, 0) = (1, 1) and
T (0, 1) = (− 1, 2). Prove that T maps the square with vertices (0, 0), (1, 0), (1, 1)
and (0, 1) into a parallelogram.
3. Let T : R 2 → R be the linear transformation for which T (1, 1) = 3 and
T (0, 1) = − 2. Find T (a, b).
4. Describe explicitly a linear transformation from V3 (R) into V4 (R) which
has its range the subspace spanned by the vectors (1, 2, 0, − 4), (2, 0, − 1, − 3).
5. Find a linear mapping T : R 3 → R 4 whose image is generated by (1, − 1, 2 , 3)
and (2, 3, − 1, 0).
6. Let F be any field and let T be a linear operator on F 2 defined by
T (a, b) = (a + b, a). Show that T is invertible and find a rule for T −1 like the
one which defines T.
7. Show that the operator T on R 3 defined by T ( x, y, z ) = ( x + z , x − z , y) is
invertible and find similar rule defining T −1 .
8. For the linear operator T of Solved Example 16, after article 3.18, prove
that (T 2 − I ) (T − 3I ) = ^0.
9. Let T and U be the linear operators on R 2 defined by T (a, b) = (b, a) and
U (a, b) = (a, 0). Give rules like the one defining T and U for each of the linear
transformation (U + T ), UT , TU, T 2 , U 2 .
10. Let T be the (unique) linear operator on C 3 for which
T (1, 0, 0) = (1, 0, i), T (0, 1, 0) = (0, 1, 1), T (0, 0, 1) = (i, 1, 0).
Show that T is not invertible.

11. Show that if two linear transformations of a finite dimensional vector space
coincide on a basis of that vector space, then they are identical.

12. If T is a linear transformation on a finite dimensional vector space V such


that range (T ) is a proper subset of V, show that there exists a non-zero
element α in V with T (α) = 0.
13. Let T : R 3 → R 3 be defined as T (a, b, c ) = (0, a, b). Show that
T ≠ ^0, T 2 ≠ ^0 but T 3 = ^0.
14. Let T be a linear transformation from a vector space U into a vector space V
with Ker T ≠ 0. Show that there exist vectors α1 and α 2 in U such that
α1 ≠ α 2 and Tα1 = Tα 2 .
15. Let T be a linear transformation from V3 (R) into V2 (R), and let S be a linear
transformation from V2 (R) into V3 (R). Prove that the transformation ST is
not invertible.
16. Let A and B be linear transformations on a finite dimensional vector space
V and let AB = I . Then A and B are both invertible and A −1 = B. Give an
example to show that this is false when V is not finite dimensional.
17. If A and B are linear transformations (on the same vector space) and if
AB = I , then A is called a left inverse of B and B is called a right inverse of A.
Prove that if A has exactly one right inverse, say B, then A is invertible.
18. Prove that the set of invertible linear operators on a vector space V with the
operation of composition forms a group. Check if this group is
commutative.
19. Let V and W be vector spaces over the field F and let U be an isomorphism
of V onto W. Prove that T → UTU −1 is an isomorphism of L (V , V ) onto
L (W, W).
20. If {α1 , … , α k } and {β1 , … β k } are linearly independent sets of vectors in a
finite dimensional vector space V, then there exists an invertible linear
transformation T on V such that
T (α i ) = β i , i = 1, … , k.
Answers 3

1. T ( x1 , x2 ) = ( x1 a + x2 c , x1 b + x2 d) 2. T ( x1 , x2 ) = ( x1 − x2 , x1 + 2 x2 )
3. T (a, b) = 5a − 2b 4. T (a, b, c ) = (a + 2b, 2a − b, − 4a − 3b)
5. T (a, b, c ) = (a + 2b, − a + 3b, 2a − b, 3a)
6. T −1 ( p, q) = (q, p − q) 7. T −1 ( x, y, z ) = ((1 / 2) x + (1 / 2) y, z , (1 / 2) x − (1 / 2) y)
9. (U + T ) (a, b) = (a + b, a) ; (UT ) (a, b) = (b, 0) ; (TU) (a, b) = (0, a) ;
T 2 (a, b) = (a, b) ; U 2 (a, b) = (a, 0)
18. Not commutative

Objective Type Questions

Multiple Choice Questions


Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
1. If W be a subspace of a finite dimensional vector space V ( F ) , then
dim V / W =
(a) dim V − dim W (b) dim V + dim W
(c) (dim V ) / (dim W ) (d) None of these.
2. Two finite dimensional vector spaces U and V over a field F are isomorphic
if
(a) dim U = dim V (b) dim U ≠ dim V
(c) dim U < dim V (d) dim U > dim V
3. The dimension of V / W , where V = R3 , W = {(a, 0, 0) : a ∈ R } is
(a) 1 (b) 3
(c) 2 (d) 4
4. If U ( F ) and V ( F ) are two vector spaces and T is a linear transformation
from U into V, then range of T is a subspace of
(a) U (b) V
(c) U ∪ V (d) None of these.
5. Let T be a linear transformation from a vector space U ( F ) into a vector space


V ( F ) with U as finite dimensional. The rank of T is the dimension of the
(a) range of T (b) null space of T
(c) vector space U (d) vector space V.
6. Let U be an n-dimensional vector space over the field F, and let V be an
m-dimensional vector space over F. Then the vector space L (U, V ) of all
linear transformations from U into V is finite dimensional and is of
dimension
(a) m (b) n
(c) mn (d) none of these.
7. If T : V2 (R) → V3 (R) defined as T (a, b) = (a + b, a − b, b) is a linear
transformation, then nullity of T is
(a) 0 (b) 1
(c) 2 (d) none of these.
8. If T is a linear transformation from a vector space V into a vector space W,
then the condition for T −1 to be a linear transformation from W to V is
(a) T should be one-one (b) T should be onto
(c) T should be one-one and onto (d) none of these.
9. Let F be any field and let T be a linear operator on F 2 defined by
T (a, b) = (a + b, a). Then T −1 (a, b) =
(a) (b, a − b) (b) (a − b, b)
(c) (a, a + b) (d) none of these.
10. If T is a linear transformation T (a, b) = (α a + β b, γ a + δ b), then T is invertible
if
(a) αβ − γδ = 0 (b) αβ − γδ ≠ 0
(c) αδ − βγ = 0 (d) αδ − βγ ≠ 0
11. Let V ( F) be a vector space and let T1 , T2 be linear Transformations on V. If
T1 and T2 are invertible then
(a) (T1T2 ) −1 = T2 T1 −1 (b) (T1T2 ) −1 = T1 T2 −1
(c) (T1T2 ) −1 = T2 −1T1 −1 (d) (T1T2 ) −1 = T1 −1T2 −1

Fill in the Blank(s)


Fill in the blanks ‘‘……’’ so that the following statements are complete and
correct.
1. If W is an m-dimensional subspace of an n-dimensional vector space V, then
the dimension of the quotient space V / W is …… .
2. Two subspaces W1 and W2 of the vector space V ( F) are said to be disjoint if
W1 ∩ W2 = …… .
3. Let V ( F ) be a vector space. A linear operator on V is a function T from V into


V such that T (aα + bβ) = …… for all α, β in V and for all a, b in F.

4. If U ( F ) and V ( F ) are two vector spaces and T is a linear transformation


from U into V, then the kernel of T is a subspace of …… .

5. Let U and V be vector spaces over the field F and let T be a linear
transformation from U into V. Suppose that U is finite dimensional. Then
rank (T ) + nullity (T ) = …… .

6. Let V be an n-dimensional vector space over the field F and let T be a linear
transformation from V into V such that the range and null space of T are
identical. Then n is …… .

7. The vector space of all linear operators on an n-dimensional vector space U is


of dimension …… .

8. Let V ( F ) be a vector space and let A, B be linear transformations on V. If A


and B are invertible then AB is invertible and ( AB) −1 = …… .

9. A linear operator T on R2 defined by T ( x, y) = (ax + by, cx + dy) will be


invertible iff …… .

True or False
Write ‘T’ for true and ‘F’ for false statement.
1. If a finite dimensional vector space V ( F ) is a direct sum of two subspaces W1
and W2 , then dim V = dim W1 + dim W2 .

2. Let U ( F ) and V ( F ) be two vector spaces and let T be a linear transformation


from U into V. Then the null space of T is the set of all vectors α in U such that
T (α) = α.

3. The function T : R2 → R2 defined by T (a, b) = (1 + a, b) is a linear


transformation.

4. Two linear transformations from U into V are equal if they agree on a basis of
U.
5. For two linear operators T and S on R2 , TS = ^0 ⇒ ST = ^0.

6. If S and T are linear operators on a vector space U, then


(S + T )2 = S 2 + 2ST + T 2 .
7. The identity operator on a vector space is always invertible.
Answers

Multiple Choice Questions


1. (a) 2. (a) 3. (c) 4. (b)
5. (a) 6. (c) 7. (a) 8. (c)
9. (a) 10. (d) 11. (c)
Fill in the Blank(s)
1. n−m 2. {0} 3. aT (α) + bT ( β)
4. U 5. dim U 6. even 7. n2
8. B −1 A −1 9. ad − bc ≠ 0

True or False
1. T 2. F 3. F 4. T
5. F 6. F 7. T

4
Matrices and Linear Transformations

4.1 Matrix
Definition: Let F be any field. A set of mn elements of F arranged in the form of a
rectangular array having m rows and n columns is called an m × n matrix over the field F.
An m × n matrix is usually written as
      | a11  a12  ...  a1n  |
A  =  | a21  a22  ...  a2 n |
      | ...  ...   ...  ... |
      | a m1 a m2  ...  a mn |
In a compact form the above matrix is represented by A = [a ij ] m × n . The element a ij
is called the (i, j) th element of the matrix A. In this element the first suffix i will
always denote the number of the row and the second suffix j the number of the column in which this element occurs.
If in a matrix A the number of rows is equal to the number of columns and is equal
to n, then A is called a square matrix of order n and the elements a ij for which i = j
constitute its principal diagonal.
Unit matrix: A square matrix each of whose diagonal elements is equal to1and each of whose
non-diagonal elements is equal to zero is called a unit matrix or an identity matrix. We shall
denote it by I. Thus if I is unit matrix of order n, then I = [δ ij ] n × n where δ ij is
Kronecker delta.
Diagonal matrix: A square matrix is said to be a diagonal matrix if all the elements lying
above and below the principal diagonal are equal to 0. For example,
0 0 0 0
0 2 + i 0 0 
 
0 0 0 0
0 0 0 5 

is a diagonal matrix of order 4 over the field of complex numbers.
Null matrix: The m × n matrix whose elements are all zero is called the null
matrix or (zero matrix) of the type m × n.
Equality of two matrices: Definition:
Let A = [a ij ] m × n and B = [b jk ] m × n . Then
A = B if a ij = b ij for each pair of subscripts i and j.
Addition of two matrices: Definition:
Let A = [a ij ] m × n , B = [b ij ] m × n . Then we define
A + B = [a ij + b ij ] m × n .
Multiplication of a matrix by a scalar: Definition:
Let A = [a ij ] m × n and a ∈ F i. e., a be a scalar. Then we define
aA = [aa ij ] m × n .
Multiplication of two matrices: Definition:
Let A = [a ij ] m × n , B = [b jk ] n × p i. e.,the number of columns in the matrix
A is equal to the number of rows in the matrix B. Then we define
AB = [ Σ a ij b jk ] m × p , the sum being taken over j = 1, 2 , … , n, i. e., AB is an m × p
matrix whose (i, k) th element is equal to Σ a ij b jk ( j = 1, 2 , … , n).
If A and B are both square matrices of order n, then both the products AB and BA
exist but in general AB ≠ BA.
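A small numerical reminder (a Python sketch, not from the text) that the two products of square matrices generally differ:

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[0, 1],
                  [1, 0]])
    print(A @ B)    # [[2 1] [4 3]]
    print(B @ A)    # [[3 4] [1 2]] : AB != BA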
Transpose of a matrix: Definition:
Let A = [a ij ] m × n . The n × m matrix AT obtained by interchanging the rows and
columns of A is called the transpose of A. Thus AT = [b ij ] n × m , where b ij = a ji , i. e.,
the (i, j) th element of AT is the ( j, i) th element of A. If A is an m × n matrix and B is
an n × p matrix, it can be shown that ( AB)T = BT AT . The transpose of a matrix A
is also denoted by A t or by A′ .
Determinant of a square matrix: Let Pn denote the group of all permutations of
degree n on the set {1, 2 , … , n}. If θ ∈ Pn , then θ (i) will denote the image of i under θ.
The symbol (− 1) θ for θ ∈ Pn will mean + 1if θ is an even permutation and − 1if θ is an
odd permutation.
Definition: Let A = [a ij ] n × n . Then the determinant of A, written as det A or| A |or
| a ij |n × n is the element
Σ θ ∈ Pn (− 1) θ a1θ (1) a2 θ (2) … a nθ (n) in F.

The number of terms in this summation is n ! because there are n ! permutations in


the set Pn .
We shall often use the notation
| a11  ...  a1n |
| a21  ...  a2 n |
| ...   ...  ... |
| a n1  ...  a nn |
for the determinant of the matrix [a ij ] n × n .
The following properties of determinants are worth to be noted :
(i) The determinant of a unit matrix is always equal to 1.
(ii) The determinant of a null matrix is always equal to 0.
(iii) If A = [a ij ] n × n , B = [b ij ] n × n , then det ( AB) = (det A) (det B).
Cofactors: Definition: Let A = [a ij ] n × n . We define

Aij = cofactor of a ij in A
= (− 1) i + j . [determinant of the matrix of order n − 1 obtained
by deleting the row and column of A passing through a ij ].
It should be noted that
n
Σ a ik Aij = 0 if k ≠ j
i =1

or = det A if k = j.
Adjoint of a square matrix: Definition: Let A = [a ij ] n × n .

The n × n matrix which is the transpose of the matrix of cofactors of A is called the
adjoint of A and is denoted by adj A.
It should be remembered that
A (adj A) = (adj A) A = (det A) I
where I is the unit matrix of order n.
Inverse of a square matrix: Definition: Let A be a square matrix of order n. If
there exists a square matrix B of order n such that
AB = I = BA
then A is said to be invertible and B is called the inverse of A.
Also we write B = A −1 .
The following results should be remembered :
(i) The necessary and sufficient condition for a square matrix A to be invertible
is that det A ≠ 0.
(ii) If A is invertible, then A −1 is unique and
A −1 = (1 / det A) (adj. A).
(iii) If A and B are invertible square matrices of order n, then AB is also invertible
and ( AB) −1 = B −1 A −1 .
(iv) If A is invertible, so is A −1 and ( A −1 ) −1 = A.
Elementary row operations on a matrix:
Definition:Let A be an m × n matrix over the field F. The following three
operations are called elementary row operations :
(1) multiplication of any row of A by a non-zero element c of F.
(2) addition to the elements of any row of A the corresponding elements of any
other row of A multiplied by any element a in F.
(3) interchange of two rows of A.
Row equivalent matrices: Definition: If A and B are m × n matrices over the
field F, then B is said to be row equivalent to A if B can be obtained from A by a
finite sequence of elementary row operations. It can be easily seen that the relation
of being row equivalent is an equivalence relation in the set of all m × n matrices
over F.
Row reduced Echelon matrix: Definition:
An m × n matrix R is called a row reduced echelon matrix if :
(1) Every row of R which has all its entries 0 occurs below every row which has a
non-zero entry.
(2) The first non-zero entry in each non-zero row is equal to 1.
(3) If the first non-zero entry in row i appears in column k i , then all other entries
in column k i are zero.
(4) If r is the number of non-zero rows, then
k1 < k2 < … < k r
(i. e., the first non-zero entry in row i is to the left of the first non-zero entry in row
i + 1).
Row and column rank of a matrix:
Definition: Let A = [a ij ] m × n be an m × n matrix over the field F. The row vectors
of A are the vectors α1 , … , α m ∈ Vn ( F ) defined by
α i = (a i1 , a i2 , … , a in ), 1 ≤ i ≤ m.
The row space of A is the subspace of Vn ( F ) spanned by these vectors. The row
rank of A is the dimension of the row space of A.
The column vectors of A are the vectors β1 , … , β n ∈ Vm ( F ) defined by
β j = (a1 j , a2 j , … , a mj ), 1 ≤ j ≤ n.
The column space of A is the subspace of Vm ( F ) spanned by these vectors. The


column rank of A is the dimension of the column space of A.
The following two results are to be remembered :
(1) Row equivalent matrices have the same row space.
(2) If R is a non-zero row reduced Echelon matrix, then the non-zero row
vectors of R are linearly independent and therefore they form a basis for the
row space of R.
In order to find the row rank of a matrix A, we should reduce it to row reduced
Echelon matrix R by elementary row operations. The number of non-zero rows in R
will give us the row rank of A.
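Illustration (a Python sketch of ours; it assumes that numpy's numerically computed rank agrees with the row rank obtained from the row reduced Echelon form, which it does for small exact examples such as this one):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],    # twice the first row
                  [1.0, 0.0, 1.0]])
    print(np.linalg.matrix_rank(A))   # 2 : the row space is spanned by two independent rows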

4.2 Representation of Transformations by Matrices


Martix of a linear transformation: Let U be an n-dimensional vector space over
the field F and let V be an m-dimensional vector space over F. Let
B = {α1 , … , α n} and B ′ = { β1 , … , β m }
be ordered bases for U and V respectively. Suppose T is a linear transformation
from U into V. We know that T is completely determined by its action on the
vectors α j belonging to a basis for U. Each of the n vectors T (α j ) is uniquely
expressible as a linear combination of β1 , … , β m because T (α j ) ∈ V and these m
vectors form a basis for V. Let for j = 1, 2 , … , n,
m
T (α j ) = a1 j β1 + a2 j β 2 + … + a mj β m = Σ a ij β i .
i =1

The scalars a1 j , a2 j , … , a mj are the coordinates of T (α j ) in the ordered basis B ′ .


The m × n matrix whose j th column ( j = 1, 2 , … , n) consists of these coordinates is
called the matrix of the linear transformation T relative to the pair of
ordered bases B and B ′. We shall denote it by the symbol [T ; B ; B ′ ] or simply
by [T ] if the bases are understood. Thus
[T ] = [T ; B ; B ′ ]
= matrix of T relative to the ordered bases B and B ′
= [a ij ] m × n
m
where T (α j ) = Σ a ij β i , for each j = 1, 2 , … , n. …(1)
i =1
The coordinates of T (α1 ) in the ordered basis B ′ form the first column of this
matrix, the coordinates of T (α 2 ) in the ordered basis B ′ form the second column
of this matrix and so on.
The m × n matrix [a ij ] m × n completely determines the linear transformation T
through the formulae given in (1). Therefore the matrix [a ij ] m × n represents the
transformation T.
Note: Let T be a linear transformation from an n-dimensional vector space


V ( F ) into itself. Then in order to represent T by a matrix, it is most convenient to
use the same ordered basis in each case, i. e., to take B = B ′ . The representing
matrix will then be called the matrix of T relative to the ordered basis B and
will be denoted by [T ; B ] or sometimes also by [T ] B .
Thus if B = {α1 , … , α n} is an ordered basis for V, then
[T ] B or [T ; B ] = matrix of T relative to the ordered basis B
= [a ij ] n × n ,
n
where T (α j ) = Σ a ij α i , for each j = 1, 2 , … , n.
i =1

Example 1: Let T be a linear transformation on the vector space V2 ( F ) defined by


T (a, b) = (a, 0).
Write the matrix of T relative to the standard ordered basis of V2 ( F ).
Solution: Let B = {α1 , α 2 } be the standard ordered basis for V2 ( F ). Then
α1 = (1, 0), α 2 = (0, 1).
We have T (α1 ) = T (1, 0) = (1, 0).
Now let us express T (α1 ) as a linear combination of vectors in B. We have
T (α1 ) = (1, 0) = 1 (1, 0) + 0 (0, 1) = 1α1 + 0α 2 .
Thus 1, 0 are the coordinates of T (α1 ) with respect to the ordered basis B. These
coordinates will form the first column of matrix of T relative to ordered basis B.
Again T (α 2 ) = T (0, 1) = (0, 0) = 0 (1, 0) + 0 (0, 1).
Thus 0, 0 are the coordinates of T (α 2 ) and will form second column of matrix of T
relative to ordered basis B.
Thus matrix of T relative to ordered basis B
1 0
= [T ] B or [T ; B] =  .
0 0 
Example 2: Let V (R) be the vector space of all polynomials in x with coefficients in R of the
form
f ( x) = a0 x 0 + a1 x + a2 x 2 + a3 x 3
i. e., the space of polynomials of degree three or less. The differentiation operator D is a linear
transformation on V. The set
B = {α1 , … , α 4 } where α1 = x 0 , α 2 = x1 , α 3 = x 2 , α 4 = x 3
is an ordered basis for V. Write the matrix of D relative to the ordered basis B.
Solution: We have
D (α1 ) = D ( x 0 ) = 0 = 0 x 0 + 0 x1 + 0 x 2 + 0 x 3
= 0α1 + 0α 2 + 0α 3 + 0α 4
D (α 2 ) = D ( x ) = x 0 = 1x 0 + 0 x1 + 0 x 2 + 0 x 3
1

= 1α1 + 0α 2 + 0α 3 + 0α 4
D (α 3 ) = D ( x ) = 2 x1 = 0 x 0 + 2 x1 + 0 x 2 + 0 x 3
2

= 0α1 + 2α 2 + 0α 3 + 0α 4
D (α 4 ) = D ( x ) = 3 x 2 = 0 x 0 + 0 x1 + 3 x 2 + 0 x 3
3

= 0α1 + 0α 2 + 3α 3 + 0α 4 .
∴ the matrix of D relative to the ordered basis B
= [D ; B ] =
| 0  1  0  0 |
| 0  0  2  0 |
| 0  0  0  3 |
| 0  0  0  0 |  (4 × 4).
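Computational illustration (a Python sketch, not from the text): the matrix [ D ; B ] of Example 2 can be rebuilt column by column, the j th column being the coordinate vector of D (α j ) in the ordered basis B = {1, x, x 2 , x 3 }.

    import numpy as np

    def D(coeffs):
        # derivative of a0 + a1 x + a2 x^2 + a3 x^3, returned as coordinates in B
        return np.array([coeffs[1], 2 * coeffs[2], 3 * coeffs[3], 0.0])

    basis = np.eye(4)                                  # rows are the coordinates of 1, x, x^2, x^3
    matrix_of_D = np.column_stack([D(alpha) for alpha in basis])
    print(matrix_of_D)
    # [[0. 1. 0. 0.]
    #  [0. 0. 2. 0.]
    #  [0. 0. 0. 3.]
    #  [0. 0. 0. 0.]]  : the same matrix as obtained above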
Theorem 1: Let U be an n-dimensional vector space over the field F and let V be an
m-dimensional vector space over F. Let B and B ′ be ordered bases for U and V respectively.
Then corresponding to every matrix [a ij ] m × n of mn scalars belonging to F there corresponds a
unique linear transformation T from U into V such that
[T ; B ; B ′ ] = [a ij ] m × n .
Proof: Let B = {α1 , α 2 , … , α n} and B ′ = { β1 , β 2 , … , β m }.
m m m
Now Σ a i1 β i , Σ a i2 β i , … , Σ a in β i
i =1 i =1 i =1

are vectors belonging to V because each of them is a linear combination of the


m
vectors belonging to a basis for V. It should be noted that the vector Σ a ij β i has
i =1
th
been obtained with the help of the j column of the matrix [a ij ] m × n .

Since B is a basis for U, therefore by the theorem 2 of article 3.13 of chapter 3 there
exists a unique linear transformation T from U into V such that
m
T (α j ) = Σ a ij β i where j = 1, 2, … , n. …(1)
i =1

By our definition of matrix of a linear transformation , we have from (1)


[T ; B ; B ′ ] = [a ij ] m × n .
Note: If we take V = U, then in place of B ′ , we also take B. In that case the above
theorem will run as :
Let V be an n-dimensional vector space over the field F and B be an ordered basis or
co-ordinate system for V. Then corresponding to every matrix [a ij ] n × n of n2 scalars
belonging to F there corresponds a unique linear transformation T from V into V such that
[T ; B ] or [T ] B = [a ij ] n × n .
Explicit expression for a linear transformation in terms of its matrix: Now


our aim is to establish a formula which will give us the image of any vector under a
linear transformation T in terms of its matrix.
Theorem 2: Let T be a linear transformation from an n-dimensional vector space U into
an m-dimensional vector space V and let B and B ′ be ordered bases for U and V respectively. If
A is the matrix of T relative to B and B ′ then V α ∈ U, we have
[T (α)] B ′ = A [α] B
where [α] B is the co-ordinate matrix of α with respect to ordered basis B and [T (α)] B ′ is
co-ordinate matrix of T (α) ∈ V with respect to B ′ .
Proof: Let B = {α1 , α 2 , … , α n }

and B ′ = { β1 , β 2 , … , β m }.
Then A = [T ; B ; B ′ ] = [a ij ] m × n ,
m
where T (α j ) = Σ a ij β i , j = 1, 2, … , n. …(1)
i =1

If α = x1α1 + … + x n α n is a vector in U, then


 n 
T (α) = T  Σ x j α j 
 j =1 
n
= Σ x j T (α j )
j =1 [ ∵ T is a linear transformation]
n m
= Σ xj Σ a ij β i [From (1)]
j =1 i =1

m  n 
= Σ  Σ a ij x j  β i . …(2)
i =1  j =1 
The co-ordinate matrix of T (α) with respect to ordered basis B ′ is an m × 1matrix.
From (2), we see that the ith entry of this column matrix [T (α)] B ′
n
= Σ a ij x j
j =1

i. e., the coefficient of β i in the linear combination (2) for T (α).


If X is the co-ordinate matrix [α] B of α with respect to ordered basis B, then X is an
n × 1matrix. The product AX will be an m × 1matrix. The ith entry of this column
matrix AX will be
n
= Σ a ij x j .
j =1

∴ [T (α)] B ′ = AX = A [α] B = [T ; B ; B ′ ] [α] B .


Note: If we take U = V , then the above result will be
[T (α)] B = [T ] B [α] B .
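A numerical check of this formula (a Python sketch of ours, using the operator D and the basis of Example 2): multiplying the matrix of D by the coordinate matrix of a polynomial gives the coordinate matrix of its derivative.

    import numpy as np

    matrix_of_D = np.array([[0.0, 1.0, 0.0, 0.0],
                            [0.0, 0.0, 2.0, 0.0],
                            [0.0, 0.0, 0.0, 3.0],
                            [0.0, 0.0, 0.0, 0.0]])

    alpha = np.array([5.0, 1.0, 4.0, 2.0])   # coordinates of 5 + x + 4 x^2 + 2 x^3 in {1, x, x^2, x^3}
    print(matrix_of_D @ alpha)               # [1. 8. 6. 0.] : coordinates of 1 + 8x + 6 x^2 = D(alpha)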
Matrices of Identity and Zero transformations.


Theorem 3:Let V ( F ) be an n-dimensional vector space and B be any ordered basis for V.

If I be the identity transformation and ^0 be the zero transformation on V, then
(i) [I ; B ] = I (unit matrix of order n) and
(ii) [^0 ; B ] = null matrix of the type n × n.
Proof: Let B = {α1 , α 2 , … , α n }.
(i) We have I (α j ) = α j , j = 1, 2, … , n
= 0α1 + … + 1α j + 0α j + 1 + … + 0α n
n
= Σ δ ij α i , where δ ij is Kronecker delta.
i =1

∴ By def. of matrix of a linear transformation , we have


[I ; B ] = [δ ij ] n × n = I i. e., unit matrix of order n.

(ii) We have 0̂ (α j ) = 0, j = 1, 2, … , n
     = 0α1 + 0α 2 + … + 0α n
     = Σ_{i=1}^{n} o ij α i , where each o ij = 0.
∴ By def. of matrix of a linear transformation, we have
[0̂ ; B ] = [o ij ] n × n = null matrix of the type n × n.

Theorem 4: Let T and S be linear transformations from an n-dimensional vector space


U into an m-dimensional vector space V and let B and B ′ be ordered bases for U and V
respectively. Then
(i) [T + S ; B ; B ′ ] = [T ; B ; B ′ ] + [S ; B ; B ′ ]
(ii) [cT ; B ; B ′ ] = c [T ; B ; B ′ ] where c is any scalar.
Proof: Let B = {α1 , α 2 , … , α n } and B ′ = { β1 , β 2 , … , β m }.

Let [a ij ] m × n be the matrix of T relative to B, B ′. Then


m
T (α j ) = Σ a ij β i , j = 1, 2, … , n.
i =1

Also let [b ij ] m × n be the matrix of S relative to B, B ′. Then


m
S (α j ) = Σ b ij β i , j = 1, 2, … , n.
i =1
(i) We have
(T + S ) (α j ) = T (α j ) + S (α j ), j = 1, 2, … , n
m m m
= Σ a ij β i + Σ b ij β i = Σ (a ij + b ij ) β i .
i =1 i =1 i =1

∴ matrix of T + S relative to B, B ′ = [a ij + b ij ] m × n
= [a ij ] m × n + [b ij ] m × n ,

∴ [T + S ; B ; B ′ ] = [T ; B ; B ′ ] + [S ; B ; B ′ ].
(ii) We have (cT ) (α j ) = cT (α j ), j = 1, 2, … , n
m m
= c Σ a ij β i = Σ (ca ij ) β i .
i =1 i =1

∴ [cT ; B ; B ′ ] = matrix of cT relative to B, B ′


= [ca i j ] m × n = c [a i j ] m × n = c [T ; B ; B ′ ].

Theorem 5: Let U, V and W be finite dimensional vector spaces over the field F ; let T be a
linear transformation from U into V and S a linear transformation from V into W.Further let
B, B ′ and B ′ ′ be ordered bases for spaces U, V and W respectively. If A is the matrix of T
relative to the pair B, B ′ and D is the matrix of S relative to the pair B ′ , B ′ ′ then the matrix
of the composite transformation ST relative to the pair B, B ′ ′ is the product matrix C = DA.
Proof: Let dim U = n, dim V = m and dim W = p. Further let
B = {α1 , α 2 , … , α n }, B ′ = { β1 , β 2 , … , β m }
and B′ ′ = {γ 1 , γ 2 , … , γ p }.
Let A = [a i j ] m × n , D = [dk i ] p × m and C = [c k j ] p × n . Then
m
T (α j ) = Σ a ij β i , j = 1, 2, … , n, …(1)
i =1
p
S ( β i) = Σ dki γ k , i = 1, 2, … , m. …(2)
k =1
p
and (ST ) (α j ) = Σ c kj γ k , j = 1, 2, … , n. …(3)
k =1

We have (ST ) (α j ) = S [T (α j )], j = 1, 2, … , n
   = S ( Σ_{i=1}^{m} a ij β i )                        [From (1)]
   = Σ_{i=1}^{m} a ij S ( β i )                        [ ∵ S is linear]
   = Σ_{i=1}^{m} a ij Σ_{k=1}^{p} d ki γ k             [From (2)]
   = Σ_{k=1}^{p} ( Σ_{i=1}^{m} d ki a ij ) γ k .       …(4)
Therefore from (3) and (4), we have
c kj = Σ_{i=1}^{m} d ki a ij , j = 1, 2, … , n ; k = 1, 2, … , p.
∴ [c kj ] p × n = [ Σ_{i=1}^{m} d ki a ij ] p × n
               = [d ki ] p × m [a ij ] m × n , by def. of product of two matrices.
Thus C = DA.

Note: If U = V = W, then the statement and proof of the above theorem will be as
follows :
Let V be an n-dimensional vector space over the field F ; let T and S be linear transformations
of V. Further let B be an ordered basis for V. If A is the matrix of T relative to B, and D is the
matrix of S relative to B,then the matrix of the composite transformation ST relative to B is the
product matrix
C = DA i. e., [ST ] B = [S ] B [T ] B .
Proof: Let B = {α1 , α 2 , … , α n }.
Let A = [a ij ] n × n , D = [dki ] n × n and C = [c kj ] n × n . Then
n
T (α j ) = Σ a ij α i , j = 1, 2, … , n, …(1)
i =1
n
S (αi ) = Σ dki α k , i = 1, 2, … , n, …(2)
k =1
n
and (ST ) (α j ) = Σ c kj α k , j = 1, 2, … , n. …(3)
k =1

We have (ST ) (α j ) = S [T (α j )]
   = S ( Σ_{i=1}^{n} a ij α i ) = Σ_{i=1}^{n} a ij S (α i ) = Σ_{i=1}^{n} a ij Σ_{k=1}^{n} d ki α k
   = Σ_{k=1}^{n} ( Σ_{i=1}^{n} d ki a ij ) α k .       …(4)
∴ from (3) and (4), we have
   c kj = Σ_{i=1}^{n} d ki a ij .
∴ [c kj ] n × n = [ Σ_{i=1}^{n} d ki a ij ] n × n = [d ki ] n × n [a ij ] n × n .

∴ C = DA.
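Remark (numerical check): Theorem 5, and the special case just proved, can be illustrated numerically. In the sketch below, which is not part of the text, T and S are given by arbitrary matrices with respect to the standard bases, and B, B′, B′′ are arbitrary bases stored as the columns of P, Q, R; NumPy is assumed.

import numpy as np

P = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.]])      # columns: ordered basis B of U = R^3
Q = np.array([[1., 1.],
              [1., 2.]])          # columns: ordered basis B' of V = R^2
R = np.array([[2., 1.],
              [1., 1.]])          # columns: ordered basis B'' of W = R^2

M_T = np.array([[1., 2., 0.],
                [0., 1., 3.]])    # T : U -> V  (w.r.t. the standard bases)
M_S = np.array([[2., -1.],
                [1.,  1.]])       # S : V -> W  (w.r.t. the standard bases)

A = np.linalg.solve(Q, M_T @ P)             # A = [T ; B ; B']
D = np.linalg.solve(R, M_S @ Q)             # D = [S ; B' ; B'']
C = np.linalg.solve(R, M_S @ M_T @ P)       # C = [ST ; B ; B'']
assert np.allclose(C, D @ A)                # the matrix of ST is the product DA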
Theorem 6: Let U be an n-dimensional vector space over the field F and let V be an
m-dimensional vector space over F. For each pair of ordered bases B, B ′ for U and V
respectively, the function which assigns to a linear transformation T its matrix relative to B,
B ′ is an isomorphism between the space L (U, V ) and the space of all m × n matrices over the
field F.
Proof. Let B = {α1 , … , α n } and B′ = { β1 , … , β m }.
Let M be the vector space of all m × n matrices over the field F. Let
ψ : L (U, V ) → M such that
ψ (T ) = [T ; B ; B′ ] V T ∈ L (U, V ).
Let T1 , T2 ∈ L (U, V ) ; and let
[T1 ; B ; B′ ] = [a ij ] m × n and [T2 ; B ; B ′ ] = [b ij ] m × n .
m
Then T1 (α j ) = Σ a ij β i , j = 1, 2 , … , n
i =1
m
and T2 (α j ) = Σ b ij β i , j = 1, 2, … , n.
i =1

To prove that ψ is one-one:


We have ψ (T1 ) = ψ (T2 )
⇒ [T1 ; B ; B ′ ] = [T2 ; B ; B ′ ] [by def. of ψ]
⇒ [a ij ] m × n = [b ij ] m × n
⇒ a ij = b ij for i = 1, … , m and j = 1, … , n
m m
⇒ Σ a ij β i = Σ b ij β i for j = 1, … , n
i =1 i =1

⇒ T1 (α j ) = T2 (α j ) for j = 1, … , n
⇒ T1 = T2 [∵ T1 and T2 agree on a basis for U ]
∴ ψ is one-one.
ψ is onto:
Let [c ij ] m × n ∈ M. Then there exists a linear transformation T from U into V such
that
m
T (α j ) = Σ c ij β i , j = 1, 2, … , n.
i =1

We have [ T ; B ; B ′ ] = [c ij ] m × n ⇒ ψ (T) = [c ij ]m × n .
∴ ψ is onto.
ψ is a linear transformation:
If a, b ∈ F, then
ψ (aT1 + bT2 ) = [aT1 + bT2 ; B ; B′ ] [by def. of ψ]
= [aT1 ; B ; B′ ] + [bT2 ; B ; B′ ] [by theorem 4]
= a [T1 ; B ; B′ ] + b [T2 ; B ; B′ ] [by theorem 4]
= aψ (T1 ) + bψ (T2 ), by def. of ψ.
∴ ψ is a linear transformation.
Hence ψ is an isomorphism from L (U, V ) onto M.
Note: It should be noted that in the above theorem if U = V , then ψ also
preserves products and the identity I, i. e.,
ψ (T1 T2 ) = ψ (T1 ) ψ (T2 )
and ψ (I ) = I i. e., unit matrix.
Theorem 7: Let T be a linear operator on an n-dimensional vector space V and let B be an
ordered basis for V. Prove that T is invertible iff [T ] B is an invertible matrix. Also if T is
invertible, then
[ T −1 ] B = [T ] B −1 ,
i. e., the matrix of T −1 relative to B is the inverse of the matrix of T relative to B.

Proof:Let T be invertible. Then T −1 exists and we have


T −1 T = I = T T −1 ⇒ [T −1 T ] B = [I ] B = [T T −1 ] B
⇒ [T −1 ] B [T ] B = I = [T ] B [T −1 ] B
⇒ [T ] B is invertible and ([T ] B ) −1 = [T −1 ] B .
Conversely, let [T ] B be an invertible matrix. Let [T ] B = A. Let C = A −1 and let S
be the linear transformation of V such that
[S ] B = C .
We have C A = I = AC
⇒ [S ] B [T ] B = I = [T ] B [S ] B ⇒ [ST ] B = [I ] B = [TS ] B
⇒ ST = I = TS ⇒ T is invertible.

4.3 Change of Basis


Suppose V is an n-dimensional vector space over the field F. Let B and B ′ be
two ordered bases for V. If α is any vector in V, then we are now interested to
know what is the relation between its coordinates with respect to B and its
coordinates with respect to B ′ .
Theorem 8:Let V ( F ) be an n-dimensional vector space and let B and B ′ be two ordered
bases for V.Then there is a unique necessarily invertible, n × n matrix A with entries in F such
that
(1) [α] B = A [α] B ′
(2) [α] B ′ = A −1 [α] B for every vector α in V.
Proof: Let B = {α1 , α 2 , … , α n } and B′ = { β1 , β 2 , … , β n }.
Then there exists a unique linear transformation T from V into V such that
T (α j ) = β j , j = 1, 2, … , n. …(1)
Since T maps a basis B onto a basis B ′ , therefore T is necessarily invertible. The
matrix of T relative to B i. e., [T ] B will be a unique n × n matrix with elements in F.
Also this matrix will be invertible because T is invertible.
Let [T ] B = A = [a ij ] n × n . Then
n
T (α j ) = Σ a ij α i , j = 1, 2, … , n. …(2)
i =1

Let x1 , x2 , … , x n be the coordinates of α with respect to B and y1 , y2 , … , y n be


the coordinates of α with respect to B′. Then
α = y1 β1 + y2 β 2 + … + y n β n = Σ_{j=1}^{n} y j β j
  = Σ_{j=1}^{n} y j T (α j )                        [From (1)]
  = Σ_{j=1}^{n} y j Σ_{i=1}^{n} a ij α i            [From (2)]
  = Σ_{i=1}^{n} ( Σ_{j=1}^{n} a ij y j ) α i .
n
Also α = Σ xi α i .
i =1
n
∴ x i = Σ a ij y j
j =1

because the expression for α as a linear combination of elements of B is unique.


Now [α] B is a column matrix of the type n × 1. Also [α] B ′ is a column matrix of the
type n × 1. The product matrix A [α] B ′ will also be of the type n × 1.
n
The i th entry of [α] B = x i = Σ a ij y j = i th entry of A [α] B ′ .
j =1

∴ [α] B = A [α] B ′ ⇒ A −1 [α] B = A −1 A [α] B ′


⇒ A −1 [α] B = I [α] B ′ ⇒ A −1 [α] B = [α] B ′ .

Note: The matrix A = [T ] B is called the transition matrix from B to B ′. It


expresses the coordinates of each vector in V relative to B in terms of its coordinates
relative to B ′ .
How to write the transition matrix from one basis to another ?
Let B = {α1 , α 2 , … , α n } and B′ = { β1 , β 2 , … , β n } be two ordered bases for the
n-dimensional vector space V ( F ). Let A be the transition matrix from the basis B
to the basis B′ . Let T be the linear transformation from V into V which maps the
basis B onto the basis B ′ . Then A is the matrix of T relative to B i. e., A = [T ] B .
So in order to find the matrix A, we should first express each vector in the basis B ′
as a linear combination over F of the vectors in B. Thus we write the relations
β1 = a11 α1 + a21 α 2 + … + a n1 α n
β 2 = a12 α1 + a22 α 2 + … + a n2 α n
… … … … …
… … … … …
β n = a1n α1 + a2 n α 2 + … + a nn α n .
Then the matrix A = [a ij ] n × n i. e., A is the transpose of the matrix of coefficients in
the above relations. Thus
        [ a11   a12   ...   a1n ]
        [ a21   a22   ...   a2n ]
   A =  [ ...   ...   ...   ... ] ⋅
        [ an1   an2   ...   ann ]
Now suppose α is any vector in V. If [α] B is the coordinate matrix of α relative to the
basis B and [α] B ′ its coordinate matrix relative to the basis B ′ then
[α] B = A [α] B ′ and [α] B ′ = A −1 [α] B .
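Remark (numerical check): The recipe above is easy to test in Python/NumPy. In the sketch below, not part of the text, the two bases are chosen arbitrarily and stored as the columns of Bmat and Bpmat.

import numpy as np

Bmat  = np.array([[1., 2., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])     # columns are alpha_1, alpha_2, alpha_3
Bpmat = np.array([[1., 1., 1.],
                  [1., 1., 0.],
                  [1., 0., 0.]])     # columns are beta_1, beta_2, beta_3

# transition matrix from B to B': the j-th column = coordinates of beta_j relative to B
A = np.linalg.solve(Bmat, Bpmat)

alpha = np.array([3., -1., 2.])
coords_B  = np.linalg.solve(Bmat,  alpha)    # [alpha]_B
coords_Bp = np.linalg.solve(Bpmat, alpha)    # [alpha]_B'
assert np.allclose(coords_B, A @ coords_Bp)                   # [alpha]_B  = A [alpha]_B'
assert np.allclose(coords_Bp, np.linalg.inv(A) @ coords_B)    # [alpha]_B' = A^(-1) [alpha]_B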

Theorem 9: Let B = {α1 , … , α n } and B′ = { β1 , … , β n } be two ordered bases for an


n-dimensional vector space V ( F ). If ( x1 , … , x n ) is an ordered set of n scalars, let
n n
α = Σ x i α i and β = Σ xi β i .
i =1 i =1

Then show that T (α) = β, where T is the linear operator on V defined by


T (α i ) = β i , i = 1, 2, … , n.
 n  n
Proof: We have T (α) = T  Σ x i α i  = Σ x i T (α i ) [ ∵ T is linear]
 i =1  i =1
n
= Σ x i β i = β.
i =1

Similarity:
Similarity of matrices. Definition:Let A and B be square matrices of order n over the
field F. Then B is said to be similar to A if there exists an n × n invertible square matrix C with
elements in F such that
B = C −1 AC.
Theorem 10: The relation of similarity is an equivalence relation in the set of all n × n
matrices over the field F.
Proof: If A and B are two n × n matrices over the field F, then B is said to be
similar to A if there exists an n × n invertible matrix C over F such that
B = C −1 AC.
−1
Reflexive: Let A be any n × n matrix over F. We can write A = I AI , where I is
n × n unit matrix over F.
∴ A is similar to A because I is definitely invertible.
Symmetric: Let A be similar to B. Then there exists an n × n invertible matrix P
over F such that
A = P −1 BP
⇒ PAP −1 = P ( P −1 BP ) P −1
⇒ PAP −1 = B ⇒ B = PAP −1
⇒ B = ( P −1 ) −1 AP −1
[∵ P is invertible means P −1 is invertible and ( P −1 ) −1 = P ]
⇒ B is similar to A.
Transitive: Let A be similar to B and B be similar to C. Then
A = P −1 BP and B = Q −1 CQ,
where P and Q are invertible n × n matrices over F.
We have A = P −1 BP = P −1 (Q −1 CQ ) P

= ( P −1 Q −1 ) C (QP )
= (QP) −1 C (QP) [ ∵ P and Q are invertible means QP
is invertible and (QP) −1 = P −1 Q −1 ]
∴ A is similar to C.
Hence similarity is an equivalence relation on the set of n × n matrices over the field
F.
Theorem 11: Similar matrices have the same determinant.
Proof: Let B be similar to A.Then there exists an invertible matrix C such that
B = C −1 AC
⇒ det B = det (C −1 AC ) ⇒ det B = (det C −1 ) (det A) (det C )
⇒ det B = (det C −1 ) (det C ) (det A) ⇒ det B = (det C −1 C ) (det A)
⇒ det B = (det I ) (det A) ⇒ det B = 1 (det A) ⇒ det B = det A.
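Remark (numerical check): a short NumPy verification of this theorem, with A and the invertible matrix C chosen arbitrarily (not taken from the text):

import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])
C = np.array([[1., 2.],
              [1., 3.]])                  # any invertible matrix (det C = 1)
B = np.linalg.inv(C) @ A @ C              # B is similar to A
assert np.isclose(np.linalg.det(B), np.linalg.det(A))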
Similarity of linear transformations: Definition: Let A and B be linear
transformations on a vector space V ( F ). Then B is said to be similar to A if there exists an
invertible linear transformation C on V such that
B = CAC −1 .
Theorem 12: The relation of similarity is an equivalence relation in the set of all linear
transformations on a vector space V ( F ).
Proof: If A and B are two linear transformations on the vector space V ( F ), then
B is said to be similar to A if there exists an invertible linear transformation C on V
such that
B = CAC −1 .
Reflexive: Let A be any linear transformation on V. We can write
A = IAI −1 ,
where I is identity transformation on V.
∴ A is similar to A because I is definitely invertible.
Symmetric: Let A be similar to B. Then there exists an invertible linear
transformation P on V such that
A = PBP −1
⇒ P −1 AP = P −1 ( PBP −1 ) P
⇒ P −1 AP = B ⇒ B = P −1 AP
⇒ B = P −1 A ( P −1 ) −1
⇒ B is similar to A.
Transitive: Let A be similar to B and B be similar to C.
Then A = PBP −1 ,
and B = QCQ −1 ,
where P and Q are invertible linear transformations on V.

We have A = PBP −1 = P (QCQ −1 ) P −1


= ( PQ ) C (Q −1 P −1 ) = ( PQ ) C ( PQ ) −1 .
∴ A is similar to C.
Hence similarity is an equivalence relation on the set of all linear transformations
on V ( F ).
Theorem 13: Let T be a linear operator on an n-dimensional vector space V ( F ) and let B
and B′ be two ordered bases for V. Then the matrix of T relative to B′ is similar to the matrix of
T relative to B.
Proof:Let B = {α1 , α 2 , … , α n } and B′ = { β1 , … , β n }.

Let A = [a i j ] n × n be the matrix of T relative to B


and C = [c i j ] n × n be the matrix of T relative to B ′ . Then
n
T (α j ) = Σ a ij α i , j = 1, 2, … , n …(1)
i =1
n
and T ( β j ) = Σ c ij β i , j = 1, 2, … , n. …(2)
i =1

Let S be the linear operator on V defined by


S (α j ) = β j , j = 1, 2, … , n. …(3)
Since S maps a basis B onto a basis B ′, therefore S is necessarily invertible. Let P be
the matrix of S relative to B. Then P is also an invertible matrix.
If P = [ pi j ] n × n , then
n
S (α j ) = Σ pij α i , j = 1, 2, … , n …(4)
i =1

We have T ( β j ) = T [S (α j )]                          [From (3)]
   = T ( Σ_{k=1}^{n} p kj α k )        [From (4), on replacing i by k which is immaterial]
   = Σ_{k=1}^{n} p kj T (α k )                            [ ∵ T is linear]
   = Σ_{k=1}^{n} p kj Σ_{i=1}^{n} a ik α i                [From (1), on replacing j by k]
   = Σ_{i=1}^{n} ( Σ_{k=1}^{n} a ik p kj ) α i .          …(5)
Also T ( β j ) = Σ_{k=1}^{n} c kj β k                     [From (2), on replacing i by k]
   = Σ_{k=1}^{n} c kj S (α k )                            [From (3)]
   = Σ_{k=1}^{n} c kj Σ_{i=1}^{n} p ik α i                [From (4), on replacing j by k]
   = Σ_{i=1}^{n} ( Σ_{k=1}^{n} p ik c kj ) α i .          …(6)
From (5) and (6), we have
Σ_{i=1}^{n} ( Σ_{k=1}^{n} a ik p kj ) α i = Σ_{i=1}^{n} ( Σ_{k=1}^{n} p ik c kj ) α i
⇒ Σ_{k=1}^{n} a ik p kj = Σ_{k=1}^{n} p ik c kj

⇒ [a ik ] n × n [ pkj ] n × n = [ pik ] n × n [c kj ] n × n
[by def. of matrix multiplication]
⇒ AP = PC
⇒ P −1 AP = P −1 PC [∵ P −1 exists]
⇒ P −1 AP = IC ⇒ P −1 AP = C
⇒ C is similar to A.
Note: Suppose B and B′ are two ordered bases for an n-dimensional vector space
V ( F ).Let T be a linear operator on V.Suppose A is the matrix of T relative to B and
C is the matrix of T relative to B′ . If P is the transition matrix from the basis B to the
basis B′ , then C = P −1 AP.
This result will enable us to find the matrix of T relative to the basis B ′ when we
already knew the matrix of T relative to the basis B.
Theorem 14: Let V be an n-dimensional vector space over the field F and T1 and T2 be
two linear operators on V. If there exist two ordered bases B and B ′ for V such that
[T1 ] B = [T2 ] B ′ , then show that T2 is similar to T1 .
Proof: Let B = {α1 , … , α n } and B′ = { β1 , … , β n }.
Let [T1 ] B = [T2 ] B ′ = A = [a ij ] n × n . Then
n
T1 (α j ) = Σ a ij α i , j = 1, 2, … , n, …(1)
i =1
n
and T2 ( β j ) = Σ a ij β i , j = 1, 2, … , n. …(2)
i =1

Let S be the linear operator on V defined by


S (α j ) = β j , j = 1, 2, … , n. …(3)
Since S maps a basis of V onto a basis of V, therefore S is invertible.
We have T2 ( β j ) = T2 [S (α j )] [From (3)]
= (T2 S ) (α j ). …(4)
n
Also T2 ( β j ) = Σ ai j β i [From (2)]
i =1
n
= Σ a i j S (α i ) [From (3)]
i =1

 n 
= S  Σ ai j α i  [ ∵ S is linear]
 i =1 
= S [T1 (α j )] [From (1)]
= (ST1 ) (α j ). …(5)
From (4) and (5), we have
(T2 S ) (α j ) = (ST1 ) (α j ), j = 1, 2, … , n.
Since T2 S and ST1 agree on a basis for V, therefore we have T2 S = ST1
⇒ T2 SS −1 = ST1 S − 1 ⇒ T2 I = ST1 S −1
⇒ T2 = ST1 S −1 ⇒ T2 is similar to T1 .
Determinant of a linear transformation on a finite dimensional vector
space: Let T be a linear operator on an n-dimensional vector space V ( F ).If B and
B′ are two ordered bases for V, then [T ] B and [T ] B ′
are similar matrices. Also similar matrices have the same determinant. This enables
us to make the following definition :
Definition: Let T be a linear operator on an n-dimensional vector space V ( F ). Then the
determinant of T is the determinant of the matrix of T relative to any ordered basis for V.
By the above discussion the determinant of T as defined by us will be a unique
element of F and thus our definition is sensible.
Scalar Transformation: Definition: Let V ( F ) be a vector space. A linear
transformation T on V is said to be a scalar transformation of V if
T (α) = cα V α ∈ V ,
where c is a fixed scalar in F.
Also then we write T = c and we say that the linear transformation T is equal to the
scalar c.
Also obviously if the linear transformation T is equal to the scalar c, then we have
T = cI , where I is the identity transformation on V.
Trace of a Matrix: Definition: Let A be a square matrix of order n over a field F. The
sum of the elements of A lying along the principal diagonal is called the trace of A. We shall
write the trace of A as trace A. Thus if A = [a ij ] n × n , then
n
tr A = Σ a ii = a11 + a22 + … + a nn .
i =1

In the following two theorems we have given some fundamental properties of the
trace function.
Theorem 15: Let A and B be two square matrices of order n over a field F and λ ∈ F.Then
(i) tr (λ A) = λ tr A ;
(ii) tr ( A + B ) = tr A + tr B ;
(iii) tr ( AB ) = tr ( BA).

Proof: Let A = [a ij ] n × n and B = [b ij ] n × n .

(i) We have λA = [λa ij ] n × n , by def. of multiplication of a matrix by a scalar.


n n
∴ tr (λ A) = Σ λa ii = λ Σ a ii = λ tr A.
i =1 i =1

(ii) We have A + B = [a ij + b ij ] n × n .
n n n
∴ tr ( A + B ) = Σ (a ii + b ii ) = Σ a ii + Σ b ii = tr A + tr B.
i =1 i =1 i =1
n
(iii) We have AB = [c i j ] n × n where c ij = Σ a i k b k j .
k =1
n
Also BA = [dij ] n × n where dij = Σ b ik a kj .
k =1

Now tr ( AB ) = Σ_{i=1}^{n} c ii = Σ_{i=1}^{n} ( Σ_{k=1}^{n} a ik b ki )
   = Σ_{k=1}^{n} Σ_{i=1}^{n} a ik b ki ,
interchanging the order of summation in the last sum
   = Σ_{k=1}^{n} ( Σ_{i=1}^{n} b ki a ik ) = Σ_{k=1}^{n} d kk
= d11 + d22 + … + dnn = tr ( BA).
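Remark (numerical check): the three parts of Theorem 15 can be confirmed on randomly chosen matrices; the sketch below is illustrative only and assumes NumPy.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
lam = 2.5

assert np.isclose(np.trace(lam * A), lam * np.trace(A))           # (i)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))     # (ii)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))               # (iii)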
Theorem 16: Similar matrices have the same trace.
Proof: Suppose A and B are two similar matrices. Then there exists an invertible
matrix C such that B = C −1 AC.
−1
Let C A = D.
Then tr B = tr ( DC )
= tr (CD ) [by theorem 15]
−1
= tr (CC A) = tr (IA) = tr A.
Trace of a linear transformation on a finite dimensional vector space: Let T
be a linear operator on an n-dimensional vector space V ( F ). If B and B′ are two
ordered bases for V, then [T ] B and [T ] B ′
are similar matrices. Also similar matrices have the same trace. This enables us to
make the following definition.
Definition of trace of a linear transformation. Let T be a linear operator on an
n-dimensional vector space V ( F ). Then the trace of T is the trace of the matrix of T relative to
any ordered basis for V.
By the above discussion the trace of T as defined by us will be a unique element of F
and thus our definition is sensible.

Example 3: Find the matrix of the linear transformation T on V3 (R) defined as


T (a, b, c ) = (2b + c , a − 4b, 3a), with respect to the ordered basis B and also with respect to
the ordered basis B ′ where
(i) B = {(1, 0, 0), (0, 1, 0), (0, 0, 1) }
(ii) B′ = {(1, 1, 1), (1, 1, 0), (1, 0, 0) }.
Solution: (i) We have
T (1, 0, 0) = (0, 1, 3) = 0 (1, 0, 0) + 1 (0, 1, 0) + 3 (0, 0, 1)
T (0, 1, 0) = (2, − 4, 0) = 2 (1, 0, 0) − 4 (0, 1, 0) + 0 (0, 0, 1)
and T (0, 0, 1) = (1, 0, 0) = 1 (1, 0, 0) + 0 (0, 1, 0) + 0 (0, 0, 1).
∴ by def. of matrix of T with respect to B, we have
0 2 1

[T ] B = 1 −4 0  .
 
 3 0 0 
Note: In order to find the matrix of T relative to the standard ordered basis B, it is
sufficient to compute T (1, 0, 0), T (0, 1, 0) and T (0, 0, 1).There is no need of further
expressing these vectors as linear combinations of (1, 0, 0), (0, 1, 0) and (0, 0, 1).
Obviously the co-ordinates of the vectors T (1, 0, 0), T (0, 1, 0) and T (0, 0, 1)
respectively constitute the first, second and third columns of the matrix [T ] B .
(ii) We have T (1, 1, 1) = (3, − 3, 3).
Now our aim is to express (3, − 3, 3) as a linear combination of vectors in B′ .
Let (a, b, c ) = x (11
, , 1) + y (1, 1, 0) + z (1, 0, 0)
= ( x + y + z , x + y, x).
Then x + y + z = a, x + y = b, x = c
i. e., x = c , y = b − c , z = a − b. …(1)
Putting a = 3, b = − 3, and c = 3 in (1), we get
x = 3, y = − 6 and z = 6.
∴ T (1, 1, 1) = (3, − 3, 3) = 3 (1, 1, 1) − 6 (1, 1, 0) + 6 (1, 0, 0).
Also T (1, 1, 0) = (2, − 3, 3).
Putting a = 2, b = − 3 and c = 3 in (1), we get
T (1, 1, 0) = (2, − 3, 3) = 3 (1, 1, 1) − 6 (1, 1, 0) + 5 (1, 0, 0).
Finally, T (1, 0, 0) = (0, 1, 3).
Putting a = 0, b = 1 and c = 3 in (1), we get
T (1, 0, 0) = (0, 1, 3) = 3 (1, 1, 1) − 2 (1, 1, 0) − 1 (1, 0, 0).
              [  3    3    3 ]
∴ [T ] B ′ =  [ −6   −6   −2 ] .
              [  6    5   −1 ]
Example 4: Let T be the linear operator on R3 defined by
T ( x1 , x2 , x3 ) = (3 x1 + x3 , − 2 x1 + x2 , − x1 + 2 x2 + 4 x3 ).
What is the matrix of T in the ordered basis {α1 , α 2 , α 3 } where
α1 = (1, 0, 1), α 2 = (− 1, 2, 1) and α 3 = (2, 1, 1) ?
Solution: By def. of T, we have
T (α1 ) = T (1, 0, 1) = (4, − 2, 3).
Now our aim is to express (4, − 2, 3) as a linear combination of the vectors in the
basis B = {α1 , α 2 , α 3 }. Let
(a, b, c ) = x α1 + yα 2 + zα 3
= x (1, 0, 1) + y (− 1, 2, 1) + z (2, 1, 1)
= ( x − y + 2z , 2 y + z , x + y + z ).
Then x − y + 2z = a, 2 y + z = b, x + y + z = c .
Solving these equations, we have
x = (− a − 3b + 5c)/4 ,   y = (b + c − a)/4 ,   z = (b − c + a)/2 .    …(1)
Putting a = 4, b = − 2, c = 3 in (1), we get
x = 17/4 , y = − 3/4 , z = − 1/2 .
∴ T (α1 ) = (17/4) α1 − (3/4) α 2 − (1/2) α 3 .
Also T (α 2 ) = T (− 1, 2, 1) = (− 2, 4, 9).
Putting a = − 2, b = 4, c = 9 in (1), we get x = 35/4 , y = 15/4 , z = − 7/2 .
∴ T (α 2 ) = (35/4) α1 + (15/4) α 2 − (7/2) α 3 .
Finally T (α 3 ) = T (2, 1, 1) = (7, − 3, 4).
Putting a = 7, b = − 3, c = 4 in (1), we get x = 11/2 , y = − 3/2 , z = 0.
∴ T (α 3 ) = (11/2) α1 − (3/2) α 2 + 0 α 3 .
             [ 17/4   35/4   11/2 ]
∴ [T ] B  =  [ −3/4   15/4   −3/2 ] ⋅
             [ −1/2   −7/2     0  ]

Example 5: Let T be a linear operator on R3 defined by


T ( x1 , x2 , x3 ) = (3 x1 + x3 , − 2 x1 + x2 , − x1 + 2 x2 + 4 x3 ).
−1
Prove that T is invertible and find a formula for T .
Solution: Suppose B is the standard ordered basis for R3 . Then
B = {(1, 0, 0), (0, 1, 0), (0, 0, 1) }.
Let A = [T ] B i. e.,let A be the matrix of T with respect to B.First we shall compute A.
We have
T (1, 0, 0) = (3, − 2, − 1), T (0, 1, 0) = (0, 1, 2) and
T (0, 0, 1) = (1, 0, 4).
 3 0 1
∴ A = [T ] B = − 2 1 0 ⋅
 
 − 1 2 4
Now T will be invertible if the matrix [T ] B is invertible. [See theorem 7 of article
4.2].
 3 0 1
We have det A = | A | = − 2 1 0 = 3 (4 − 0) + 1 (− 4 + 1) = 9.
 
 − 1 2 4
Since det A ≠ 0, therefore the matrix A is invertible and consequently T is
invertible.
Now we shall compute the matrix A −1 . For this let us first find adj. A.
The cofactors of the elements of the first row of A are
   det [ 1  0 ; 2  4 ] , − det [ −2  0 ; −1  4 ] , det [ −2  1 ; −1  2 ] , i. e., 4, 8, − 3.
The cofactors of the elements of the second row of A are
   − det [ 0  1 ; 2  4 ] , det [ 3  1 ; −1  4 ] , − det [ 3  0 ; −1  2 ] , i. e., 2, 13, − 6.
The cofactors of the elements of the third row of A are
   det [ 0  1 ; 1  0 ] , − det [ 3  1 ; −2  0 ] , det [ 3  0 ; −2  1 ] , i. e., − 1, − 2, 3.
                                     [  4    8   −3 ]   [  4    2   −1 ]
∴ Adj. A = transpose of the matrix   [  2   13   −6 ] = [  8   13   −2 ] ⋅
                                     [ −1   −2    3 ]   [ −3   −6    3 ]
                                      [  4    2   −1 ]
∴ A −1 = (1/det A) Adj. A  =  (1/9)   [  8   13   −2 ] .
                                      [ −3   −6    3 ]
Now [T −1 ] B = ([T ] B ) −1 = A −1 . [See theorem 7 of article 4.2]

We shall now find a formula for T −1 . Let α = (a, b, c ) be any vector belonging to R3 .
Then
[T −1 (α)] B = [T −1 ] B [α] B        [See Note of Theorem 2, article 4.2]
          [  4    2   −1 ] [ a ]          [ 4a + 2b − c   ]
  = (1/9) [  8   13   −2 ] [ b ] = (1/9)  [ 8a + 13b − 2c ]
          [ −3   −6    3 ] [ c ]          [ −3a − 6b + 3c ]
Since B is the standard ordered basis for R3 ,
∴ T −1 (α) = T −1 (a, b, c ) = (1/9) (4a + 2b − c , 8a + 13b − 2c , − 3a − 6b + 3c ).
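Remark (numerical check): the matrices of this example are easily verified in NumPy; the short sketch below only re-checks the computation already made above.

import numpy as np

A = np.array([[ 3., 0., 1.],
              [-2., 1., 0.],
              [-1., 2., 4.]])                       # [T]_B from the example
A_inv = (1.0 / 9.0) * np.array([[ 4.,  2., -1.],
                                [ 8., 13., -2.],
                                [-3., -6.,  3.]])   # the matrix found above
assert np.allclose(A @ A_inv, np.eye(3))            # so it is indeed A^(-1) = [T^(-1)]_B

v = np.array([1., -2., 5.])                         # any vector (x1, x2, x3)
assert np.allclose(A_inv @ (A @ v), v)              # T^(-1)(T(v)) = v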
Example 6: Let T be the linear operator on R3 defined by
T ( x1 , x2 , x3 ) = (3 x1 + x3 , − 2 x1 + x2 , − x1 + 2 x2 + 4 x3 ).
(i) What is the matrix of T in the standard ordered basis B for R3 ?
(ii) Find the transition matrix P from the ordered basis B to the ordered basis
B′ = {α1 , α 2 , α 3 } where α1 = (1, 0, 1), α 2 = (− 1, 2, 1), and α 3 = (2, 1, 1). Hence find the
matrix of T relative to the ordered basis B ′ .
Solution: (i) Let A = [T ] B . Then

 3 0 1
A = − 2 1 0 ⋅ [For calculation work see Example 5]
 
 − 1 2 4
(ii) Since B is the standard ordered basis, therefore the transition matrix P from B
to B′ can be immediately written as
      [ 1   −1   2 ]
P  =  [ 0    2   1 ] ⋅
      [ 1    1   1 ]
Now [T ] B ′ = P −1 [T ] B P.        [See note of theorem 13, article 4.2]
In order to compute the matrix P −1 , we find that det P = − 4.
                                            [  1    3   −5 ]
Therefore P −1 = (1/det P) Adj. P = − (1/4) [  1   −1   −1 ] .
                                            [ −2   −2    2 ]
                     [  1    3   −5 ] [  3   0   1 ] [ 1   −1   2 ]
∴ [T ] B ′ = − (1/4) [  1   −1   −1 ] [ −2   1   0 ] [ 0    2   1 ]
                     [ −2   −2    2 ] [ −1   2   4 ] [ 1    1   1 ]
             [  2   −7   −19 ] [ 1   −1   2 ]
   = − (1/4) [  6   −3    −3 ] [ 0    2   1 ]
             [ −4    2     6 ] [ 1    1   1 ]
             [ −17   −35   −22 ]   [ 17/4   35/4   11/2 ]
   = − (1/4) [   3   −15     6 ] = [ −3/4   15/4   −3/2 ] ⋅
             [   2    14     0 ]   [ −1/2   −7/2     0  ]
[Note that this result tallies with that of Example 4].
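Remark (numerical check): the computation [T ] B ′ = P −1 [T ] B P of this example can be reproduced in a few lines of NumPy (illustrative only):

import numpy as np

T_B = np.array([[ 3., 0., 1.],
                [-2., 1., 0.],
                [-1., 2., 4.]])          # matrix of T in the standard ordered basis B
P = np.array([[1., -1., 2.],
              [0.,  2., 1.],
              [1.,  1., 1.]])            # transition matrix from B to B'
T_Bp = np.linalg.inv(P) @ T_B @ P
expected = np.array([[17/4, 35/4, 11/2],
                     [-3/4, 15/4, -3/2],
                     [-1/2, -7/2,  0. ]])   # the matrix obtained in Examples 4 and 6
assert np.allclose(T_Bp, expected)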
Example 7: Let T be a linear operator on R2 defined by :
T ( x, y) = (2 y, 3 x − y).
Find the matrix representation of T relative to the basis { (1, 3), (2, 5) }.
Solution: Let α1 = (1, 3) and α 2 = (2, 5). By def. of T, we have

T (α1 ) = T (1, 3) = (2 . 3, 3 . 1 − 3) = (6, 0)


and T (α 2 ) = T (2, 5) = (2 . 5, 3 . 2 − 5) = (10, 1).
Now our aim is to express the vectors T (α1 ) and T (α 2 ) as linear combinations of
the vectors in the basis {α1 , α 2 }.
Let (a, b) = pα1 + qα 2 = p (1, 3) + q (2, 5) = ( p + 2q, 3 p + 5q).
Then p + 2q = a, 3 p + 5q = b.
Solving these equations, we get
p = − 5a + 2b, q = 3a − b. …(1)
Putting a = 6, b = 0 in (1), we get p = − 30, q = 18.
∴ T (α1 ) = (6, 0) = − 30α1 + 18α 2 . …(2)
Again putting a = 10, b = 1 in (1), we get
p = − 48, q = 29.
∴ T (α 2 ) = (10, 1) = − 48α1 + 29α 2 . …(3)
From the relations (2) and (3), we see that the matrix of T relative to the basis {α1 , α 2 } is
   [ −30   −48 ]
   [  18    29 ] .
Example 8: Consider the vector space V (R) of all 2 × 2 matrices over the field R of real
numbers. Let T be the linear transformation on V that sends each matrix X onto AX, where
 1 1
A=   . Find the matrix of T with respect to the ordered basis B = {α1 , α 2 , α 3 , α 4 } for
 1 1
V where
 1 0  0 1 0 0  0 0 
α1 =   , α 2 =  0 0  , α 3 =  1 0  , α 4 =  0 1 ⋅
 0 0       
Solution: We have
 1 1  1 0   1 0 
T (α1 ) =   = 
 1 1  0 0   1 0 
172

1 0 0 1 0 0 0 0 
=1  +0   +1 +0  0 1 ,
0 0  0 0 1 0   
 1 1  0 1  0 1
T (α 2 ) =   =  0 1
 1 1  0 0   
1 0  0 1 0 0 0 0 
=0  + 10 0  + 0  1 +1
 0 0     0  
 0 1
 1 1  0 0   1 0 
T (α 3 ) =   = 
 1 1  1 0   1 0 
 1 0  0 1 0 0 0 0 
=1  +0   +1  +0  ,
0 0  0 0  1 0  0 1
 1 1  0 0   0 1
and T (α 4 ) =   = 
 1 1  0 1  0 1
 1 0  0 1 0 0  0 0 
=0  +1  +0  1 0  + 1  0 1.
0 0  0 0     
1 0 1 0
0 1 0 1
∴ [T ] B =  ⋅
1 0 1 0
0 1 0 1
 
Example 9: If the matrix of a linear transformation T on V2 ( C), with respect to the ordered basis B = {(1, 0), (0, 1) }, is
   [ 1   1 ]
   [ 1   1 ] ,
what is the matrix of T with respect to the ordered basis B ′ = {(1, 1), (1, − 1) } ?
Solution: Let us first define T explicitly. It is given that
            [ 1   1 ]
[T ] B  =   [ 1   1 ] .
∴ T (1, 0) = 1 (1, 0) + 1 (0, 1) = (1, 1), and T (0, 1) = 1 (1, 0) + 1 (0, 1) = (1, 1).
If (a, b) ∈ V2 (C), then we can write (a, b) = a (1, 0) + b (0, 1).
∴ T (a, b) = aT (1, 0) + bT (0, 1)
= a (1, 1) + b (1, 1) = (a + b, a + b).
This is the explicit expression for T.
Now let us find the matrix of T with respect to B′ .
We have T (1, 1) = (2, 2).
Let (2, 2) = x (1, 1) + y (1, − 1) = ( x + y, x − y).
Then x + y = 2, x − y = 2
⇒ x = 2 , y = 0.
∴ (2, 2) = 2 (1, 1) + 0 (1, − 1).
Also T (1, − 1) = (0, 0) = 0 (1, 1) + 0 (1, − 1).

2 0
∴ [T ] B ′ =  ⋅
0 0 
Note: If P is the transition matrix from the basis B to the basis B ′ , then
 1 1
P= ⋅
 1 − 1
We can compute [T ] B ′ by using the formula
[T ] B ′ = P −1 [T ] B P.
Example 10: Show that the vectors α1 = (1, 0, − 1), α 2 = (1, 2, 1), α 3 = (0, − 3, 2) form
a basis for R3 . Express each of the standard basis vectors as a linear combination of
α1 , α 2 , α 3 .
Solution: Let a, b, c be scalars i. e., real numbers such that
aα1 + bα 2 + cα 3 = 0
i. e., a (1, 0, − 1) + b (1, 2, 1) + c (0, − 3, 2) = (0, 0, 0)
i. e., (a + b + 0 c , 0 a + 2b − 3c , − a + b + 2c ) = (0, 0, 0)
a + b + 0 c = 0,

i. e., 0 a + 2b − 3c = 0, …(1)
− a + b + 2c = 0.
The coefficient matrix A of these equations is
        [  1   1    0 ]
   A =  [  0   2   −3 ] ⋅
        [ −1   1    2 ]
We have det A = | A | = 1 (4 + 3) − 1 (0 − 3) + 0 = 7 + 3 = 10.
Since det A ≠ 0, therefore the matrix A is non-singular and rank A = 3 i. e., equal to
the number of unknowns a, b, c . Hence a = 0, b = 0, c = 0 is the only solution of the
equations (1). Therefore the vectors α1 , α 2 , α 3 are linearly independent over R.
Since dim R3 = 3, therefore the set {α1 , α 2 , α 3 } containing three linearly
independent vectors forms a basis for R3 .
Now let B = { e1 , e2 , e3 } be the standard ordered basis for R3 .
Then e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).
Let B′ = {α1 , α 2 , α 3 }.
We have α1 = (1, 0, − 1) = 1 e1 + 0 e2 − 1e3
α 2 = (1, 2, 1) = 1e1 + 2e2 + 1e3
α 3 = (0, − 3, 2) = 0 e1 − 3e2 + 2e3 .

If P is the transition matrix from the basis B to the basis B′ , then
        [  1   1    0 ]
   P =  [  0   2   −3 ] ⋅
        [ −1   1    2 ]
Let us find the matrix P −1 . For this let us first find Adj. P.
The cofactors of the elements of the first row of P are
   det [ 2  −3 ; 1  2 ] , − det [ 0  −3 ; −1  2 ] , det [ 0  2 ; −1  1 ] , i. e., 7, 3, 2.
The cofactors of the elements of the second row of P are
   − det [ 1  0 ; 1  2 ] , det [ 1  0 ; −1  2 ] , − det [ 1  1 ; −1  1 ] , i. e., − 2, 2, − 2.
The cofactors of the elements of the third row of P are
   det [ 1  0 ; 2  −3 ] , − det [ 1  0 ; 0  −3 ] , det [ 1  1 ; 0  2 ] , i. e., − 3, 3, 2.
                                      [  7    3    2 ]   [ 7   −2   −3 ]
∴ Adj P = transpose of the matrix     [ −2    2   −2 ] = [ 3    2    3 ] ⋅
                                      [ −3    3    2 ]   [ 2   −2    2 ]
                                         [ 7   −2   −3 ]
∴ P −1 = (1/det P) Adj P  =  (1/10)      [ 3    2    3 ] ⋅
                                         [ 2   −2    2 ]
Now e1 = 1 e1 + 0 e2 + 0 e3 .
                                                      [ 1 ]
∴ Coordinate matrix of e1 relative to the basis B  =  [ 0 ] ⋅
                                                      [ 0 ]
∴ Co-ordinate matrix of e1 relative to the basis B ′
                      [ 1 ]          [ 7   −2   −3 ] [ 1 ]          [ 7 ]   [ 7/10 ]
   = [e1 ] B ′ = P −1 [ 0 ] = (1/10) [ 3    2    3 ] [ 0 ] = (1/10) [ 3 ] = [ 3/10 ] ⋅
                      [ 0 ]          [ 2   −2    2 ] [ 0 ]          [ 2 ]   [ 2/10 ]
∴ e1 = (7/10) α1 + (3/10) α 2 + (2/10) α 3 .

0 0
Also [e2 ] B =  1 and [e3 ] B = 0 ⋅
   
0  1

0 0
∴ [e2 ] B ′ = P −1 1 and [e3 ] B ′ = P −1 0 ⋅
   
0 1

− 2 − 3
1   1  
Thus [e2 ] B ′ = 2 , [e3 ] B ′ = 3 ⋅
10   10  
− 2  2

2 2 2
∴ e2 = − α1 + α2 − α3
10 10 10
3 3 2
and e3 = − α1 + α2 + α3 .
10 10 10
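Remark (numerical check): the coordinates found above can be re-obtained by solving a linear system; a short illustrative NumPy sketch follows.

import numpy as np

P = np.array([[ 1., 1.,  0.],
              [ 0., 2., -3.],
              [-1., 1.,  2.]])          # columns are alpha_1, alpha_2, alpha_3

coeffs_e1 = np.linalg.solve(P, [1., 0., 0.])     # = [e1]_B'
assert np.allclose(coeffs_e1, [7/10, 3/10, 2/10])
assert np.allclose(P @ coeffs_e1, [1., 0., 0.])  # e1 = (7/10) a1 + (3/10) a2 + (2/10) a3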
Example 11: Let A be an m × n matrix with real entries. Prove that A = 0 (null matrix) if
and only if trace ( A t A) = 0.
Solution: Let A = [a ij ] m × n .

Then A t = [b ij ] n × m , where b ij = a ji .
Now A t A is a matrix of the type n × n.
Let A t A = [c ij ] n × n . Then
c ii = the sum of the products of the corresponding elements of
the ith row of A t and the ith column of A
= b i1 a1i + b i2 a2 i + … + b im a mi
= a1i a1i + a2 i a2 i + … + a m i a m i [∵ b ij = a ji ]
= a1i² + a2i² + … + ami² .
Now trace ( A t A) = Σ_{i=1}^{n} c ii = Σ_{i=1}^{n} { a1i² + a2i² + … + ami² }

= the sum of the squares of all the elements of A.


Now the elements of A are all real numbers.
Therefore trace ( A t A) = 0
⇒ the sum of the squares of all the elements of A is zero
⇒ each element of A is zero
⇒ A is a null matrix.
Conversely if A is a null matrix, then A t A is also a null matrix and so trace
( A t A) = 0.
Hence trace ( A t A) = 0 iff A = 0.
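Remark (numerical check): the key identity used above, namely that trace ( A t A) equals the sum of the squares of all the entries of A, is easy to test (illustrative sketch, NumPy assumed):

import numpy as np

A = np.array([[1., -2., 0.],
              [3.,  4., 5.]])
assert np.isclose(np.trace(A.T @ A), np.sum(A ** 2))   # sum of squares of all entries
Z = np.zeros((2, 3))
assert np.isclose(np.trace(Z.T @ Z), 0.0)               # the zero matrix gives trace 0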

Example 12: Show that the only matrix similar to the identity matrix I is I itself.
Solution: The identity matrix I is invertible and we can write I = I −1 I I .
Therefore I is similar to I. Further let B be a matrix similar to I. Then there exists an
invertible matrix P such that
B = P −1 IP
⇒ B = P −1 P [ ∵ P −1 I = P −1 ]
⇒ B = I.
Hence the only matrix similar to I is I itself.
Example 13: If T and S are similar linear transformations on a finite dimensional vector
space V ( F ), then det T = det S.
Solution: Since T and S are similar, therefore there exists an invertible linear
transformation P on V such that T = PSP −1 .
Therefore det T = det ( PSP −1 ) = (det P) (det S ) (det P −1 )
= (det P) (det P −1 ) (det S ) = [det ( PP −1 )] (det S )
= (det I ) (det S ) = 1 (det S ) = det S.

Comprehensive Exercise 1

1. Let V = R 3 and T : V → V be a linear mapping defined by


T ( x, y, z ) = ( x + z , − 2 x + y, − x + 2 y + z ).
What is matrix of T relative to the basis B = {(1, 0, 1), (−1, 1, 1),(0, 1, 1)} ?
2. Find matrix representation of linear mapping T : R 3 → R 3 given by
T ( x, y, z ) = (z , y + z , x + y + z ) relative to the basis
B = {(1, 0, 1), (− 1, 2, 1), (2, 1, 1)}.
3. Find the coordinates of the vector (2, 1, 3, 4) of R 4 relative to the basis
vectors
α1 = (1, 1, 0, 0), α 2 = (1, 0, 1, 1), α 3 = (2, 0, 0, 2), α 4 = (0, 0, 2, 2).
4. Let T be the linear operator on R 2 defined by
T ( x, y) = (4 x − 2 y, 2 x + y).
Compute the matrix of T relative to the basis {α1 , α 2 } where α1 = (1, 1),
α 2 = (− 1, 0).
5. Let T be the linear operator on R 2 defined by
T ( x, y) = (4 x − 2 y, 2 x + y).
(i) What is the matrix of T in the standard ordered basis B for R 2 ?

(ii) Find the transition matrix P from the ordered basis B to the ordered
basis B ′ = {α1 , α 2 } where α1 = (1, 1), α 2 = (− 1, 0). Hence find the
matrix of T relative to the ordered basis B′.
6. Let T be the linear operator on R 2 defined by T (a, b) = (a, 0). Write the
matrix of T in the standard ordered basis B = {(1, 0), (0, 1)}. If
B ′ = {(1, 1), (2, 1)} is another ordered basis for R 2 , find the transition matrix P
from the basis B to the basis B ′. Hence find the matrix of T relative to the
basis B ′.
7. The matrix of a linear transformation T on V3 (C) relative to the basis
 0 1 1
B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is  1 0 −1 .
 
 −1 −1 0 
What is the matrix of T relative to the basis
B′ = {(0, 1, − 1), (1, − 1, 1), (− 1, 0, 1)}?
8. Find the matrix relative to the basis
α1 = (2/3 , 2/3 , − 1/3) ,  α 2 = (1/3 , − 2/3 , − 2/3) ,  α 3 = (2/3 , − 1/3 , 2/3)
of R 3 , of the linear transformation T : R 3 → R 3 whose matrix relative to
2 0 0
the standard ordered basis is  0 4 0  ⋅
 
 0 0 3 
9. Find the matrix representation of the linear mappings relative to the usual
bases for R n .
(i) T : R 3 → R 2 defined by T ( x, y, z ) = (2 x − 4 y + 9z , 5 x + 3 y − 2z ).
(ii) T : R → R 2 defined by T ( x) = (3 x, 5 x).
(iii) T : R 3 → R 3 defined by T ( x, y, z ) = ( x, y, 0).
(iv) T : R 3 → R 3 defined by T ( x, y, z ) = (z , y + z , x + y + z ).
10. Let B = {(1, 0),(0, 1)} and B′ = {(1, 2), (2, 3)} be any two bases of R 2 and
T ( x, y) = (2 x − 3 y, x + y).
(i) Find the transition matrices P and Q from B to B′ and from B′ to B
respectively.
(ii) Verify that [α] B = P [α] B ′ V α ∈ R 2
(iii) Verify that P −1 [T ] B P = [T ] B ′ .
11. Let V be the space of all 2 × 2 matrices over the field F and let P be a fixed 2 × 2
matrix over F. Let T be the linear operator on V defined by
T ( A) = PA, V A ∈ V . Prove that trace (T ) = 2 trace ( P).

1 2
12. Let V be the space of 2 × 2 matrices over R and let M =  ⋅
3 4
Let T be the linear operator on V defined by T ( A) = MA. Find the trace
of T.
13. Find the trace of the operator T on R 3 defined by
T ( x, y, z ) = (a1 x + a2 y + a3 z , b1 x + b2 y + b3 z , c1 x + c 2 y + c 3 z ).
14. Show that the only matrix similar to the zero matrix is the zero matrix itself.

Answers 1

1.  [  2   1   2 ]          2.  [ 3/2    0    13/4 ]
    [  0   1   1 ]              [ 1/2    1     5/4 ]
    [ −2   2   0 ]              [  0     1    −1/2 ]

3.  (1, 0, 1/2, 3/2)        4.  [ 3   −2 ]
                                [ 1    2 ]

5.  (i) [ 4   −2 ]    (ii) [ 1   −1 ] ;  [ 3   −2 ]
        [ 2    1 ]         [ 1    0 ]    [ 1    2 ]

6.  [T ] B = [ 1   0 ] ;  P = [ 1   2 ] ;  [T ] B ′ = [ −1   −2 ]
             [ 0   0 ]        [ 1   1 ]               [  1    2 ]

7.  [T ] B ′ = [ 1   0   −4 ]        8.  [   3     −2/3   −2/3 ]
               [ 0   0   −2 ]            [ −2/3    10/3     0  ]
               [ 0   0   −3 ]            [ −2/3      0    8/3  ]

9.  (i) [ 2   −4    9 ]    (ii) [ 3 ]
        [ 5    3   −2 ]         [ 5 ]

    (iii) [ 1   0   0 ]    (iv) [ 0   0   1 ]
          [ 0   1   0 ]         [ 0   1   1 ]
          [ 0   0   0 ]         [ 1   1   1 ]

10. (i) P = [ 1   2 ] ;  Q = [ −3    2 ]
            [ 2   3 ]        [  2   −1 ]
12. trace (T ) = 10
13. trace (T ) = a1 + b2 + c 3

Objective Type Questions

Multiple Choice Questions


Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
1. Let T be a linear transformation on the vector space V2 ( F ) defined by
T (a, b) = (a, 0).
The matrix of T relative to the ordered basis {(1, 0), (0, 1) } of V2 ( F ) is
(a) [ 1   0 ]        (b) [ 0   1 ]
    [ 0   0 ]            [ 0   0 ]
(c) [ 0   0 ]        (d) [ 0   0 ] .
    [ 1   0 ]            [ 0   1 ]
2. The transition matrix P from the standard ordered basis to the ordered basis
{(1, 1), (− 1, 0)} is
(a) [  1   1 ]       (b) [ 1   −1 ]
    [ −1   0 ]           [ 1    0 ]
(c) [ 0   1 ]        (d) [ 1   0 ] .
    [ 1   0 ]            [ 0   1 ]
3. Let V be the vector space of 2 × 2 matrices over R and let
       [ 1   2 ]
   M = [ 3   4 ] .
Let T be the linear operator on V defined by T ( A) = MA. Then the trace of T
is
(a) 5 (b) 10
(c) 0 (d) None of these.
Fill in the Blank(s)
Fill in the blanks ‘‘……’’ so that the following statements are complete and
correct.
1. If T is a linear operator on R2 defined by
   T ( x, y) = ( x − y, y), then T² ( x, y) = …… .
2. Let A be a square matrix of order n over a field F. The sum of the elements of A
lying along the principal diagonal is called the …… of A.
3. Let T and S be similar linear operators on the finite dimensional vector space
V ( F ), then det (T ) …… det (S ).

4. Let V ( F ) be an n-dimensional vector space and B be any ordered basis for V.


If I be the identity transformation on V, then [I ; B ] = …… .
5. Let T be a linear operator on an n-dimensional vector space V ( F ) and let B
and B′ be two ordered bases for V. Then the matrix of T relative to B′ is ……
to the matrix of T relative to B.

True or False
Write ‘T’ for true and ‘F’ for false statement.
1. The relation of similarity is an equivalence relation in the set of all linear
transformations on a vector space V ( F ).
2. Similar matrices have the same trace.

A nswers

Multiple Choice Questions


1. (a) 2. (b) 3. (b)

Fill in the Blank(s)


1. ( x − 2 y, y) 2. trace
3. = 4. unit matrix of order n 5. similar

True or False
1. T 2. T


5
Linear Functionals

5.1 Linear Functionals


Let V ( F ) be a vector space. We know that the field F can be regarded as a vector space over F. This is the vector space F ( F ) or F¹. We shall simply denote it by
F. A linear transformation from V into F is called a linear functional on V. We
shall now give independent definition of a linear functional.
Linear Functionals: Definition: Let V ( F ) be a vector space. A function f from V
into F is said to be a linear functional on V if
f (aα + bβ) = af (α) + bf ( β) V a, b ∈ F and V α, β ∈ V .
If f is a linear functional on V ( F ),then f (α) is in F for each α belonging to V. Since
f (α) is a scalar, therefore a linear functional on V is a scalar valued function.

Illustration 1: Let Vn ( F ) be the vector space of ordered n-tuples of the elements of the
field F.
Let x1 , x2 , … , x n be n field elements of F. If
α = (a1 , a2 , … , a n ) ∈ Vn ( F ),

let f be a function from Vn ( F ) into F defined by


f (α) = x1 a1 + x2 a2 + … + x n a n .
Let β = (b1 , b2 , … , b n ) ∈ Vn ( F ). If a, b ∈ F, we have
f (aα + bβ) = f [a (a1 , … , a n ) + b (b1 , … , b n )]
= f (aa1 + bb1 , … , aa n + bb n )
= x1 (aa1 + bb1 ) + … + x n (aa n + bb n )
= a ( x1 a1 + … + x n a n ) + b ( x1 b1 + … + x n b n )
= af (a1 , … , a n ) + bf (b1 , … , b n ) = af (α) + bf ( β).
∴ f is a linear functional on Vn ( F ).
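A quick numerical check of the linearity just verified (an illustrative sketch, not part of the text; NumPy assumed, and the scalars and vectors are arbitrary):

import numpy as np

x = np.array([2., -1., 3.])               # the fixed scalars x1, x2, x3
f = lambda v: float(x @ v)                # f(alpha) = x1 a1 + x2 a2 + x3 a3

alpha = np.array([1., 4., -2.])
beta  = np.array([0., 5.,  7.])
a, b = 3.0, -2.0
assert np.isclose(f(a * alpha + b * beta), a * f(alpha) + b * f(beta))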
Illustration 2: Now we shall give a very important example of a linear
functional.
We shall prove that the trace function is a linear functional on the space of all n × n matrices
over a field F.
Let n be a positive integer and F a field. Let V ( F ) be the vector space of all n × n
matrices over F. If A = [a ij ] n × n ∈ V , then the trace of A is the scalar
n
tr A = a11 + a22 + … + a nn = Σ a ii .
i =1

Thus the trace of A is the scalar obtained by adding the elements of A lying along
the principal diagonal.
The trace function is a linear functional on V because if
a, b ∈ F and A = [a ij ] n × n , B = [b ij ] n × n ∈ V , then
tr (aA + bB) = tr (a [a ij ] n × n + b [b ij ] n × n ) = tr ([aa ij + bb ij ] n × n )
n n n
= Σ (aa ii + bb ii ) = a Σ a ii + b Σ b ii = a (tr A ) + b (tr B ).
i =1 i =1 i =1

Illustration 3:Now we shall give another important example of a linear functional.


Let V be a finite-dimensional vector space over the field F and let B be an ordered basis for V.
The function f i which assigns to each vector α in V the ith coordinate of α relative to the ordered
basis B is a linear functional on V.
Let B = {α1 , α 2 , … , α n }.
If α = a1α1 + a2 α 2 + … + a n α n ∈ V , then by definition of f i , we have
f i (α) = a i .
Similarly if β = b1α1 + … + b n α n ∈ V , then f i ( β) = b i .
If a, b ∈ F, we have
f i (aα + bβ) = f i [a (a1α1 + … + a n α n ) + b (b1α1 + … + b n α n )]
= f i [(aa1 + bb1 ) α1 + … + (aa n + bb n ) α n ]
= aa i + bb i = af i (α) + bf i ( β).
Hence f i is a linear functional on V.

Illustration 4: Let V be the vector space of polynomials in x over R. Let T : V → R be the integral operator defined by T ( f ( x)) = ∫₀¹ f ( x) dx. Then T is linear and hence it is a linear functional on V.

5.2 Some Particular Linear Functionals


1. Zero functional: Let V be a vector space over the field F. The function f from V into F
defined by
f (α) = 0 (zero of F) V α ∈ V
is a linear functional on V.
Proof: Let α , β ∈ V and a, b ∈ F. We have
f (aα + bβ) = 0 (by def. of f )
= a0 + b0 = af (α) + bf ( β).
∴ f is a linear functional on V. It is called the zero functional and we shall in future denote it by 0̂.
2. Negative of a linear functional: Let V be a vector space over the field F. Let f be a
linear functional on V. The correspondence − f defined by
(− f ) (α) = − [ f (α)] V α ∈ V
is a linear functional on V.
Proof: Since f (α) ∈ F ⇒ − f (α) ∈ F,therefore − f is a function from V into F.
Let a , b ∈ F and α, β ∈ V . Then
(− f ) (aα + bβ) = − [ f (aα + bβ)] [by def. of − f ]
= − [af (α) + bf ( β)] [∵ f is a linear functional]
= a [− f (α)] + b [− f ( β)]
= a [(− f ) (α)] + b [(− f ) ( β)].
∴ − f is a linear functional on V.
Properties of a linear functional:
Theorem 1:Let f be a linear functional on a vector space V ( F ). Then
(i) f (0) = 0 where 0 on the left hand side is zero vector of V, and 0 on the right hand side is
zero element of F.
(ii) f (− α) = − f (α) V α ∈ V .
Proof. (i) Let α ∈ V. Then f (α) ∈ F.
We have f (α) + 0 = f (α) [ ∵ 0 is zero element of F ]
= f (α + 0) [ ∵ 0 is zero element of V ]
= f (α) + f (0) [ ∵ f is a linear functional]

Now F is a field. Therefore


f (α) + 0 = f (α) + f (0)
⇒ f (0) = 0, by left cancellation law for addition in F.
(ii) We have f [α + (− α)] = f (α) + f (− α) [ ∵ f is a linear functional]
But f [α + (− α)] = f (0) = 0 [by (i)]
Thus in F, we have
f (α) + f (− α) = 0
⇒ f (− α) = − f (α).

5.3 Dual Spaces


Let V ′ be the set of all linear functionals on a vector space V ( F ). Sometimes we
denote this set by V * . Now our aim is to impose a vector space structure on the set
V ′ over the same field F. For this purpose we shall have to suitably define addition
in V ′ and scalar multiplication in V ′ over F.
Theorem:Let V be a vector space over the field F. Let f1 and f 2 be linear functionals on V.
The function f1 + f 2 defined by
( f1 + f 2 ) (α) = f1 (α) + f 2 (α) V α ∈ V
is a linear functional on V. If c is any element of F, the function cf defined by
(cf ) (α) = cf (α) V α ∈ V
is a linear functional on V.The set V ′ of all linear functionals on V, together with the addition
and scalar multiplication defined as above is a vector space over the field F.
Proof: Suppose f1 and f 2 are linear functionals on V and we define f1 + f 2 as
follows :
( f1 + f 2 ) (α) = f1 (α) + f 2 (α) V α ∈ V . …(1)
Since f1 (α) + f 2 (α) ∈ F, therefore f1 + f 2 is a function from V into F.
Let a, b ∈ F and α, β ∈ V . Then
( f1 + f 2 ) (aα + bβ) = f1 (aα + bβ) + f 2 (aα + bβ) [by (1)]
= [af1 (α) + bf1 ( β)] + [af 2 (α) + bf 2 ( β)]
[ ∵ f1 and f 2 are linear functionals]
= a [ f1 (α) + f 2 (α)] + b [ f1 ( β) + f 2 ( β)]
= a [( f1 + f 2 ) (α)] + b [( f1 + f 2 ) ( β)] [by (1)]
∴ ( f1 + f 2 ) is a linear functional on V. Thus
f1 , f 2 ∈ V ′ ⇒ f1 + f 2 ∈ V ′ .
Therefore V ′ is closed with respect to addition defined in it.
Again let f ∈ V ′ and c ∈ F. Let us define cf as follows :
(cf ) (α) = cf (α) V α ∈ V . …(2)

Since cf (α) ∈ F, therefore cf is a function from V into F.


Let a , b ∈ F and α , β ∈ V . Then
(cf ) (aα + bβ) = cf (aα + bβ) [by (2)]
= c [af (α) + bf ( β)] [ ∵ f is linear functional]
= c [af (α)] + c [bf ( β)] [∵ F is a field]
= (ca) f (α) + (cb) f ( β)
= (ac ) f (α) + (bc ) f ( β)
= a [cf (α)] + b [cf ( β)]
= a [(cf ) (α)] + b [(cf ) ( β)].
∴ cf is a linear functional on V. Thus
f ∈ V ′ and c ∈ F ⇒ cf ∈ V ′ .
Therefore V ′ is closed with respect to scalar multiplication defined in it.
Associativity of addition in V ′.
Let f1 , f 2 , f 3 ∈ V ′ . If α ∈ V, then
[ f1 + ( f 2 + f 3 )] (α) = f1 (α) + ( f 2 + f 3 ) (α) [by (1)]
= f1 (α) + [ f 2 (α) + f 3 (α)] [by (1)]
= [ f1 (α) + f 2 (α)] + f 3 (α) [∵ addition in F is associative]
= ( f1 + f 2 ) (α) + f 3 (α) [by (1)]
= [( f1 + f 2 ) + f 3 ] (α) [by (1)]
∴ f1 + ( f 2 + f 3 ) = ( f1 + f 2 ) + f 3
[by def. of equality of two functions]
Commutativity of addition in V ′. Let f1 , f 2 ∈ V ′ .If α is any element of V,then

( f1 + f 2 ) (α) = f1 (α) + f 2 (α) [by (1)]


= f 2 (α) + f1 (α)[ ∵ addition in F is commutative]
= ( f 2 + f1 ) (α) [by (1)]
∴ f1 + f 2 = f 2 + f1 .

Existence of additive identity in V ′: Let 0̂ be the zero linear functional on V, i. e.,
0̂ (α) = 0 V α ∈ V .
Then 0̂ ∈ V ′ . If f ∈ V ′ and α ∈ V, we have
(0̂ + f ) (α) = 0̂ (α) + f (α)        [by (1)]
            = 0 + f (α)              [by def. of 0̂]
            = f (α)                  [0 being additive identity in F ]

∴ 0̂ + f = f V f ∈ V ′ .
∴ 0̂ is the additive identity in V ′ .
Existence of additive inverse of each element in V ′.
Let f ∈ V ′ . Let us define − f as follows :
(− f ) (α) = − f (α) V α ∈ V .
Then − f ∈ V ′ . If α ∈ V, we have
(− f + f ) (α) = (− f ) (α) + f (α) [by (1)]
= − f (α) + f (α) [by def. of − f ]
= 0
= 0̂ (α)        [by def. of 0̂]
∴ − f + f = 0̂ for every f ∈ V ′ .
Thus each element in V ′ possesses additive inverse. Therefore V ′ is an abelian
group with respect to addition defined in it.
Further we make the following observations :
(i) Let c ∈ F and f1 , f 2 ∈ V ′ . If α is any element in V, we have
[c ( f1 + f 2 )] (α) = c [( f1 + f 2 ) (α)] [by (2)]
= c [ f1 (α) + f 2 (α)] [by (1)]
= cf1 (α) + cf 2 (α)
= (cf1 ) (α) + (cf 2 ) (α) [by (2)]
= (cf1 + cf 2 ) (α) [by (1)]
∴ c ( f1 + f 2 ) = cf1 + cf 2 .
(ii) Let a, b ∈ F and f ∈ V ′ . If α ∈ V, we have
[(a + b) f ] (α) = (a + b) f (α) [by (2)]
= af (α) + bf (α) [∵ F is a field]
= (af ) (α) + (bf ) (α) [by (2)]
= (af + bf ) (α) [by (1)]
∴ (a + b) f = af + bf .
(iii) Let a, b ∈ F and f ∈ V ′ . If α ∈ V, we have
[(ab) f ] (α) = (ab) f (α) [by (2)]
= a [bf (α)] [∵ multiplication in F is associative]
= a [(bf ) (α)] [by (2)]
= [a (bf )] (α) [by (2)]
∴ (ab) f = a (bf ).

(iv) Let 1 be the multiplicative identity of F and f ∈ V ′ . If α ∈ V, we have


(1 f ) (α) = 1 f (α) [by (2)]
= f (α) [ ∵ F is a field]
∴ 1f = f .
Hence V ′ is a vector space over the field F.
Dual Space:
Definition: Let V be a vector space over the field F. Then the set V ′ of all linear
functionals on V is also a vector space over the field F. The vector space V ′ is called the dual
space of V.
Sometimes V * and V̂ are also used to denote the dual space of V. The dual space of
V is also called the conjugate space of V.

5.4 Dual Bases


Theorem 1:Let V be an n-dimensional vector space over the field F and let B = {α1 , … , α n }
be an ordered basis for V. If {x1 , … , x n } is any ordered set of n scalars, then there exists a
unique linear functional f on V such that
f (α i ) = x i , i = 1, 2, … , n.
Proof: Existence of f . Let α ∈ V.
Since B = {α1 , α 2 , … , α n } is a basis for V, therefore there exist unique scalars
a1 , a2 , … , a n such that
α = a1α1 + … + a n α n .
For this vector α, let us define
f (α) = a1 x1 + … + a n x n .
Obviously f (α) as defined above is a unique element of F. Therefore f is a
well-defined rule for associating with each vector α in V a unique scalar f (α) in F.
Thus f is a function from V into F.
The unique representation of α i ∈ V as a linear combination of the vectors
belonging to the basis B is
α i = 0α1 + 0α 2 + … + 1 α i + 0α i + 1 + …+ 0α n .
Therefore according to our definition of f , we have
f (α i ) = 0 x1 + 0 x2 + … + 1x i + 0 x i + 1 + … + 0 x n
i. e., f (α i ) = x i , i = 1, 2 , … , n.
Now to show that f is a linear functional.
Let a , b ∈ F and α , β ∈ V . Let
α = a1α1 + … + a n α n ,
and β = b1α1 + … + b n α n .

Then f (aα + bβ) = f [a (a1α1 + …+ a n α n ) + b (b1α1 + … + b n α n )]


= f [(aa1 + bb1 ) α1 + … + (aa n + bb n ) α n ]
= (aa1 + bb1 ) x1 + … + (aa n + bb n ) x n [by def. of f ]
= a (a1 x1 + … + a n x n ) + b (b1 x1 + … + b n x n ) = af (α) + bf ( β).
∴ f is a linear functional on V. Thus there exists a linear functional f on V such
that f (α i ) = x i , i = 1, 2 , … , n.
Uniqueness of f : Let g be a linear functional on V such that
g (α i ) = x i , i = 1, 2 , … , n.
For any vector α = a1α1 + … + a n α n ∈ V , we have
g (α) = g (a1α1 + … + a n α n )
= a1 g (α1 ) + … + a n g (α n ) [ ∵ g is linear]
= a1 x1 + … + a n x n [by def. of g]
= f (α). [by def. of f ]
Thus g (α) = f (α) V α ∈ V .
∴ g = f.
This shows the uniqueness of f .
Remark:From this theorem we conclude that if f is a linear functional on a finite
dimensional vector space V, then f is completely determined if we mention under
f the images of the elements of a basis set of V. If f and g are two linear functionals
on V such that f (α i ) = g (α i ) for all α i belonging to a basis of V, then
f (α) = g (α) V α ∈ V i. e., f = g.
Thus two linear functionals of V are equal if they agree on a basis of V.
Theorem 2: Let V be an n-dimensional vector space over the field F and let
B = {α1 , … , α n } be a basis for V. Then there is a uniquely determined basis
B′ = { f1 , … , f n } for V ′ such that f i (α j ) = δ ij . Consequently the dual space of an
n-dimensional space is n-dimensional.
The basis B ′ is called the dual basis of B.
Proof: B = {α1 , … , α n } is an ordered basis for V. Therefore by theorem 1, there
exists a unique linear functional f1 on V such that
f1 (α1 ) = 1, f1 (α 2 ) = 0, … , f1 (α n ) = 0
where {1, 0, … , 0} is an ordered set of n scalars.
In fact, for each i = 1, 2, … , n there exists a unique linear functional f i on V such that
 0 if i ≠ j
f i (α j ) = 
 1 if i = j
i. e., f i (α j ) = δ ij , …(1)
where δ ij ∈ F is Kronecker delta i. e., δ ij = 1 if i = j and δ ij = 0 if i ≠ j.

Let B′ = { f1 , … , f n }. Then B′ is a subset of V ′ containing n distinct elements of V ′ .


We shall show that B′ is a basis for V ′ .
First we shall show that B′ is linearly independent.
Let c1 f1 + c 2 f 2 + … + c n f n = 0̂
⇒ (c1 f1 + … + c n f n ) (α) = 0̂ (α) V α ∈ V
⇒ c1 f1 (α) + … + c n f n (α) = 0 V α ∈ V        [ ∵ 0̂ (α) = 0]
n
⇒ Σ c i f i (α) = 0 V α ∈ V
i =1
n
⇒ Σ c i f i (α j ) = 0, j = 1, 2 , … , n
i =1 [Putting α = α j where j = 1, 2 , … , n]
n
⇒ Σ c i δ ij = 0, j = 1, 2, … , n
i =1

⇒ c j = 0, j = 1, 2 , … , n
⇒ f1 , f 2 , … , f n are linearly independent.
In the second place, we shall show that the linear span of B′ is equal to V ′ .
Let f be any element of V ′ . The linear functional f will be completely
determined if we define it on a basis for V. So let
f (α i ) = a i , i = 1, 2, … , n. …(2)
We shall show that
n
f = a1 f1 + … + a n f n = Σ a i f i .
i =1
We know that two linear functionals on V are equal if they agree on a basis of V. So
let α j ∈ B where j = 1, … , n. Then
( Σ_{i=1}^{n} a i f i ) (α j ) = Σ_{i=1}^{n} a i f i (α j )
                             = Σ_{i=1}^{n} a i δ ij        [from (1)]

= aj , on summing with respect to i and


remembering that δ ij = 1 when i = j
and δ ij = 0 when i ≠ j
= f (α j )        [from (2)]
Thus ( Σ_{i=1}^{n} a i f i ) (α j ) = f (α j ) V α j ∈ B.
n
Therefore f = Σ a i f i . Thus every element f in V ′ can be expressed as a linear
i =1

combination of f1 , … , f n .
∴ V ′ = linear span of B′ . Hence B′ is a basis for V ′ .
Now dim V ′ = number of distinct elements in B′ = n.

Corollary: If V is an n-dimensional vector space over the field F, then V is isomorphic to


its dual space V ′ .
Proof: We have dim V ′ = dim V = n.
∴ V is isomorphic to V ′ .
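Remark (numerical illustration): for V = Fⁿ with vectors written in coordinates, a linear functional is represented by a row vector, and the dual basis of a basis consisting of the columns of an invertible matrix P is given by the rows of P⁻¹, so that f i (α j ) = δ ij. The following NumPy sketch, not part of the text and with an arbitrarily chosen basis, illustrates this and the reconstruction α = Σ f i (α) α i proved in Theorem 3 below.

import numpy as np

P = np.array([[1., 1., 0.],
              [0., 2., 1.],
              [1., 0., 1.]])            # columns are the basis vectors alpha_1, alpha_2, alpha_3
Dual = np.linalg.inv(P)                 # row i represents the dual functional f_i

assert np.allclose(Dual @ P, np.eye(3))      # f_i(alpha_j) = delta_ij

alpha = np.array([4., -1., 2.])
coords = Dual @ alpha                        # (f_1(alpha), f_2(alpha), f_3(alpha))
assert np.allclose(P @ coords, alpha)        # alpha = sum_i f_i(alpha) alpha_i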
Theorem 3: Let V be an n-dimensional vector space over the field F and let
B = {α1 , … , α n }be a basis for V. Let B′ = { f1 , … , f n }be the dual basis of B. Then for each
linear functional f on V, we have
n
f = Σ f (α i ) f i
i =1

and for each vector α in V we have


n
α= Σ f i (α) α i .
i =1

Proof: Since B′ is dual basis of B, therefore


f i (α j ) = δ ij . …(1)
If f is a linear functional on V, then f ∈ V ′ for which B′ is basis. Therefore f can be
n
expressed as a linear combination of f1 , … , f n . Let f = Σ c i f i .
i =1
 n  n
Then f (α j ) =  Σ c i f i  (α j ) = Σ c i f i (α j )
i = 1  i =1
n
= Σ c i δ ij [From (1)]
i =1

= c j , j = 1, 2, … , n.
n
∴ f = Σ f (α i ) f i .
i =1

Now let α be any vector in V. Let


α = x1α1 + … + x n α n . …(2)
 n   n 
Then f i (α) = f i  Σ x j α j  From (2), α = Σ xj αj
j =1   j =1 
n
= Σ x j f i (α j ) [ ∵ f i is linear functional]
j =1
n
= Σ x j δ ij [From (1)]
j =1
= xi .
n
∴ α = f1 (α) α1 + … + f n (α) α n = Σ f i (α) α i .
i =1

Important: It should be noted that if B = {α1 , … , α n } is an ordered basis for V


and B′ = { f1 , … , f n } is the dual basis, then f i is precisely the function which
assigns to each vector α in V the ith coordinate of α relative to the ordered basis B.

Theorem 4:Let V be an n-dimensional vector space over the field F. If α is a non-zero vector
in V, there exists a linear functional f on V such that f (α) ≠ 0.
Proof: Since α ≠ 0, therefore {α}is a linearly independent subset of V. So it can be
extended to form a basis for V. Thus there exists a basis B = {α1 , … , α n }for V such
that α1 = α.
If B′ = { f1 , … , f n } is the dual basis, then
f1 (α) = f1 (α1 ) = 1 ≠ 0.
Thus there exists linear functional f1 such that
f1 (α) ≠ 0.
Corollary: Let V be an n-dimensional vector space over the field F. If
f (α) = 0 V f ∈ V ′ , then α = 0.
Proof:Suppose α ≠ 0. Then there is a linear functional f on V such that f (α) ≠ 0.
This contradicts the hypothesis that
f (α) = 0 V f ∈ V ′ .
Hence we must have α = 0.
Theorem 5: Let V be an n-dimensional vector space over the field F. If α, β are any two
different vectors in V,then there exists a linear functional f on V such that f (α) ≠ f ( β).
Proof: We have α ≠ β ⇒ α − β ≠ 0.
Now α − β is a non-zero vector in V. Therefore by theorem 4, there exists a linear
functional f on V such that
f (α − β) ≠ 0 ⇒ f (α) − f ( β) ≠ 0 ⇒ f (α) ≠ f ( β)
Hence the result.

5.5 Reflexivity
Second dual space (or Bi-dual space): We know that every vector space V
possesses a dual space V ′ consisting of all linear functionals on V. Now V ′ is also a
vector space. Therefore it will also possess a dual space (V ′ )′ consisting of all linear
functionals on V ′. This dual space of V ′ is called the Second dual space or
Bi-dual space of V and for the sake of simplicity we shall denote it by V ′ ′ .
If V is finite-dimensional, then
dim V = dim V ′ = dim V ′ ′
showing that they are isomorphic to each other.
Theorem 1: Let V be a finite dimensional vector space over the field F. If α is any vector in
V, the function Lα on V ′ defined by
Lα ( f ) = f (α) V f ∈ V ′
is a linear functional on V ′ i.e., Lα ∈ V ′ ′ .
Also the mapping α → Lα is an isomorphism of V onto V ′ ′ .

Proof: If α ∈ V and f ∈ V ′ , then f (α) is a unique element of F. Therefore the


correspondence Lα defined by
Lα ( f ) = f (α) V f ∈ V ′ …(1)
is a function from V ′ into F.
Let a, b ∈ F and f , g ∈ V ′ , then
Lα (af + bg) = (af + bg) (α) [From (1)]
= (af ) (α) + (bg) (α)
= a f (α) + b g (α)
[by scalar multiplication of linear functionals]
= a [ Lα ( f )] + b [ Lα ( g)]. [From (1)]
Therefore Lα is a linear functional on V ′ and thus Lα ∈ V ′ ′ .
Now let ψ be the function from V into V ′ ′ defined by
ψ (α) = Lα V α ∈ V .
ψ is one-one: If α , β ∈ V , then
ψ (α) = ψ ( β)
⇒ Lα = Lβ ⇒ Lα ( f ) = Lβ ( f ) V f ∈ V ′
⇒ f (α) = f ( β) V f ∈ V ′ [From (1)]
⇒ f (α) − f ( β) = 0 V f ∈ V ′
⇒ f (α − β) = 0 V f ∈ V ′
⇒ α−β=0
[∵ by theorem 4 of article 5.4, if α − β ≠ 0, then ∃ a linear
functional f on V such that f (α − β) ≠ 0. Here we have
f (α − β) = 0 V f ∈ V ′ and so α − β must be 0]
⇒ α = β.
∴ ψ is one-one.
ψ is a linear transformation:
Let a , b ∈ F and α , β ∈ V . Then
ψ (aα + bβ) = Laα + bβ [by def. of ψ]
For every f ∈ V ′ , we have
Laα + bβ ( f ) = f (aα + bβ)
= af (α) + bf ( β) [From (1)]
= aLα ( f ) + bLβ ( f ) [From (1)]
= (aLα ) ( f ) + (bLβ ) ( f ) = (aLα + bLβ ) ( f ).
∴ Laα + bβ = aLα + bLβ = a ψ (α) + b ψ ( β).
Thus ψ (aα + bβ) = a ψ (α) + b ψ ( β).
∴ ψ is a linear transformation from V into V ′ ′ . We have dim V = dim V ′ ′ .
Therefore ψ is one-one implies that ψ must also be onto.
Hence ψ is an isomorphism of V onto V ′ ′ .

Note: The correspondence α → Lα as defined in the above theorem is called the


natural correspondence between V and V ′ ′ . It is important to note that the
above theorem shows not only that V and V ′ ′ are isomorphic—this much is
obvious from the fact that they have the same dimension — but that the natural
correspondence is an isomorphism. This property of vector spaces is called
reflexivity. Thus in the above theorem we have proved that every finite-dimensional
vector space is reflexive.
In future we shall identify V ′ ′ with V through the natural isomorphism α ↔ Lα .
We shall say that the element L of V ′ ′ is the same as the element α of V iff L = Lα
i. e., iff
L ( f ) = f (α) V f ∈ V ′ .
It will be in this sense that we shall regard V ′ ′ = V .
Theorem 2: Let V be a finite dimensional vector space over the field F. If L is a linear
functional on the dual space V ′ of V, then there is a unique vector α in V such that
L ( f ) = f (α) V f ∈ V ′ .
Proof: This theorem is an immediate corollary of theorem 1. We should first
prove theorem 1. Then we should conclude like this :
The correspondence α → Lα is a one-to-one correspondence between V and V ′ ′.
Therefore if L ∈ V ′ ′ , there exists a unique vector α in V such that L = Lα i. e., such
that
L ( f ) = f (α) V f ∈ V ′ .
Theorem 3: Let V be a finite dimensional vector space over the field F. Each basis for V ′ is
the dual of some basis for V.
Proof: Let B ′ = { f1 , f 2 , … , f n } be a basis for V ′ . Then there exists a dual basis
( B′ )′ = {L1 , L 2 , … , Ln } for V ′ ′ such that
Li ( f j ) = δ ij . …(1)
By previous theorem, for each i there is a vector α i in V such that
Li = Lα i where Lα i ( f ) = f (α i ) V f ∈ V ′ . …(2)
The correspondence α ↔ Lα is an isomorphism of V onto V ′ ′ . Under an
isomorphism a basis is mapped onto a basis. Therefore B = {α1 , … , α n }is a basis for
V because it is the image set of a basis for V ′ ′ under the above isomorphism.
Putting f = f j in (2), we get
f j (α i ) = Lα i ( f j ) = Li ( f j )
= δ ij . [From (1)]
∴ B′ = { f1 , … , f n } is the dual of the basis B.
Hence the result.
Theorem 4: Let V be a finite dimensional vector space over the field F. Let B be a basis for
V and B′ be the dual basis of B. Then show that
B′ ′ = ( B′ )′ = B.

Proof: Let B = {α1 , … , α n } be a basis for V,


B′ = { f1 , … , f n } be the dual basis of B in V ′ and
B′ ′ = ( B′ )′ = {L1 , … , Ln } be the dual basis of B in V ′ ′ . Then
f i (α j ) = δ ij ,
and Li ( f j ) = δ ij , i = 1, … , n ; j = 1, … , n.
If α ∈ V, then there exists Lα ∈ V ′ ′ such that
Lα ( f ) = f (α) V f ∈ V ′ .
Taking α i in place of α , we see that for each j = 1, … , n,
Lα i ( f j ) = f j (α i ) = δ ij = Li ( f j ).
Thus Lα i and Li agree on a basis for V ′ . Therefore Lα i = Li .
If we identify V ′ ′ with V through the natural isomorphism α ↔ Lα , then we
consider Lα as the same element as α.
So Li = Lα i = α i where i = 1, 2, … , n.
Thus B′ ′ = B.

Example 1: Find the dual basis of the basis set


B = {(1, − 1, 3), (0, 1, − 1), (0, 3, − 2) } for V3 (R).
Solution: Let α1 = (1, − 1, 3), α 2 = (0, 1, − 1), α 3 = (0, 3, − 2).
Then B = {α1 , α 2 , α 3 }.
If B′ = { f1 , f 2 , f 3 } is dual basis of B, then
f1 (α1 ) = 1, f1 (α 2 ) = 0, f1 (α 3 ) = 0,
f 2 (α1 ) = 0, f 2 (α 2 ) = 1, f 2 (α 3 ) = 0,
and f 3 (α1 ) = 0, f 3 (α 2 ) = 0, f 3 (α 3 ) = 1.
Now to find explicit expressions for f1 , f 2 , f 3 .
Let (a, b, c ) ∈ V3 (R).
Let (a, b, c ) = x (1, − 1, 3) + y (0, 1, − 1) + z (0, 3, − 2) …(1)
= xα1 + yα 2 + zα 3 .
Then f1 (a, b, c ) = x, f 2 (a, b, c ) = y, and f 3 (a, b, c ) = z .
Now to find the values of x, y, z .
From (1), we have
x = a, − x + y + 3z = b, 3 x − y − 2z = c .
Solving these equations, we have
x = a, y = 7a − 2b − 3c , z = b + c − 2a.

Hence f1 (a, b, c ) = a, f 2 (a, b, c ) = 7a − 2b − 3c ,


and f 3 (a, b, c ) = − 2a + b + c .
Therefore B′ = { f1 , f 2 , f 3 } is a dual basis of B where f1 , f 2 , f 3 are as defined
above.
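A quick numerical check of this example (not part of the original text): if the coordinate rows of f1 , f 2 , f 3 relative to the standard dual basis are stacked into a matrix, that matrix is the inverse of the matrix whose columns are α1 , α 2 , α 3 . A minimal sketch, assuming Python with numpy is available:

import numpy as np

# columns of P are the basis vectors alpha_1, alpha_2, alpha_3 of Example 1
P = np.array([[1, 0, 0],
              [-1, 1, 3],
              [3, -1, -2]], dtype=float)

# rows of inv(P) are the coefficient rows of the dual functionals f_1, f_2, f_3
Q = np.linalg.inv(P)
print(Q)                                # rows: (1, 0, 0), (7, -2, -3), (-2, 1, 1)
print(np.allclose(Q @ P, np.eye(3)))    # True, i.e. f_i(alpha_j) = delta_ij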
Example 2: The vectors α1 = (1, 1, 1), α 2 = (1, 1, − 1), and α 3 = (1, − 1, − 1) form a basis

of V3 (C). If { f1 , f 2 , f 3 } is the dual basis and if α = (0, 1, 0), find f1 (α), f 2 (α) and
f 3 (α).
Solution: Let α = a1α1 + a2 α 2 + a3 α 3 . Then
f1 (α) = a1 , f 2 (α) = a2 , f 3 (α) = a3 .
Now α = a1α1 + a2 α 2 + a3 α 3
⇒ (0, 1, 0) = a1 (1, 1, 1) + a2 (1, 1, − 1) + a3 (1, − 1, − 1)
⇒ (0, 1, 0) = (a1 + a2 + a3 , a1 + a2 − a3 , a1 − a2 − a3 )
⇒ a1 + a2 + a3 = 0, a1 + a2 − a3 = 1, a1 − a2 − a3 = 0
⇒ a1 = 0, a2 = 1/2 , a3 = − 1/2 .
∴ f1 (α) = 0, f 2 (α) = 1/2 , f 3 (α) = − 1/2 .
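The coordinates found in Example 2 can be verified in the same spirit by solving a linear system; a small sketch, again assuming numpy:

import numpy as np

# columns are alpha_1, alpha_2, alpha_3 of Example 2
P = np.array([[1, 1, 1],
              [1, 1, -1],
              [1, -1, -1]], dtype=float)
alpha = np.array([0, 1, 0], dtype=float)

# coordinates (a1, a2, a3) of alpha, i.e. f_1(alpha), f_2(alpha), f_3(alpha)
print(np.linalg.solve(P, alpha))        # approximately [0, 0.5, -0.5]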
Example 3: If f is a non-zero linear functional on a vector space V and if x is an arbitrary
scalar, does there necessarily exist a vector α in V such that f (α) = x ?
Solution: f is a non-zero linear functional on V. Therefore there must be some
non-zero vector β in V such that
f ( β) = y where y is a non-zero element of F.
If x is any element of F, then
x = ( x y^{−1} ) y = ( x y^{−1} ) f ( β)
= f [( x y^{−1} ) β] [ ∵ f is a linear functional]
Thus there exists α = ( x y^{−1} ) β ∈ V such that f (α) = x.
Note: If f is a non-zero linear functional on V ( F ), then f is necessarily a
function from V onto F.
Important Note: In some books f (α) is written as [α, f ].

Example 4. Prove that if f is a linear functional on an n-dimensional vector space V ( F ),


then the set of all those vectors α for which f (α) = 0 is a subspace of V. What is the dimension
of that subspace ?
Solution: Let N = {α ∈ V : f (α) = 0}.
N is not empty because at least 0 ∈ N. Remember that
f (0) = 0.

Let α, β ∈ N . Then f (α) = 0, f ( β) = 0.


If a , b ∈ F, we have
f (aα + bβ) = af (α) + bf (β) = a0 + b0 = 0.
∴ aα + bβ ∈ N .
Thus a , b ∈ F and α , β ∈ N
⇒ aα + b β ∈ N .
∴ N is a subspace of V. This subspace N is the null space of f .
We know that dim V = dim N + dim (range of f ).
(i) If f is zero linear functional, then range of f consists of zero element of F alone.
Therefore dim (range of f ) = 0 in this case.
∴ In this case, we have
dim V = dim N + 0
⇒ n = dim N .
(ii) If f is a non-zero linear functional on V, then f is onto F. So range of f consists
of the whole of F in this case. The dimension of the vector space F ( F ) is 1.
∴ In this case we have
dim V = dim N + 1
⇒ dim N = n − 1.
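For a concrete non-zero functional the count dim N = n − 1 can be checked directly; the sketch below is only an illustration (the functional f ( x, y, z ) = 2 x − 3 y + z is chosen arbitrarily) and assumes sympy:

from sympy import Matrix

# f(x, y, z) = 2x - 3y + z viewed as a 1 x 3 matrix acting on column vectors
A = Matrix([[2, -3, 1]])

null_basis = A.nullspace()     # a basis of the null space N of f
print(len(null_basis))         # 2 = n - 1, as shown above
print(null_basis)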
Example 5: Let V be a vector space over the field F. Let f be a non-zero linear functional on
V and let N be the null space of f . Fix a vector α 0 in V which is not in N. Prove that for each α
in V there is a scalar c and a vector β in N such that α = cα 0 + β. Prove that c and β are
unique.
Solution: Since f is a non-zero linear functional on V, therefore there exists a
non-zero vector α 0 in V such that f (α 0 ) ≠ 0. Consequently α 0 ∉ N. Let
f (α 0 ) = y ≠ 0.
Let α be any element of V and let f (α) = x.
We have f (α) = x
⇒ f (α) = ( x y^{−1} ) y [ ∵ 0 ≠ y ∈ F ⇒ y^{−1} exists]
⇒ f (α) = c y where c = x y^{−1} ∈ F
⇒ f (α) = c f (α 0 )
⇒ f (α) = f (c α 0 ) [ ∵ f is a linear functional]
⇒ f (α) − f (c α 0 ) = 0
⇒ f (α − c α 0 ) = 0
⇒ α − c α0 ∈ N
⇒ α − c α 0 = β for some β ∈ N.
⇒ α = c α 0 + β.

If possible, let
α = c ′ α 0 + β ′ where c ′ ∈ F and β′ ∈ N.
Then c α 0 + β = c ′ α 0 + β ′ …(1)
⇒ (c − c ′ ) α 0 + ( β − β ′ ) = 0
⇒ f [(c − c ′ ) α 0 + ( β − β ′ )] = f (0)
⇒ (c − c ′ ) f (α 0 ) + f ( β − β ′ ) = 0
⇒ (c − c ′ ) f (α 0 ) = 0
[ ∵ β, β ′ ∈ N ⇒ β − β ′ ∈ N and thus f ( β − β ′ ) = 0]
⇒ (c − c ′ ) = 0 [ ∵ f (α 0 ) is a non-zero element of F ]
⇒ c = c ′ .
Putting c = c ′ in (1), we get cα 0 + β = cα 0 + β ′ ⇒ β = β ′ .
Hence c and β are unique.
Example 6: If f and g are in V ′ such that f (α) = 0 ⇒ g (α) = 0, prove that g = kf for
some k ∈ F.
Solution: It is given that f (α) = 0 ⇒ g (α) = 0. Therefore if α belongs to the null
space of f , then α also belongs to the null space of g. Thus null space of f is a subset
of the null space of g.
(i) If f is zero linear functional, then null space of f is equal to V. Therefore in this
case V is a subset of null space of g. Hence null space of g is equal to V. So g is also
zero linear functional. Hence we have
g = k f V k ∈ F.
(ii) Let f be non-zero linear functional on V. Then there exists a non-zero vector
α 0 ∈ V such that f (α 0 ) = y where y is a non-zero element of F.
Let k = g (α 0 ) / f (α 0 ) .
If α ∈ V, then we can write
α = c α 0 + β where c ∈ F and β ∈ null space of f .
We have g (α) = g (c α 0 + β) = cg (α 0 ) + g ( β)
= cg (α 0 )
[ ∵ β ∈ null space of f ⇒ f ( β) = 0 and so g ( β) = 0]
Also (k f ) (α) = k f (α) = k f (cα 0 + β)
= k [cf (α 0 ) + f ( β)]
= kc f (α 0 ) [ ∵ f ( β) = 0]
= [ g (α 0 ) / f (α 0 ) ] c f (α 0 ) = c g (α 0 ).
Thus g (α) = (k f ) (α) V α ∈ V .
∴ g = k f.

Comprehensive Exercise 1

1. Let f : R 2 → R and g : R 2 → R be the linear functionals defined by


f ( x, y) = x + 2 y and g ( x, y) = 3 x − y. Find (i) f + g (ii) 4 f (iii) 2 f − 5 g.
2. Let f : R 3 → R and g : R 3 → R be the linear functionals defined by
f ( x, y, z ) = 2 x − 3 y + z and g ( x, y, z ) = 4 x − 2 y + 3z . Find (i) f + g (ii) 3 f
(iii) 2 f − 5 g.
3. Find the dual basis of the basis set {(2, 1), (3, 1)} for R 2 .
4. Find the dual basis of the basis set
B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} for V3 (R).
5. Find the dual basis of the basis set
B = {(1, − 2, 3), (1, − 1, 1), (2, − 4, 7)} of V3 (R).
6. If B = {(− 1, 1, 1), (1, − 1, 1), (1, 1, − 1)} is a basis of V3 (R), then find the dual
basis of B.
7. Let V be the vector space of polynomials over R of degree ≤ 1 i. e.,
V = {a + bx : a, b ∈ R }.
Let φ1 and φ2 be linear functionals on V defined by
φ1 [ f ( x)] = ∫_0^1 f ( x) dx and φ2 [ f ( x)] = ∫_0^2 f ( x) dx.

Find the basis { f1 , f 2 } of V which is dual to {φ1 , φ2 }.


8. Let V be the vector space of polynomials over R of degree ≤ 2. Let φ1 , φ2 and
φ3 be the linear functionals on V defined by
φ1 [ f ( x)] = ∫_0^1 f ( x) dx, φ2 [ f ( x)] = f ′ (1), φ3 [ f ( x)] = f (0).

Here f ( x) = a + bx + cx 2 ∈ V and f ′ ( x) denotes the derivative of f ( x). Find


the basis { f1 ( x), f 2 ( x), f 3 ( x)} of V which is dual to {φ1 , φ2 , φ3 }.
9. Prove that every finite dimensional vector space V is isomorphic to its second
conjugate space V ** under an isomorphism which is independent of the
choice of a basis in V.
10. Define a non-zero linear functional f on C 3 such that f (α) = 0 = f (β)
where α = (1, 1, 1) and β = (1, 1, − 1).

Answers 1

1. (i) 4x + y (ii) 4 x + 8 y (iii) − 13 x + 9 y


2. (i) 6 x − 5 y + 4z (ii) 6 x − 9 y + 3z (iii) − 16 x + 4 y − 13z
3. B′ = { f1 , f 2 } where f1 (a, b) = 3b − a, f 2 (a, b) = a − 2b
4. B′ = { f1 , f 2 , f 3 } where f1 (a, b, c ) = a, f 2 (a, b, c ) = b, f 3 (a, b, c ) = c
5. B′ = { f1 , f 2 , f 3 } where f1 (a, b, c ) = − 3a − 5b − 2c , f 2 (a, b, c ) = 2a + b,
f 3 (a, b, c ) = a + 2b + c
6. B′ = { f1 , f 2 , f 3 }, where f1 (a, b, c ) = (1/2) (b + c ), f 2 (a, b, c ) = (1/2) (a + c ),
f 3 (a, b, c ) = (1/2) (a + b)
7. f1 ( x) = 2 − 2 x, f 2 ( x) = − 1/2 + x
8. f1 ( x) = 3 x − (3/2) x^2 , f 2 ( x) = − (1/2) x + (3/4) x^2 , f 3 ( x) = 1 − 3 x + (3/2) x^2
10. f (a, b, c ) = x (a − b), where x is any non-zero scalar.

5.6 Annihilators
Definition:If V is a vector space over the field F and S is a subset of V,the annihilator of S is
the set S 0 of all linear functionals f on V such that
f (α) = 0 V α ∈ S.
Sometimes A (S ) is also used to denote the annihilator of S.
Thus S 0 = { f ∈ V ′ : f (α) = 0 V α ∈ S }.
It should be noted that we have defined the annihilator of S, where S is simply a
subset of V ; S need not be a subspace of V.
If S = zero subspace of V, then S 0 = V ′ .
If S = V , then S 0 = V 0 = zero subspace of V ′ .
If V is finite dimensional and S contains a non-zero vector, then S 0 ≠ V ′. If
0 ≠ α ∈ S, then there is a linear functional f on V such that f (α) ≠ 0. Thus there is
f ∈ V ′ such that f ∉ S 0 . Therefore S 0 ≠ V ′ .
Theorem 1: If S is any subset of a vector space V ( F ), then S 0 is a subspace of V ′ .

Proof: First we see that S 0 is a non-empty subset of V ′ because at least the zero functional 0̂ ∈ S 0 .
We have 0̂ (α) = 0 V α ∈ S.
0
Let f , g ∈ S . Then f (α) = 0 V α ∈ S, and g (α) = 0 V α ∈ S.
If a, b ∈ F, then
(af + bg) (α) = (af ) (α) + (bg) (α) = af (α) + bg (α) = a0 + b0 = 0.
∴ af + bg ∈ S 0 .
Thus a, b ∈ F and f , g ∈ S 0 ⇒ af + bg ∈ S 0 .
∴ S 0 is a subspace of V ′ .
Dimension of annihilator:
Theorem 2: Let V be a finite dimensional vector space over the field F, and let W be a
subspace of V. Then
dim W + dim W 0 = dim V .
Proof: If W is zero subspace of V, then W 0 = V ′ .
∴ dim W 0 = dim V ′ = dim V .
Also in this case dim W = 0. Hence the result.
Similarly the result is obvious when W = V .
Let us now suppose that W is a proper subspace of V.
Let dim V = n, and dim W = m where 0 < m < n.
Let B1 = {α1 , … , α m }be a basis for W. Since B1 is a linearly independent subset of V
also, therefore it can be extended to form a basis for V. Let
B = {α1 , … , α m , α m + 1 , … , α n } be a basis for V.
Let B′ = { f1 , … , f m , f m + 1 , … , f n }be the dual basis of B. Then B′ is a basis for V ′
such that f i (α j ) = δ ij .
We claim that S = { f m + 1 , … , f n } is a basis for W 0 .

Since S ⊂ B′ , therefore S is linearly independent because B′ is linearly


independent. So S will be a basis for W 0 , if W 0 is equal to the subspace of V ′
spanned by S i. e., if W 0 = L (S ).
First we shall show that W 0 ⊆ L (S ). Let f ∈ W 0 . Then f ∈ V ′ . So let
f = Σ_{i=1}^{n} x_i f_i . …(1)
Now f ∈ W 0 ⇒ f (α) = 0 V α ∈ W
⇒ f (α_j) = 0 for each j = 1, … , m [ ∵ α1 , … , α m are in W ]
⇒ (Σ_{i=1}^{n} x_i f_i)(α_j) = 0 [From (1)]
⇒ Σ_{i=1}^{n} x_i f_i(α_j) = 0 ⇒ Σ_{i=1}^{n} x_i δ_ij = 0
⇒ x_j = 0 for each j = 1, … , m.
Putting x1 = 0, x2 = 0, … , x m = 0 in (1), we get
f = x_{m+1} f_{m+1} + … + x_n f_n
= a linear combination of the elements of S.
∴ f ∈ L (S ).
Thus f ∈ W 0 ⇒ f ∈ L (S ).
∴ W 0 ⊆ L (S ).
Now we shall show that L (S ) ⊆ W 0 .
Let g ∈ L (S ). Then g is a linear combination of f_{m+1} , … , f_n .
Let g = Σ_{k=m+1}^{n} y_k f_k . …(2)
Let α ∈ W. Then α is a linear combination of α1 , … , α m . Let
α = Σ_{j=1}^{m} c_j α_j . …(3)
We have g (α) = g(Σ_{j=1}^{m} c_j α_j) [From (3)]
= Σ_{j=1}^{m} c_j g(α_j) [ ∵ g is a linear functional]
= Σ_{j=1}^{m} c_j (Σ_{k=m+1}^{n} y_k f_k)(α_j) [From (2)]
= Σ_{j=1}^{m} c_j Σ_{k=m+1}^{n} y_k f_k(α_j) = Σ_{j=1}^{m} c_j Σ_{k=m+1}^{n} y_k δ_kj
= Σ_{j=1}^{m} c_j ⋅ 0 [ ∵ δ_kj = 0 if k ≠ j , which is so for each k = m + 1, … , n and for each j = 1, … , m]
= 0.
Thus g (α) = 0 V α ∈ W. Therefore g ∈ W 0 .
Thus g ∈ L (S ) ⇒ g ∈ W 0 .
∴ L (S ) ⊆ W 0 .
Hence W 0 = L (S ) and S is a basis for W 0 .
∴ dim W 0 = n − m = dim V − dim W
or dim V = dim W + dim W 0 .
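When W is given by spanning vectors in F^n, a basis of W 0 can be computed as the null space of the matrix whose rows are those vectors, and the dimension formula of Theorem 2 can then be read off. An illustrative sketch (the subspace is made up, not taken from the text) assuming sympy:

from sympy import Matrix

# rows span a subspace W of R^3
W_rows = Matrix([[1, 1, 0],
                 [0, 1, 1]])

# a functional a*x + b*y + c*z annihilates W iff W_rows * (a, b, c)^T = 0
ann_basis = W_rows.nullspace()
print(ann_basis)                          # one vector, proportional to (1, -1, 1)
print(W_rows.rank() + len(ann_basis))     # 3 = dim V, as in Theorem 2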
Corollary: If V is finite-dimensional and W is a subspace of V, then W ′ is isomorphic to
V ′ /W 0 .

Proof: Let dim V = n and dim W = m . W′ is dual space of W,


so dim W ′ = dim W = m.
Now dim V ′ /W 0 = dim V ′ − dim W 0
= dim V − (dim V − dim W ) = dim W = m.
Since dim W ′ = dim V ′ /W 0 , therefore W ′ ≅ V ′ /W 0 .
Annihilator of an annihilator: Let V be a vector space over the field F.If S is any
subset of V,then S 0 is a subspace of V ′ .By definition of an annihilator, we have
(S 0 )0 = S 00 = {L ∈ V ′ ′ : L ( f ) = 0 V f ∈ S 0 }.
Obviously S 00 is a subspace of V ′ ′ . But if V is finite dimensional, then we have
identified V ′ ′ with V through the natural isomorphism α ↔ Lα . Therefore we
may regard S 00 as a subspace of V. Thus
S 00 = {α ∈ V : f (α) = 0 V f ∈ S 0 }.
Theorem 3:Let V be a finite dimensional vector space over the field F and let W be a
subspace of V. Then W 00 = W.
Proof: We have
W 0 = { f ∈ V ′ : f (α) = 0 V α ∈ W } …(1)
and W 00 = {α ∈ V : f (α) = 0 V f ∈ W 0 }. …(2)
0 00
Let α ∈ W. Then from (1), f (α) = 0 V f ∈ W and so from (2), α ∈ W .
∴ α ∈ W ⇒ α ∈ W 00 .
Thus W ⊆ W 00 . Now W is a subspace of V and W 00 is also a subspace of V. Since
W ⊆ W 00 , therefore W is a subspace of W 00 .
Now dim W + dim W 0 = dim V . [by theorem (2)]
Applying the same theorem for the vector space V ′ and its subspace W 0 , we get
dim W 0 + dim W 00 = dim V ′ = dim V .
∴ dim W 00 = dim V − dim W 0 = dim V − [dim V − dim W ]
= dim W .
Since W is a subspace of W 00 and dim W = dim W 00 , therefore W 00 = W .

Example 7: If S1 and S2 are two subsets of a vector space V such that S1 ⊆ S2 , then show
that S2 0 ⊆ S10 .

Solution: Let f ∈ S2 0 . Then f (α) = 0 V α ∈ S2



⇒ f (α) = 0 V α ∈ S1 [ ∵ S1 ⊆ S2 ]
⇒ f ∈ S10 .
∴ S2 0 ⊆ S10 .
Example 8: Let V be a vector space over the field F. If S is any subset of V, then show that
S 0 = [ L (S )]0 .
Solution: We know that S ⊆ L (S ).
∴ [ L (S )]0 ⊆ S 0 . …(1)
Now let f ∈ S 0 . Then f (α) = 0 V α ∈ S.
If β is any element of L (S ), then
β = Σ_{i=1}^{n} x_i α_i where each α_i ∈ S.
We have f ( β) = Σ_{i=1}^{n} x_i f (α_i) = 0, since each f (α_i) = 0.
i =1

Thus f ( β) = 0 V β ∈ L (S ).
∴ f ∈ ( L (S ))0 .
Therefore S 0 ⊆ ( L (S ))0 …(2)
From (1) and (2), we conclude that S 0 = ( L (S ))0 .

Example 9: Let V be a finite-dimensional vector space over the field F. If S is any subset of
V, then S 00 = L (S ).

Solution:We have S 0 = ( L (S ))0 . [See Example 8]


∴ S 00 = ( L (S ))00 . …(1)
But V is finite-dimensional and L (S ) is a subspace of V.Therefore by theorem 3,
( L (S ))00 = L (S ).
∴ from (1), we have
S 00 = L (S ).
Example 10: Let W1 and W2 be subspaces of a finite dimensional vector space V.

(a) Prove that (W1 + W2 )0 = W10 ∩ W2 0 .

(b) Prove that (W1 ∩ W2 )0 = W10 + W2 0 .

Solution: (a) First we shall prove that


W10 ∩ W2 0 ⊆ (W1 + W2 )0 .
Let f ∈ W10 ∩ W2 0 . Then f ∈ W10 , f ∈ W2 0 .
Suppose α is any vector in W1 + W2 . Then

α = α1 + α 2 where α1 ∈ W1 , α 2 ∈ W2 .
We have f (α) = f (α1 + α 2 ) = f (α1 ) + f (α 2 )
=0 +0 [ ∵ α1 ∈ W1 and f ∈ W10 ⇒ f (α1 ) = 0
and similarly f (α 2 ) = 0]
= 0.
Thus f (α) = 0 V α ∈ W1 + W2 .
∴ f ∈ (W1 + W2 )0 .
∴ W10 ∩ W2 0 ⊆ (W1 + W2 )0 . …(1)
Now we shall prove that
(W1 + W2 )0 ⊆ W10 ∩ W2 0 .
We have W1 ⊆ W1 + W2 .
∴ (W1 + W2 )0 ⊆ W10 . …(2)
Similarly, W2 ⊆ W1 + W2 .
∴ (W1 + W2 )0 ⊆ W2 0 . …(3)
From (2) and (3), we have
(W1 + W2 )0 ⊆ W10 ∩ W2 0 . …(4)
From (1) and (4), we have
(W1 + W2 )0 = W10 ∩ W2 0 .
(b) Let us use the result (a) for the vector space V ′ in place of the vector space V.
Thus replacing W1 by W10 and W2 by W2 0 in (a), we get
(W10 + W2 0 )0 = W100 ∩ W2 00
⇒ (W10 + W2 0 )0 = W1 ∩ W2 [ ∵ W100 = W1 etc. ]
⇒ (W10 + W2 0 )00 = (W1 ∩ W2 )0
⇒ W10 + W2 0 = (W1 ∩ W2 )0 .
Example 11: If W1 and W2 are subspaces of a vector space V, and if V = W1 ⊕ W2 ,
then V ′ = W10 ⊕ W2 0 .

Solution: To prove that V ′ = W10 ⊕ W2 0 , we are to prove that

(i) W10 ∩ W2 0 = { 0̂ }
and (ii) V ′ = W10 + W2 0 i. e., each f ∈ V ′ can be written as f1 + f 2
where f1 ∈ W10 , f 2 ∈ W2 0 .
(i) First to prove that W10 ∩ W2 0 = { 0̂ }.
Let f ∈ W10 ∩ W2 0 . Then f ∈ W10 and f ∈ W2 0 .

If α is any vector in V,then, V being the direct sum of W1 and W2 , we can write
α = α1 + α 2 where α1 ∈ W1 , α 2 ∈ W2 .
We have f (α) = f (α1 + α 2 )
= f (α1 ) + f (α 2 ) [ ∵ f is linear functional]
=0 +0 [∵ f ∈ W10 and α1 ∈ W1 ⇒ f (α1 ) = 0
and similarly f (α 2 ) = 0]
= 0.
Thus f (α) = 0 V α ∈ V .

∴ f = 0̂ .
∴ W10 ∩ W2 0 = { 0̂ }.
(ii) Now to prove that V ′ = W10 + W2 0 .
Let f ∈ V ′ .
If α ∈ V, then α can be uniquely written as
α = α1 + α 2 where α1 ∈ W1 , α 2 ∈ W2 .
For each f , let us define two functions f1 and f 2 from V into F such that
f1 (α) = f1 (α1 + α 2 ) = f (α 2 ) …(1)
and f 2 (α) = f 2 (α1 + α 2 ) = f (α1 ). …(2)
First we shall show that f1 is a linear functional on V. Let a, b ∈ F and
α = α1 + α 2 , β = β1 + β 2 ∈ V where α1 , β1 ∈ W1 and α 2 , β 2 ∈ W2 . Then
f1 (aα + bβ) = f1 [a (α1 + α 2 ) + b ( β1 + β 2 )]
= f1 [(aα1 + bβ1 ) + (aα 2 + bβ 2 )]
= f (aα 2 + bβ 2 )
[ ∵ aα1 + bβ1 ∈ W1 , aα 2 + bβ 2 ∈ W2 ]
= af (α 2 ) + bf ( β 2 ) [ ∵ f is linear functional]
= af1 (α) + bf1 ( β) [From (1)]
∴ f1 is linear functional on V i. e., f1 ∈ V ′ .
Now we shall show that f1 ∈ W10 .
Let α1 be any vector in W1 . Then α1 is also in V. We can write
α1 = α1 + 0, where α1 ∈ W1 , 0 ∈ W2 .
∴ from (1), we have
f1 (α1 ) = f1 (α1 + 0) = f (0) = 0.
Thus f1 (α1 ) = 0 V α1 ∈ W1 .
∴ f1 ∈ W10 .
Similarly we can show that f 2 is a linear functional on V and f 2 ∈ W2 0 .
Now we claim that f = f1 + f 2 .

Let α be any element in V. Let


α = α1 + α 2 , where α1 ∈ W1 , α 2 ∈ W2 .
Then( f1 + f 2 ) (α) = f1 (α) + f 2 (α) = f (α 2 ) + f (α1 ) [From (1) and (2)]
= f (α1 ) + f (α 2 ) = f (α1 + α 2 )
[ ∵ f is linear functional]
= f (α).
Thus ( f1 + f 2 ) (α) = f (α) V α ∈ V .
∴ f = f1 + f 2 .
Thus f ∈ V ′ ⇒ f = f1 + f 2 where f1 ∈ W10 , f 2 ∈ W2 0 .
∴ V ′ = W10 + W2 0 .
Hence V ′ = W10 ⊕ W2 0

5.7 Invariant Direct-Sum Decompositions


Let T be a linear operator on a vector space V ( F). If S is a non-empty subset of V,
then by T (S) we mean the set of those elements of V which are images under T of
the elements in S. Thus
T (S) = {T (α) ∈ V : α ∈ S }.
Obviously T (S) ⊆ V . We call it the image of S under T.
Invariance.Definition.Let V be a vector space and T a linear operator on V. If W is a subspace of
V, we say that W is invariant under T if
α ∈ W ⇒ T (α) ∈ W.
Example 1: If T is any linear operator on V, then V is invariant under T. If α ∈ V, then
T (α) ∈ V because T is a linear operator on V. Thus V is invariant under T.
The zero subspace of V is also invariant under T. The zero subspace contains only
one vector i. e., 0 and we know that T (0) = 0 which is in zero subspace.
Example 2: Let V ( F ) be the vector space of all polynomials in x over the field F and let D be the
differentiation operator on V. Let W be the subspace of V consisting of all polynomials of
degree not greater than n.
If f ( x) ∈ W, then D [ f ( x)] ∈ W because differentiation operator D is degree
decreasing. Therefore W is invariant under D.
Let W be a subspace of the vector space V and let W be invariant under the linear
operator T on V i. e., let
α ∈ W ⇒ T (α) ∈ W.
We know that W itself is a vector space. If we ignore the fact that T is defined
outside W, then we may regard T as a linear operator on W. Thus the linear
operator T induces a linear operator TW on the vector space W defined by
TW (α) = T (α) V α ∈ W.

It should be noted that TW is quite a different object from T because the domain
of TW is W while the domain of T is V.
Invariance can be considered for several linear transformations also. Thus W is
invariant under a set of linear transformations if it is invariant under each
member of the set.
Matrix interpretation of invariance: Let V be a finite dimensional vector
space over the field F and let T be a linear operator on V. Suppose V has a
subspace W which is invariant under T. Then we can choose suitable ordered basis
B for V so that the matrix of T with respect to B takes some particular simple
form.
Let B1 = {α1 , … , α m} be an ordered basis for W where dim W = m. We can extend
B1 to form a basis for V. Let
B = {α1 , … , α m , α m + 1 , … , α n}
be an ordered basis for V where dim V = n.
Let A = [a ij ] n × n be the matrix of T with respect to the ordered basis B. Then
T (α_j) = Σ_{i=1}^{n} a_ij α_i , j = 1, 2, … , n. …(1)

If 1≤ j ≤ m, then α j is in W. But W is invariant under T. Therefore if 1≤ j ≤ m, then


T (α j ) is in W and so it can be expressed as a linear combination of the vectors
α1 , … , α m , which form a basis for W. This means that
T (α_j) = Σ_{i=1}^{m} a_ij α_i , 1 ≤ j ≤ m. …(2)

In other words in the relation (1), the scalars a i j are all zero if 1≤ j ≤ m and
m + 1 ≤ i ≤ n.
Therefore the matrix A takes the simple form
A = | M  C |
    | O  D |
where M is an m × m matrix, C is an m × (n − m) matrix, O is the null matrix of the
type (n − m) × m and D is an (n − m) × (n − m) matrix.
From the relation (2) it is obvious that the matrix M is nothing but the matrix of the
induced operator TW on W relative to the ordered basis B1 for W.
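The block form can also be observed numerically: start from a block upper-triangular matrix, pass to the standard basis by conjugation, and then recover the zero block by changing back to a basis whose first vectors span the invariant subspace. A rough sketch, assuming numpy (all matrices here are invented for illustration):

import numpy as np

# matrix of an operator T relative to a basis B = columns of P;
# the span of the first two basis vectors is invariant (zero lower-left block)
A_block = np.array([[2., 1., 5.],
                    [0., 3., 6.],
                    [0., 0., 4.]])

P = np.array([[1., 1., 0.],        # columns of P form the basis B
              [0., 1., 1.],
              [1., 0., 1.]])

S = P @ A_block @ np.linalg.inv(P)  # matrix of the same operator in the standard basis

# changing back to the basis B makes the (n - m) x m zero block reappear
print(np.round(np.linalg.inv(P) @ S @ P, 10))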
Reducibility: Definition: Let W1 and W2 be two subspaces of a vector space V and
let T be a linear operator on V. Then T is said to be reduced by the pair (W1 , W2 ) if
(i) V = W1 ⊕ W2 ,
(ii) both W1 and W2 are invariant under T.
It should be noted that if a subspace W1 of V is invariant under T, then there are
many ways of finding a subspace W2 of V such that V = W1 ⊕ W2 , but it is not
necessary that some W2 will also be invariant under T. In other words among the
collection of all subspaces invariant under T we may not be able to select any two
other than V and the zero subspace with the property that V is their direct sum.

The definition of reducibility can be extended to more than two subspaces. Thus let
W1 , … , Wk be k subspaces of a vector space V and let T be a linear operator on V. Then T is
said to be reduced by (W1 , … , Wk ) if
(i) V is the direct sum of the subspaces W1 , … , Wk ,
and (ii) Each of the subspaces Wi is invariant under T.
Direct sum of linear operators: Definition:
Suppose T is a linear operator on the vector space V. Let
V = W1 ⊕ … ⊕ Wk
be a direct sum decomposition of V in which each subspace Wi is invariant under T.
Then T induces a linear operator Ti on each Wi by restricting its domain from V to
Wi . If α ∈ V, then there exist unique vectors α1 , … , α k with α i in Wi such that
α = α1 + … + α k
⇒ T (α) = T (α1 + … + α k )
⇒ T (α) = T (α1 ) + … + T (α k ) [ ∵ T is linear]
⇒ T (α) = T1 (α1 ) + … + Tk (α k ) [∵ if α i ∈ Wi , then by def. of
Ti , we have T (α i ) = Ti (α i )]
Thus we can find the action of T on V with the help of independent action of the
operators Ti on the subspaces Wi . In such situation we say that the operator T is the
direct sum of the operators T1 ,… , Tk . It should be noted carefully that T is a
linear operator on V, while the Ti are linear operators on the various
subspaces Wi .

Matrix representation of reducibility: If T is a linear operator on a finite


dimensional vector space V and is reduced by the pair (W1 , W2 ), then by choosing a
suitable basis B for V, we can give a particularly simple form to the matrix of T with
respect to B.
Let dim V = n and dim W1 = m. Then dim W2 = n − m since V is the direct sum of
W1 and W2 .
Let B1 = {α1 , … , α m} be a basis for W1 and B2 = {α m + 1 , … , α n} be a basis for W2 .
Then B = B1 ∪ B2 = {α1 , … , α m , α m + 1 , … , α n} is a basis for V.
It can be easily seen, as in the case of invariance, that
[T ] B = | M  O |
         | O  N |
where M is an m × m matrix, N is an (n − m) × (n − m) matrix and O are null matrices
of suitable sizes.
Also if T1 and T2 are linear operators induced by T on W1 and W2 respectively, then
M = [T1 ] B1 , and N = [T2 ] B2 .

Example 12: If T is a linear operator on a vector space V and if W is any subspace of


V, then T (W) is a subspace of V. Also W is invariant under T iff T (W) ⊆ W.
Solution: We have, by definition
T (W) = {T (α) : α ∈ W }.
Since 0 ∈ W and T (0) = 0, therefore T (W) is not empty because at least 0 ∈ T (W).
Now let T (α1 ), T (α 2 ) be any two elements of T (W) where α1 , α 2 are any two
elements of W.
If a, b ∈ F, then
aT (α1 ) + bT (α 2 ) = T (aα1 + bα 2 ), because T is linear.
But W is a subspace of V.
Therefore α1 , α 2 ∈ W and a, b ∈ F ⇒ aα1 + bα 2 ∈ W. Consequently
T (aα1 + bα 2 ) ∈ T (W).
Thus a, b ∈ F and T (α1 ), T (α 2 ) ∈ T (W)
⇒ aT (α1 ) + bT (α 2 ) ∈ T (W).
∴ T (W) is a subspace of V.
Second Part: Suppose W is invariant under T.
Let T (α) be any element of T (W) where α ∈ W.
Since α ∈ W and W is invariant under T, therefore
T (α) ∈ W. Thus T (α) ∈ T (W) ⇒ T (α) ∈ W.
Therefore T (W ) ⊆ W.
Conversely suppose that T (W) ⊆ W.
Then T (α) ∈ W V α ∈ W. Therefore W is invariant under T.
Example 13: If T is any linear operator on a vector space V, then the range of T and the
null space of T are both invariant under T.
Solution: Let N (T ) be the null space of T. Then
N (T ) = {α ∈ V : T (α) = 0}.
If β ∈ N (T ), then T ( β) = 0 ∈ N (T ) because N (T ) is a subspace.
∴ N (T ) is invariant under T.
Again let R (T ) be the range of T. Then
R (T ) = {T (α) ∈ V : α ∈ V }.
Since R (T ) is a subset of V, therefore β ∈ R (T ) ⇒ β ∈ V .
Now β ∈ V ⇒ T (β) ∈ R (T ).
Thus β ∈ R (T ) ⇒ T (β) ∈ R (T ). Therefore R (T ) is invariant under T.

Example 14: Give an example of a linear transformation T on a finite-dimensional vector


space V such that V and the zero subspace are the only subspaces invariant under T.
Solution: Let T be the linear operator on V2 (R) which is represented in the
standard ordered basis by the matrix
 0 −1 
1 0⋅
 
Let W be a proper subspace of V2 (R) which is invariant under T. Then W must be
of dimension 1. Let W be the subspace spanned by some non-zero vector α. Now
α ∈ W and W is invariant under T. Therefore T (α) ∈ W.
∴ T (α) = cα for some c ∈ R
⇒ T (α) = cI (α) where I is identity operator on V
⇒ [T − cI ] (α) = 0
⇒ T − cI is singular [ ∵ α ≠ 0]
⇒ T − cI is not invertible.
If B denotes the standard ordered basis for V2 (R), then
[T − cI ] B = [T ] B − c [I ] B

0 − 1  1 0  − c − 1
=  −c  = ⋅
1 0 0 1   1 − c 

− c − 1  − c − 1 2
Now det   =  = c + 1 ≠ 0 for any real number c.
 1 − c  1 − c

− c − 1 
∴  1 − c  i. e., [T − cI ] B is invertible.
 
Consequently T − cI is invertible which is contradictory to the result that T − cI is
not invertible.
Hence no proper subspace W of V2 (R) can be invariant under T.
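Example 14 is mirrored by the eigenvalues of the matrix: over R the matrix has no characteristic values, while over C it has ± i, and each eigenvector of U spans a one-dimensional invariant subspace of C^2 . A quick check, assuming numpy:

import numpy as np

A = np.array([[0., -1.],
              [1., 0.]])

vals, vecs = np.linalg.eig(A)
print(vals)                               # approximately [0.+1.j, 0.-1.j]; no real value

v = vecs[:, 0]                            # an eigenvector in C^2
print(np.allclose(A @ v, vals[0] * v))    # True: span{v} is invariant over C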

5.8 The Adjoint or the Transpose of a Linear Transformation


In order to bring some simplicity in our work we shall introduce a few changes in
our notation of writing the image of an element of a vector space under a linear
transformation and that under a linear functional. If T is a linear transformation
on a vector space V and α ∈ V, then in place of writing T (α) we shall simply write
Tα i. e., we shall omit the brackets. Thus Tα will mean the image of α under T. If
T1 and T2 are two linear transformations of V, then in our new notation T1 T2 α
will stand for T1 [T2 (α)].
Let f be a linear functional on V. If α ∈ V, then in place of writing f (α) we shall
write [α , f ]. This is the square brackets notation to write the image of a vector

under a linear functional. Thus [α , f ] will stand for f (α). If a , b ∈ F and α, β ∈ V ,


then in this new notation the linearity property of f
i. e., f (aα + bβ) = af (α) + bf (β)
will be written as
[aα + bβ , f ] = a [α , f ] + b [ β, f ].
Also if f and g are two linear functionals on V and a, b ∈ F, then the property
defining addition and scalar multiplication of linear functionals i. e.,the property
(af + bg) (α) = af (α) + bg (α)
will be written as
[α , af + bg] = a [α , f ] + b [α, g].
Note that in this new notation, we get
[α , f ] = f (α), [α , g] = g (α).
Theorem 1: Let U and V be vector spaces over the field F. For each linear transformation
T from U into V, there is a unique linear transformation T ′ from V ′ into U ′ such that
[T ′ ( g)] (α) = g [T (α)] (in old notation)
or [α , T ′ g] = [Tα , g] (in new notation)
for every g in V ′ and α in U.
The linear transformation T ′ is called the adjoint or the transpose or the dual of
T. In some books it is denoted by T t or by T * .
Proof : T is a linear transformation from U to V. U ′ is the dual space of U and V ′
is the dual space of V. Suppose g ∈ V ′ i. e., g is a linear functional on V. Let us define
f (α) = g [T (α)] V α ∈ U …(1)
Then f is a function from U into F. We see that f is nothing but the product or
composite of the two functions T and g where T : U → V and g : V → F.Since both
T and g are linear therefore f is also linear. Thus f is a linear functional on
U i. e., f ∈ U ′ . In this way T provides us with a rule T ′ which associates with each
functional g on V a linear functional
f = T ′ ( g) on U, defined by (1).
Thus T ′ : V ′ → U ′ such that
T ′ ( g) = f V g ∈ V ′ where f (α) = g [T (α)] V α ∈ U.
Putting f = T ′ ( g) in (1), we see that T ′ is a function from V ′ into U ′ such that
[T ′ ( g)] (α) = g [T (α)]
or in square brackets notation
[α , T ′ g] = [Tα , g] …(2)
V g ∈ V ′ and V α ∈ U.
Now we shall show that T ′ is a linear transformation from V ′ into U ′ . Let
g1 , g2 ∈ V ′ and a, b ∈ F.

Then we are to prove that


T ′ (ag1 + bg2 ) = aT ′ g1 + bT ′ g2 …(3)
where T ′ g1 stands for T ′ ( g1 ) and T ′ g2 stands for T ′ ( g2 ).
We see that both the sides of (3) are elements of U ′ i. e., both are linear functionals
on U. So if α is any element of U, we have
[α , T ′ (ag1 + bg2 )] = [Tα, ag1 + bg2 ] [From (2) because ag1 + bg2 ∈ V ′ ]
= [Tα , ag1 ] + [Tα , bg2 ] [by def. of addition in V ′ ]
= a [Tα , g1 ] + b [Tα , g2 ]
[by def. of scalar multiplication in V ′ ]
= a [α , T ′ g1 ] + b [α , T ′ g2 ] [From (2)]
= [α , aT ′ g1 ] + [α , bT ′ g2 ]
[by def. of scalar multiplication in U ′ .
Note that T ′ g1 , T ′ g2 ∈ U ′ ]
= [α , aT ′ g1 + bT ′ g2 ] [By addition in U ′]
Thus V α ∈ U, we have
[α, T ′ (ag1 + bg2 )] = [α , aT ′ g1 + bT ′ g2 ].
∴ T ′ (ag1 + bg2 ) = aT ′ g1 + bT ′ g2 [by def. of equality of two functions]
Hence T ′ is a linear transformation from V ′ into U ′ .
Now let us show that T ′ is uniquely determined for a given T. If possible, let T1 be a
linear transformation from V ′ into U ′ such that
[α , T1 g] = [Tα , g] V g ∈ V ′ and α ∈ U. …(4)
Then from (2) and (4), we get
[α , T1 g] = [α , T ′ g] V α ∈ U, V g ∈ V ′
⇒ T1 g = T ′ g V g ∈ V ′
⇒ T1 = T ′ .
∴ T ′ is uniquely determined for each T. Hence the theorem.
Note: If T is a linear transformation on the vector space V, then in the proof of
the above theorem we should simply replace U by V.
Theorem 2 : If T is a linear transformation from a vector space U into a vector space V,
then
(i) the annihilator of the range of T is equal to the null space of T ′ i. e.,
[ R (T )]0 = N (T ′ ) .
If in addition U and V are finite dimensional, then
(ii) ρ (T ′ ) = ρ (T )
and (iii) the range of T ′ is the annihilator of the null space of T i. e.,
R (T ′ ) = [N (T )]0 .

Proof : (i) If g ∈ V ′, then by definition of T ′ , we have


[α , T ′ g] = [Tα , g] V α ∈ U. …(1)
Let g ∈ N (T ′ ) which is a subspace of V ′ . Then

T ′ g = 0̂ where 0̂ is the zero element of U ′ i. e., 0̂ is the zero functional on U. Therefore
from (1), we get
[Tα , g] = [α , 0̂ ] V α ∈ U
⇒ [Tα , g] = 0 V α ∈ U [∵ 0̂ (α) = 0 V α ∈ U ]
⇒ g ( β) = 0 V β ∈ R (T )
[ ∵ R (T ) = {β ∈V : β = T (α) for some α ∈ U }]
⇒ g ∈ [ R (T )]0 .
∴ N (T ′ ) ⊆ [ R (T )]0 .
Now let g ∈ [ R (T )]0 which is a subspace of V ′ . Then
g ( β) = 0 V β ∈ R (T )
⇒ [Tα, g] = 0 V α ∈ U [∵ V α ∈ U, Tα ∈ R (T )]
⇒ [α , T ′ g] = 0 V α ∈ U [From (1)]

⇒ T ′ g =^
0 (zero functional on U ) ⇒ g ∈ N (T ′ ).
0
∴ [ R (T )] ⊆ N (T ′ ).
Hence [ R (T )]0 = N (T ′ ).
(ii) Suppose U and V are finite dimensional. Let dim U = n, dim V = m. Let
r = ρ (T ) = the dimension of R (T ).
Now R (T ) is a subspace of V. Therefore
dim R (T ) + dim [ R (T )]0 = dim V . [See Th. 2 of article 5.6]
0
∴ dim [ R (T )] = dim V − dim R (T )
= dim V − r = m − r.
By part (i) of this theorem [ R (T )]0 = N (T ′ ).
∴ dim N (T ′ ) = m − r ⇒ nullity of T ′ = m − r.
But T ′ is a linear transformation from V ′ into U ′ .
∴ ρ (T ′ ) + ν (T ′ ) = dim V ′
or ρ (T ′ ) = dim V ′ − ν (T ′ )
= dim V − nullity of T ′ = m − (m − r) = r.
∴ ρ (T ) = ρ (T ′ ) = r.
(iii) T ′ is a linear transformation from V ′ into U ′. Therefore R (T ′ ) is a subspace
of U ′. Also [N (T )]0 is a subspace of U ′ because N (T ) is a subspace of U. First
we shall show that R (T ′ ) ⊆ [N (T )]0 .

Let f ∈ R (T ′ ).
Then f = T ′ g for some g ∈ V ′ .
If α is any vector in N (T ), then Tα = 0.
We have
[α , f ] = [α , T ′ g] = [Tα , g] = [0, g] = 0.
Thus f (α) = 0 V α ∈ N (T ).
Therefore f ∈ [N (T )]0 .
∴ R (T ′ ) ⊆ [N (T )]0
⇒ R (T ′ ) is a subspace of [N (T )]0 .
Now dim N (T ) + dim [N (T )]0 = dim U. [Theorem 2 of article 5.6]
∴ dim [N (T )]0 = dim U − dim N (T )
= dim R (T ) [∵ dim U = dim R (T ) + dim N (T )]
= ρ (T ) = ρ (T ′ ) = dim R (T ′ ).
Thus dim R (T ′ ) = dim [N (T )]0
and R (T ′ ) ⊆ [N (T )]0 .
∴ R (T ′ ) = [N (T )]0 .
Note: If T is a linear transformation on a vector space V, then in the proof of the
above theorem we should replace U by V and m by n.
Theorem 3 : Let U and V be finite-dimensional vector spaces over the field F. Let B be an
ordered basis for U with dual basis B ′ , and let B1 be an ordered basis for V with dual basis
B1 ′. Let T be a linear transformation from U into V. Let A be the matrix of T relative to B, B1
and let C be the matrix of T ′ relative to B1 ′ , B ′. Then C = A′ i. e., the matrix C is the
transpose of the matrix A.
Proof : Let dim U = n, dim V = m .
Let B = {α1 ,... , α n }, B ′ = { f1 ,... , f n },
B1 = { β1 , ... , β m }, B1 ′ = { g1 , ... , g m }.
Now T is a linear transformation from U into V and T ′ is that from V ′ into U ′ .The
matrix A of T relative to B, B1 will be of the type m × n. If A = [a ij ] m × n , then by
definition
T (α_j) or simply Tα_j = Σ_{i=1}^{m} a_ij β_i , j = 1, 2, ..., n. …(1)
The matrix C of T ′ relative to B1 ′ , B′ will be of the type n × m. If C = [c ji ] n × m,
then by definition
T ′ ( g_i ) or simply T ′ g_i = Σ_{j=1}^{n} c_ji f_j , i = 1, 2, ... , m. …(2)

Now T ′ g i is an element of U ′ i. e., T ′ g i is a linear functional on U. If f is any


linear functional on U, then we know that

f = Σ_{j=1}^{n} f (α_j) f_j . [See theorem 3 of Chapter 5, article 5.4]

Applying this formula for T ′ g i in place of f , we get


T ′ g_i = Σ_{j=1}^{n} {(T ′ g_i ) (α_j)} f_j . …(3)

Now let us find (T ′ g i ) (α j ).


We have
(T ′ g i ) (α j ) = g i T (α j ) [by def. of T ′ ]
= g_i (Σ_{k=1}^{m} a_kj β_k) [From (1), replacing the suffix i by k which is immaterial]
= Σ_{k=1}^{m} a_kj g_i ( β_k ) [∵ g_i is linear]
= Σ_{k=1}^{m} a_kj δ_ik [∵ g_i ∈ B1 ′ which is the dual basis of B1 ]

= ai j
[On summing with respect to k and remembering
that δ ik = 1 when k = i and δ ik = 0 when k ≠ i]
Putting this value of (T ′ g i ) (α j ) in (3), we get
T ′ g_i = Σ_{j=1}^{n} a_ij f_j . …(4)

Since f1 , ... , f n are linearly independent, therefore from (2) and (4), we get
c ji = a ij .
Hence by definition of transpose of a matrix, we have
C = A′ .
Note: If T is a linear transformation on a finite-dimensional vector space V, then
in the above theorem we put U = V and m = n. Also according to our convention
we take B1 = B. The students should write the complete proof themselves.
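In coordinates the theorem says: if g is represented by its coordinate vector relative to B1 ′ , then the coordinate vector of T ′ g relative to B ′ is A′ (the transpose) times it. A small numerical illustration, assuming numpy, using the same data as Example 15 (i) below:

import numpy as np

# matrix of T : R^3 -> R^2, T(x, y, z) = (x - y, y + z), in the standard bases
A = np.array([[1., -1., 0.],
              [0., 1., 1.]])

g = np.array([2., -5.])      # coordinates of the functional g(x, y) = 2x - 5y

# (T'g)(e_j) = g(T e_j) = g . (j-th column of A): these are the coordinates of T'g
Tg = np.array([g @ A[:, j] for j in range(3)])
print(Tg)                                  # [ 2. -7. -5.], i.e. 2x - 7y - 5z
print(np.allclose(Tg, A.T @ g))            # True: the matrix of T' is the transpose of A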
Theorem 4 : Let A be any m × n matrix over the field F. Then the row rank of A is equal to
the column rank of A.
Proof : Let A = [ a ij ] m × n . Let

B = {α1 , ... , α n } and B1 = { β1 , ... , β m }


be the standard ordered bases for Vn ( F ) and Vm ( F ) respectively. Let T be the
linear transformation from Vn ( F ) into Vm ( F ) whose matrix is A relative to
ordered bases B and B1 . Then obviously the vectors T (α1 ), ... , T (α n ) are nothing
but the column vectors of the matrix A. Also these vectors span the range of T
because α1 ,..., α n form a basis for the domain of T i. e., Vn ( F ).
∴ the range of T = the column space of A

⇒ the dimension of the range of T = the dimension of the column space of A


⇒ ρ (T ) = the column rank of A. …(1)
If T ′ is the adjoint of the linear transformation T, then the matrix of T ′ relative to
the dual bases B1 ′ and B ′ is the matrix A' which is the transpose of the matrix A.
The columns of the matrix A′ are nothing but the rows of the matrix A. By the
same reasoning as given in proving the result (1), we have
ρ (T ′ ) = the column rank of A'
= the row rank of A …(2)
Since ρ (T ) = ρ (T ′ ), therefore from (1) and (2), we get the result that
the column rank of A = the row rank of A.
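A minimal numerical check of Theorem 4 (assuming numpy; the matrix is arbitrary):

import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

print(np.linalg.matrix_rank(A))      # 2
print(np.linalg.matrix_rank(A.T))    # 2: row rank of A = column rank of A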
Theorem 5 : Prove the following properties of adjoints of linear operators on a vector
space V ( F ) :
(i) 0̂ ′ = 0̂ ;
(ii) I ′ = I ;
(iii) (T1 + T2 )′ = T1 ′ + T2 ′ ;
(iv) (T1 T2 )′ = T2 ′ T1 ′ ;
(v) (aT )′ = aT ′ where a ∈ F ;
(vi) (T^{−1})′ = (T ′ )^{−1} if T is invertible ;
(vii) (T ′ )′ = T ′ ′ = T if V is finite-dimensional.

Proof : (i) If 0̂ is the zero transformation on V, then by the definition of the
adjoint of a linear transformation, we have
[α , 0̂ ′ g] = [ 0̂ α , g] for every g in V ′ and α in V
= [0, g] for every g in V ′ [∵ 0̂ (α) = 0 V α ∈ V ]
= 0 [∵ g (0) = 0]
= [α , 0̂ ] V α ∈ V [Here 0̂ ∈ V ′ and 0̂ (α) = 0]
= [α , 0̂ g] V g ∈ V ′ and α ∈ V [Here 0̂ is the zero transformation on V ′ ]
Thus we have
[α , 0̂ ′ g] = [α , 0̂ g] for all g in V ′ and α in V.
∴ 0̂ ′ = 0̂ .
(ii) If I is the identity transformation on V, then by the definition of the adjoint of
a linear transformation, we have
[α , I ′ g] = [Iα, g] for every g in V ′ and α in V
= [α , g] for every g in V ′ and α in V [∵ I (α) = α V α ∈ V ]

= [α , Ig] for every g in V ′ and α in V


[Here I is the identity operator on V' ]
∴ I ′ = I.
(iii) If T1 , T2 are linear operators on V, then T1 + T2 is also a linear operator on V.
By the definition of adjoint, we have
[α , (T1 + T2 )′ g] = [(T1 + T2 ) α , g] for every g in V ′ and α in V
= [T1 α + T2 α , g]
[by def. of addition of linear transformations]
= [T1 α , g] + [T2 α , g] [by linearity property of g]
= [α , T1 ′ g] + [α , T2 ′ g] [by def. of adjoint]
= [α , T1 ′ g + T2 ′ g] [by def. of addition of linear
functionals. Note that T1 ′ g, T2 ′ g are
elements of V ′ ]
= [α , (T1 ′ + T2 ′ ) g].
Thus we have
[α , (T1 + T2 )′ g] = [α , (T1 ′ + T2 ′ ) g] for every g in V ′ and α in V.
∴ (T1 + T2 )′ g = (T1 ′ + T2 ′ ) g V g ∈ V ′.
∴ (T1 + T2 )′ = T1 ′ + T2 ′ .
(iv) If T1 , T2 are linear operators on V, then T1 T2 is also a linear operator on V. By
the definition of adjoint, we have
[α, (T1 T2 )′ g] = [(T1 T2 ) α , g] for every g in V ′ and α in V
= [(T1 ) T2 α, g]
[by def. of product of linear transformations]
= [T2 α , T1 ′ g] [by def. of adjoint]
= [α , T2 ′ T1 ′ g] [by def. of adjoint]
Thus we have
[α , (T1 T2 )′ g] = [α , T2 ′ T1 ′ g] for every g in V ′ and α in V.
∴ (T1 T2 )′ = T2 ′ T1 ′ .
Note: This is called the reversal law for the adjoint of the product of two linear
transformations.
(v) If T is a linear operator on V and a ∈ F, then aT is also a linear operator on V. By
the definition of the adjoint, we have
[α , (aT )′ g] = [(aT ) α , g] for every g in V ′ and α in V
= [a (Tα), g] [by def. of scalar multiplication of a
linear transformation]
= a [Tα , g] [∵ g is linear]
= a [α, T ′ g] [by def. of adjoint]
= [α, a (T ′ g)] [by def. of scalar multiplication in V ′ .

Note that T ′ g ∈ V ′ ]
= [α , (aT ′ ) g]
[by def. of scalar multiplication of T ′ by a]
∴ (aT )′ = aT ′ .
(vi) Suppose T is an invertible linear operator on V. If T^{−1} is the inverse of T,
we have T^{−1} T = I = T T^{−1}
⇒ (T^{−1} T )′ = I ′ = (T T^{−1})′
⇒ T ′ (T^{−1})′ = I = (T^{−1})′ T ′ [Using results (ii) and (iv)]
∴ T ′ is invertible and (T ′ )^{−1} = (T^{−1})′ .
(vii) V is a finite dimensional vector space. T is a linear operator on V , T ′ is a linear
operator on V ′ and (T ′ )′ or T ′ ′ is a linear operator on V ′ ′ . We have identified
V ′ ′ with V through natural isomorphism α ↔ Lα where α ∈ V and Lα ∈ V ′ ′ .
Here Lα is a linear functional on V ′ and is such that
Lα ( g) = g (α) ∀ g ∈ V ′ . …(1)
Through this natural isomorphism we shall take α = Lα and thus T ′ ′ will be
regarded as a linear operator on V.
Now T ′ is a linear operator on V ′ .Therefore by the definition of adjoint, we have
[ g, T ′ ′ Lα ] = [ g, (T ′ )′ Lα ] = [T ′ g, Lα ] for every g ∈ V ′ and α ∈ V.
Now T ′ g is an element of V ′ . Therefore from (1), we have
[T ′ g, Lα ] = [α , T ′ g]
[Note that from (1), Lα ( T ′ g ) = ( T ′ g ) α]
= [Tα , g]. [by def. of adjoint]
Again T ′ ′ Lα is an element of V ′ ′ . Therefore from (1), we have
[ g, T ′ ′ Lα ] = [β , g] where β ∈ V and β ↔ T ′ ′ Lα
under natural isomorphism
= [T ′ ′ α , g] [∵ β = T ′ ′ Lα = T ′ ′ α when we regard
T ′ ′ as linear operator on V in place of V ′ ′ ]
Thus, we have
[Tα , g] = [T ′ ′ α , g] for every g in V ′ and α in V
⇒ g (Tα) = g (T ′ ′ α) for every g in V ′ and α in V
⇒ g (Tα − T ′ ′ α) = 0 for every g in V ′ and α in V
⇒ Tα − T ′ ′ α = 0 for every α in V
⇒ (T − T ′ ′ ) α = 0 for every α in V
^
⇒ T − T ′′ = 0

⇒ T = T ′ ′.

Example 15 : Let f be the linear functional on R2 defined by f ( x, y) = 2x − 5y. For


each linear mapping
T : R 3 → R 2 find [T ′ ( f )] ( x, y, z ), where
(i) T ( x, y, z ) = ( x − y, y + z ),
(ii) T ( x, y, z ) = ( x + y + 2z , 2 x + y),
(iii) T ( x, y, z ) = ( x + y, 0).
Solution : By definition of the transpose mapping,
T ′ ( f ) = f o T i. e., [T ′ ( f )] α = f [T (α)] for every α ∈ R3 .
(i) [T ′ ( f )] ( x, y, z ) = f [T ( x, y, z )]
= f ( x − y, y + z ) = 2 ( x − y) − 5 ( y + z ) = 2 x − 7 y − 5z .
(ii) [T ′ ( f )] ( x, y, z ) = f [T ( x, y, z )]
= f ( x + y + 2z , 2 x + y)
= 2 ( x + y + 2z ) − 5 (2 x + y) = − 8 x − 3 y + 4z .
(iii) [T ′ ( f )] ( x, y, z ) = f [T ( x, y, z )]
= f ( x + y, 0) = 2 ( x + y) − 5 . 0 = 2 x + 2 y.
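These compositions can also be expanded symbolically; a sketch assuming sympy, shown for part (i):

from sympy import symbols, expand

x, y, z = symbols('x y z')

f = lambda u, v: 2*u - 5*v            # f(x, y) = 2x - 5y
T = lambda x, y, z: (x - y, y + z)    # the map of part (i)

print(expand(f(*T(x, y, z))))         # 2*x - 7*y - 5*z, as found above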
Example 16 : If A and B are similar linear transformations on a vector space V, then so
also are A′ and B′ .
Solution : A is similar to B means that there exists an invertible linear
transformation C on V such that
A = C B C^{−1}
⇒ A′ = (C B C^{−1})′
⇒ A′ = (C^{−1})′ B ′ C ′ .
Now C is invertible implies that C ′ is also invertible and
(C ′ )^{−1} = (C^{−1})′ .
∴ A′ = (C ′ )^{−1} B ′ C ′ ⇒ C ′ A′ (C ′ )^{−1} = B ′
[Multiplying on right by (C ′ )^{−1} and on left by C ′ ]
⇒ B ′ is similar to A′
⇒ A′ and B ′ are similar.

Example 17 : Let V be a finite dimensional vector space over the field F. Show that
T → T ′ is an isomorphism of L (V , V ) onto L (V ′ , V ′ ).
Solution : Let dim V = n.
Then dim V ′ = n.

Also dim L (V , V ) = n2 , dim L (V ′ , V ′ ) = n2 .


Let ψ : L (V , V ) → L (V ′ , V ′ ) such that
ψ (T ) = T ′ V T ∈ L (V , V ).
(i) ψ is linear transformation:
Let a, b ∈ F and T1 , T2 ∈ L (V , V ). Then
ψ (aT1 + bT2 ) = (aT1 + bT2 )′ [by def. of ψ]
= (aT1 )′ + (bT2 )′ [ ∵ ( A + B )′ = A′ + B′ ]
= aT1 ′ + bT2 ′ [∵ (aA)′ = aA′ ]
= aψ (T1 ) + bψ (T2 ) [by def. of ψ]
∴ ψ is a linear transformation from L (V , V ) into L (V ′ , V ′ ).
(ii) ψ is one-one:
Let T1 , T2 ∈ L (V , V ).
Then ψ (T1 ) = ψ (T2 )
⇒ T1 ′ = T2 ′
⇒ T1 ′ ′ = T2 ′ ′
⇒ T1 = T2 [ ∵ V is finite-dimensional]
∴ ψ is one-one.
(iii) ψ is onto:
We have dim L (V , V ) = dim L (V ′ , V ′ ) = n2 .
Since ψ is a linear transformation from L (V , V ) into L (V ′ , V ′ ) therefore ψ is
one-one implies that ψ must be onto.
Hence ψ is an isomorphism of L (V , V ) onto L (V ′ , V ′ ).
Example 18 : If A and B are linear transformations on an n-dimensional vector space
V, then prove that
(i) ρ ( AB) ≥ ρ ( A) + ρ ( B) − n.
(ii) ν ( AB) ≤ ν ( A) + ν ( B). (Sylvester’s law of nullity)
Solution : (i) First we shall prove, that if T is a linear transformation on V and W1
is an h-dimensional subspace of V, then the dimension of T (W1 ) is ≥ h − ν (T ).
Since V is finite-dimensional, therefore the subspace W1 will possess complement.
Let V = W1 ⊕ W2 . Then
dim W2 = n − h = k (say).
Since V = W1 + W2 , therefore
T (V ) = T (W1 ) + T (W2 ), as can be easily seen.
∴ dim T (V ) = dim [T (W1 ) + T (W2 )]
≤ dim T (W1 ) + dim T (W2 )
[∵ the dimension of a sum is ≤ the sum of the dimensions]

But T (V ) = the range of T.


∴ dim T (V ) = ρ (T ).
Thus dim T (W1 ) + dim T (W2 ) ≥ ρ (T ). …(1)
Now T (W2 ) is a subspace of W2 . Therefore
dim W2 ≥ dim T (W2 ). …(2)
From (1) and (2), we get
dim T (W1 ) + dim W2 ≥ ρ (T )
⇒ dim T (W1 ) ≥ ρ (T ) − dim W2
⇒ dim T (W1 ) ≥ n − ν (T ) − k [∵ ρ (T ) + ν (T ) = n]
⇒ dim T (W1 ) ≥ n − k − ν (T )
⇒ dim T (W1 ) ≥ h − ν (T ). …(3)
Now taking T = A and W1 = B (V ) in (3), we get
dim A [ B (V )] ≥ dim B (V ) − ν ( A)
⇒ dim ( AB) (V ) ≥ ρ ( B ) − ν ( A) [∵ B (V ) = the range of B ]
⇒ ρ ( AB) ≥ ρ ( B ) − [n − ρ ( A)]
⇒ ρ ( AB ) ≥ ρ ( A) + ρ ( B ) − n.
(ii) We have ρ ( AB ) + ν ( AB ) = n.
∴ ρ ( AB ) = n − ν ( AB ).
But ρ ( AB ) ≥ ρ ( A) + ρ ( B ) − n.
∴ n − ν ( AB ) ≥ ρ ( A) + ρ ( B ) − n
⇒ ν ( AB ) ≤ [n − ρ ( A)] + [n − ρ ( B )]
⇒ ν ( AB ) ≤ ν ( A) + ν ( B ) [∵ ρ ( A) + ν ( A) = n]
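Both inequalities of Example 18 are easy to test on concrete matrices; a small check with n = 3, assuming numpy (the matrices are arbitrary illustrations):

import numpy as np

n = 3
A = np.diag([1., 0., 0.])
B = np.diag([0., 1., 0.])

rank = np.linalg.matrix_rank
nullity = lambda M: n - rank(M)

print(rank(A @ B) >= rank(A) + rank(B) - n)        # True:  0 >= 1 + 1 - 3
print(nullity(A @ B) <= nullity(A) + nullity(B))   # True:  3 <= 2 + 2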

Comprehensive Exercise 2

1. Let V be a finite dimensional vector space over the field F. If W1 and W2 are
subspaces of V, then W10 = W20 iff W1 = W2 .
2. If W1 and W2 are subspaces of a finite-dimensional vector space V and if
V = W1 ⊕ W2 , then
(i) W1 ′ is isomorphic to W20 . (ii) W2 ′ is isomorphic to W10 .
3. Let W be the subspace of R 3 spanned by (1, 1, 0) and (0, 1, 1). Find a basis of the
annihilator of W.
annihilator of W.
4. Let W be the subspace of R 4 spanned by (1, 2, − 3, 4), (1, 3, − 2, 6) and
(1, 4, − 1, 8). Find a basis of the annihilator of W.

5. If the set S = {Wi } is the collection of subspaces of a vector space V which are
invariant under T, then show that W = ∩i Wi is also invariant under T.

6. Prove that the subspace spanned by two subspaces each of which is invariant
under some linear operator T, is itself invariant under T.
7. Let V be a vector space over the field F, and let T be a linear operator on V and
let f (t) be a polynomial in the indeterminate t over the field F. If W is the null
space of the operator f (T ), then W is invariant under T.
8. Let T be the linear operator on R2 , the matrix of which in the standard
ordered basis is
 1 −1
A=  ⋅
2 2 
(a) Prove that the only subspaces of R2 invariant under T are R2 and the
zero subspace.
(b) If U is the linear operator on C 2 , the matrix of which in the standard
ordered basis is A, show that U has one dimensional invariant
subspaces.
9. Let T be the linear operator on R2 , the matrix of which in the standard
ordered basis is
2 1
0 ⋅
 2 

If W1 is the subspace of R2 , spanned by the vector (1, 0), prove that W1 is


invariant under T.
10. Show that the space generated by (1, 1, 1) and (1, 2, 1) is an invariant subspace of
R3 under T, where T ( x, y, z ) = ( x + y − z , x + y, x + y − z ).

11. If A and B are linear transformations on a finite-dimensional vector space V,


then prove that
(i) ρ ( A + B ) ≤ ρ ( A) + ρ ( B )
(ii) ρ ( AB ) ≤ min { ρ ( A), ρ ( B)}.
(iii) If B is invertible, then ρ ( AB ) = ρ ( BA) = ρ ( A).

Answers 2
3. Basis is { f ( x, y, z ) = x − y + z }
4. Basis is { f1 , f 2 } where f1 ( x, y, z , w) = 5 x − y + z , f 2 ( x, y, z , w) = 2 y − w

Objective Type Questions

Multiple Choice Questions


Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
1. Let V be an n-dimensional vector space over the field F. The dimension of
the dual space of V is
(a) n (b) n^2
(c) (1/2) n (d) none of these
2. If the dual basis of the basis set B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} for V3 (R) is
B ′ = { f1 , f 2 , f 3 }, then
(a) f1 (a, b, c ) = a, f 2 (a, b, c ) = b, f 3 (a, b, c ) = c
(b) f1 (a, b, c ) = b, f 2 (a, b, c ) = c , f 3 (a, b, c ) = a
(c) f1 (a, b, c ) = c , f 2 (a, b, c ) = a, f 3 (a, b, c ) = b
(d) None of these.
3. If V ( F) is a vector space and f be a linear functional from V → F, then
(a) f (0) = 0 (b) f (0) ≠ 0
(c) f (0) = 0 (d) f (0) ≠ 0

4. If V ( F) is a vector space and f is a linear functional from V → F, then


(a) f (− x) = f ( x) (b) f (− x) = − f ( x)
(c) f (− x) ≠ − f ( x) (d) f (− x) ≠ f ( x)

5. Let V be a vector space over a field F. A linear functional on V is a linear


mapping from
(a) F into V (b) V into F
(c) V into itself (d) none of these

6. If S is any subset of a vector space V ( F ), then S 0 is a subspace of
(a) V (b) V ′
(c) V ′ ′ (d) None of these.

7. Let V be a finite dimensional vector space over the field F. If S is any subset of
V, then S 00 =
(a) S (b) L (S )
0
(c) [ L (S )] (d) None of these.

Fill in the Blank(s)


Fill in the blanks ‘‘……’’ so that the following statements are complete and
correct.
1. Let V be a vector space over the field F. The dual space of V is also called the
…… space of V.
2. If V is an n-dimensional vector space over the field F, then V is …… to its dual
space V ′ .
3. Let V be a finite dimensional vector space over the field F, and let W be a
subspace of V. Then
dim W + dim W 0 = …… .
4. Let V be a finite dimensional vector space over the field F and let W be a
subspace of V. Then W 00 = …… .

True or False
Write ‘T’ for true and ‘F’ for false statement.
1. Let V ( F ) be a vector space. A linear functional on V is a vector valued
function.
2. Every finite dimensional vector space is not reflexive.
3. If V ( F ) is a vector space and f : V → F is the mapping defined by f (α) = 0 V α ∈ V, then f is a
linear functional.
4. If V is finite-dimensional and W is a subspace of V then W′ is isomorphic to
V ′ /W 0 .
5. If T is a linear transformation from a vector space U into a vector space V then
ρ (T ′ ) ≠ ρ (T ).

Answers

Multiple Choice Questions


1. (a) 2. (a) 3. (a) 4. (b) 5. (b)
6. (b) 7. (b)

Fill in the Blank(s)


1. conjugate 2. isomorphic
3. dim V 4. W

True or False
1. F 2. F 3. T 4. T 5. F


6
Characteristic Values and
Annihilating Polynomials

6.1 Characteristic Values and Characteristic Vectors (or Eigen Values and Eigen Vectors)
Throughout this discussion T will be regarded as a linear operator on a finite
dimensional vector space.

Definition: Let T be a linear operator on an n-dimensional vector space V over the field F.
Then a scalar c ∈ F is called a characteristic value of T if there is a non-zero vector α
in V such that Tα = cα. Also if c is a characteristic value of T, then any non-zero vector α in V
such that Tα = cα is called a characteristic vector of T belonging to the
characteristic value c.

Characteristic values are sometimes also called proper values, eigen values, or
spectral values. Similarly characteristic vectors are called proper vectors, eigen
vectors, or spectral vectors.

The set of all characteristic values of T is called the spectrum of T.



Theorem 1: If α is a characteristic vector of T corresponding to the characteristic value c,


then kα is also a characteristic vector of T corresponding to the same characteristic value c.
Here k is any non-zero scalar.
Proof: Since α is a characteristic vector of T corresponding to the characteristic
value c, therefore α ≠ 0 and
T (α) = cα. …(1)
If k is any non-zero scalar, then kα ≠ 0.
Also T (kα) = kT (α) = k (cα) = (kc ) α = (ck) α = c (kα).
∴ kα is a characteristic vector of T corresponding to the characteristic value c.
Thus corresponding to a characteristic value c, there may correspond more than
one characteristic vectors.
Theorem 2: If α is a characteristic vector of T, then α cannot correspond to more than one
characteristic values of T.
Proof: Let α be a characteristic vector of T corresponding to two distinct
characteristic values c1 and c 2 of T. Then
Tα = c1 α and Tα = c 2 α
∴ c1 α = c 2 α ⇒ (c1 − c 2 ) α = 0 ⇒ c1 − c 2 = 0 [ ∵ α ≠ 0]
⇒ c1 = c 2 .
Theorem 3: Let T be a linear operator on a finite dimensional vector space V and let c be a
characteristic value of T. Then the set Wc = {α ∈ V : Tα = cα } is a subspace of V.
Proof:Let α , β ∈ Wc . Then Tα = cα , and T β = cβ.
If a , b ∈ F, then
T (aα + bβ) = aTα + bT β = a (cα) + b (cβ) = c (aα + bβ).
∴ aα + bβ ∈ Wc .
Therefore Wc is a subspace of V.
Note: The set Wc is nothing but the set of all characteristic vectors of T
corresponding to the characteristic value c provided we include the zero vector in
this set. In other words Wc is the null space of the linear operator T − cI . The
subspace Wc of V is called the characteristic space of the characteristic value c
of the linear operator T. It is also called the space of characteristic vectors of T
associated with the characteristic value c.
Theorem 4: Distinct characteristic vectors of T corresponding to distinct characteristic
values of T are linearly independent.
Proof: Let c1 , c 2 , … , c m be m distinct characteristic values of T and let
α1 , α 2 , … , α m be the characteristic vectors of T corresponding to these
characteristic values respectively. Then
Tα i = c i α i where 1≤ i ≤ m.

Let S = {α1 , … , α m}.


Then to prove that the set S is linearly independent. We shall prove the
theorem by induction on m, the number of vectors in S.
If m = 1, then S is linearly independent because S contains only one non-zero
vector. Note that a characteristic vector cannot be 0 by our definition.
Now suppose that the set
S1 = {α1 , … , α k }, where k < m,
is linearly independent.
Consider the set S2 = {α1 , … , α k , α k + 1}.
We shall show that S2 is linearly independent.
Let a1 , … , a k + 1 ∈ F and let
a1 α1 + … + a k + 1 α k + 1 = 0 …(1)
⇒ T (a1 α1 + … + a k + 1 α k + 1 ) = T (0)
⇒ a1 T (α1 ) + … + a k + 1 T (α k + 1 ) = 0
⇒ a1 (c1 α1 ) + … + a k + 1 (c k + 1 α k + 1 ) = 0 …(2)
Multiplying (1) by the scalar c k + 1 and subtracting from (2), we get
a1 (c1 − c k + 1 ) α1 + … + a k (c k − c k + 1 ) α k = 0.
∴ a1 = 0, … , a k = 0 since α1 , … , α k are linearly independent according to our
assumption and c1 , … , c k + 1 are all distinct.
Putting each of a1 , … , a k equal to 0 in (1), we get
ak + 1 α k + 1 = 0 ⇒ a k + 1 = 0 since α k + 1 ≠ 0.
Thus the relation (1) implies that
a1 = 0, … , a k = 0, a k + 1 = 0.
∴ the set S2 is linearly independent.
Now the proof is complete by induction.
Corollary: If T is a linear operator on an n-dimensional vector space V, then T cannot
have more than n distinct characteristic values.
Proof: Suppose T has more than n distinct characteristic values. Then the
corresponding set of distinct characteristic vectors of T will be linearly
independent. Thus we shall have a linearly independent subset of V containing
more than n vectors which is not possible because V is of dimension n. Hence T
cannot have more than n distinct characteristic values.
Theorem 5: Let T be a linear operator on a finite-dimensional vector space V. Then the
following are equivalent.
(i) c is a characteristic value of T.
(ii) the operator T − cI is singular (not invertible).
(iii) det ( T − cI ) = 0.
Proof: (i) ⇒ (ii).

c is a characteristic value of T implies that there exists a non-zero vector α in V such


that
Tα = c α
or Tα = cIα where I is the identity operator on V
or Tα = (cI ) α or (T − cI ) α = 0.
Thus (T − cI ) α = 0 while α ≠ 0. Therefore the operator T − cI is singular and thus
T − cI is not invertible.
(ii) ⇒ (iii).
If the operator T − cI is singular, then it is not invertible.
Therefore det (T − cI ) = 0.
(iii) ⇒ (i).
If det (T − cI ) = 0, then T − cI is not invertible.
If T − cI is not invertible, then T − cI is singular because every non-singular
operator on a finite-dimensional vector space is invertible. Now T − cI is singular
means that there is a non-zero vector α in V such that
(T − cI ) α = 0 or Tα − cI α = 0 or Tα = cα.
∴ c is a characteristic value of T.
This completes the proof of the theorem.
Let T be a linear operator on an n-dimensional vector space V. Let B be an ordered
basis for V and let A be the matrix of T with respect to B i. e., let A = [T ] B . If c is
any scalar, we have
[ T − cI ] B = [T ] B − c [I ] B
= A − cI where I is the unit matrix of order n.
[Note that [I ] B = I ].
We have det (T − cI ) = det [T − cI ] B = det ( A − cI ).
Therefore c is a characteristic value of T iff det ( A − cI ) = 0. This enables us to make
the following definition.

6.2 Characteristic Values of a Matrix


Definition: Let A = [a ij ] n × n be a square matrix of order n over the field F. An element c in
F is called a characteristic value of A if
det ( A − cI ) = 0 where I is the unit matrix of order n.
Now suppose T is a linear operator on an n-dimensional vector space V and A is the
matrix of T with respect to any ordered basis B. Then c is a characteristic value of T
iff c is a characteristic value of the matrix A. Therefore our definition of
characteristic values of a matrix is sensible.

6.3 Characteristic Equation of a Matrix


Definition: Let A be a square matrix of order n over the field F. Consider the
matrix A − xI . The elements of this matrix are polynomials in x of degree at most 1.
If we evaluate det ( A − xI ), then it will be a polynomial in x of degree n. The
coefficient of x^n in this polynomial will be (− 1)^n . Let us denote this polynomial by
f ( x).
Then f ( x) = det ( A − xI ) is called the characteristic polynomial of the matrix A.
The equation f ( x) = 0 is called the characteristic equation of the matrix A. Now
c is a characteristic value of the matrix A iff det ( A − cI ) = 0 i. e., iff f (c ) = 0 i. e., iff c is
a root of the characteristic equation of A. Thus in order to find the characteristic
values of a matrix we should first obtain its characteristic equation and then we
should find the roots of this equation.
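For readers who like to check such computations on a computer, the following short Python sketch (our addition, not part of the original text; it assumes the numpy library, and the matrix is chosen arbitrarily) finds the characteristic equation and its roots. Note that numpy's poly function returns the coefficients of det ( xI − A ), which differs from det ( A − xI ) only by the factor (− 1)^n and so has the same roots.

    import numpy as np

    A = np.array([[5.0, 4.0],
                  [1.0, 2.0]])      # an arbitrary matrix chosen only for illustration
    coeffs = np.poly(A)             # coefficients of x^2 - 7x + 6
    roots = np.roots(coeffs)        # characteristic values: 6 and 1
    print(coeffs, roots)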

6.4 Characteristic Vector of a Matrix


Definition: If c is a characteristic value of an n × n matrix A, then a non-zero matrix X of
the type n × 1 such that AX = cX is called a characteristic vector of A corresponding to the
characteristic value c.
Theorem 6:Let T be a linear operator on an n-dimensional vector space V and A be the
matrix of T relative to any ordered basis B. Then a vector α in V is an eigenvector of T
corresponding to its eigenvalue c if and only if its coordinate vector X relative to the basis B is
an eigen-vector of A corresponding to its eigenvalue c.
Proof: We have
[T − cI ] B = [T ] B − c [I ] B = A − cI .
If α ≠ 0, then the coordinate vector X of α is also non-zero.
Now [(T − cI ) (α)] B = [T − cI ] B [α] B [See theorem 2 of chap. 4, article 4.2]
= ( A − cI ) X .
∴ (T − cI ) (α) = 0 iff ( A − cI ) X = O
or T (α) = cα iff AX = cX
or α is an eigenvector of T iff X is an eigenvector of A.
Thus with the help of this theorem we see that our definition of characteristic
vector of a matrix is sensible. Now we shall define the characteristic polynomial of a
linear operator. Before doing so we shall prove the following theorem.
Theorem 7: Similar matrices A and B have the same characteristic polynomial and
hence the same eigenvalues. If X is an eigenvector of A corresponding to the eigenvalue c, then
P −1 X is an eigenvector of B corresponding to the eigenvalue c where
B = P −1 AP.

Proof: Suppose A and B are similar matrices. Then there exists an invertible
matrix P such that
B = P −1 AP.
We have B − xI = P −1 AP − xI = P −1 AP − P −1 ( xI ) P
[∵ P −1 ( xI ) P = xP −1 IP = xI ]
= P −1 ( A − xI ) P.
∴ det ( B − xI ) = det P −1det ( A − xI ) det P
= det P −1 . det P . det ( A − xI ) = det ( P −1 P) . det ( A − xI )
= det I . det ( A − xI ) = 1 . det ( A − xI ) = det ( A − xI ).
Thus the matrices A and B have the same characteristic polynomial and
consequently they will have the same characteristic values.
If c is an eigenvalue of A and X is a corresponding eigenvector, then AX = cX , and
hence
B ( P −1 X ) = ( P −1 AP) P −1 X = P −1 AX = P −1 (cX ) = c ( P −1 X ).
∴ P −1 X is an eigenvector of B corresponding to c. This completes the proof of the
theorem.
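The theorem can also be illustrated numerically; the sketch below (our addition, assuming numpy; A and P are arbitrary, with P invertible) checks that A and P^(−1) AP have the same characteristic values.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    P = np.array([[1.0, 1.0],
                  [1.0, 2.0]])      # invertible, det P = 1
    B = np.linalg.inv(P) @ A @ P    # B is similar to A
    print(np.sort(np.linalg.eigvals(A)))   # 2 and 3
    print(np.sort(np.linalg.eigvals(B)))   # again 2 and 3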
Now suppose that T is a linear operator on an n-dimensional vector space V. If
B1 , B2 are any two ordered bases for V, then we know that the matrices [T ] B1 and
[T ] B2 are similar. Also similar matrices have the same characteristic polynomial.
This enables us to define sensibly the characteristic polynomial of T.

6.5 Characteristic Polynomial of a Linear Operator


Definition: Let T be a linear operator on an n-dimensional vector space V. The
characteristic polynomial of T is the characteristic polynomial of any n × n matrix which
represents T in some ordered basis for V. On account of the above discussion the
characteristic polynomial of T as defined by us will be unique.
If B is any ordered basis for V and A is the matrix of T with respect to B then
det (T − xI ) = det [T − xI ] B = det ([T ] B − x [I ] B ) = det ( A − xI )
= the characteristic polynomial of A and so also that of T.
∴ the characteristic polynomial of T = det (T − xI ).
The equation det (T − xI ) = 0 is called the characteristic equation of T.

6.6 Existence of Characteristic Values


Let T be a linear operator on an n-dimensional vector space V over the field F. Then
c belonging to F will be a characteristic value of T iff c is a root of the characteristic
equation of T i. e., iff c is a root of the equation

det (T − xI ) = 0. …(1)
The equation (1) is of degree n in x. If the field F is algebraically closed i. e., if every
polynomial equation over F possesses a root in F, then T will definitely have at least one
characteristic value. If the field F is not algebraically closed, then T may or may not
have a characteristic value according as the equation (1) has or has not a root in F.
Since the equation (1) is of degree n in x, therefore if T has a characteristic value
then it cannot have more than n distinct characteristic values. The field of complex
numbers is algebraically closed. By fundamental theorem of algebra we know that
every polynomial equation over the field of complex numbers is solvable.
Therefore if F is the field of complex numbers then T will definitely have at least
one characteristic value. The field of real numbers is not algebraically closed. If F is
the field of real numbers, then T may or may not have a characteristic value.
Illustration: Consider the linear operator T on V2 (R) which is represented in the
standard ordered basis by the matrix
0 −1
A=  ⋅
1 0 
The characteristic polynomial for T (or for A) is
0− x −1  − x − 1 2
det ( A − xI ) = det  =  = x + 1.
 1 0 − x   1 − x
The polynomial equation x 2 + 1 = 0 has no roots in R . Therefore T has no
characteristic values.
However if T is a linear operator on V2 (C), then the characteristic equation of T
has two distinct roots i and − i in C. In this case T has two characteristic values i and
− i.
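The same illustration can be verified with a numerical package; the sketch below (our addition, assuming numpy, which always works over C) returns the two complex characteristic values i and − i, confirming that neither of them lies in R.

    import numpy as np

    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    print(np.linalg.eigvals(A))     # approximately [0.+1.j, 0.-1.j]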

6.7 Algebraic and Geometric Multiplicity of a Characteristic Value

Definition: Let T be a linear operator on an n-dimensional vector space V and let c be a
characteristic value of T. By geometric multiplicity of c we mean the dimension of the
characteristic space Wc of c. By algebraic multiplicity of c we mean the multiplicity of c as
root of the characteristic equation of T.
Method of finding the characteristic values and the corresponding
characteristic vectors of a linear operator T: Let T be a linear operator on an
n-dimensional vector space V over the field F. Let B be any ordered basis for V and
let A = [T ] B . The roots of the equation det ( A − xI ) = 0 will give the characteristic
values of A or also of T. Let c be a characteristic value of T. Then 0 ≠ α will be a
characteristic vector corresponding to this characteristic value if
(T − cI ) α = 0 i. e., if [T − cI ] B [α] B = [0] B

i. e., if ( A − cI ) X = O, …(1)
where X = [α] B = a column matrix of the type n × 1and O is the null matrix of the
type n × 1.Thus to find the coordinate matrix of α with respect to B, we should solve
the matrix equation (1) for X.
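The procedure just described is easy to carry out for small matrices; the following sketch (our addition, assuming the sympy library; the matrix is an arbitrary example, not one from the text) finds the characteristic vectors for one characteristic value by computing the null space of A − cI.

    from sympy import Matrix, eye

    A = Matrix([[2, 1],
                [1, 2]])            # characteristic values are 1 and 3
    c = 3
    basis = (A - c * eye(2)).nullspace()   # solutions of (A - cI) X = 0
    print(basis)                    # [Matrix([[1], [1]])]: the vectors k (1, 1), k != 0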

6.8 Matric Polynomials


Definition: An expression of the form
f ( x) = A0 + A1 x + A2 x 2 + … + Am x m ,

where A0 , A1 , A2 , … , Am are all square matrices of order n, is called a Matric polynomial of


degree m provided Am is not a null matrix. The symbol x is called an indeterminate.

Equality of Matric Polynomials: Two matric polynomials are equal iff the
coefficients of the like powers of x are the same.
Lemma: Every square matrix over the field F whose elements are ordinary polynomials in x
over F, can essentially be expressed as a matric polynomial in x of degree m, where m is the
highest power of x occurring in any element of the matrix.
We shall explain this theorem by the following illustration:
Consider the matrix
A = [ 1 + 2x + 3x^2    x^2         4 − 6x ;
      1 + x^3          3 + 4x^2    1 − 2x + 4x^3 ;
      2 − 3x + 2x^3    5           6 ]

in which the highest power of x occurring in any element is 3. Rewriting each


element as a cubic in x, supplying missing coefficients with zeros, we get
A = [ 1 + 2x + 3x^2 + 0x^3    0 + 0x + 1x^2 + 0x^3    4 − 6x + 0x^2 + 0x^3 ;
      1 + 0x + 0x^2 + 1x^3    3 + 0x + 4x^2 + 0x^3    1 − 2x + 0x^2 + 4x^3 ;
      2 − 3x + 0x^2 + 2x^3    5 + 0x + 0x^2 + 0x^3    6 + 0x + 0x^2 + 0x^3 ] ⋅

Obviously A can be written as the matrix polynomial


1 0 4  2 0 −6 3 1 0 
A= 1 3 
1 + x 0 0 − 2  + x2 0 4 0 
     
 2 5 6   − 3 0 0   0 0 0 

0 0 0 
+ x 3 1 0 4 ⋅
 
 2 0 0 

Theorem 8: The Cayley-Hamilton Theorem: Let T be a linear operator on an


n-dimensional vector space V ( F ). Then T satisfies its characteristic equation i. e., if f ( x) be
the characteristic polynomial of T, then f (T ) = 0̂.
Or
Every square matrix satisfies its characteristic equation .
Proof: Let T be a linear operator on an n-dimensional vector space V over the field
F. Let B be any ordered basis for V and A be the matrix of T relative to B i. e., let
A = [T ] B . The characteristic polynomial of T is the same as the characteristic
polynomial of A. If A = [a ij ] n × n , then the characteristic polynomial f ( x) of A is
given by
f ( x) = det ( A − xI ) = det [ a11 − x   a12   …   a1n ;
                               a21   a22 − x   …   a2n ;
                               ……   ……   …   …… ;
                               an1   an2   …   ann − x ]

= a0 + a1 x + a2 x 2 + … + a n x n (say), …(1)
where the a i ’s are in F.
The characteristic equation of A is f ( x) = 0
i. e., a0 + a1 x + a2 x 2 + … + a n x n = 0.
Since the elements of the matrix A − xI are polynomials at most of the first degree
in x, therefore the elements of the matrix adj ( A − x I ) are ordinary polynomials in x
of degree n − 1 or less. Note that the elements of the matrix adj ( A − xI ) are the
cofactors of the elements of the matrix A − xI . Therefore adj ( A − xI ) can be
written as a matrix polynomial in x in the form
adj ( A − xI ) = B0 + B1 x + B2 x 2 + … + Bn − 1 x n − 1 , …(2)
where the Bi ’s are square matrices of order n over F with elements independent of x.
Now by the property of adjoints, we know that
( A − xI ). adj. ( A − xI ) = {det. ( A − xI )} I .
∴ ( A − xI ) { B0 + x B1 + x 2 B2 + … + x n − 1 Bn − 1}
= { a0 + a1 x + … + a n x n } I
[from (1) and (2)]
Equating the coefficients of like powers of x on both sides, we get
AB0 = a0 I
AB1 − IB0 = a1 I
AB2 − IB1 = a2 I
…………………
…………………

ABn − 1 − IBn − 2 = a n − 1 I
− IBn − 1 = a n I .
Premultiplying these equations successively by I , A, A2 , … , A n and adding, we get
a0 I + a1 A + a2 A2 + … + a n A n = O,
where O is the null matrix of order n.
Thus f ( A) = O.
Now f (T ) = a0 I + a1T + a2 T 2 + … + a nT n.
∴ [ f (T )] B = [a0 I + a1T + a2 T 2 + … + a nT n ] B
= a0 [I ] B + a1 [T ] B + a2 [T 2 ] B + … + a n [T n ] B
= f ( A).
∴ f ( A) = O ⇒ [ f (T )] B = O = [ 0̂ ] B ⇒ f (T ) = 0̂
⇒ a0 I + a1T + a2 T^2 + … + a nT^n = 0̂. …(3)
Corollary:We have
f ( x) = a0 + a1 x + … + a n x n = det. ( A − xI ).
∴ f (0) = a0 = det. A = det. T .
If T is non-singular, then T is invertible and
det. T ≠ 0 i. e., a0 ≠ 0.
Then from (3), we get
a0 I = − (a1T + a2 T^2 + … + a nT^n )
⇒ I = − [ (a1 / a0 ) I + (a2 / a0 ) T + … + (a n / a0 ) T^(n − 1) ] T
⇒ T^(−1) = − [ (a1 / a0 ) I + (a2 / a0 ) T + … + (a n / a0 ) T^(n − 1) ] ⋅
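Both the theorem and the corollary are easily checked numerically; in the sketch below (our addition, assuming numpy, with an arbitrarily chosen matrix) the characteristic polynomial is x^2 − 5x − 2, so A^2 − 5A − 2I = O and, since a0 = −2 is non-zero, A^(−1) = (A − 5I) / 2.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    I = np.eye(2)
    print(np.allclose(A @ A - 5*A - 2*I, 0))   # True: A satisfies its characteristic equation
    A_inv = (A - 5*I) / 2                      # inverse obtained from the corollary
    print(np.allclose(A @ A_inv, I))           # True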

6.9 Diagonalizable Operators


Definition: Suppose T is a linear operator on the finite-dimensional vector space V. Then
T is said to be diagonalizable if there is a basis B for V each vector of which is a characteristic
vector of T.
Matrix of a Diagonalizable Operator: Let T be a diagonalizable operator on an
n-dimensional vector space V. Let B = { α1 , … , α n } be an ordered basis for V such
that each α j is a characteristic vector of T. Let Tα i = c i α i . Then
Tα1 = c1α1 = c1α1 + 0α 2 + … + 0α n
Tα 2 = c 2 α 2 = 0α1 + c 2 α 2 + … + 0α n
… … … … … …
… … … … … …
Tα n = c nα n = 0α1 + 0α 2 + … + 0α n − 1 + c nα n .

Therefore the matrix of T relative to B is


[T ] B = [ c1 0 … 0 ; 0 c2 … 0 ; … ; 0 0 … cn ] ⋅
This matrix is a diagonal matrix. Note that a square matrix of order n is said to be a
diagonal matrix if all the elements lying above and below the principal diagonal are
equal to zero. The scalars c1 , … , c n need not all be distinct. If V is n dimensional,
then T is diagonalizable iff T has n linearly independent characteristic vectors.
Diagonalizable Matrix: Definition:A matrix A over a field F is said to be
diagonalizable if it is similar to a diagonal matrix over the field F. Thus a matrix A is
diagonalizable if there exists an invertible matrix P such that P −1 AP = D where D
is a diagonal matrix. Also the matrix P is then said to diagonalize A or transform A to
diagonal form.
Theorem 9: A necessary and sufficient condition that an n × n matrix A over a field F be
diagonalizable is that A has n linearly independent characteristic vectors in Vn ( F ).
Proof: If A is diagonalizable, then A is similar to a diagonal matrix D. Therefore
there exists an invertible matrix P such that
P −1 AP = D or AP = PD. …(1)
If c1 , c 2 , … , c n are the diagonal elements of D, then c1 , c 2 , … , c n are the
characteristic values of D as can be easily seen. But similar matrices have the same
characteristic values. Therefore c1 , c 2 , … , c n are the characteristic values of A.
Now suppose P1 , P2 , … , Pn are the column vectors of the matrix P. Then equating
corresponding columns on each side of (1), we get
APi = c i Pi (i = 1, 2 , … , n). …(2)
But (2) shows that Pi is a characteristic vector of A corresponding to the
characteristic value c i . Since the matrix P is invertible, therefore its column vectors
P1 , P2 , … , Pn are n linearly independent vectors belonging to Vn ( F ). Thus A has n
linearly independent characteristic vectors P1 , … , Pn .
Conversely, if P1 , … , Pn are n linearly independent characteristic vectors of A
corresponding to characteristic values c1 , … , c n , then equations (2) holds.
Therefore equation (1) holds where P is the matrix with columns P1 , … , Pn . Since
the columns of P are linearly independent, therefore P is invertible and hence (1)
implies P −1 AP = D.Thus A is similar to a diagonal matrix and so is diagonalizable.
This completes the proof of the theorem.
Remark: In the proof of the above theorem we have shown that if A is
diagonalizable and P diagonalizes A, then
P^(−1) AP = [ c1 0 … 0 ; 0 c2 … 0 ; … ; 0 0 … cn ]
if and only if the j th column of P is a characteristic vector of A corresponding to the
characteristic value c j of A, ( j = 1, 2 ,… , n).
Theorem 10: A linear operator T on an n-dimensional vector space V ( F ) is
diagonalizable if and only if its matrix A relative to any ordered basis B of V is
diagonalizable.
Proof: Suppose T is diagonalizable. Then T has n linearly independent
characteristic vectors α1 , α 2 , … , α n in V. Suppose X1 , X 2 , … , X n are the
co-ordinate vectors of α1 , α 2 , … , α n relative to the basis B. Then X1 , … , X n are
also linearly independent since V is isomorphic to Vn ( F ) by isomorphism which
takes a vector in V to its co-ordinate vector in Vn ( F ). Under an isomorphism a
linearly independent set is mapped onto a linearly independent set. Further
X1 , … , X n are the characteristic vectors of the matrix A [see theorem 6]. Therefore
the matrix A is diagonalizable. [See theorem 9].
Conversely suppose the matrix A is diagonalizable. Then A has n linearly
independent characteristic vectors X1 , … , X n in Vn ( F ). If α1 , … , α n are the
vectors in V having X1 , … , X n as their coordinate vectors, then α1 , … , α n will be n
linearly independent characteristic vectors of T. So T is diagonalizable.

Theorem 11: Let T be any linear operator on a finite dimensional vector space V, let
c1 , c 2 , … , c k be the distinct characteristic values of T, and let Wi be the null space of
(T − c i I ). Then the subspaces W1 , … , Wk are independent.
Further show that if in addition T is diagonalizable, then V is the direct sum of the subspaces
W1 , … , Wk .
Proof: By definition of Wi , we have
Wi = {α : α ∈ V and (T − c i I ) α = 0 i. e., Tα = c i α}.
Now let α i be in Wi , i = 1, … , k, and suppose that
α1 + α 2 + … + α k = 0. …(1)
Let j be any integer between 1 and k and let
U j = Π (T − c i I ), the product being taken over 1 ≤ i ≤ k, i ≠ j.

Note that U j is the product of the operators (T − c i I ) for i ≠ j. In other words


U j = (T − c1 I ) (T − c 2 I ) … (T − c k I ) where in the product the factor T − c j I is
missing.
Let us find U j α i , i = 1, … , k . By the definition of Wi , we have

(T − c i I ) α i = 0. Since the operators (T − c i I ) all commute, being polynomials in


T, therefore U j α i = 0 for i ≠ j. Note that for each i ≠ j, U j contains a factor
(T − c i I ) and (T − c i I ) α i = 0.
Also U j α j = [(T − c1 I ) … (T − c k I )] α j
= [(T − c1 I ) … (T − c k −1 I )] (Tα j − c k Iα j )
= [(T − c1 I ) … (T − c k −1 I )] (c j α j − c k α j )
[ ∵ Tα j = c j α j and Iα j = α j ]
= [(T − c1 I ) … (T − c k −1 I )] (c j − c k ) α j
= (c j − c k ) [(T − c1 I ) … (T − c k − 1 I )] α j
= (c j − c k ) (c j − c k − 1 ) … (c j − c1 ) α j ,
the factor c j − c j will be missing.
 
 
Thus U j α j = [ Π i ≠ j (c j − c i ) ] α j . …(2)

Now applying U j to both sides of (1), we get


U j α1 + U j α 2 + … + U j α k = 0
⇒ Uj α j = 0 [ ∵ U j α i = 0 if i ≠ j]
 
⇒ [ Π i ≠ j (c j − c i ) ] α j = 0 [by (2)]
Since the scalars c i are all distinct, therefore the product Π i ≠ j (c j − c i ) is a non-zero scalar.
Hence [ Π i ≠ j (c j − c i ) ] α j = 0 ⇒ α j = 0.
Thus α j = 0 for every integer j between 1 and k.
In this way α1 + … + α k = 0
⇒ α i = 0 for each i.
Hence the subspaces W1 , … , Wk are independent.
Second Part: Now suppose that T is diagonalizable. Then we shall show that
V = W1 + … + Wk . Since T is diagonalizable, therefore there exists a basis of V
each vector of which is a characteristic vector of T. Thus there exists a basis of V
consisting of vectors belonging to the characteristic subspaces W1 , … , Wk . If
α ∈ V, then α can be expressed as a linear combination of these basis vectors. Thus α
can be written as α = α1 + … + α k where α i ∈ Wi , i = 1, … , k. In this way
α ∈ W1 + … + Wk . Therefore V = W1 + … + Wk . But in the first part, we have
proved that the subspaces W1 , … , Wk are independent. Hence
V = W1 ⊕ … ⊕ Wk .

Example 1: Let V be an n-dimensional vector space over F. What is the characteristic


polynomial of (i) the identity operator on V, (ii) the zero operator on V ?
Solution: Let B be any ordered basis for V.
(i) If I is the identity operator on V, then
[I ] B = I .
The characteristic polynomial of I = det (I − xI )
= det [ 1 − x 0 … 0 ; 0 1 − x … 0 ; … ; 0 0 … 1 − x ] = (1 − x)^n .

(ii) If 0̂ is the zero operator on V, then [ 0̂ ] B = O i. e., the null matrix of order n.
The characteristic polynomial of 0̂ = det (O − xI )
= det [ − x 0 … 0 ; 0 − x … 0 ; … ; 0 0 … − x ] = (− 1)^n x^n .
Example 2: Let T be a linear operator on a finite dimensional vector space V and let c
be a characteristic value of T. Show that the characteristic space of c i. e., Wc is invariant
under T.
Solution: We have by definition, Wc = { α ∈ V : Tα = c α }.
Let α ∈ Wc . Then Tα = cα.
Since Wc is a subspace, therefore
c ∈ F and α ∈ Wc ⇒ c α ∈ Wc .
Thus α ∈ Wc ⇒ Tα ∈ Wc .
Hence Wc is invariant under T.
Example 3: If A and B are similar linear operators on a finite dimensional vector space V,
then A and B have the same characteristic polynomial.
Solution: Suppose A and B are similar linear transformations on a finite
dimensional vector space V. Then there exists an invertible linear operator C on V
such that A = CBC −1.

We have A − xI = CBC −1 − xI = CBC −1 − C ( xI )C −1 = C ( B − xI ) C −1 .


∴ det ( A − xI ) = det {C ( B − xI ) C −1 }
= det C . det ( B − xI ). det C −1 = det C. det C −1 . det ( B − xI )
= det (CC −1 ) det ( B − xI ) = det I . det ( B − xI ) = 1 . det ( B − xI )
= det ( B − xI ).
Now the characteristic polynomial of A = det ( A − xI ).
∴ the characteristic polynomial of A = the characteristic polynomial of B.
Example 4: Find all (complex) characteristic values and characteristic vectors of the
following matrices
1 1 1  1 1 1
(a)  1 1 1 (b) 0 1 1 ⋅
   
 1 1 1   0 0 1 
1 1 1
Solution: (a) Let A =  1 1 1 ⋅
 
 1 1 1 
 1− x 1 1 
We have A − xI =  1 1− x 1 ⋅
 
 1 1 1− x 
∴ the characteristic polynomial of A is
1 − x 1 1 
= det ( A − xI ) =  1 1− x 1 
 
 1 1 1− x

3 − x 1 1 
= 3 − x 1− x 1  C1 + C2 + C3
 
3 − x 1 1− x

1 1 1 
= (3 − x) 1 1− x 1 
 
1 1 1− x
1 1 1
= (3 − x) 0 − x 0  R2 − R1 , R3 − R1
 
0 0 − x
= (3 − x) x 2 .
∴ the characteristic equation of A is (3 − x) x 2 = 0.
The only roots of this equation are x = 3, 0.
∴ 0 and 3 are the only characteristic values of A.

 x1 
Let X =  x2  be the coordinate matrix of a characteristic vector corresponding to
 
 x3 
the characteristic value x = 0. Then X will be given by a non-zero solution of the
equation
( A − 0I ) X = O
1 1 1   x1   0   x1 + x2 + x3   0 
i. e., 1 1 1   x2  =  0  or  x + x + x  = 0 
      1 2 3   
 1 1 1   x3   0   x1 + x2 + x3   0 
or x1 + x2 + x3 = 0.
This equation has two linearly independent solutions i. e.,
 1  0
X1 =  
0 , and X 2 =  1  ⋅
   
 − 1   − 1 
Every non-zero multiple of these column matrices X1 and X 2 is a characteristic
vector of A corresponding to the characteristic value 0.
The characteristic space of this characteristic value will be the subspace W spanned
by these two vectors X1 and X 2 . Any non-zero vector in W will be a characteristic
vector corresponding to this characteristic value.
To find the characteristic vectors corresponding to the characteristic value 3 we
consider the equation
( A − 3I ) X = O
 −2 1 1  x1   0 
i. e.,  1 −2 1  x =0 
   2   
 1 1 − 2   x3   0 
 − 2 x1 + x2 + x3  0 
i. e.,  x −2 x + x =0 
 1 2 3   
 x1 + x2 − 2 x3   0 
i. e., − 2 x1 + x2 + x3 = 0, x1 − 2 x2 + x3 = 0, x1 + x2 − 2 x3 = 0.
Solving these equations, we get x1 = x2 = x3 = k.
∴ X = [k, k, k]′ , where k ≠ 0.

 1 1 1
(b) Let A = 0  1 1 ⋅
 
 0 0 1 
The characteristic equation of A is (1− x)3 = 0.

∴ x = 1 is the only characteristic value of A.

 x1 
Let X =  x2  be the coordinate matrix of a characteristic vector corresponding to
 
 x3 
the characteristic value 1. Then X will be given by a non-zero solution of the
equation
(A − I ) X = O

0 1 1  x1   0   x2 + x3   0 
i. e., 0 0 1  x =0  i. e.,  x3 =0 
   2       
 0 0 0   x3   0   0   0 
i. e., x2 + x3 = 0 , x3 = 0 .
∴ x1 = k, x2 = 0, x3 = 0.
Thus X = [k, 0, 0]′ , where k ≠ 0.
Example 5:Let T be the linear operator on R3 which is represented in the standard basis by
the matrix
 −9 4 4 
 −8 3 4 ⋅
 
 − 16 8 7 

Prove that T is diagonalizable.


 −9 4 4
Solution: Let A =  − 8 3 4⋅
 
 − 16 8 7 

The characteristic equation of A is

−9 − x 4 4 
 −8 3− x 4 = 0
 
 − 16 8 7 − x

 −1− x 4 4 
or  −1− x 3− x 4  = 0, applying C1 + C2 + C3
 
 −1− x 8 7 − x

1 4 4 
or − (1 + x) 1 3− x 4 = 0
 
1 8 7− x

1 4 4 
or (1 + x) 0 −1− x 0  = 0, applying R2 − R1 , R3 − R1
 
0 4 3− x
or (1 + x) (1 + x) (3 − x) = 0.
The roots of this equation are − 1, − 1, 3.
∴ The eigenvalues of the matrix A are − 1, − 1, 3.
The characteristic vectors X of A corresponding to the eigenvalue − 1are given by
the equation
( A − (− 1) I ) X = O or (A + I ) X = O
 −8 4 4   x1   0 
or  −8 4 4   x2  =  0  ⋅
    
 − 16 8 8   x3   0 
These equations are equivalent to the equations
 −8 4 4  x1   0 
 0 0 0  x  =  0  , applying
   2    R2 − R1 , R3 − 2 R1 .
 0 0 0   x3   0 
The matrix of coefficients of these equations has rank 1. Therefore these equations
have two linearly independent solutions. We see that these equations reduce to the
single equation
− 2 x1 + x2 + x3 = 0.
Obviously X1 = [1, 1, 1]′ , X 2 = [0, 1, − 1]′
are two linearly independent solutions of this equation. Therefore X1 and X 2 are
two linearly independent eigenvectors of A corresponding to the eigenvalue − 1.
Now the eigenvectors of A corresponding to the eigenvalue 3 are given by
( A − 3I ) X = O

 − 12 4 4   x1   0 
i. e.,  −8 0 4   x =0 ⋅
   2   
 − 16 8 4   x3   0 
These equations are equivalent to the equations

 − 12 4 4  x1   0 
 4 −4 0  x =0  , applying R − R , R − R .
   2    2 1 3 1
 − 4 4 0   x3   0 

The matrix of coefficients of these equations has rank 2. Therefore these equations

will have a non-zero solution. Also these equations will have 3 − 2 = 1 linearly
independent solution. These equations can be written as
− 12 x1 + 4 x2 + 4 x3 = 0
4 x1 − 4 x2 = 0
− 4 x1 + 4 x2 = 0.
From these, we get x1 = x2 = 1, say.
Then x3 = 2 .
 1
∴ X3 =  1
 
 2 
is an eigenvector of A corresponding to the eigenvalue 3.

1 0 1
Now let P = 1 1 1⋅
 
 1 −1 2 
We have det P = 1 ≠ 0. Therefore the matrix P is invertible. Therefore the columns
of P are linearly independent vectors belonging to R3 . Since the matrix A has three
linearly independent eigenvectors in R3 , therefore it is diagonalizable.
Consequently the linear operator T is diagonalizable. Also the diagonal form D of
A is given by
 −1 0 0
−1
P AP =  0 −1 0  = D.
 
 0 0 3 

1 2
Example 6: Prove that the matrix A=  is not diagonalizable over the
0 1 
field C.
Solution: The characteristic equation of A is
det [ 1 − x 2 ; 0 1 − x ] = 0 or (1 − x)^2 = 0.
The roots of this equation are 1, 1. Therefore the only distinct eigenvalue of A is 1.
The eigenvectors of A corresponding to this eigenvalue are given by
0 2   x1   0 
0 = or 0 x1 + 2 x2 = 0.
 0   x2   0 
This equation has only one linearly independent solution. We see that
 1
X = 
0 
is the only linearly independent eigenvector of A. Since A has not two linearly
independent eigenvectors, therefore it is not diagonalizable.
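The same conclusion can be obtained with sympy (our addition); the matrix has the single eigenvalue 1 of algebraic multiplicity 2 but only one independent eigenvector.

    from sympy import Matrix

    A = Matrix([[1, 2],
                [0, 1]])
    print(A.is_diagonalizable())    # False
    print(A.eigenvects())           # eigenvalue 1, multiplicity 2, one eigenvector (1, 0)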

Comprehensive Exercise 1

1. Show that the characteristic values of a diagonal matrix are precisely the
elements in the diagonal. Hence show that if a matrix B is similar to a
diagonal matrix D, then the diagonal elements of D are the characteristic
values of B.
2. Let T be a linear operator on a finite dimensional vector space V. Then show
that 0 is a characteristic value of T iff T is not invertible.
3. Suppose S and T are two linear operators on a finite dimensional vector space
V. If S and T have the same characteristic polynomial, then
det S = det T.
4. Find all (complex) proper values and proper vectors of the following
matrices:

0 1  1 0 1 1
(a)  (b)  (c)  ⋅
0 0  0 i  0 i 

5. Find the characteristic equation of the matrix


 2 −1 1

A = −1 2 − 1
 
 1 −1 2 
and verify that it is satisfied by A and hence obtain A −1 .
6. Show that the characteristic equation of the complex matrix
0 0 c 
A= 1 0 b
 
 0 1 a 
is x 3 − ax 2 − bx − c = 0.
7. Find the eigenvalues and the corresponding eigen space for the matrix
 8 −6 2

A= −6 7 −4⋅
 
 2 −4 3 
8. Find the characteristic roots and the characteristic spaces of the matrix
1 2 3
A= 0 2 3⋅
 
 0 0 2 
9. Let T be a linear operator on the n-dimensional vector space V, and suppose
that T has n distinct characteristic values. Prove that T is diagonalizable.

10. Show that distinct eigenvectors of a matrix A corresponding to distinct


eigenvalues of A are linearly independent.
11. Let T be the linear operator on R3 which is represented in the standard
ordered basis by the matrix
 5 −6 −6

A = −1 4 2⋅
 
 3 −6 − 4 
Find the characteristic values of A and prove that T is diagonalizable.
 1 1
12. Is the matrix A = 
 −1 1 
similar over the field R to a diagonal matrix ? Is A similar over the field C to a
diagonal matrix ?
1 1
13. Show that the matrix A =  is not diagonalizable.
0 1 
1 2 3
14. Let A =  0 2 3  ⋅ Is A similar to a diagonal matrix ?
 
 0 0 3 
3 1 −1
15. Is the matrix A =  2 2 −1
 
 2 2 0 
similar over the field R to a diagonal matrix ? Is A similar over the field C to a
diagonal matrix ?
16. For each of the following matrices over the field C, find the diagonal form and
a diagonalizing matrix P.
 20 18   3 4
(a)   (b) 
 − 27 − 25   −4 3 

 4 2 −2  − 17 18 −6
(c)  − 5 3 2 (d)  − 18 19 −6⋅
   
 − 2 4 1   − 9 9 2 

Answers 1
4. (a) 0 is the only characteristic value of A. The corresponding characteristic
vectors are given by [k 0]′ where k is any non-zero scalar.
(b) Characteristic values are 1, i. Characteristic vectors corresponding to the
value 1 are given by [k 0]′ and corresponding to the value i are given by
[0, c ]′ , where k and c are any non-zero scalars.

(c) 1, i are characteristic values. Characteristic vectors corresponding to 1 are
given by [k 0]′ and corresponding to i are given by [c, (i − 1) c ]′ , where k
and c are any non-zero scalars.
 3 1 −1
1
3 2
5. x − 6 x + 9 x − 4 = 0 ; A = −1
1 3 1 ⋅
4 
 − 1 1 3 
7. 0, 3, 15. Corresponding eigenspaces are spanned by [1, 2, 2]′ , [2, 1, −2]′ , [2, −2, 1]′ respectively.

 1 2
8. 1, 2, 2. Corresponding eigenspaces are spanned by 0 ,  1
   
0 0
respectively.
11. 1, 2, 2.
12. Not similar over R to a diagonal matrix ; similar over C to a diagonal matrix.
14. Yes.
15. Not similar over the field R as well as C to a diagonal matrix.
2 0  1 −3
16. (a) D =   , P= ,
0 −7  −1 2 
 3 + 4i 0  1 −1
(b) D =   , P= ,
 0 3 − 4i   i i 
 1 0 0  2 1 0 
(c) D =  0 2 0 ,P= 1 1 1 ,
   
 0 0 5   4 2 1 
 −2 0 0 2 1 −1
(d) D =  0 1 
0 ,P=2 1 0 ⋅
   
 0 0 1   1 0 3 

6.10 Minimal Polynomial and Minimal Equation of a Linear Operator or of a Matrix

Annihilating Polynomials: Suppose T is a linear operator on a finite
dimensional vector space over the field F and f ( x) is a polynomial over F. If
f (T ) = 0̂, then we say that the polynomial f ( x) annihilates the linear operator T.

Similarly suppose A is a square matrix of order n over the field F and f ( x) is a


polynomial over F. If f ( A) = O, then we say that the polynomial f ( x) annihilates
the matrix A. We know that every linear operator T on an n-dimensional vector
space V ( F ) satisfies its characteristic equation. Also the characteristic polynomial
of T is a non-zero polynomial i. e., a polynomial in which the coefficients of various
terms are not all zero. Note that if A is the matrix of T in some ordered basis, then
the characteristic polynomial of T is | A − xI | in which the coefficient of x n is (− 1) n
which is not zero. Thus we see that at least the characteristic polynomial of T is a
non-zero polynomial which annihilates T. Therefore the set of those non-zero
polynomials which annihilate T is not empty.
Monic polynomial: Definition:A polynomial in x over a field F is called a
monic polynomial if the coefficient of the highest power of x in it is unity. Thus
x^3 − 2 x^2 + (5 / 7) x + 5 is a monic polynomial of degree 3 over the field of rational
numbers.
Among these non-zero polynomials which annihilate a linear operator T, the
polynomial which is monic and which is of the lowest degree is of special interest. It
is called the minimal polynomial of the linear operator T.
Minimal polynomial of a linear operator: Definition: Suppose T is a linear
operator on an n-dimensional vector space V ( F ). The monic polynomial of lowest degree over
the field F that annihilates T is called the minimal polynomial of T. Also if f ( x) is the
minimal polynomial of T, the equation f ( x) = 0 is called the minimal equation of the linear
operator T.
Similarly we can define the minimal polynomial of a matrix. Suppose A is a square
matrix of order n over the field F. The monic polynomial of lowest degree over the
field F that annihilates A is called the minimal polynomial of A.
Now suppose T is a linear operator on an n-dimensional vector space V ( F ) and A is
the matrix of T in some ordered basis B. If f ( x) is any polynomial over F, then
[ f (T )] B = f ( A). Therefore f (T ) = ^
0 if and only if f ( A) = O. Thus f ( x)
annihilates T iff it annihilates A. Therefore if f ( x) is the polynomial of lowest
degree that annihilates T, then it is also the polynomial of lowest degree that
annihilates A and conversely. Hence T and A have the same minimal polynomial.
Further the characteristic polynomial of the matrix A is of degree n. Since the
characteristic polynomial of A annihilates A, therefore the minimal polynomial of
A cannot be of degree greater than n. Its degree must be less than or equal to n.
Theorem 1: The minimal polynomial of a matrix or of a linear operator is unique.
Proof: Suppose the minimal polynomial of a matrix A is of degree r. Then no
non-zero polynomial of degree less than r can annihilate A. Let
f ( x) = x^r + a1 x^(r − 1) + a2 x^(r − 2) + … + a r − 1 x + a r
and g ( x) = x^r + b1 x^(r − 1) + b2 x^(r − 2) + … + b r − 1 x + b r
be two minimal polynomials of A.Then both f ( x) and g ( x) annihilate A.Therefore
we have f ( A) = O and g ( A) = O. These give
A^r + a1 A^(r − 1) + … + a r − 1 A + a r I = O, …(1)
and A^r + b1 A^(r − 1) + … + b r − 1 A + b r I = O. …(2)
Subtracting (1) from (2), we get
(b1 − a1 ) A^(r − 1) + … + (b r − a r ) I = O. …(3)
From (3), we see that the polynomial (b1 − a1 ) x^(r − 1) + … + (b r − a r ) also
annihilates A. Since its degree is less than r, therefore it must be a zero polynomial.
This gives b1 − a1 = 0, b2 − a2 = 0, … , b r − a r = 0. Thus a1 = b1 , … , a r = b r .
Therefore f ( x) = g ( x) and thus the minimal polynomial of A is unique.
Theorem 2: The minimal polynomial of a matrix (linear operator) is a divisor of every
polynomial that annihilates the matrix (linear operator).
Proof: Suppose m ( x) is the minimal polynomial of a matrix A. Let h ( x) be any
polynomial that annihilates A. Since m ( x) and h ( x) are two polynomials, therefore
by the division algorithm there exist two polynomials q ( x) and r ( x) such that
h ( x) = m ( x) q ( x) + r ( x), …(1)
where either r ( x) is a zero polynomial or its degree is less than the degree of m ( x).
Putting x = A on both sides of (1), we get
h ( A) = m ( A) q ( A) + r ( A)
⇒ O = O q ( A) + r ( A) [∵ both m ( x) and h ( x) annihilate A]
⇒ r ( A) = O.
Thus r ( x) is a polynomial which also annihilates A. If r ( x) ≠ 0, then it is a non-zero
polynomial of degree smaller than the degree of the minimal polynomial m ( x) and
thus we arrive at a contradiction that m ( x) is the minimal polynomial of A.
Therefore r ( x) must be a zero polynomial. Then (1) gives
h ( x) = m ( x) q ( x) ⇒ m ( x) is a divisor of h ( x).

Corollary: The minimal polynomial of a matrix is a divisor of the characteristic


polynomial of that matrix.
Proof: Suppose f ( x) is the characteristic polynomial of a matrix A. Then
f ( A) = O by Cayley-Hamilton theorem . Thus f ( x) annihilates A. If m ( x) is the
minimal polynomial of A, then by the above theorem we see that m ( x) must be a
divisor of f ( x).
Theorem 3: Let T be a linear operator on an n-dimensional vector space V [or, let A be an
n × n matrix]. The characteristic and minimal polynomials for T [ for A] have the same roots,
except for multiplicities.

Proof: Suppose f ( x) is the characteristic polynomial of a linear operator T and


m ( x) is its minimal polynomial. First we shall prove that every root of the equation
m ( x) = 0 is also a root of the equation f ( x) = 0. We know that the minimal
polynomial is a divisor of the characteristic polynomial. Therefore m ( x) is a divisor
of f ( x). Then there exists a polynomial q ( x) such that
f ( x) = m ( x) q ( x). …(1)
Suppose c is a root of the equation m ( x) = 0. Then m (c ) = 0. Putting x = c on both
sides of (1) we get f (c ) = m (c ) q (c ) = 0 q (c ) = 0. Therefore c is also a root of
f ( x) = 0. Thus c is also a characteristic root of the linear operator T.
Conversely suppose that c is a characteristic value of T. Then there exists a
non-zero vector α such that Tα = cα .Since m ( x) is a polynomial, therefore we have
[m (T )] (α) = m (c ) α .
But m ( x) is the minimal polynomial for T. So m ( x) annihilates
T i. e., m (T ) = 0̂.
∴ 0̂ (α) = m (c ) α ⇒ 0 = m (c ) α [∵ 0̂ (α) = 0]
⇒ m (c ) = 0. [ ∵ α ≠ 0]
Thus c is a root of the minimal equation of T.
Hence every root of the minimal equation of T is also a root of its characteristic
equation and every root of the characteristic equation of T is also a root of its
minimal equation.
Theorem 4: Let T be a diagonalizable linear operator and let c1 , … , c k be the distinct
characteristic values of T. Then the minimal polynomial for T is the polynomial
p ( x) = ( x − c1 ) … ( x − c k ).
Proof: We know that each characteristic value of T is a root of the minimal
polynomial for T. Therefore each of the scalars c1 , … , c k is a root of the minimal
polynomial for T and so each of the polynomials x − c1 , … , x − c k is a factor of the
minimal polynomial for T. Therefore the polynomial p ( x) = ( x − c1 ) … ( x − c k ) will
be the minimal polynomial for T provided it annihilates T i. e., provided p (T ) = 0̂.
Let α be a characteristic vector of T. Then one of the operators T − c1 I , … , T − c k I
sends α into 0. Therefore
(T − c1 I ) … (T − c k I ) α = 0 i. e., p (T ) α = 0
for every characteristic vector α .
Now T is a diagonalizable operator. Let V be the underlying vector space. Then
there exists a basis B for V which consists of characteristic vectors. If β is any vector
in V, then β can be expressed as a linear combination of the vectors in the basis B.
But we have just shown that p (T ) α = 0 for every characteristic vector α .
Therefore we have
p (T ) β = 0 ∀ β ∈ V ⇒ p (T ) = 0̂.
∴ p ( x) annihilates T and so p ( x) is the minimal polynomial for T.
Thus we have proved that if T is a diagonalizable linear operator, the minimal
polynomial for T is a product of distinct linear factors.

Corollary:If the roots of the characteristic equation of a linear operator T are all distinct say
c1 , c 2 , … , c n , then the minimal polynomial for T is the polynomial
p ( x) = ( x − c1 ) … ( x − c n ).
Proof: Since the roots of the characteristic equation of T are all distinct,
therefore T is diagonalizable. Hence by the above theorem, the minimal
polynomial for T is the polynomial
( x − c1 ) ( x − c 2 ) … ( x − c n ).

Example 7: Let V be a finite-dimensional vector space. What is the minimal polynomial for
the identity operator on V ? What is the minimal polynomial for the zero operator ?

Solution: We have I − 1 . I = I − I = 0̂. Therefore the monic polynomial x − 1
annihilates the identity operator I and it is the polynomial of lowest degree that
annihilates I. Hence x − 1 is the minimal polynomial for I.

Again we see that the monic polynomial x annihilates the zero operator 0̂ and it is
the polynomial of lowest degree that annihilates 0̂. Hence x is the minimal
polynomial for 0̂.

Example 8:Let V be an n-dimensional vector space and let T be a linear operator on V.


Suppose that there exists some positive integer k so that T^k = 0̂. Prove that T^n = 0̂.

Solution: Since T^k = 0̂, therefore the polynomial x^k annihilates T. So the
minimal polynomial for T is a divisor of x^k . Let x^r be the minimal polynomial for T
where r ≤ n. Then T^r = 0̂.
Now T^n = T^(n − r) T^r = T^(n − r) 0̂ = 0̂.

Example 9: Find the minimal polynomial for the real matrix


 7 4 −1
A=  4 7 −1 ⋅
 
 − 4 −4 4 

7 − x 4 −1 
Solution: We have | A − xI | =  4 7− x −1 
 
−4 −4 4 − x
7 − x 4 −1 
= 4 7− x − 1  , by R3 + R2
 
 0 3− x 3 − x
7 − x 4 − 1
= (3 − x) 4 7− x − 1
 
 0 1 1

7 − x 4 −5 
= (3 − x) 4 7− x x − 8  , by C3 − C2
 
 0 1 0 
= − (3 − x) det [ 7 − x −5 ; 4 x − 8 ] , expanding along the third row
= − (3 − x) det [ 3 − x 3 − x ; 4 x − 8 ] , by R1 − R2
= − (3 − x)^2 det [ 1 1 ; 4 x − 8 ] = − (3 − x)^2 ( x − 12).
Therefore the roots of the equation | A − xI | = 0 are x = 3, 3, 12. These are the
characteristic roots of A.
Let us now find the minimal polynomial of A. We know that each characteristic
root of A is also a root of its minimal polynomial. So if m ( x) is the minimal
polynomial for A, then both x − 3 and x − 12 are factors of m ( x). Let us try whether
the polynomial h ( x) = ( x − 3) ( x − 12) = x 2 − 15 x + 36 annihilates A or not.

 69 60 − 15 
We have A =  60
2
69 − 15 ⋅
 
 − 60 − 60 24 
 69 60 − 15   7 4 −1
∴ A − 15 A + 36I =  60
2
69 − 15  − 15  4 7 −1
   
 − 60 − 60 24   − 4 −4 4 
 36 0 0 
+ 0 36 0 
 
 0 0 36 
252

 105 60 − 15   105 60 − 15 
=  60 105 − 15  −  60 105 − 15  = O.
   
 − 60 − 60 60   − 60 − 60 60 
∴ h ( x) annihilates A. Thus h ( x) is the monic polynomial of lowest degree which
annihilates A. Hence h ( x) is the minimal polynomial for A.
Note: In order to find the minimal polynomial of a matrix A, we should not forget
that each characteristic root of A must also be a root of the minimal polynomial.
We should try to find the monic polynomial of lowest degree which annihilates A
and which has also the characteristic roots of A as its roots.
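A numerical check of this example (our addition, assuming numpy): the characteristic polynomial has degree 3, yet already the monic polynomial (x − 3)(x − 12) of degree 2 annihilates A, while no polynomial of degree 1 can.

    import numpy as np

    A = np.array([[ 7.0,  4.0, -1.0],
                  [ 4.0,  7.0, -1.0],
                  [-4.0, -4.0,  4.0]])
    I = np.eye(3)
    print(np.allclose((A - 3*I) @ (A - 12*I), 0))   # True: (x - 3)(x - 12) annihilates A
    print(np.allclose(A - 3*I, 0), np.allclose(A - 12*I, 0))   # False False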

Comprehensive Exercise 2

1. Show that the minimal polynomial of the real matrix


 5 −6 −6

A = −1 4 2  is ( x − 1) ( x − 2).
 
 3 −6 − 4 
2. Write the characteristic polynomial and the minimal polynomial of the
4 3 0

matrix A = 2 1 0 ⋅
 
 5 7 9 
0 1
3. Show that the minimal polynomial of the real matrix 
1 0 
is x 2 − 1.
1 1 0 2 0 0
4. Let A =  0 2 0 and B =  0 2 2⋅
   
 0 0 1   0 0 1 
Show that A and B have different characteristic polynomials but have the
same minimal polynomial.
5. Show that similar matrices have the same minimal polynomial.

Answers 2

2. (9 − x) ( x 2 − 5 x − 2) ; ( x − 9) ( x 2 − 5 x − 2).
Objective Type Questions

Multiple Choice Questions


Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
1. If V is an n-dimensional vector space over the field F, then the characteristic
polynomial of the identity operator on V is
(a) x n (b) (1 + x) n
(c) (1 + x)2 n (d) none of these.

1 0
2. The characteristic values of the matrix  are
0 i 
(a) 1, i (b) 1, − i
(c) − 1, i (d) − 1, − i.
3. Let V be a finite-dimensional vector space. The minimal polynomial for the
zero operator is
(a) − x (b) x
(c) x − 1 (d) none of these.
0 1
4. The minimal polynomial of the real matrix  is
1 0 
(a) x 2 + 1 (b) x 2 − 1
(c) x + 1 (d) x − 1.
1 2 3 
5. The eigen values of the triangular matrix  0 2 3  are
 
 0 0 3 
(a) 1, 2, 3 (b) 2, 3, 4
(c) 1, 3, 4 (d) none of these.
1 4 
6. Let A =   ⋅ The eigen values of A are
2 3 
(a) 1, 5 (b) − 1, 5
(c) 1, − 5 (d) − 1, − 5.
1 1 0 
7. The only eigen value of the matrix  0 1 0  is
 
 0 0 1 
(a) 1 (b) 2
(c) 3 (d) 4.

Fill in the Blank(s)


Fill in the blanks ‘’……’’ so that the following statements are complete and correct.
1. Distinct characteristic vectors of T corresponding to distinct characteristic
values of T are linearly …… .
2. Let T be a linear operator on a finite dimensional vector space V. Then c is a
characteristic value of T iff det ( A − cI ) = …… .
3. If c is a characteristic value of an invertible transformation T, then c −1 is a
characteristic value of …… .
4. Let V be a finite-dimensional vector space. The minimal polynomial for the
identity operator on V is …… .
5. If the characteristic equation of a linear operator T has n distinct roots, say
c1 , c 2 , …… , c n , then the minimal polynomial for T is the polynomial
p ( x) = …… .
True or False
Write ‘T’ for true and ‘F’ for false statement.
1. The minimal polynomial of a matrix or of a linear operator is not unique.
2. The minimal polynomial of a matrix is a divisor of the characteristic
polynomial of that matrix.
3. Cayley Hamilton theorem states that ‘Every square matrix satisfies its
characteristic equation’.
4. The characteristic roots of a diagonal matrix are just the diagonal elements of
the matrix.
5. Similar matrices do not have the same minimal polynomial.
0 1
6. 0 is the only eigen value of the matrix  ⋅
0 0 

Answers

Multiple Choice Questions


1. (d) 2. (a) 3. (b) 4. (b) 5. (a).
6. (b) 7. (a)
Fill in the Blank(s)
1. independent 2. 0 3. T −1 4. x −1
5. ( x − c1 ) ( x − c 2 ) …… ( x − c n )

True or False
1. F. 2. T. 3. T. 4. T. 5. F.
6. T.

¨
255

7
Inner Product Spaces

7.1 Introduction
Throughout this chapter we shall deal only with real or complex vector spaces.
Thus if V is the vector space over the field F, then F will not be an arbitrary
field. In this chapter F will be either the field R of real numbers or the field C of
complex numbers.
Before defining inner product and inner product spaces, we shall just give some
important properties of complex numbers.
Let z ∈ C i.e., let z be a complex number. Then z = x + iy where x, y ∈ R and
i = √ (− 1) . Here x is called the real part of z and y is called the imaginary part of z.
We write x = Re z , and y = Im z . The modulus of the complex number z = x + iy is
the non-negative real number √ ( x^2 + y^2 ) and is denoted by | z |. Also if z = x + iy
is a complex number, then the complex number z̄ = x − iy is called the conjugate
complex of z. If z = z̄ , then x + iy = x − iy and therefore y = 0. Thus z = z̄ implies
that z is real. Obviously we have
(i) z + z̄ = 2 x = 2 Re z , (ii) z − z̄ = 2iy = 2i Im z ,
(iii) z z̄ = x^2 + y^2 = | z |^2 , (iv) | z | = 0 ⇔ x = 0, y = 0
i.e., | z | = 0 ⇔ z = 0,
(v) the conjugate of z̄ is z, (vi) | z̄ | = | z |, and
(vii) | z | = √ ( x^2 + y^2 ) ≥ x i. e., | z | ≥ Re z
If z1 and z2 are two complex numbers, then
(i) | z1 + z2 | ≤ | z1 | + | z2 |, (ii) the conjugate of z1 + z2 is z̄1 + z̄2,
(iii) the conjugate of z1 z2 is z̄1 z̄2, and (iv) the conjugate of z1 − z2 is z̄1 − z̄2.

7.2 Inner Product Spaces


(Lucknow 2011)
Definition: Let V ( F ) be a vector space where F is either the field of real numbers or the field
of complex numbers. An inner product on V is a function from V × V into F which assigns
to each ordered pair of vectors α, β in V a scalar (α, β) in such a way that
(1) (α, β) = the conjugate complex of (β, α).
(2) (aα + bβ, γ ) = a (α, γ ) + b (β, γ )
(3) (α, α) ≥ 0 and (α, α) = 0 ⇒ α = 0
for any α, β, γ ∈ V and a, b ∈ F.
Also the vector space V is then said to be an inner product space with respect to the
specified inner product defined on it.
It should be noted that in the above definition (α, β) does not denote the ordered
pair of the vectors α and β. But it denotes the inner product of the vectors α and β. It
is an element of V which has been assigned by the function (named as inner
product) to the vectors α and β. Sometimes the inner product of the ordered pair of
vectors α , β is also written as (α | β). If F = R, then (α, β) is a real number and if
F = C, then (α, β) is a complex number.
If F is the field of real numbers, then the complex conjugate appearing in (1) is
superfluous and (1) should be read as (α, β) = (β, α) . If F is the field of complex
numbers, then from (1), we have that (α, α) equals its own conjugate and therefore (α, α) is real. Thus
(α, α) is always real whether F = R or F = C. Therefore the inequality given in (3)
makes sense.
If V ( F ) is an inner product space, then it is called a Euclidean space if F is the field of real
numbers. Also it is called a Unitary space if F is the field of complex numbers.
Note 1: The property (3) in the definition of inner product is called non-negativity.
The property (2) is called the linearity property. If F = R, then the property (1) is
called symmetry and if F = C, then it is called conjugate symmetry.
Note 2: If in an inner product space V ( F ), the vector α is 0, then (α, α) = 0.
We have (0, 0) = (0 0, 0) [∵ 0 0 = 0 in V]
= 0 (0, 0) [by linearity property of inner product]
=0 [∵ (0, 0) ∈ F and therefore 0 (0, 0) = 0]

Illustration 1: On Vn (C ) there is an inner product which we call the standard inner


product.
If α = (a1 , a2 , … , a n ), β = (b1 , b2 , … , b n ) ∈ Vn (C),
then we define
(α, β) = a1 b̄1 + a2 b̄2 + .... + a n b̄ n = Σ (i = 1 to n) a i b̄ i . ...(1)

Let us see that all the postulates of an inner product hold in (1).
(i) Conjugate symmetry : From the definition of inner product given in (1), we
have (β, α) = b1 ā1 + … + b n ā n .
∴ the conjugate of (β, α) = the conjugate of (b1 ā1 + … + b n ā n )
= b̄1 a1 + … + b̄ n a n
= a1 b̄1 + … + a n b̄ n [∵ multiplication in C is commutative]
= (α, β).
Thus (α, β) = the conjugate of (β, α).
(ii) Linearity : Let γ = (c1 , … , c n ) ∈ Vn (C) and let a, b ∈ C.
We have aα + bβ = a (a1 , … , a n ) + b (b1 , … , b n )
= (aa1 + bb1 , … , aa n + bb n ).
∴ (aα + bβ, γ ) = (aa1 + bb1 ) c̄1 + … + (aa n + bb n ) c̄ n [by (1)]
= (aa1 c̄1 + … + aa n c̄ n ) + (bb1 c̄1 + … + bb n c̄ n )
= a (a1 c̄1 + … + a n c̄ n ) + b (b1 c̄1 + … + b n c̄ n )
= a (α , γ ) + b (β , γ ) [by (1)]
(iii) Non-negativity :
(α , α) = a1 ā1 + … + a n ā n [by (1)]
= | a1 |^2 + … + | a n |^2 . ...(2)
Now a i is a complex number. Therefore | a i | 2 ≥ 0. Thus (2) is a sum of n
non-negative real numbers and therefore it is ≥ 0. Thus (α , α) ≥ 0. Also (α , α) = 0
⇒ | a1 |2 + … + | a n |2 = 0
⇒ each | a i | 2 = 0 and so each a i = 0 ⇒ α = 0.
Hence the product defined in (1) is an inner product on Vn (C) and with respect to
this inner product Vn (C) is an inner product space.
If α, β are two vectors in Vn (C), then the standard inner product of α and β is also
called the dot product of α and β and is denoted by α . β. Thus if
α = (a1 , … , a n ), β = (b1 , … , b n ) then α . β = a1 b̄1 + a2 b̄2 + … + a n b̄ n .
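A small numerical illustration of the standard inner product (our addition, assuming numpy; the vectors are arbitrary). Since this convention puts the conjugate on the second vector, we compute sum(a * conj(b)) rather than numpy's vdot(a, b), which conjugates the first argument.

    import numpy as np

    a = np.array([1 + 2j, 3j])
    b = np.array([2 - 1j, 1 + 1j])
    ip = np.sum(a * np.conj(b))     # (alpha, beta) = a1 conj(b1) + a2 conj(b2)
    print(ip)
    print(np.isreal(np.sum(a * np.conj(a))))   # True: (alpha, alpha) is real (and >= 0)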
Illustration 2: If α = (a1 , a2 ), β = (b1 , b2 ) ∈ V2 (R), let us define
(α , β) = a1 b1 − a2 b1 − a1 b2 + 4a2 b2 . ...(1)
We shall show that all the postulates of an inner product hold good in (1).

(i) Symmetry : We have


(β, α) = b1 a1 − b2 a1 − b1 a2 + 4b2 a2 [from (1)]
= a1 b1 − a2 b1 − a1 b2 + 4a2 b2 [∵ a1 , a2 , b1 , b2 ∈ R]
= (α , β).
(ii) Linearity : If a, b ∈ R, we have
aα + bβ = a (a1 , a2 ) + b (b1 , b2 ) = (aa1 + bb1 , aa2 + bb2 ).
Let γ = (c1 , c 2 ) ∈ V2 (R). Then
(aα + bβ, γ ) = (aa1 + bb1 ) c1 − (aa2 + bb2 ) c1 − (aa1 + bb1 ) c 2
+ 4 (aa2 + bb2 ) c 2 [from (1)]
= (aa1 c1 − aa2 c1 − aa1 c 2 + 4aa2 c 2 ) + (bb1 c1 − bb2 c1 − bb1 c 2 + 4bb2 c 2 )
= a (a1 c1 − a2 c1 − a1 c 2 + 4a2 c 2 ) + b (b1 c1 − b2 c1 − b1 c 2 + 4b2 c 2 )
= a (α, γ ) + b (β, γ ). [from (1)]
(iii) Non-negativity: We have
(α , α) = a1 a1 − a2 a1 − a1 a2 + 4a2 a2 = a12 − 2a1 a2 + 4a2 2
= (a1 − a2 )2 + 3a2 2 . ...(2)
Now (2) is a sum of two non-negative real numbers. Therefore it is ≥ 0. Thus
(α, α) ≥ 0.
Also (α , α) = 0 ⇒ (a1 − a2 )2 + 3a2 2 = 0
⇒ (a1 − a2 )2 = 0, 3a2 2 = 0 ⇒ a1 − a2 = 0, a2 = 0
⇒ a1 = 0, a2 = 0 ⇒ α = 0.
Hence the product defined in (1) is an inner product on V2 (R). Also with respect to
this inner product V2 (R) is an inner product space.
Note: There can be defined more than one inner product on a vector space. For
example, we have the standard inner product on V2 (R) and the inner product
defined in example 3.
Illustration 3: Let V (C) be the vector space of all continuous complex-valued functions on
the unit interval, 0 ≤ t ≤ 1. If f (t) , g (t) ∈ V , let us define
( f (t), g (t)) = ∫₀¹ f (t) ḡ (t) dt. …(1)

We shall show that all the postulates of an inner product hold in (1).
(i) Conjugate Symmetry: We have
( g (t), f (t)) = ∫₀¹ g (t) f̄ (t) dt [from (1)]
∴ the conjugate of ( g (t), f (t)) = the conjugate of ∫₀¹ g (t) f̄ (t) dt = ∫₀¹ ḡ (t) f (t) dt
= ∫₀¹ f (t) ḡ (t) dt = ( f (t), g (t))

(ii) Linearity: Let a, b ∈ C and h (t) ∈ V . Then


(af (t) + bg (t), h (t)) = ∫₀¹ [af (t) + bg (t)] h̄ (t) dt
= a ∫₀¹ f (t) h̄ (t) dt + b ∫₀¹ g (t) h̄ (t) dt
= a ( f (t), h (t)) + b ( g (t), h (t)).


(iii) Non-negativity: We have
( f (t), f (t)) = ∫₀¹ f (t) f̄ (t) dt = ∫₀¹ | f (t)|^2 dt …(2)

Since | f (t)|2 ≥ 0 for every t lying in the closed interval [0, 1], therefore (2) ≥ 0.
Thus ( f (t), f (t)) ≥ 0
Also ( f (t), f (t)) = 0
⇒ ∫₀¹ | f (t)|^2 dt = 0

⇒ | f (t)|2 = 0 for every t lying in [0, 1]


⇒ f (t) = 0 for every t lying in [0, 1] ⇒ f (t) = 0.
Hence the product defined in (1) is an inner product on V (C).

7.3 Norm or Length of a Vector in an Inner Product Space


Consider the vector space V3 (R) with standard inner product defined on it. If
α = (a1 , a2 , a3 ) ∈ V3 (R), we have (α , α) = a1^2 + a2^2 + a3^2 .
Now we know that in the three dimensional Euclidean space √ (a1^2 + a2^2 + a3^2 ) is
the length of the vector α = (a1 , a2 , a3 ). Taking motivation from this fact, we make
the following definition.
Definition: Let V be an inner product space. If α ∈ V, then the norm or the length of the
vector α , written as || α|| , is defined as the positive square root of (α, α) i.e.,
|| α|| = √ (α, α).
Unit vector. Definition: Let V be an inner product space. If α ∈ V is such that|| α|| = 1 ,
then α is called a unit vector or is said to be normalized. Thus in an inner product space a vector
is called a unit vector if its length is 1.
Theorem 1: In an inner product space V ( F ), prove that
(i) (aα − b β, γ ) = a (α, γ ) − b (β, γ ) (ii) (α, a β + b γ ) = \overline{a} (α, β) + \overline{b} (α, γ ).
Proof: (i) We have
(aα − bβ, γ ) = (aα + (− b) β, γ )
= a (α, γ ) + (− b) (β, γ ) [by linearity property]
= a (α, γ ) − b (β, γ ).
(ii) (α, aβ + b γ ) = \overline{(aβ + b γ , α)} [by conjugate symmetry]
= \overline{a (β, α) + b (γ , α)} [by linearity property]
= \overline{a} \overline{(β, α)} + \overline{b} \overline{(γ , α)}
= \overline{a} (α, β) + \overline{b} (α, γ ).
Note 1: If F = R, then the result (ii) can be simply read as
(α , aβ + b γ ) = a (α , β) + b (α , γ ).
Note 2: Similarly it can be proved that (α , aβ − b γ ) = \overline{a} (α , β) − \overline{b} (α , γ ).
Also (α , β + γ ) = (α , 1β + 1γ ) = 1 (α, β) + 1 (α, γ ) = (α , β) + (α , γ ).
Theorem 2: In an inner product space V ( F ), prove that
(i) ||α|| ≥ 0; and ||α|| = 0 if and only if α = 0.
(ii) || aα|| = | a |.|| α||.
Proof: (i) We have ||α|| = √ (α, α) [by def. of norm]
⇒ ||α||² = (α, α)
⇒ ||α||² ≥ 0 [∵ (α, α) ≥ 0]
⇒ ||α|| ≥ 0.
Also (α, α) = 0 iff α = 0
∴ || α ||2 = 0 iff α = 0 i.e., ||α|| = 0 iff α = 0.
Thus in an inner product space, || α|| > 0 iff α ≠ 0.
(ii) We have || aα||² = (aα, aα) [by def. of norm]
= a (α, aα) [by linearity property]
= a \overline{a} (α, α) [by theorem 1]
= | a |² ⋅ || α||².
Thus || aα||² = | a |² ⋅ || α||².
Taking square root, we get || aα|| = | a |.|| α||.
Note: If α is any non-zero vector of an inner product space V, then α/||α|| is a unit
vector in V. We have ||α|| ≠ 0 because α ≠ 0. Therefore α/||α|| ∈ V.
Now ( α/||α|| , α/||α|| ) = (1/||α||) (1/||α||) (α, α) = (1/||α||²) ||α||² = 1.
Therefore || α/||α|| || = 1 and thus α/||α|| is a unit vector.
For example if α = (2, 1, 2) is a vector in V3 (R) with standard inner product, then
||α|| = √ (α, α) = √ (4 + 1 + 4) = 3 .
Therefore (1/3) (2, 1, 2), i.e., (2/3 , 1/3 , 2/3), is a unit vector.
Theorem 3: Cauchy-Schwarz’s Inequality. In an inner product space V ( F), prove
that |(α, β)| ≤ || α || ⋅ || β ||. (Lucknow 2011)
Proof: If α = 0, then || α|| = 0. Also in that case
(α, β) = (0, β) = (0 ⋅ 0, β) = 0 (0, β) = 0.
∴ |(α, β)| = 0.
Thus if α = 0, then |(α, β)| = 0 and || α|||| β|| = 0.
∴ the inequality |(α, β) |≤||α | | ⋅||β || is valid.
Now let α ≠ 0. Then ||α ||> 0.
Therefore 1/||α||² is a positive real number. Consider the vector γ = β − [(β, α)/||α||²] α.
We have
(γ , γ) = ( β − [(β, α)/||α||²] α , β − [(β, α)/||α||²] α )
= (β, β) − \overline{[(β, α)/||α||²]} (β, α) − [(β, α)/||α||²] (α, β)
+ [(β, α)/||α||²] \overline{[(β, α)/||α||²]} (α, α) [by linearity property]
= || β ||² − (α, β)(β, α)/||α||² − (β, α)(α, β)/||α||² + [(β, α)(α, β)/(||α||² ⋅ ||α||²)] ||α||²
= || β ||² − (α, β) \overline{(α, β)}/||α||², since the second and the fourth terms cancel
= || β ||² − |(α, β)|²/||α||² [∵ z \overline{z} = |z|² if z ∈ C]
But (γ, γ) = ||γ||² ≥ 0
∴ || β ||² − |(α, β)|²/||α||² ≥ 0 or || β ||² ⋅ ||α||² ≥ |(α, β)|²
or |(α, β)| ≤ ||α|| ⋅ ||β||, taking square root of both sides.
Schwarz’s inequality has very important applications in mathematics.
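As a numerical illustration (a sketch only, assuming Python with NumPy and the standard inner product on Vn (R)), the inequality can be tested on randomly chosen vectors, and equality is seen when one vector is a scalar multiple of the other.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.normal(size=5)
    b = rng.normal(size=5)
    # |(a, b)| <= ||a|| * ||b||  (Cauchy-Schwarz)
    assert abs(a @ b) <= np.linalg.norm(a) * np.linalg.norm(b) + 1e-12

# Equality case: b is a scalar multiple of a.
a = rng.normal(size=5)
b = 2.5 * a
print(np.isclose(abs(a @ b), np.linalg.norm(a) * np.linalg.norm(b)))   # True
```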
Theorem 4: Triangle inequality. If α, β are vectors in an inner product space V, prove
that ||α + β||≤ || α|| + ||β | |. (Lucknow 2007)
Proof: We have ||α + β||² = (α + β, α + β) [by def. of norm]
= (α , α + β) + (β, α + β) [by linearity property]
= (α , α) + (α , β) + (β, α) + (β, β) [by theorem 1]
= ||α||² + (α, β) + \overline{(α, β)} + ||β||² [∵ (β, α) = \overline{(α, β)}]
= ||α||² + 2 Re (α, β) + ||β||² [∵ z + \overline{z} = 2 Re z]
≤ ||α||² + 2 |(α, β)| + ||β||² [∵ Re z ≤ | z |]
≤ ||α||² + 2 ||α|| ⋅ ||β|| + ||β||²
[∵ by Schwarz inequality |(α,β)|≤ ||α || ⋅ ||β || ]
= (||α|| + || β||)2 .
Thus ||α + β||2 ≤ (||α|| + ||β||)2 .
Taking square root of both sides, we get
||α + β || ≤ ||α || + ||β||.
Geometrical interpretation: Let α, β be the vectors in the inner product space
V3 (R) with standard inner product defined on it. Suppose the vectors α, β
represent the sides AB and BC respectively of a triangle ABC in the three
dimensional Euclidean space. Then ||α || = AB,|| β || = BC. Also the vector α + β
represents the side AC of the triangle ABC and || α + β || = AC. Then from the
above inequality, we have AC ≤ AB + BC.
Special Cases Of Cauchy-Schwarz’s Inequality.
Case I. Consider the vector space Vn (C) with standard inner product defined on it.
Let α = (a1 , a2 , … , a n ) and β = (b1 , b2 ,..., b n ) ∈ Vn (C)
Then a1 , … , a n , b1 , … , b n are all complex numbers. (Lucknow 2006)
We have (α, β) = a1 \overline{b1} + a2 \overline{b2} + … + a n \overline{bn} .
|(α, β)|² = | a1 \overline{b1} + a2 \overline{b2} + … + a n \overline{bn} |².
Also ||α||² = (α, α) = a1 \overline{a1} + … + a n \overline{an} = | a1 |² + … + | a n |².
Similarly ||β||² = | b1 |² + … + | b n |².
By Schwarz’s inequality, we have |(α, β)|² ≤ ||α||² ⋅ ||β||².
∴ If a1 , … , a n , b1 , … , b n are complex numbers, then
| a1 \overline{b1} + a2 \overline{b2} + … + a n \overline{bn} |² ≤ (| a1 |² + … + | a n |²)(| b1 |² + … + | b n |²).
This inequality is known as Cauchy’s inequality.
If a1 , … , a n , b1 , … , b n are all real numbers, then this inequality gives that
(a1 b1 + a2 b2 + .... + a n b n ) 2 ≤ (a12 + .... + a n2 ) (b12 + .... + b n2 ).
Case II. Consider the vector space V (C) of all continuous, complex-valued
functions on the unit interval 0 ≤ t ≤ 1, with inner product defined by
( f (t), g (t)) = ∫₀¹ f (t) \overline{g (t)} dt.
We have || f (t)||² = ( f (t), f (t)) = ∫₀¹ f (t) \overline{f (t)} dt = ∫₀¹ | f (t)|² dt.
Similarly || g (t)||² = ∫₀¹ | g (t)|² dt.
Also |( f (t), g (t))|² = | ∫₀¹ f (t) \overline{g (t)} dt |².
By Schwarz’s inequality, we have
|( f (t), g (t))|² ≤ || f (t)||² ⋅ || g (t)||².
Therefore if f (t), g (t) are continuous complex valued functions on the unit
interval [0, 1], then
| ∫₀¹ f (t) \overline{g (t)} dt |² ≤ ( ∫₀¹ | f (t)|² dt ) ( ∫₀¹ | g (t)|² dt ).
Case III. Consider the vector space V3 (R) with standard inner product defined on
it i.e., if
α = (a1 , a2 , a3 ), β = (b1 , b2 , b3 ) ∈ V3 (R),
then (α, β) = a1 b1 + a2 b2 + a3 b3 . ...(1)
We see that (1) is nothing but the dot product of two vectors α and β in three
dimensional Euclidean space. If θ is the angle between the non-zero vectors α and β,
then we know that
cos² θ = (a1 b1 + a2 b2 + a3 b3)² / [(a1² + a2² + a3²)(b1² + b2² + b3²)] = {(α, β)}² / [(α, α)(β, β)]
= |(α, β)|² / [||α||² ||β||²] [∵ if (α, β) is real then {(α, β)}² = |(α, β)|²]
But by Schwarz’s inequality, we have
|(α, β)|² ≤ ||α||² ⋅ ||β||²
∴ cos² θ ≤ ||α||² ||β||² / (||α||² ||β||²) i.e., cos² θ ≤ 1.
Thus the absolute value of the cosine of a real angle cannot be greater than 1.
Normed vector space. Definition: Let V ( F ) be a vector space where F is either the
field of real numbers or the field of complex numbers. Then V is said to be a normed vector space
if to each vector α there corresponds a real number denoted by|| α ||called the norm of α in such
a manner that
(1) || α|| ≥ 0 and || α || = 0 ⇒ α = 0.
(2) ||aα|| = |a| ⋅ ||α||, ∀ a ∈ F.
(3) || α + β || ≤ || α || + || β ||, ∀ α, β ∈ V.
We have shown in theorems 2 and 4 that the norm of an inner product space
satisfies all the three conditions of the norm of a normed vector space. Hence every
inner product space is a normed vector space.
But the converse is not true i.e., every normed vector space is not an inner product
space. For example, consider a normed vector space R2 (R) with a norm defined by
|| α || = max. (| a |,| b |), where α = (a, b).
It is impossible to define an inner product, denoted by ( , ), on R2 (R) such that
(α , α) = || α ||².
Distance in an inner product space:
Definition: Let V ( F) be an inner product space. Then we define the distance d (α , β)
between two vectors α and β by
d(α, β) = ||α − β || = √ [(α − β, α − β)].
Theorem 5: In an inner product space V ( F ) we define the distance d (α, β) from α to β by
d (α, β) = || α − β || . Prove that
(1) d (α, β)≥ 0 and d (α, β) = 0 iff α = β.
(2) d (α, β) = d (β, α).
(3) d (α, β) ≤ d (α, γ ) + d (γ , β). [Triangle inequality]
(4) d (α, β) = d (α + γ , β + γ ).
Proof: (1) We have d(α, β) = ||α − β||. [by definition]
Now || α − β || ≥ 0 and || α − β || = 0 if and only if α − β = 0.
d (α, β) ≥ 0 and d (α, β) = 0 if and only if α = β.
(2) We have d(α, β) = || α − β | | (by def.)
= ||(− 1) (β − α)||
= | − 1||| β − α|| [∵ || aα || = | a | || α ||]
=|| β − α | | = d (β, α )
(3) We have d(α,β) = ||α − β|| = ||(α − γ ) + (γ − β)||
≤ || α − γ || + || γ − β|| [by theorem 4]
= d (α, γ ) + d (γ , β).
∴ d(α, β) ≤ d(α, γ ) + d(γ ,β)
(4) We have d (α, β) = ||α − β|| = ||(α + γ ) − (β + γ )|| = d (α + γ , β + γ ).
Matrix of an Inner Product:
Before defining the matrix of an inner product, we shall define some special types
of matrices over the complex field C.
Conjugate transpose of a Matrix: Definition: Let A = [a ij ] n × n be a square matrix
of order n over the field C of complex numbers. Then the matrix [ \overline{a} ji ] n × n is called the
conjugate transpose of A and we shall denote it by A* .
Thus in order to obtain A* from A, we should first replace each element of A by its
conjugate complex and then we should write the transpose of the new matrix.
If in place of the field C we take the field R of the real numbers, then \overline{a} ji = a ji . So in
this case A* will simply be the transpose of the matrix A.
If A = A* , then the matrix A is said to be a self-adjoint matrix or a Hermitian
matrix.
Symmetric matrix: Definition: A square matrix A over a field F is said to be a
symmetric matrix if it is equal to its transpose i.e., if
A = AT .
Obviously a Hermitian matrix over the field of real numbers is a symmetric matrix.
Theorem: If B = {α1 , α 2 , … , α n } is an ordered basis of a finite dimensional vector space
V, then an inner product on V is completely determined by the values which it takes on pairs of
vectors in B.
Proof: Suppose we are given a particular inner product on V. We shall show that
this inner product on V is completely determined by the values
g ij = (α j , α i ) where i = 1, 2, … , n; j = 1, 2, … , n.
Let α = Σ_{j=1}^n x j α j and β = Σ_{i=1}^n y i α i be any two vectors in V. Then
(α, β) = ( Σ_{j=1}^n x j α j , β ) = Σ_{j=1}^n x j (α j , β),
by linearity property of the inner product
= Σ_{j=1}^n x j ( α j , Σ_{i=1}^n y i α i ) = Σ_{j=1}^n x j Σ_{i=1}^n \overline{y i} (α j , α i)
[See theorem 1, part (ii) of 7.3]
= Σ_{j=1}^n Σ_{i=1}^n \overline{y i} g ij x j
= Y* G X, ...(1)
where X , Y are the coordinate matrices of α, β in the ordered basis B and G is the
matrix [ g ij ] n × n .
From (1) we observe that the inner product (α, β) is completely determined by the
matrix G i.e., by the scalars g ij . Hence the result of the theorem.
Definition: Let B = { α1 , … , α n } be an ordered basis for an n-dimensional inner product
space V. The matrix G = [ g ij ] n × n , where g ij = (α j , α i ) , is called the matrix of the
underlying inner product in the ordered basis B.
We observe that g ij = (α j , α i) = \overline{(α i , α j)} = \overline{g ji} . Therefore the matrix G is such that
G* = G. Thus G is a Hermitian matrix.
Further we know that in an inner product space, (α, α) > 0 if α ≠ 0. Therefore from
the relation (1) given above we observe that the matrix G is such that
X * GX > 0, X ≠ 0. ...(2)
From the relation (2) we conclude that the matrix G is invertible. For if G is not
invertible then there exists an X ≠ 0 such that GX = 0. For any such X the relation
(2) is impossible. Hence G must be invertible.
In the last, from the relation (2) we observe that if x1 , x2 , … , x n are scalars not all of
which are zero, then
Σ_{i=1}^n Σ_{j=1}^n \overline{x i} g ij x j > 0. ...(3)
Now suppose that out of the n scalars x1 , … , x n we take x i = 1 and each of the
remaining n − 1scalars is taken as 0. Then from (3) we conclude that g ii > 0. Thus
g ii > 0 for each i = 1, … , n. Hence each entry along the principal diagonal of the
matrix G is positive.
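These properties of G are easy to observe numerically. The sketch below is only an illustration (assuming Python with NumPy); it uses the standard inner product on V3 (R) and an arbitrarily chosen ordered basis, so G reduces to the familiar Gram matrix of the basis vectors.

```python
import numpy as np

# An ordered basis B = {b1, b2, b3} of V3(R), chosen only for illustration.
B = [np.array([1.0, 0.0, 1.0]),
     np.array([1.0, 0.0, -1.0]),
     np.array([0.0, 3.0, 4.0])]

# g_ij = (alpha_j, alpha_i); for the standard real inner product this is b_j . b_i.
G = np.array([[B[j] @ B[i] for j in range(3)] for i in range(3)])

print(G)
print(np.allclose(G, G.T))                  # G is symmetric (Hermitian over R)
print(np.all(np.linalg.eigvalsh(G) > 0))    # positive definite, hence invertible
print(np.diag(G))                           # each diagonal entry is positive
```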
Example 1: Show that we can always define an inner product on a finite dimensional vector
space real or complex.
Solution: Let V be a finite dimensional vector space over the field F real or complex.
Let B = { α1 , ..., α n } be a basis for V.
Let α, β ∈ V. Then we can write α = a1α1 + .... + a nα n ,
and β = b1α1 + .... + b nα n
where a1 , … , a n and b1 , … , b n are uniquely determined elements of F. Let us define
(α, β) = a1 \overline{b1} + .... + a n \overline{bn} . ...(1)
We shall show that (1) satisfies all the conditions for an inner product.
(i) Conjugate Symmetry: We have
(β, α) = b1 \overline{a1} + … + b n \overline{an} .
∴ \overline{(β, α)} = \overline{b1 \overline{a1} + … + b n \overline{an}} = \overline{b1} a1 + … + \overline{bn} a n
= a1 \overline{b1} + … + a n \overline{bn} = (α, β).
(ii) Linearity: Let γ = c1α1 + .... + c nα n ∈ V and a, b ∈ F. We have
aα + bβ = a (a1α1 + .... + a nα n ) + b (b1α1 + .... + b nα n )
= (aa1 + bb1 ) α1 + … + (aa n + bb n ) α n .
∴ (aα + bβ, γ ) = (aa1 + bb1 ) c1 + … + (aa n + bb n ) c n
= a (a1 c1 + … + a n c n ) + b (b1 c1 + .... + b n c n )
= a (α, γ ) + b (β, γ ).
(iii) Non-negativity: We have
(α, α) = a1 \overline{a1} + … + a n \overline{an} = | a1 |² + .... + | a n |² ≥ 0.
Also (α, α) = 0 ⇒ | a1 | 2 + .... + | a n | 2 = 0
⇒ | a1 | 2 = 0, … ,| a n | 2 = 0
⇒ a1 = 0, … , a n = 0 ⇒ α = 0.
Hence (1) is an inner product on V.
Example 2: In V2 ( F ) define for α = (a1 , a2 ) and β = (b1 , b2 ) ,
(α, β) = 2a1 \overline{b1} + a1 \overline{b2} + a2 \overline{b1} + a2 \overline{b2} .
Show that this defines an inner product on V2 ( F ).
Solution: (1) Conjugate Symmetry: We have
(β, α) = 2b1 \overline{a1} + b1 \overline{a2} + b2 \overline{a1} + b2 \overline{a2} .
∴ \overline{(β, α)} = \overline{2b1 \overline{a1} + b1 \overline{a2} + b2 \overline{a1} + b2 \overline{a2}} = 2\overline{b1} a1 + \overline{b1} a2 + \overline{b2} a1 + \overline{b2} a2
= 2a1 \overline{b1} + a1 \overline{b2} + a2 \overline{b1} + a2 \overline{b2} = (α, β).
(2) Linearity: Let a, b ∈ F and γ = (c1 , c 2 ) ∈ V2 ( F ). Then
aα + bβ = a (a1 , a2 ) + b (b1 , b2 ) = (aa1 + bb1 , aa2 + bb2 ).
∴ (aα + bβ, γ ) = 2 (aa1 + bb1 ) c1 + (aa1 + bb1 ) c 2
+ (aa2 + bb2 ) c1 + (aa2 + bb2 ) c 2
= a (2a1 c1 + a1 c 2 + a2 c1 + a2 c 2 ) + b (2b1 c1 + b1 c 2 + b2 c1 + b2 c 2 )
= a (α, γ ) + b (β, γ ).
(3) Non-negativity: We have
(α, α) = 2a1 \overline{a1} + a1 \overline{a2} + a2 \overline{a1} + a2 \overline{a2} = a1 \overline{a1} + (a1 + a2)(\overline{a1} + \overline{a2})
= | a1 |² + (a1 + a2) \overline{(a1 + a2)} = | a1 |² + | a1 + a2 |² ≥ 0.
Also (α, α) = 0 ⇒ | a1 |² + | a1 + a2 |² = 0
⇒ a1 = 0, a1 + a2 = 0 ⇒ a1 = 0, a2 = 0 ⇒ α = 0.
Hence the result.
Example 3: If α and β are vectors in an inner product space then show that
||α + β||2 + ||α − β||2 = 2 ||α||2 + 2 ||β||2 . (Lucknow 2010)
(Parallelogram law)
Solution: We have || α + β|| 2 = (α + β, α + β) [by def. of norm]
= (α, α + β) + (β, α + β) [by linearity property]
= (α, α) + (α, β) + (β, α) + (β, β)
= || α||2 + (α, β) + (β, α) + ||β||2 ...(1)
Also || α − β|| 2 = (α − β, α − β) = (α, α − β) − (β, α − β)
= (α, α) − (α, β) − (β, α) + (β, β)
= || α||² − (α, β) − (β, α) + ||β||² ...(2)
Adding (1) and (2), we get
||α + β||2 + ||α − β||2 = 2 || α||2 + 2 ||β ||2 .
Geometrical interpretation: Let α and β be vectors in the vector space V2 (R) with
standard inner product defined on it. Suppose the vector α is represented by the
side AB and the vector β by the side BC of a parallelogram ABCD. Then the vectors
α + β and α − β represent the diagonals AC and DB of the parallelogram.
∴ AC 2 + DB2 = 2 AB2 + 2 BC 2 ,
i.e., the sum of the squares of the sides of a parallelogram is equal to the sum of the
squares of its diagonals.
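The parallelogram law of Example 3 is also easy to verify numerically; the short sketch below assumes Python with NumPy and uses randomly chosen vectors of V3 (R) with the standard inner product.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=3), rng.normal(size=3)

lhs = np.linalg.norm(a + b) ** 2 + np.linalg.norm(a - b) ** 2
rhs = 2 * np.linalg.norm(a) ** 2 + 2 * np.linalg.norm(b) ** 2
print(np.isclose(lhs, rhs))      # True: ||a+b||^2 + ||a-b||^2 = 2||a||^2 + 2||b||^2
```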
Example 4: If α , β are vectors in an inner product space V ( F) and a, b ∈ F, then prove
that
(i) || aα + bβ||² = | a |² || α||² + a\overline{b} (α, β) + \overline{a}b (β, α) + | b |² || β||²,
(ii) Re (α, β) = (1/4) || α + β||² − (1/4) || α − β||².
Solution: (i) We have ||aα + bβ||2 = (aα + bβ, aα + bβ)
= a (α , aα + bβ) + b (β, aα + bβ)
= a { \overline{a} (α, α) + \overline{b} (α, β)} + b { \overline{a} (β, α) + \overline{b} (β, β)}
= a\overline{a} (α, α) + a\overline{b} (α, β) + \overline{a}b (β, α) + b\overline{b} (β, β)
= |a|² ||α||² + a\overline{b} (α, β) + \overline{a}b (β, α) + |b|² ||β||².
(ii) We have || α + β||² = (α + β, α + β) = (α, α + β) + (β, α + β)
= (α, α) + (α, β) + (β, α) + (β, β) = ||α||² + (α, β) + \overline{(α, β)} + ||β||²
= ||α||² + 2 Re (α, β) + ||β||² ...(1)
Also || α − β||² = (α − β, α − β) = (α, α − β) − (β, α − β)
= (α, α) − (α, β) − (β, α) + (β, β) = ||α||² − {(α, β) + (β, α)} + ||β||²
= ||α||² − {(α, β) + \overline{(α, β)}} + ||β||²
= ||α||² − 2 Re (α, β) + ||β||² ...(2)
Subtracting (2) from (1), we get
|| α + β ||² − || α − β ||² = 4 Re (α, β).
∴ Re (α, β) = (1/4) || α + β ||² − (1/4) || α − β||².
Note: If F = R, then Re (α, β) = (α , β).
Example 5: Suppose that α and β are vectors in an inner product space V. If
|(α, β)| = || α ||||β ||(that is, if the Schwarz inequality reduces to an equality), then α and β
are linearly dependent.
Solution: It is given that |(α,β)| = || α || ⋅ || β ||. …(1)
If α = 0, then (1) is satisfied. Therefore if α and β satisfy (1), then α can be 0 also. If
α = 0, the vectors α and β are linearly dependent because any set of vectors
containing zero vector is linearly dependent.
Let us now suppose that α and β satisfy (1) and α ≠ 0.
If α ≠ 0, then || α || > 0. Consider the vector
γ = β − [(β, α)/||α||²] α.
We have (γ, γ) = ( β − [(β, α)/||α||²] α , β − [(β, α)/||α||²] α )
= (β, β) − \overline{[(β, α)/||α||²]} (β, α) − [(β, α)/||α||²] (α, β)
+ [(β, α) \overline{(β, α)}/(||α||² ||α||²)] (α, α)
= || β ||² − (α, β)(β, α)/||α||² − (β, α)(α, β)/||α||² + (β, α)(α, β)/||α||²
= || β ||² − |(α, β)|²/||α||² = [|| β ||² ||α||² − |(α, β)|²] / ||α||² = 0. [from (1)]
Now (γ, γ) = 0 ⇒ γ = 0 ⇒ β − [(β, α)/||α||²] α = 0
⇒ α and β are linearly dependent.
Example 6: If in an inner product space the vectors α and β are linearly dependent, then
|(α, β)| = || α || ⋅ || β || .
Solution: If α = 0, then |( 0, β)| = 0 and || α|| = 0. Therefore the given result is
true.
Also if β = 0, then (α, 0) = \overline{(0, α)} = \overline{0} = 0 and || β || = 0.
So let us suppose that both α and β are non-zero vectors. Since they are linearly
dependent, therefore α = cβ where c is some scalar. We have
(α, β) = (cβ, β) = c (β, β) = c || β|| 2 .
∴ |(α, β)| = |c| ||β ||2 .
Also ||α || = ||cβ || = |c| ⋅ ||β ||.
∴ ||α ||||β || = |c| ⋅ || β ||2 .
Hence |(α, β)| = ||α|| ⋅ ||β||.
Comprehensive Exercise 1
1. If α = (a1 , a2 , … , a n ), β = (b1 , b2 , … , b n ) ∈ Vn (R), then prove that
(α, β) = a1 b1 + a2 b2 + .... + a n b n ...(1)
defines an inner product on Vn (R).
2. Show that for the vectors α = ( x1 , x2 ) and β = ( y1 , y2 ) from R2 the following
defines an inner product on R2 :
(α, β) = x1 y1 − x2 y1 − x1 y2 + 2 x2 y2 . ...(1)
3. Which of the following define inner products in V2 (R) ? Give reasons.
[Assume α = ( x1 , x2 ), β = ( y1 , y2 )] .
(a) (α, β) = x1 y1 + 2 x1 y2 + 2 x2 y1 + 5 x2 y2
(b) (α, β) = x12 − 2 x1 y2 − 2 x2 y1 + y12 .
(c) (α, β) = 2 x1 y1 + 5 x2 y2 .
(d) (α, β) = x1 y1 − 2 x1 y2 − 2 x2 y1 + 4 x2 y2 .
4. Let α = (a1 , a2 ) and β = (b1 , b2 ) be any two vectors ∈ V2 (C). Prove that
(α, β) = a1 \overline{b1} + (a1 + a2) \overline{(b1 + b2)} defines an inner product in V2 (C). Show
that the norm of the vector (3, 4) in this inner product space is √ (58).
5. Let V be an inner product space.
(a) Show that (0, β) = 0 for all β in V.
(b) Show that if (α, β) = 0 for all β in V, then α = 0.
6. Let V be an inner product space, and α, β be vectors in V. Show that α = β if
and only if (α, γ ) = (β, γ ) for every γ in V.
7. Normalize each of the following vectors in the Euclidean space R3 :
(i) (2, 1, − 1), (ii) (1/2, 2/3, − 1/4).
8. Let V (R) be a vector space of polynomials with inner product defined by
( f (t), g (t)) = ∫₀¹ f (t) g (t) dt.
If f (t) = t 2 + t − 4, g (t) = t − 1, then find ( f , g) and || g||.
9. If α, β be vectors in a real inner product space such that ||α || = ||β ||,
then prove that (α + β, α − β) = 0.
10. Prove that if α and β are vectors in a unitary space, then
(i) 4 (α, β ) = || α + β ||2 − || α − β ||2 + i|| α + iβ ||2 − i|| α − iβ ||2 .
(ii) (α, β) = Re (α, β) + i Re (α, iβ).
11. Show that any two vectors α, β of an inner product space are linearly
dependent if and only if |(α, β)| = || α || ⋅ || β ||.
12. If in an inner product space || α + β || = || α || + || β ||, then prove that the
vectors α and β are linearly dependent. Give an example to show that the
converse of this statement is false.
Answers 1
3. (a) and (c) are inner products, (b) and (d) are not
7. (i) (2/√6 , 1/√6 , − 1/√6) (ii) (1/√(109)) (6, 8, − 3)
8. 7/4 ; 1/√3
7.4 Orthogonality
(Lucknow 2008)
Definition: Let α and β be vectors in an inner product space V. Then α is said to be
orthogonal to β if (α , β) = 0.
The relation of orthogonality in an inner product space is symmetric. We have α is
orthogonal to β ⇒ (α, β) = 0 ⇒ \overline{(α, β)} = 0 ⇒ (β, α) = 0 ⇒ β is orthogonal to α.
So we can say that two vectors α and β in an inner product space are orthogonal if
(α, β) = 0.
Note 1: If α is orthogonal to β, then every scalar multiple of α is orthogonal to β. Let k be any
scalar. Then
(kα, β) = k (α, β) = k 0 = 0. [∵ (α, β) = 0]
Therefore kα is orthogonal to β.
Note 2: The zero vector is orthogonal to every vector. For every vector α in V, we have
(0, α) = 0.
Note 3: The zero vector is the only vector which is orthogonal to itself.
We have α is orthogonal to α ⇒ (α, α) = 0
⇒ α = 0, by def. of an inner product space.
Definition: A vector α is said to be orthogonal to a set S if it is orthogonal to each vector in S.
Similarly two subspaces are called orthogonal if every vector in each is orthogonal to every
vector in the other.
Orthogonal set. Definition: Let S be a set of vectors in an inner product space V. Then S
is said to be an orthogonal set provided that any two distinct vectors in S are orthogonal.
Theorem 1: Let S = {α1 , … , α m } be an orthogonal set of non-zero vectors in an inner
product space V. If a vector β in V is in the linear span of S, then
β = Σ_{k=1}^m [(β, α k)/|| α k ||²] α k .
Proof: Since β ∈ L (S), therefore β can be expressed as a linear combination of the
vectors in S. Let
β = c1 α1 + .... + c m α m = Σ_{j=1}^m c j α j . ...(1)
We have for each k where 1 ≤ k ≤ m,
(β, α k) = ( Σ_{j=1}^m c j α j , α k ) = Σ_{j=1}^m c j (α j , α k)
[by linearity property of inner product]
= c k (α k , α k ).
[On summing with respect to j. Note that S is an orthogonal
set of non-zero vectors and so (α j , α k ) = 0 if j ≠ k]
Now α k ≠ 0. Therefore (α k , α k) ≠ 0. Thus || α k ||² ≠ 0.
∴ c k = (β, α k)/|| α k ||², 1 ≤ k ≤ m.
Putting these values of c1 , ..., c m in (1), we get
β = Σ_{k=1}^m [(β, α k)/|| α k ||²] α k .

Theorem 2: Any orthogonal set of non-zero vectors in an inner product space V is linearly
independent.
Proof: Let S be an orthogonal set of non-zero vectors in an inner product space V.
Let S1 = {α1 , … , α m } be a finite subset of S containing m distinct vectors. Let
Σ_{j=1}^m c j α j = c1 α1 + … + c m α m = 0. ...(1)
We have, for each k where 1 ≤ k ≤ m,
( Σ_{j=1}^m c j α j , α k ) = Σ_{j=1}^m c j (α j , α k) = c k (α k , α k) [∵ (α j , α k) = 0 if j ≠ k]
= c k || α k ||².
But from (1), Σ_{j=1}^m c j α j = 0. Therefore ( Σ_{j=1}^m c j α j , α k ) = (0, α k) = 0.
∴ (1) implies that c k || α k ||² = 0, 1 ≤ k ≤ m
⇒ c k = 0 [∵ α k ≠ 0 ⇒ || α k ||² ≠ 0]
∴ The set S1 is linearly independent. Thus every finite subset of S is linearly
independent. Therefore S is linearly independent.
Orthonormal set. Definition: Let S be a set of vectors in an inner product space V. Then
S is said to be an orthonormal set if
(i) α ∈ S ⇒ || α|| = 1 i.e., (α, α) = 1,
and (ii) α, β ∈ S and α ≠ β ⇒ (α, β) = 0.
Thus an orthonormal set is an orthogonal set with the additional property that
each vector in it is of length 1. In other words a set S consisting of mutually orthogonal unit
vectors is called an orthonormal set. Obviously an orthonormal set cannot contain zero
vector because || 0 || = 0.
A finite set S = {α1 , … , α m } is orthonormal if
(α i , α j ) = δ ij where δ ij = 1 if i = j and δ ij = 0 if i ≠ j.
Existence of an orthonormal set: Every inner product space V which is not equal to zero
space possesses an orthonormal set.
 α 
Let 0 ≠ α ∈ V. Then || α || ≠ 0. The set   containing only one vector is
|| α||
necessarily an orthonormal set.
We have ( α/||α|| , α/||α|| ) = (1/||α||) (1/||α||) (α, α) = (1/||α||²) ||α||² = 1.
Theorem 3: Let S = {α1 , ..., α m }be an orthonormal set of vectors in an inner product space
V. If a vector β is in the linear span of S, then β = Σ_{k=1}^m (β, α k) α k .
Proof: Since β ∈ L (S), therefore β can be expressed as a linear combination of the
vectors in S. Let
β = c1 α1 + .... + c m α m = Σ_{j=1}^m c j α j . ...(1)
We have for each k where 1 ≤ k ≤ m,
(β, α k) = ( Σ_{j=1}^m c j α j , α k ) = Σ_{j=1}^m c j (α j , α k) [by linearity of inner product]
= Σ_{j=1}^m c j δ jk [∵ S is an orthonormal set]
= c k . [On summing with respect to j and remembering
that δ jk = 1 if j = k and δ jk = 0 if j ≠ k]
Putting the values of c1 , … , c m in (1), we get
β = Σ_{k=1}^m (β, α k) α k .
Theorem 4: If S = {α1 , … , α m } is an orthonormal set in V and if β ∈ V then
γ = β − Σ_{i=1}^m (β, α i) α i is orthogonal to each of α1 , … , α m and, consequently, to the subspace
spanned by S.
Proof: We have for each k where 1 ≤ k ≤ m,
(γ , α k) = ( β − Σ_{i=1}^m (β, α i) α i , α k )
= (β, α k) − ( Σ_{i=1}^m (β, α i) α i , α k ) [by linearity of inner product]
= (β, α k) − Σ_{i=1}^m (β, α i) (α i , α k) [by linearity of inner product]
= (β, α k) − Σ_{i=1}^m (β, α i) δ ik [∵ α i , α k belong to an orthonormal set]
= (β, α k ) − (β, α k ) [∵ δ ik = 1 if i = k and δ ik = 0 if i ≠ k]
= 0.
Hence the first part of the theorem.
Now let δ be any vector in the subspace spanned by S i.e., let δ ∈ L (S). Then
δ = Σ_{i=1}^m a i α i where each a i is some scalar.
We have (γ , δ) = ( γ, Σ_{i=1}^m a i α i ) = Σ_{i=1}^m \overline{a i} (γ , α i) = Σ_{i=1}^m \overline{a i} ⋅ 0 = 0.
Thus γ is orthogonal to every vector δ in L (S). Therefore γ is orthogonal to L (S).
Theorem 5: Any orthonormal set of vectors in an inner product space is linearly
independent.
Proof: Let S be any orthonormal set of vectors in an inner product space V. Let
S1 = {α1 , … , α m } be a finite subset of S containing m distinct vectors. Let
Σ_{j=1}^m c j α j = c1 α1 + … + c m α m = 0. ...(1)
We have, for each k where 1 ≤ k ≤ m,
( Σ_{j=1}^m c j α j , α k ) = Σ_{j=1}^m c j (α j , α k) [by linearity of inner product]
= Σ_{j=1}^m c j δ jk [∵ (α j , α k) = δ jk ]
= c k . [On summing with respect to j]
But from (1), Σ_{j=1}^m c j α j = 0. Therefore ( Σ_{j=1}^m c j α j , α k ) = (0, α k) = 0.
∴ (1) implies that c k = 0 for each 1≤ k ≤ m.
∴ the set S1 is linearly independent. Thus every finite subset of S is linearly
independent. Therefore S is linearly independent.
Complete orthonormal set: Definition: An orthonormal set is said to be complete if it
is not contained in any larger orthonormal set.
Orthogonal dimension of a finite-dimensional vector space: Definition:
Let V be a finite-dimensional inner product space of dimension n. If S is any orthonormal set
in V then S is linearly independent. Therefore S cannot contain more than n distinct vectors
because in an n-dimensional vector space a linearly independent set cannot contain more than
n vectors.
The orthogonal dimension of V is defined as the largest number of vectors an
orthonormal set in V can contain.
Obviously the orthogonal dimension of V will be ≤ n where n is the linear dimension
of V.
The following theorem gives us a characterization of completeness i.e., it gives us
equivalent definitions of completeness.
Theorem 6: If S = {α1 , … , α n }is any finite orthonormal set in an inner product space V,
then the following six conditions on S are equivalent:
(i) The orthonormal set S is complete.
(ii) If (β, α i ) = 0 for i = 1, … , n, then β = 0.
(iii) The linear span of S is equal to V i.e., L (S) = V .
(iv) If β ∈ V, then β = Σ_{i=1}^n (β, α i) α i .
(v) If β and γ are in V, then (β, γ) = Σ_{i=1}^n (β, α i) (α i , γ).
(vi) If β is in V, then Σ_{i=1}^n |(β, α i)|² = || β ||².
Proof: (i) ⇒ (ii).
It is given that S is a complete orthonormal set. Let β ∈ V and (β, α i ) = 0 for each
i = 1, ..., n.
Then β is orthogonal to each of the vectors α1 , … , α n .
If β ≠ 0, then adjoining the vector β/||β|| to the set S we obtain an orthonormal set
larger than S. This contradicts the given statement that S is a complete
orthonormal set. Hence β = 0.
(ii) ⇒ (iii).
It is given that if (β, α i ) = 0 for i = 1, … , n, then β = 0. To prove that L (S ) = V .
Let γ be any vector in V. Consider the vector δ = γ − Σ_{i=1}^n (γ , α i) α i .
We know that δ is orthogonal to each of the vectors α1 , … , α n i.e., (δ, α i ) = 0 for each
i = 1, ..., n. Therefore according to the given statement δ = 0. This gives
γ = Σ_{i=1}^n (γ , α i) α i . Thus every vector γ in V can be expressed as a linear
combination of α1 , … , α n . Therefore L (S) = V .
(iii) ⇒ (iv).
It is given that L (S) = V . Therefore if β ∈ V, then β can be expressed as a linear
combination of α1 , … , α n . From theorem 3, we know that this expression for β will
be β = Σ_{i=1}^n (β, α i) α i .
(iv) ⇒ (v).
It is given that if β is in V, then β = Σ_{i=1}^n (β, α i) α i . If γ is another vector in V, then
γ = Σ_{i=1}^n (γ , α i) α i .
We have (β, γ) = ( Σ_{i=1}^n (β, α i) α i , Σ_{j=1}^n (γ , α j) α j )
= Σ_{i=1}^n Σ_{j=1}^n (β, α i) \overline{(γ , α j)} (α i , α j) = Σ_{i=1}^n (β, α i) \overline{(γ , α i)}
[On summing with respect to j]
= Σ_{i=1}^n (β, α i) (α i , γ).
(v) ⇒ (vi).
It is given that if β and γ are in V, then (β, γ) = Σ_{i=1}^n (β, α i) (α i , γ).
If β is in V then taking γ = β in the given result, we get
(β, β) = Σ_{i=1}^n (β, α i) (α i , β) = Σ_{i=1}^n (β, α i) \overline{(β, α i)}
⇒ || β ||² = Σ_{i=1}^n |(β, α i)|².
(vi) ⇒ (i).
It is given that if β is in V, then || β ||² = Σ_{i=1}^n |(β, α i)|². To prove that S is a complete
orthonormal set.
Let S be not a complete orthonormal set i.e., let S be contained in a larger
orthonormal set S1 .
Then there exists a vector α 0 in S1 such that|| α 0|| = 1and α 0 is orthogonal to each
of the vectors α1 , … , α n . Since α 0 is in V, therefore from the given condition,
we have ||α 0||² = Σ_{i=1}^n |(α 0 , α i)|² = 0.
This contradicts the fact that || α 0 || = 1. Hence S must be complete.
Corollary: Every complete orthonormal set in a finite-dimensional inner product space V
forms a basis for V.
Proof: Let S be a complete orthonormal set in a finite-dimensional inner product
space V. Then S is linearly independent. Also by the above theorem L (S) = V .
Hence S must be a basis for V.
In the next theorem we shall prove that the orthogonal dimension of a finite
dimensional inner product space is equal to its linear dimension.
Theorem 7: If V is an n-dimensional inner product space, then there exist complete
orthonormal sets in V, and every complete orthonormal set in V contains exactly n elements.
The orthogonal dimension of V is the same as its linear dimension.
Proof: Let 0 ≠ α ∈ V. Then { α/||α|| } is an orthonormal set in V. If it is not complete,
then we can enlarge it by adding one more vector to it so that the resulting set is also
an orthonormal set. If this resulting orthonormal set is still not complete, then we
enlarge it again. Thus we proceed by induction. Ultimately we must reach a
complete orthonormal set because an orthonormal set is linearly independent and
so it can contain at most n elements. Thus there exist complete orthonormal sets in
V.
Now suppose S = {α1 , … , α m } is a complete orthonormal set in V. Then S is
linearly independent. Also the linear span of S is V. Therefore S is a basis for V.
Hence the number of vectors in S must be n. Thus we must have m = n.
Thus we have proved that there exist complete orthonormal sets in V and each of
them will have n elements. Thus n is the largest number of vectors that an
orthonormal set in V will contain. Therefore the orthogonal dimension of V is
equal to n which is also the linear dimension of V.
Now we shall give an alternative proof of theorem 7. This proof will be a
constructive proof i.e., it will also give us a process to construct an orthonormal
basis for a finite dimensional inner product space.
Orthonormal basis: Definition: A basis of an inner product space that consists of
mutually orthogonal unit vectors is called an orthonormal basis.
Gram-Schmidt orthogonalization process:
Theorem 8: Every finite-dimensional inner product space has an orthonormal basis.
(Lucknow 2007)
Proof: Let V be an n-dimensional inner product space and let B = {β1 , β 2 , … , β n }
be a basis for V. From this set we shall construct an orthonormal set
B1 = {α1 , … , α n } of n distinct vectors by means of a construction known as
Gram-Schmidt orthogonalization process. The main idea behind this
construction is that each α j ,1≤ j ≤ n will be in the linear span of β1 , … , β j .
We have β1 ≠ 0 because the set B is linearly independent. Let α1 = β1/||β1||.
We have (α1 , α1) = ( β1/||β1|| , β1/||β1|| ) = (1/||β1||²) (β1 , β1) = (1/||β1||²) ⋅ ||β1||² = 1.
Thus we have constructed an orthonormal set {α1 }containing one vector. Also α1 is
in the linear span of β1 .
Now let γ 2 = β 2 − (β 2 , α1 ) α1 . By theorem 4, γ 2 is orthogonal to α1 . Also γ 2 ≠ 0
because if γ 2 = 0,then β 2 is a scalar multiple of α1 and therefore of β1 . But this is not
possible because the vectors β1 and β 2 are linearly independent. Hence γ 2 ≠ 0. Let
us now put α 2 = γ 2/||γ 2||. Then || α 2 || = 1. Also α 2 is orthogonal to α1 because α 2 is
simply a scalar multiple of γ 2 which is orthogonal to α1 . Further α 2 ≠ α1 . For
otherwise β 2 will become a scalar multiple of β1 . Thus {α1 , α 2 } is an orthonormal
set containing two distinct vectors such that α1 is in the linear span of β1 and α 2 is
in the linear span of β1 , β 2 .
The way ahead is now clear. Suppose that we have constructed an orthonormal set
{α1 , … , α k } of k (where k < n) distinct vectors such that each α j ( j = 1, ..., k) is a
linear combination of β1 , … , β j . Consider the vector
γ k + 1 = β k + 1 − (β k + 1 , α1 ) α1 − (β k + 1 , α 2 ) α 2 − … − (β k + 1 , α k ) α k ...(1)
By theorem 4, γ k+1 is orthogonal to each of the vectors α1 , … , α k . Suppose
γ k + 1 = 0. Then β k +1 is a linear combination of α1 , ..., α k . But according to our
assumption each α j ( j = 1, ..., k) is a linear combination of β1 , β 2 , … , β j .
Therefore β k +1 is a linear combination of β1 , … , β k . This is not possible because
β1 , … , β k , β k + 1 are linearly independent.
Therefore we must have γ k + 1 ≠ 0.
Let us now put α k+1 = γ k+1/||γ k+1||. …(2)
We have ||α k+1 || = 1. Also α k +1 is orthogonal to each of the vectors α1 , … , α k ,
because α k +1 is simply a scalar multiple of γ k +1 which is orthogonal to each of the
vectors α1 , … , α k . Further obviously α k + 1 ≠ α j , j = 1, .., k. For otherwise from (1)
and (2),we see that β k +1 will become a linear combination of β1 , … , β k . Also from
(1) and (2), we see that α k +1 is in the linear span of β1 , … , β k + 1 .
Thus we have been able to construct an orthonormal set
{α1 , … , α k , α k + 1 }
containing k + 1distinct vectors such that α j ( j = 1, 2, ..., k + 1) is in the linear span
of β1 , … , β j . Our aim is now complete by induction. Thus continuing in this way
we shall ultimately obtain an orthonormal set B1 = {α1 , … , α n } containing n
distinct vectors. The set B1 is linearly independent because it is an orthonormal
set. Therefore B1 is a basis for V because the number of vectors in B1 is equal to the
dimension of V. Also the set B1 is a complete orthonormal set because the
maximum number of vectors in an orthonormal set in V can be n. Thus there exist
complete orthonormal sets in V. Also the orthogonal dimension of V is equal to n
i.e., equal to the linear dimension of V.
Note: In the above construction the vector α 2 will be γ 2/||γ 2|| where
γ 2 = β 2 − (β 2 , α1) α1 . Similarly the vector α 3 will be γ 3/||γ 3|| where
γ 3 = β 3 − (β 3 , α1) α1 − (β 3 , α 2) α 2 . Similarly the other vectors can be found.
How to apply Gram-Schmidt orthogonalization process to numerical
problems ?
Suppose B = {β1 , β 2 , … , β n } is a given basis of a finite dimensional inner product
space V. Let {α1 , α 2 , … , α n } be an orthonormal basis for V which we are required
to construct from the basis B. The vectors α1 , α 2 , … , α n will be obtained in the
following way.
Take α1 = β1/||β1|| ,
α2 = γ 2/||γ 2|| where γ 2 = β 2 − (β 2 , α1) α1 ,
α3 = γ 3/||γ 3|| where γ 3 = β 3 − (β 3 , α1) α1 − (β 3 , α 2) α 2 ,
... ... ... ... ... ... ... ... ...
αn = γ n/||γ n|| where γ n = β n − (β n , α1) α1 − (β n , α 2) α 2 − … − (β n , α n−1) α n−1 .
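The recipe above translates directly into a short program. The following is a minimal sketch only (assuming Python with NumPy and the standard inner product on Vn (R)); it is meant to illustrate the steps, not to replace the hand computation.

```python
import numpy as np

def gram_schmidt(basis):
    """Orthonormalize a list of linearly independent vectors of Vn(R),
    following the Gram-Schmidt recipe described in the text."""
    ortho = []                                   # the alpha_1, ..., alpha_k found so far
    for beta in basis:
        beta = np.asarray(beta, dtype=float)
        # gamma_k = beta_k - (beta_k, alpha_1) alpha_1 - ... - (beta_k, alpha_{k-1}) alpha_{k-1}
        gamma = beta.copy()
        for alpha in ortho:
            gamma = gamma - (beta @ alpha) * alpha
        ortho.append(gamma / np.linalg.norm(gamma))    # alpha_k = gamma_k / ||gamma_k||
    return ortho
```

For instance, gram_schmidt([(1, 0, 1), (1, 0, -1), (0, 3, 4)]) reproduces the orthonormal basis obtained by hand in the example that follows.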
Now we shall give an example to illustrate the Gram-Schmidt process.
Example 7: Apply the Gram-Schmidt process to the vectors β1 = (1, 0, 1),β 2 = (1, 0, − 1),
β 3 = (0, 3, 4) , to obtain an orthonormal basis for V3 (R) with the standard inner product.
(Lucknow 2009)
Solution: We have || β1||² = (β1 , β1) = (1)² + (0)² + (1)² = 2.
Let α1 = β1/||β1|| = (1/√2) (1, 0, 1) = (1/√2 , 0 , 1/√2).
Now let γ 2 = β 2 − (β 2 , α1 ) α1 .
We have (β 2 , α1) = 1 ⋅ (1/√2) + 0 ⋅ 0 + (− 1) ⋅ (1/√2) = 0.
∴ γ 2 = (1, 0, − 1) − 0 (1/√2 , 0 , 1/√2) = (1, 0, − 1).
Now || γ 2||2 = (γ 2 , γ 2) = (1)2 + (0)2 + (−1)2 = 2.
Let α2 = γ 2/||γ 2|| = (1/√2) (1, 0, − 1) = (1/√2 , 0 , − 1/√2).
Now let γ 3 = β 3 − (β 3 , α1) α1 − (β 3 , α 2) α 2 .
We have (β 3 , α1) = ((0, 3, 4), (1/√2 , 0 , 1/√2)) = 0 ⋅ (1/√2) + 3 ⋅ 0 + 4 ⋅ (1/√2) = 2√2.
(β 3 , α 2) = ((0, 3, 4), (1/√2 , 0 , − 1/√2)) = 0 ⋅ (1/√2) + 3 ⋅ 0 − 4 ⋅ (1/√2) = − 2√2.
∴ γ 3 = (0, 3, 4) − 2√2 (1/√2 , 0 , 1/√2) + 2√2 (1/√2 , 0 , − 1/√2)
= (0, 3, 4) − (2, 0, 2) + (2, 0, − 2) = (0, 3, 0).
Now || γ 3 ||2 = (γ 3 , γ 3) = (0)2 + (3)2 + (0)2 = 9.
Put α3 = γ 3/||γ 3|| = (1/3) (0, 3, 0) = (0, 1, 0).
Now {α1 , α 2 , α 3 } i.e., { (1/√2 , 0 , 1/√2) , (1/√2 , 0 , − 1/√2) , (0, 1, 0) }
is the required orthonormal basis for V3 (R).
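The basis just obtained can be verified numerically: stacking α1 , α2 , α3 as the rows of a matrix A, orthonormality amounts to A Aᵀ = I. A small check, assuming Python with NumPy:

```python
import numpy as np

A = np.array([[1/np.sqrt(2), 0,  1/np.sqrt(2)],
              [1/np.sqrt(2), 0, -1/np.sqrt(2)],
              [0,            1,  0          ]])   # rows are alpha_1, alpha_2, alpha_3

print(np.allclose(A @ A.T, np.eye(3)))   # True: the rows form an orthonormal set
```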
Theorem 9: Bessel’s inequality: (Lucknow 2007)
If B = {α1 , … , α m }is any finite orthonormal set in an inner product space V and if β is any
vector in V, then Σ_{i=1}^m |(β, α i)|² ≤ || β ||².
Furthermore, equality holds if and only if β is in the subspace spanned by α1 , … , α m .
Proof: Consider the vector γ = β − Σ_{i=1}^m (β, α i) α i .
We have || γ ||² = (γ , γ) = ( β − Σ_{i=1}^m (β, α i) α i , β − Σ_{j=1}^m (β, α j) α j )
= (β, β) − Σ_{i=1}^m (β, α i) (α i , β) − Σ_{j=1}^m \overline{(β, α j)} (β, α j)
+ Σ_{i=1}^m Σ_{j=1}^m (β, α i) \overline{(β, α j)} (α i , α j)
= (β, β) − Σ_{i=1}^m (β, α i) \overline{(β, α i)} − Σ_{j=1}^m \overline{(β, α j)} (β, α j) + Σ_{i=1}^m (β, α i) \overline{(β, α i)}
[On summing with respect to j and remembering that
(α i , α j) = 1 when j = i and (α i , α j) = 0 when j ≠ i]
= ||β||² − Σ_{i=1}^m |(β, α i)|² − Σ_{i=1}^m |(β, α i)|² + Σ_{i=1}^m |(β, α i)|².
∴ || γ ||² = || β ||² − Σ_{i=1}^m |(β, α i)|² ...(1)
Now || γ ||² ≥ 0.
∴ || β ||² − Σ_{i=1}^m |(β, α i)|² ≥ 0 or Σ_{i=1}^m |(β, α i)|² ≤ || β ||².
If the equality holds i.e., if Σ_{i=1}^m |(β, α i)|² = || β ||², then from (1) we have || γ ||² = 0.
This implies that γ = 0 i.e., β = Σ_{i=1}^m (β, α i) α i .
Thus if the equality holds, then β is a linear combination of α1 , … , α m .
If β is a linear combination of α1 , … , α m , then from theorem 3, we know that
β = Σ_{i=1}^m (β, α i) α i . This implies that γ = 0 which in itself implies that || γ ||² = 0.
Then from (1), we get Σ_{i=1}^m |(β, α i)|² = || β ||²
and thus the equality holds.
Note: Another statement of Bessel’s inequality:
Let {α1 , … , α m } be an orthogonal set of non-zero vectors in an inner product space V. If β is
any vector in V, then
Σ_{i=1}^m |(β, α i)|²/|| α i ||² ≤ || β ||². (Lucknow 2010)
Proof: Let B = {δ1 , … , δ m },
where δ i = α i/|| α i || , 1 ≤ i ≤ m.
Then || δ i || = 1. Thus the set B is an orthonormal set. Now proceeding as in the
previous theorem, we get
Σ_{i=1}^m |(β, δ i)|² ≤ || β ||². ...(1)
Also (β, δ i) = ( β, α i/|| α i || ) = (1/|| α i ||) (β, α i).
∴ |(β, δ i)|² = |(β, α i)|²/|| α i ||². ...(2)
From (1) and (2), we get the required result.
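Bessel's inequality is also easy to observe numerically. The sketch below (assuming Python with NumPy) uses two orthonormal vectors of R³ that do not span the whole space, so the inequality is strict for the chosen β.

```python
import numpy as np

# an orthonormal set {alpha_1, alpha_2} in R^3 (it does not span R^3)
alphas = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0])]
beta = np.array([3.0, 4.0, 12.0])

lhs = sum((beta @ a) ** 2 for a in alphas)     # sum |(beta, alpha_i)|^2 = 9 + 16 = 25
rhs = beta @ beta                              # ||beta||^2 = 169
print(lhs <= rhs)                              # True (strict, since beta is not in the span)
```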
Corollary: If V is finite dimensional and if {α1 , … , α m } is an orthonormal set in V such
that Σ_{i=1}^m |(β, α i)|² = || β ||² for every β ∈ V, prove that {α1 , … , α m } must be a basis of V.
Proof: Let β be any vector in V. Consider the vector
γ = β − Σ_{i=1}^m (β, α i) α i . ...(1)
As in the proof of Bessel’s inequality, we have
|| γ ||² = (γ , γ) = || β ||² − Σ_{i=1}^m |(β, α i)|² [prove it here]
= 0, by the given condition.
∴ γ = 0 i.e., β = Σ_{i=1}^m (β, α i) α i . [from (1)]
Thus every vector β in V can be expressed as a linear combination of the vectors in
the set S = {α1 , … , α m }i.e., L (S) = V . Also S is linearly independent because it is an
orthonormal set. Hence S must be a basis for V.
Orthogonal complement. Definition: Let V be an inner product space, and let S be
any set of vectors in V. The orthogonal complement of S, written as S ⊥ and read as S
perpendicular, is defined by
S ⊥ = {α ∈ V : (α, β) = 0 ∀ β ∈ S}.
Thus S ⊥ is the set of all those vectors in V which are orthogonal to every vector in
S.
Theorem 10: Let S be any set of vectors in an inner product space V. Then S ⊥ is a subspace
of V.
Proof: We have, by definition,
S ⊥ = {α ∈ V : (α, β) = 0 ∀ β ∈ S}.
Since (0, β) = 0 ∀ β ∈ S, therefore at least 0 ∈ S ⊥ and thus S ⊥ is not empty.
Let a, b ∈ F and γ , δ ∈ S ⊥ . Then (γ , β) = 0 ∀ β ∈ S and (δ, β) = 0 ∀ β ∈ S.
For every β ∈ S, we have
(a γ + bδ, β) = a (γ , β) + b (δ, β) = a ⋅ 0 + b ⋅ 0 = 0.
Therefore a γ + bδ ∈ S ⊥ . Hence S ⊥ is a subspace of V.
Note: The orthogonal complement of V is the zero subspace and the orthogonal
complement of the zero subspace is V itself.
Orthogonal complement of an orthogonal complement:
Definition: Let S be any subset of an inner product space V. Then S ⊥ is a subset of V. We
define (S ⊥) ⊥ , written as S ⊥⊥ , by
S ⊥⊥ = {α ∈ V : (α, β) = 0 ∀ β ∈ S ⊥ }.
Obviously S ⊥⊥ is a subspace of V. Also it can be easily seen that S ⊆ S ⊥⊥ .
Let α ∈ S. Then (α, β) = 0 ∀ β ∈ S ⊥ . Therefore by definition of (S ⊥) ⊥ ,
α ∈ (S ⊥) ⊥ . Thus α ∈ S ⇒ α ∈ S ⊥⊥ . Therefore S ⊆ S ⊥⊥ .
Theorem 11: Projection Theorem: The following theorem is known as the
projection theorem and is very important.
Let W be any subspace of a finite dimensional inner product space V. Then
(i) V = W ⊕ W⊥, and (ii) W ⊥ ⊥ = W.
Proof: (i) First we shall prove that V = W + W ⊥ .
Since W is a subspace of a finite dimensional vector space V therefore W itself is
also finite-dimensional. Let dim V = n and dim W = m.
Now every finite-dimensional vector space possesses an orthonormal basis. Let
B1 = {α1 , … , α m } be an orthonormal basis for W.
Let β be any vector in V. Consider the vector
γ = β − Σ_{i=1}^m (β, α i) α i . ...(1)
By theorem 4, the vector γ is orthogonal to each of the vectors α1 , ..., α m and
consequently γ is orthogonal to the subspace W spanned by these vectors. Thus γ is
orthogonal to every vector in W. Therefore γ ∈ W ⊥ . Also the vector Σ_{i=1}^m (β, α i) α i is

in W because it is a linear combination of vectors belonging to a basis for W.
Now from (1), we have
β = ( Σ_{i=1}^m (β, α i) α i ) + γ where Σ_{i=1}^m (β, α i) α i is in W and γ is in W ⊥ .
Therefore V = W + W ⊥ .
Now we shall prove that the subspaces W and W ⊥ are disjoint. Let α ∈ W ∩ W ⊥ .
Then α ∈ W and α ∈ W ⊥ . Since α ∈ W ⊥ , therefore α is orthogonal to every vector


in W. In particular α is orthogonal to α because α ∈ W. Now (α, α) = 0 ⇒ α = 0.
Thus 0 is the only vector which belongs to both W and W ⊥ . Hence W and W ⊥ are
disjoint.
∴ V = W ⊕ W⊥.
(ii) We have V = W ⊕ W ⊥ . ...(2)
Now W ⊥ is also a subspace of V. Therefore taking W ⊥ in place of W and using
the result (2), we get V = W ⊥ ⊕ W ⊥ ⊥ . ...(3)
Since V is the direct sum of W and W ⊥ and V is finite-dimensional, therefore
dim V = dim W + dim W ⊥ . ...(4)
Similarly from (3), we get dim V = dim W ⊥ + dim W ⊥ ⊥ . ...(5)
From (4) and (5), we get dim W = dim W ⊥ ⊥ . ...(6)
Now we shall prove that W ⊆ W ⊥⊥ .
Let α ∈ W. Then (α, β) = 0 ∀ β ∈ W ⊥ . Therefore by definition of (W ⊥) ⊥ ,
α ∈ (W ⊥) ⊥ . Thus α ∈ W ⇒ α ∈ W ⊥⊥ . Therefore W ⊆ W ⊥⊥ .
Since W ⊆ W ⊥⊥ , therefore W is a subspace of W ⊥⊥ . Also dim W = dim W ⊥⊥ .
Hence W = W ⊥⊥ .
Corollary: Let W be any subspace of a finite-dimensional inner product space V. Then
dim W ⊥ = dim V − dim W.
Proof: Since V is finite dimensional and V = W ⊕ W ⊥ ,
therefore, dim V = dim W + dim W ⊥
⇒ dim W ⊥ = dim V − dim W.
Definition: If W is a subspace of a finite dimensional inner product space V, then
V = W ⊕ W ⊥ . Therefore every vector α in V can be uniquely expressed as α = α1 + α 2
where α1 ∈ W and α 2 ∈ W ⊥ .
The vectors α1 and α 2 are then called the orthogonal projections of α on the subspaces W
and W ⊥ .
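When an orthonormal basis of W is known, these projections can be computed explicitly: the projection of α on W is Σ (α, α i) α i and the projection on W ⊥ is the remainder. The sketch below is only an illustration (assuming Python with NumPy); it uses the same data as question 2 of Comprehensive Exercise 2 below, where W is spanned by (2, −1, 6).

```python
import numpy as np

alpha_w = np.array([2.0, -1.0, 6.0])
alpha_w = alpha_w / np.linalg.norm(alpha_w)    # orthonormal basis of W (a single vector)

beta = np.array([4.0, 1.0, 2.0])

proj_W = (beta @ alpha_w) * alpha_w            # orthogonal projection of beta on W
proj_W_perp = beta - proj_W                    # orthogonal projection of beta on W-perp

print(proj_W * 41)                             # (38, -19, 114), i.e. (38/41, -19/41, 114/41)
print(proj_W_perp * 41)                        # (126, 60, -32)
print(np.isclose(proj_W @ proj_W_perp, 0))     # the two pieces are orthogonal
```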
Example 8: State whether the following statement is true or false. Give reasons to support
your answer.
If α is an element of an n-dimensional unitary space V and α is perpendicular to n linearly
independent vectors from V, then α = 0.
Solution: True. Suppose α is perpendicular to n linearly independent vectors


α1 , … , α n .
Since V is of dimension n, therefore the n linearly independent vectors α1 , ..., α n
constitute a basis for V. So we can write α = a1α1 + … + a nα n .
Now (α, α) = (a1α1 + … + a nα n , α) = a1 (α1 , α) + … + a n (α n , α)
= a1 × 0 + .... + a n × 0[ ∵ α is ⊥ to each of the vectors
α1 , ..., α n ]
= 0.
∴ α = 0.
Example 9: If α and β are orthogonal unit vectors (that is, {α, β} is an orthonormal set),
what is the distance between α and β ?
Solution: If d (α, β) denotes the distance between α and β,
Then d(α,β) = ||α − β ||.
We have || α − β|| 2 = (α − β, α − β) = (α, α − β) − (β, α − β)
= (α, α) − (α, β) − (β, α) + (β, β)
= ||α||2 − 0 − 0 + || β ||2 [∵ α is orthogonal to β]
=1+1 [ α and β are unit vectors]
= 2.
∴ d (α, β) = || α − β|| = √ 2.
Example 10: Prove that two vectors α and β in a real inner product space are orthogonal if
and only if || α + β || 2 = || α || 2 + || β || 2 .
Solution: Let α, β be two vectors in a real inner product space V. We have
|| α + β ||2 = (α + β, α + β) = (α, α) + (α, β) + (β,α) + (β, β)
=||α||2 + 2(α, β) + ||β ||2 [∵ (β, α) = (α, β)]
Thus in a real inner product space V, we have
|| α + β ||2 = || α ||2 + 2(α, β) + || β ||2 ...(1)
If α and β are orthogonal, (α, β) = 0.
Therefore from (1), we get || α + β ||2 = || α ||2 + || β ||2
Conversely, suppose that || α + β ||2 = ||α ||2 + || β ||2 .
Then from (1), we get 2 (α, β) = 0 i.e., (α, β) = 0.
Therefore α and β are orthogonal.
Note 1: The above result is known as the Pythagorean theorem. Its geometrical
interpretation is that if ABC is a triangle in three dimensional Euclidean space,
then the angle B is a right angle if and only if AB2 + BC 2 = AC 2 .
Note 2: If V is a complex inner product space, then the above result becomes false.
In this case || α + β ||² = || α ||² + (α, β) + \overline{(α, β)} + || β ||²
= || α ||2 + 2Re (α, β) + || β ||2 .


If α and β are orthogonal, then (α, β) = 0.
So Re (α, β) = 0 and we get || α + β ||2 = || α ||2 + || β ||2
But if || α + β ||2 = || α ||2 + || β ||2 ,then we get 2 Re (α, β) = 0. This implies that
Re (α, β) = 0.
This does not necessarily imply that (α, β) = 0 i.e., α and β are orthogonal. Thus in a
complex inner product space if α and β are orthogonal, then we have
|| α + β ||2 = || α ||2 + || β ||2 . But if we have || α + β ||2 = || α ||2 + || β ||2 , then
it is not necessary that α and β are orthogonal.
Example 11: If α and β are vectors in a real inner product space, and if α + β is orthogonal to
α − β, then prove that || α|| = || β||. Interpret the result geometrically.
Solution: We have α + β is orthogonal to α − β
⇒ (α − β, α + β) = 0 ⇒ (α, α + β) − (β, α + β) = 0
⇒ (α, α) + (α, β) − (β, α) − (β, β) = 0
⇒ || α||2 + (α, β) − (α, β) − || β||2 = 0 ⇒ || α ||2 = || β ||2 ⇒ || α || = || β ||.
Geometrical Interpretation: Let V be the three dimensional Euclidean space i.e.,
let V be the inner product space V3 (R) with standard inner product defined on it.
Let vectors α and β represent the sides AB and BC of a parallelogram ABCD. Then
the vectors α + β and α − β are along the diagonals AC and DB of a parallelogram. If
these diagonals are at right angles, then the length of α is equal to the length of β. So
AB = BC and the parallelogram is a rhombus.
Example 12: If S is a subset of an inner product space V, then prove that S ⊥ = S ⊥ ⊥ ⊥ .
Solution: We know that S ⊆ S ⊥ ⊥ . Taking S ⊥ in place of S, we see that
S ⊥ ⊆ (S ⊥ ) ⊥ ⊥
i.e., S ⊥ ⊆ S ⊥⊥⊥ . ...(1)
Also S ⊆ S⊥⊥ ⇒ (S ⊥ ⊥ ) ⊥ ⊆ S ⊥ [∵ S1 ⊆ S2 ⇒ S2 ⊥ ⊆ S1 ⊥ ]
⇒ S⊥ ⊥ ⊥ ⊆ S⊥. ...(2)
From (1) and (2), we get S ⊥ = S ⊥⊥⊥ .
Example 13: If A = {α1 , … , α m } is an orthonormal basis for subspace W of a finite
dimensional inner product space V and B = {β1 , … , β t } is an orthonormal basis for W ⊥ ,
then prove that S = {α1 , … , α m , β1 , … , β t } is an orthonormal basis for V.
Solution: First we shall prove that the set S is an orthonormal set. Obviously each
vector in S is a unit vector. So it remains to prove that two distinct vectors in S are
orthogonal.
Now (α i , α j) = 0 ∀ i = 1, … , m, j = 1, … , m, i ≠ j. [∵ A is orthonormal]
Similarly (β i , β j) = 0 ∀ i = 1, ..., t, j = 1, ..., t, i ≠ j.
Lastly we are to verify that (α i , β j) = 0 ∀ i = 1, ..., m and j = 1, ..., t. But this is true
since α i ∈ W and β j ∈ W ⊥ .
Hence the set S is an orthogonal set. Therefore it is a linearly independent set. So S
will be a basis for V if L (S) = V .
Let α ∈ V. Since V = W ⊕ W ⊥ , therefore we can write α = γ + δ where γ ∈ W and
δ ∈ W ⊥ . Now γ ∈ W can be expressed as a linear combination of the vectors
belonging to the basis A of W. Similarly δ ∈ W ⊥ can be expressed as a linear
combination of the vectors belonging to the basis B of W ⊥ . Therefore α can be
expressed as a linear combination of the vectors belonging to A ∪ B i.e., belonging
to S. Therefore L (S ) = V .
Hence S is a basis for V.
Example 14: If W1 and W2 are subspaces of a finite-dimensional inner product space, then
(i) (W1 + W2 ) ⊥ = W1 ⊥ ∩ W2 ⊥ (ii) (W1 ∩ W2 ) ⊥ = W1 ⊥ + W2 ⊥
Solution: (i) We have W1 ⊆ W1 + W2 . ...(1)
∴ (W1 + W2 ) ⊥ ⊆ W1 ⊥ .
Also W2 ⊆ W1 + W2 .
∴ (W1 + W2 ) ⊥ ⊆ W2 ⊥ . ...(2)
From (1) and (2), we conclude that (W1 + W2) ⊥ ⊆ W1 ⊥ ∩ W2 ⊥ . ...(3)
Now we shall show that W1 ⊥ ∩ W2 ⊥ ⊆ (W1 + W2) ⊥ .
Let α ∈ W1 ⊥ ∩ W2 ⊥ . Then α ∈ W1 ⊥ and α ∈ W2 ⊥ . Therefore α is orthogonal to
every vector in W1 and also to every vector in W2 .
Let β be any vector in W1 + W2 .
Then we can write β = γ 1 + γ 2 where γ 1 ∈ W1 , γ 2 ∈ W2 .
We have (α, β) = (α, γ 1 + γ 2 ) = (α, γ 1 ) + (α, γ 2 ) = 0 + 0 = 0.
Therefore α is orthogonal to every vector β in W1 + W2 .
So α ∈ (W1 + W2) ⊥ .
Thus α ∈ W1 ⊥ ∩ W2 ⊥ ⇒ α ∈ (W1 + W2 ) ⊥ .
∴ W1 ⊥ ∩ W2 ⊥ ⊆ (W1 + W2 ) ⊥ ...(4)
From (3) and (4), we have (W1 + W2) ⊥ = W1 ⊥ ∩ W2 ⊥ .
(ii) W1 ⊥ and W2 ⊥ are also subspaces of V. Taking W1 ⊥ in place of W1 and W2 ⊥ in
place of W2 in the result (i), we get (W1 ⊥ + W2 ⊥ ) ⊥ = W1 ⊥ ⊥ ∩ W2 ⊥ ⊥
⇒ (W1 ⊥ + W2 ⊥ ) ⊥ = W1 ∩ W2
[∵ V is finite dimensional and so W1 ⊥ ⊥ = W1 etc.]
⇒ (W1 ⊥ + W2 ⊥ ) ⊥ ⊥ = (W1 ∩ W2 ) ⊥
⇒ W1 ⊥ + W2 ⊥ = (W1 ∩ W2 ) ⊥
⇒ (W1 ∩ W2 ) ⊥ = W1 ⊥ + W2 ⊥ .
Example 15: Find two mutually orthogonal vectors each of which is orthogonal to the vector
α = (4, 2, 3) of V3 (R) with respect to standard inner product.
Solution: Let β = ( x1 , x2 , x3 ) be any vector orthogonal to the vector (4, 2, 3). Then
4 x1 + 2 x2 + 3 x3 = 0.
Obviously β = (3, − 3, − 2) is a solution of this equation. We now require a third
vector γ = ( y1 , y2 , y3 ) orthogonal to both α and β. This means γ must be a
solution vector of the system of equations
4 y1 + 2 y2 + 3 y3 = 0, 3 y1 − 3 y2 − 2 y3 = 0.
Obviously γ = (5, 17, − 18) is a solution of these equations. Thus, β and γ are
orthogonal to each other and to α. The solution is, of course, by no means unique.
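In V3 (R) such vectors can also be produced mechanically: any vector not parallel to α gives, via two cross products, a pair of mutually orthogonal vectors orthogonal to α. The sketch below (assuming Python with NumPy) shows this particular construction; it is one convenient choice among many, as the example notes.

```python
import numpy as np

alpha = np.array([4.0, 2.0, 3.0])

# pick any vector not parallel to alpha, e.g. a standard basis vector
e = np.array([1.0, 0.0, 0.0])
beta = np.cross(alpha, e)        # orthogonal to alpha
gamma = np.cross(alpha, beta)    # orthogonal to both alpha and beta

print(alpha @ beta, alpha @ gamma, beta @ gamma)   # 0.0 0.0 0.0
```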
Comprehensive Exercise 2
1. Let V3 (R) be the inner product space relative to the standard inner product.
Then find
(a) two linearly independent vectors each of which is orthogonal to the vector
(1, 1, 2).
(b) two mutually orthogonal vectors, each of which is orthogonal to ( 5, 2, − 1).
(c) two mutually orthogonal unit vectors, each of which is orthogonal to
( 2, − 1, 3).
(d) the projections of the vector ( 3, 4, 1) onto the space spanned by (1, 1, 1) and on
its orthogonal complement.
2. Let V3 (R) be the inner product space with respect to the standard inner
product and let W be the subspace of V3 (R) spanned by the vector
α = (2, − 1, 6). Find the projections of the vector β = (4, 1, 2) on W and W ⊥ .
3. Verify that the vectors (1/3, − 2/3, − 2/3), (2/3, − 1/3, 2/3) and (2/3, 2/3, − 1/3) form an
orthonormal basis for V3 (R) relative to the standard inner product.
4. Given the basis (1, 0, 0), (1, 1, 0), (1, 1, 1) for V3 (R), construct from it by the
Gram-Schmidt process an orthonormal basis relative to the standard inner
product.
5. Given the basis (2, 0, 1), (3, − 1, 5), and (0, 4, 2) for V3 (R), construct from it by
the Gram-Schmidt process an orthonormal basis relative to the standard
inner product.
6. Find an orthonormal basis of the vector space V of all real polynomials of
degree not greater than two, in which the inner product is defined as
(φ (x), ψ (x)) = ∫₋₁¹ φ (x) ψ (x) dx , where φ (x), ψ (x) ∈ V .
7. If α and β are vectors in a real inner product space, and if|| α || = || β || , then
α − β and α + β are orthogonal. Interpret the result geometrically.
8. If V is an inner product space, then prove that
(i) {0} ⊥ = V, (ii) V ⊥ = {0}.
9. If V is an inner product space and S, S1 , S2 are subsets of V, then
(i) S1 ⊆ S2 ⇒ S2 ⊥ ⊆ S1 ⊥ (ii) S ⊥ = [ L(S)] ⊥
(iii) L (S ) ⊆ S ⊥ ⊥
(iv) L (S) = S ⊥ ⊥ if V is finite dimensional.
10. Let W be a subspace of an inner product space V. If {α1 , … , α n } is a basis for
W, then β ∈ W ⊥ if and only if (β, α i) = 0 ∀ i = 1, 2, ..., n.
11. Let V be a finite-dimensional inner product space, and let {α1 , … , α n } be an
orthonormal basis for V. Show that for any vectors α, β in V
(α, β) = Σ_{k=1}^n (α, α k) \overline{(β, α k)}.
12. If W1 , … , Wk are pairwise orthogonal subspaces in an inner product space V,


and if α = α1 + … + α k with α i in Wi for i = 1, ..., k, then
||α||2 = ||α1 ||2 + .... + ||α k||2
13. Find a vector of unit length which is orthogonal to the vector α = (2, − 1, 6) of
V3 (R) with respect to standard inner product.
14. Two vectors α and β in a complex inner product space are orthogonal if and
only if || aα + bβ ||2 = ||aα ||2 + || bβ ||2 for all pairs of scalars a and b.
15. Let W be a finite dimensional proper subspace of an inner product space V.
Let α ∈ V and α ∉ W. Show that there is a vector β ∈ W such that α − β ⊥ W.
Answers 2
1. (a), (b), (c). Check yourself
(d) (8/3, 8/3, 8/3) and (1/3, 4/3, − 5/3)
2. (38/41, − 19/41, 114/41) , (126/41, 60/41, − 32/41)
4. {(1, 0, 0), (0, 1, 0), (0, 0, 1) }
5. (1/√5) (2, 0, 1) , (1/√(270)) (− 7, − 5, 14) , (1/(3√6)) (− 1, 7, 2)
6. { 1/√2 , (√3/√2) x , (√5/√8) (3x² − 1) }
13. (2/3, − 2/3, − 1/3)
Objective Type Questions
Multiple Choice Questions
Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
1. If in an inner product space V ( F ), the vector α is 0 then (α , α) =
(a) 0 (b) 1
(c) 0 (d) None of these.
2. In an inner product space V ( F ), || α + β|| ≤
(a) || α || ⋅ || β || (b) || α|| /|| β ||
(c) || α || + || β || (d) || α|| − || β ||.
3. The norm of α = (3, 4) ∈ R² with respect to the usual inner product is :
(a) 5 (b) 3
(c) 1 (d) None of these.
4. Any two vectors α, β of an inner product space are linearly dependent if and
only if :
(a) |(α, β)| = 0 (b) |(α, β)| = || α || ⋅ || β ||
(c) |(α, β)| = 1 (d) None of these.
5. If α and β are orthogonal unit vectors, the distance between α and β is :
(a) 2 (b) √ 2
(c) 0 (d) 1.
6. If S is a subset of an inner product space V, then S ⊥⊥⊥ =
(a) S (b) S ⊥
(c) S ⊥⊥ (d) None of these.
7. If W1 and W2 are subspaces of a finite-dimensional inner product space, then
(W1 + W2 ) ⊥ =
(a) W1 ⊥ ∩ W2 ⊥ (b) W1 ⊥ ∪ W2 ⊥
(c) W1 ⊥ + W2 ⊥ (d) None of these.
Fill in the Blank(s)
Fill in the blanks “……” so that the following statements are complete and correct.
1. In an inner product space V ( F ), (α, α) = 0 ⇒ α = …… .
2. In an inner product space V ( F ), if α ∈ V then || α || = ………
3. In an inner product space V ( F ),| (α, β)| ≤ ...... .
4. Two vectors α and β in an inner product space are orthogonal if (α, β) = …… .
5. A set S consisting of mutually orthogonal unit vectors is called an ...... set.
6. If B = {α1 , … , α m } is any finite orthonormal set in an inner product space V
m
and if β is any vector in V, then Σ |(β, α i )|2 ≤ ……….
i =1

True or False
Write ‘T’ for true and ‘F’ for false statement.
1. If α = (a1 , … , a n ) and β = (b1 , … , b n ) are two vectors in Vn (C), then the
standard inner product of α and β is
α . β = a1 b1 + a2 b2 + … + a n b n .
2. In an inner product space V ( F ), (α , aβ + b γ ) = a (α , β) + b (α , γ ).
3. In an inner product space V ( F ), the distance d (α, β) from α to β is given by
d (α, β) = || α − β ||.
4. If α and β are vectors in an inner product space, then
|| α + β ||2 + || α − β ||2 = ||α ||2 + || β ||2
5. Any orthogonal set of non-zero vectors in an inner product space V is linearly
dependent.
6. Every finite-dimensional inner product space has an orthonormal basis.
7. If V is an inner product space, then V ⊥ = { 0}.

A nswers

Multiple Choice Questions


1. (a) 2. (c) 3. (a) 4. (b)
5. (b) 6. (b) 7. (a)
Fill in the Blank(s)
1. 0 2. √ (α, α) 3. || α || ⋅ || β ||
4. 0 5. orthonormal 6. || β ||2

True or False
1. F 2. T 3. T 4. F
5. F 6. T 7. T


8
Bilinear, Quadratic and
Hermitian Forms

8.1 External Direct Product of Two Vector Spaces


Suppose U and V are two vector spaces over the same field F. Let
W = U × V i. e., W = {(α, β) : α ∈ U, β ∈ V }.
If (α1 , β1 ) and (α 2 , β 2 ) are two elements in W, then we define their equality as
follows :
(α1 , β1 ) = (α 2 , β 2 ) if α1 = α 2 and β1 = β 2 .
Also we define the sum of (α1 , β1 ) and (α 2 , β 2 ) as follows :
(α1 , β1 ) + (α 2 , β 2 ) = (α1 + α 2 , β1 + β 2 ).
If c is any element in F and (α, β) is any element in W, then we define scalar
multiplication in W as follows :
c (α, β) = (cα, cβ).
It can be easily shown that with respect to addition and scalar multiplication as
defined above, W is a vector space over the field F. We call W as the external direct
product of the vector spaces U and V and we shall write W = U ⊕ V .
Now we shall consider some special type of scalar-valued functions on W known as
bilinear forms.

8.2 Bilinear Forms


Definition: Let U and V be two vector spaces over the same field F. A bilinear form on
W = U ⊕ V is a function f from W into F, which assigns to each element (α , β) in W a
scalar f (α , β) in such a way that
f (aα1 + bα 2 , β) = a f (α1 , β) + b f (α 2 , β)
and f (α , aβ1 + bβ 2 ) = a f (α, β1 ) + b f (α, β 2 ).
Here f (α, β) is an element of F. It denotes the image of (α, β) under the function f .
Thus a bilinear form on W is a function from W into F which is linear as a function
of either of its arguments when the other is fixed.
If U = V , then in place of saying that f is a bilinear form on W = U ⊕ V , we shall
simply say that f is a bilinear form on V.
Thus if V is a vector space over the field F, then a bilinear form on V is a function f , which
assigns to each ordered pair of vectors α, β in V a scalar f (α , β) in F,and which satisfies
f (aα1 + bα 2 , β) = a f (α1 , β) + b f (α 2 , β)
and f (α, aβ1 + bβ 2 ) = a f (α, β1 ) + b f (α, β 2 ).

Example 1: Suppose V is a vector space over the field F.Let L1 , L2 be linear functionals on
V. Let f be a function from V × V into F defined as
f (α , β) = L1 (α) L 2 ( β).
Then f is a bilinear form on V.
Solution: If α, β ∈ V , then L1 (α), L 2 ( β) are scalars. We have

f (aα1 + bα 2 , β) = L1 (aα1 + bα 2 ) L2 ( β)
= [a L1 (α1 ) + b L1 (α 2 )] L2 ( β)
= aL1 (α1 ) L 2 ( β) + bL1 (α 2 ) L2 ( β)
= af (α1 , β) + bf (α 2 , β).
Also f (α, aβ1 + bβ 2 ) = L1 (α) L 2 (aβ1 + bβ 2 )
= L1 (α) [a L 2 ( β1 ) + b L 2 ( β 2 )]
= aL1 (α) L 2 ( β1 ) + b L1 (α) L 2 ( β 2 )
= a f (α, β1 ) + b f (α, β 2 ).
Hence f is a bilinear form on V.
Example 2:Suppose V is a vector space over the field F.Let T be a linear operator on V and f
a bilinear form on V. Suppose g is a function from V × V into F defined as
g (α , β) = f (Tα , T β).
Then g is a bilinear form on V.

Solution: We have g (aα1 + bα 2 , β) = f (T (aα1 + bα 2 ), T β)


= f (aTα1 + bTα 2 , T β)
= a f (Tα1 , T β) + b f (Tα 2 , T β)
= a g (α1 , β) + b g (α 2 , β).
Also g (α, aβ1 + bβ 2 ) = f (Tα, T (aβ1 + bβ 2 ))
= f (Tα, aT β1 + bT β 2 )
= a f (Tα, T β1 ) + b f (Tα, T β 2 )
= a g (α, β1 ) + b g (α, β 2 ).
Hence g is a bilinear form on V.

8.3 Matrix of a Bilinear Form


Theorem 1: If U is an n-dimensional vector space with basis {α1 , … , α n }, if V is an
m-dimensional vector space with basis { β1 , … , β m }, and if { a ij } is any set of nm scalars
(i = 1, … , n ; j = 1, … , m), then there is one and only one bilinear form f on U ⊕ V such that
f (α i , β j ) = a ij for all i and j.
n m
Proof: Let α = Σ x i α i ∈ U and β = Σ y j β j ∈ V .
i =1 j =1

Let us define a function f from U × V into F such that


n m
f (α , β) = Σ Σ x i y j a ij . …(1)
i =1 j =1

We shall show that f is a bilinear form on U × V .


Let a, b ∈ F and let α1 , α 2 ∈ U.
n n
Let α1 = Σ a i α i , α 2 = Σ b i α i .
i =1 i =1
n m
Then f (α1 , β) = Σ Σ a i y j a ij
i =1 j =1
n m
and f (α 2 , β) = Σ Σ b i y j a ij .
i =1 j =1
n n n
Also aα1 + bα 2 = a Σ a i α i + b Σ b i α i = Σ (aa i + bb i ) α i .
i =1 i =1 i =1
n m
∴ f (aα1 + bα 2 , β) = Σ Σ (aa i + bb i ) y j a ij
i =1 j =1
n m n m
= Σ Σ aa i y j a ij + Σ Σ bb i y j a ij
i =1 j =1 i =1 j =1
n m n m
=a Σ Σ a i y j a ij + b Σ Σ b i y j a ij
i =1 j =1 i =1 j =1

= af (α1 , β) + bf (α 2 , β).

Similarly, we can prove that if a, b ∈ F, and β1 , β 2 ∈ V , then


f (α, aβ1 + bβ 2 ) = a f (α, β1 ) + b f (α, β 2 ).
Therefore f is a bilinear form on U × V .
Now α i = 0α1 + … + 0α i − 1 + 1α i + 0α i + 1 + … + 0α n
and β j = 0β1 + … + 0β j − 1 + 1β j + 0β j + 1 + … + 0β m .
Therefore from (1), we have
f (α i , β j ) = a ij .
Thus there exists a bilinear form f on U × V such that
f (α i , β j ) = a ij .
Now to show that f is unique.
Let g be a bilinear form on U × V such that g (α i , β j ) = a ij . …(2)
n m
If α = Σ x i α i be in U and β = Σ y j β j be in V, then
i =1 j =1

 n m 
g (α , β) = g  Σ x i α i , Σ y j β j 
 i =1 j =1 
n m
= Σ Σ x i y j g (α i , β j ) [ ∵ g is a bilinear form]
i =1 j =1
n m
= Σ Σ x i y j a ij [from (2)]
i =1 j =1

= f (α , β). [from (1)]


∴ By the equality of two functions, we have
g = f.
Thus f is unique.
Matrix of a bilinear form: Definition : Let V be a finite-dimensional vector space
and let B = {α1 , … , α n} be an ordered basis for V. If f is a bilinear form on V, the matrix of
f in the ordered basis B is the n × n matrix A = [a ij ] n × n such that

f (α i , α j ) = a ij , i = 1, … , n ; j = 1, … , n.
We shall denote this matrix A by [ f ] B .
Rank of a bilinear form: Definition: The rank of a bilinear form is defined as the
rank of the matrix of the form in any ordered basis.
Let us describe all bilinear forms on a finite-dimensional vector space V of
dimension n.
n n
If α = Σ x i α i , and β = Σ y j α j are vectors in V, then
i =1 j =1

 n n  n n
f (α, β) = f  Σ x i α i , Σ y j α j  = Σ Σ x i y j f (α i , α j )
 i =1 j =1  i =1 j =1
n n
= Σ Σ x i y j a ij = X ′ AY,
i =1 j =1

where X and Y are coordinate matrices of α and β in the ordered basis B and X′ is the
transpose of the matrix X. Thus f (α, β) = [ α ] B ′ A [ β ] B .
From the definition of the matrix of a bilinear form, we note that if f is a bilinear
form on an n-dimensional vector space V over the field F and B is an ordered basis
of V,then there exists a unique n × n matrix A = [ a ij ] n × n over the field F such that
A = [ f ]B .
Conversely, if A = [ a ij ] n × n be an n × n matrix over the field F, then from
theorem 1, we see that there exists a unique bilinear form f on V such that
[ f ] B = [ a ij ] n × n .
n n
If α = Σ x i α i , β = Σ y j α j are vectors in V, then the bilinear form f is de-
i =1 j =1
n n
fined as f (α, β) = Σ Σ x i y j a ij = X ′ AY, …(1)
i =1 j =1

where X, Y are the coordinate matrices of α, β in the ordered basis B. Hence the
bilinear forms on V are precisely those obtained from an n × n matrix as in (1).
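As a computational aside (not part of the original text), the recipe [ f ] B = [ f (α i , α j ) ] is easy to mechanize. The short sketch below assumes NumPy is available; the particular bilinear form A0 chosen here is only an illustrative choice, and for this choice the output happens to coincide with the matrix found by hand in Example 3 below.

import numpy as np

def matrix_of_form(f, basis):
    # The (i, j) entry of [f]_B is f(alpha_i, alpha_j).
    n = len(basis)
    return np.array([[f(basis[i], basis[j]) for j in range(n)] for i in range(n)])

# Illustrative bilinear form on V_2(R): f(alpha, beta) = alpha' A0 beta.
A0 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
f = lambda a, b: np.array(a) @ A0 @ np.array(b)

B = [(1, -1), (1, 1)]            # the ordered basis used in Example 3 below
print(matrix_of_form(f, B))      # [[-2.  0.] [ 0.  2.]] for this choice of f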

Example 3: Let f be the bilinear form on V2 (R) defined by

f (( x1 , y1 ), ( x2 , y2 )) = x1 y1 + x2 y2 . (Lucknow 2011)

Find the matrix of f in the ordered basis


B = {(1, − 1), (1, 1)} of V2 (R).
Solution: Let B = {α1 , α 2 }
where α1 = (1, − 1), α 2 = (1, 1).
We have f (α1 , α1 ) = f ((1, − 1), (1, − 1)) = − 1 − 1 = − 2 ,
f (α1 , α 2 ) = f ((1, − 1), (1, 1)) = − 1 + 1 = 0,
f (α 2 , α1 ) = f ((1, 1), (1, − 1)) = 1 − 1 = 0,
f (α 2 , α 2 ) = f ((1, 1), (1, 1)) = 1 + 1 = 2 .
 −2 0 
∴ [ f ]B =  .
 0 2

8.4 Bilinear Form Corresponding to a Given Matrix


Bilinear Form: Definition: Let Vm and Vn be two vector spaces over the same field F and
let A = [a ij ] m × n be an m × n matrix over the field F.

 x1   y1 
x  y 
 2  2
Let X =  ...  and Y =  ...  be any two elements of Vm and Vn respectively so that
 ...   ... 
   
 x m   y n 

X T = the transpose of the column matrix X


= [ x1 x2 … … xm ]
T
and Y = [ y1 y2 … … y n ].
Then an expression of the form
m n
b (X, Y ) = X T AY = Σ Σ a ij x i y j …(1)
i =1 j =1

is called a bilinear form over the field F corresponding to the matrix A.


It should be noted that b (X, Y ) is an element of the field F and b (X, Y) is a
mapping from
Vm × Vn → F .
The matrix A is called the matrix of the bilinear form (1) and the rank of the matrix
A is called the rank of the bilinear form (1).
It should be noted that the coefficient of the product x i y j in (1) is the element a ij
of the matrix A which occurs in the i th row and the j th column.
Symmetric bilinear form: The bilinear form (1) is said to be symmetric if its
matrix A is a symmetric matrix.
If the field F is the real field R, then the bilinear form (1) is said to be a real bilinear
form. Thus in a real bilinear form b (X, Y ) assumes real values.
If the vectors X and Y belong to the same vector space Vn over a field F, then A is a
square matrix and X T AY is called a bilinear form on the vector space Vn over the
field F.
In order to show that b (X, Y ) = X T AY given by (1) is a bilinear form, first we
show that the mapping b (X, Y ) is a linear mapping from Vm → F.
Let X1 , X 2 be any two elements of Vm and α, β be any two elements of the field F
and let the vector Y be fixed. Then
b (αX1 + βX 2 , Y ) = (α X1 + βX 2 )T AY
= (α X1T + βX 2 T ) AY
= α X1T AY + βX 2 T AY
= αb (X1 , Y ) + βb (X 2 , Y ).
∴ The mapping b (X, Y ) is a linear mapping from Vm → F.
Now we show that the mapping b (X, Y ) is a linear mapping from Vn → F.
Let Y1 , Y2 be any two elements of Vn and α, β be any two elements of the field F and
let the vector X be fixed. Then

b (X, αY1 + βY2 ) = X T A (αY1 + βY2 )


= α X T AY1 + βX T AY2
= α b (X, Y1 ) + βb (X, Y2 ).
∴ The mapping b (X, Y ) is a linear mapping from Vn → F.
m n
Hence, b (X, Y ) = X T AY = Σ Σ a ij x i y j is a bilinear form.
i =1 j =1

Example of a real bilinear form:
Let b (X, Y ) = [ x1  x2 ]  [ 5  3  1 ]  [ y1 ]
                            [ 7  4  9 ]  [ y2 ]
                                         [ y3 ]
              = 5 x1 y1 + 3 x1 y2 + x1 y3 + 7 x2 y1 + 4 x2 y2 + 9 x2 y3 .
Then b (X, Y ) is an example of a real bilinear form.
 5 3 1
The matrix   is the matrix of this bilinear form. The rank of this matrix is
7 4 9
2 and so the rank of this bilinear form is 2.
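A quick symbolic check of this expansion (an aside, not part of the text, assuming SymPy is available; the symbol names are just placeholders):

import sympy as sp

x1, x2, y1, y2, y3 = sp.symbols('x1 x2 y1 y2 y3')
X = sp.Matrix([x1, x2])
Y = sp.Matrix([y1, y2, y3])
A = sp.Matrix([[5, 3, 1],
               [7, 4, 9]])

print(sp.expand((X.T * A * Y)[0, 0]))
# 5*x1*y1 + 3*x1*y2 + x1*y3 + 7*x2*y1 + 4*x2*y2 + 9*x2*y3
print(A.rank())     # 2, the rank of this bilinear form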
Equivalent Matrices: Definition: Let A and B be two m × n matrices over the same
field F. The matrix A is said to be equivalent to the matrix B if there exist non-singular
matrices P and Q of orders m and n respectively such that
B = P T AQ .
Equivalent Bilinear Forms: Definition: Let Vm and Vn be two vector spaces over the
same field F and let A and D be two m × n matrices over the field F.
The bilinear form b (X, Y ) = X T AY is said to be equivalent to the bilinear form
b (U, V ) = U T DV, where X and U are in Vm , Y and V are in Vn if there exist
non-singular matrices B and C of orders m and n respectively such that
D = BT AC
i. e., the matrices A and D are equivalent.
Thus, the bilinear form b (U, V ) equivalent to the bilinear b (X, Y ) is given by
b (U, V ) = U T (BT AC ) V = U T DV,
where D = BT AC and B and C are non-singular matrices of orders m and n
respectively. The transformations of vectors yielding these equivalent bilinear
forms are X = BU, Y = CV.
The matrices B and C are called the transformation matrices yielding these
equivalent bilinear forms.
Equivalent Canonical form of a given bilinear form.
Let Vm and Vn be two vector spaces over the same field F and let A be an m × n
matrix over the field F.

Let b (X, Y ) = X T AY ,
where X T = [ x1 x2 … x m ],
T
Y = [ y1 y2 … y n]
be a given bilinear form over the field F.
If the matrix A is of rank r, then there exist non-singular matrices P and Q of orders
m and n respectively such that
P T AQ = [ I r  O ]
         [ O    O ]
is in the normal form.
If we transform the vectors X and Y to U and V by the transformations
X = PU, Y = QV,
then the bilinear form b (U, V ) equivalent to the bilinear form b (X, Y) is given by
b (U, V ) = U T (P T AQ ) V = U T [ I r  O ] V
                                  [ O    O ]
= u1 v1 + u2 v2 + … + ur vr ,
where U = [u1 u2 … um ]T and V = [v1 v2 … vn ]T .
The bilinear form b (U, V ) = u1 v1 + … + ur vr is called the equivalent canonical form
or the equivalent normal form of the bilinear form
b (X, Y ) = X T AY .
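Once the transformation matrices P and Q of a reduction are known, the relation P T AQ = normal form is easy to verify numerically. The sketch below (not part of the text, assuming NumPy) does this for the matrix A of Example 5 below, using the P and Q obtained there.

import numpy as np

A = np.array([[2., 1., 1.],
              [4., 2., 2.],
              [1., 2., 2.]])
P = np.array([[0., 0., -1/3],
              [0., -1/6, 1/6],
              [1., 2/3, 0.]])
Q = np.array([[1., -2., 0.],
              [0., 1., -1.],
              [0., 0., 1.]])
print(np.round(P.T @ A @ Q, 10))   # diag(1, 1, 0), the equivalent canonical form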
Congruent Matrices. Definition: A square matrix B of order n over a field F is said to
be congruent to another square matrix A of order n over F, if there exists a non-singular matrix
P over F such that
B = P T AP.
Cogradient Transformations: Definition : Let X and Y be vectors belonging to
the same vector space Vn over a field F and let A be a square matrix of order n over F.Let
b (X, Y ) = X T AY
be a bilinear form on the vector space Vn over F.
Let B be a non-singular matrix of order n and let the vectors X and Y be transformed to the
vectors U and V by the transformations
X = BU, Y = BV.
Then the bilinear form b (X, Y ) transforms to the equivalent bilinear form
b (U, V ) = U T (BT AB ) V = U T DV,
where D = BT AB and U and V are both n-vectors.
The bilinear form U T DV is said to be congruent to the bilinear form X T AY . Under such
circumstances when X and Y are subjected to the same transformation X = BU and Y = BV,
we say that X and Y are transformed cogradiently.
Here, the matrix A is congruent to B because D = BT AB,where B is non- singular.

Example 4: Find the matrix A of each of the following bilinear forms b (X, Y ) = X T AY .

(i) 3 x1 y1 + x1 y2 − 2 x2 y1 + 3 x2 y2 − 3 x1 y3
(ii) 2 x1 y1 + x1 y2 + x1 y3 + 3 x2 y1 − 2 x2 y3 + x3 y2 − 5 x3 y3
Which of the above forms is symmetric ?
Solution: (i) The element a ij of the matrix A is the coefficient of x i y j in
the given bilinear form.
 3 1 −3 
∴ A = .
 −2 3 0 
The matrix A is not a symmetric matrix. So the given bilinear form is not
symmetric.
(ii) The element a ij of the matrix A which occurs in the i th row and the jth column
is the coefficient of x i y j in the given bilinear form.
 2 1 1
∴ A =  3 0 −2  .
 
 0 1 −5 
The matrix A is not symmetric. So the given bilinear form is not symmetric.
Example 5: Transform the bilinear form X T AY to the equivalent canonical form where
2 1 1
A = 4 2 2.
 
 1 2 2 

Solution: We write A = I 3 AI 3 i. e.,


 2 1 1  1 0 0   1 0 0 
4 2 2 = 0 1 0  A 0 1 0  ⋅
     
 1 2 2   0 0 1  0 0 1

Performing R1 ↔ R3 , we get
 1 2 2   0 0 1  1 0 0 
4 2 2  = 0 1 0  A0 1 0  ⋅
     
 2 1 1  1 0 0   0 0 1

Performing R2 → R2 − 4 R1 , R3 → R3 − 2 R1 , we get
 1 2 2 0 0 1  1 0 0 
 0 −6 −6  =  0 1 −4  A  0 1 0  ⋅
     
 0 −3 −3   1 0 −2   0 0 1

Performing C2 → C2 − 2C1 , C3 → C3 − 2C1 , we get


 1 0 0  0 0 1  1 − 2 − 2 
 0 −6 −6  =  0 1 −4  A  0 1 0⋅
     
 0 −3 −3   1 0 −2   0 0 1
Performing R2 → − (1/6) R2 , R3 → − (1/3) R3 , we get
[ 1  0  0 ]   [    0       0     1  ]     [ 1  − 2  − 2 ]
[ 0  1  1 ] = [    0    − 1/6   2/3 ]  A  [ 0    1    0 ] ⋅
[ 0  1  1 ]   [ − 1/3      0    2/3 ]     [ 0    0    1 ]

Performing R3 → R3 − R2 , we get
[ 1  0  0 ]   [    0       0     1  ]     [ 1  − 2  − 2 ]
[ 0  1  1 ] = [    0    − 1/6   2/3 ]  A  [ 0    1    0 ] ⋅
[ 0  0  0 ]   [ − 1/3     1/6    0  ]     [ 0    0    1 ]

Performing C3 → C3 − C2 , we get
[ 1  0  0 ]   [    0       0     1  ]     [ 1  − 2    0 ]
[ 0  1  0 ] = [    0    − 1/6   2/3 ]  A  [ 0    1  − 1 ] ⋅
[ 0  0  0 ]   [ − 1/3     1/6    0  ]     [ 0    0    1 ]

∴  P T AQ = [ I 2  O ] ,   where   P = [ 0     0    − 1/3 ]
            [ O    O ]                 [ 0   − 1/6    1/6 ]
                                       [ 1    2/3      0  ]

and Q = [ 1  − 2    0 ]
        [ 0    1  − 1 ] ⋅
        [ 0    0    1 ]
Hence if the vectors X = [ x1 x2 x3 ]T and Y = [ y1 y2 y3 ]T are
transformed to the vectors U = [u1 u2 u3 ]T and V = [v1 v2 v3 ]T
respectively by the transformations X = PU, Y = QV,
then the bilinear form
 2 1 1  y1 
[ x1 x2 x3 ]  4 2 2   y2 
   
 1 2 2   y3 

transforms to the equivalent canonical form


 1 0 0   v1 
[u1 u2 u3 ]  0 1 0  v2  = u1 v1 + u2 v2 .
  
 0 0 0  v3 

8.5 Quadratic Forms


n n
Definition: An expression of the form Σ Σ a ij x i x j , where a ij ’s are elements of a field
i =1 j =1

F, is called a quadratic form in the n variables x1 , x2 ,……, x n over the field F.


Real Quadratic Form: Definition : An expression of the form
n n
Σ Σ a ij x i x j ,
i =1 j =1

where a ij ' s are all real numbers, is called a real quadratic form in the n variables
x1 , x2 ,…, x n .
For example,
(i) 2 x 2 + 7 xy + 5 y 2 is a real quadratic form in the two variables x and y.
(ii) 2 x 2 − y 2 + 2z 2 − 2 yz − 4zx + 6 xy is a real quadratic form in the three
variables x, y and z.
(iii) x12 − 2 x2 2 + 4 x3 2 − 4 x4 2 − 2 x1 x2 + 3 x1 x4 + 4 x2 x3 − 5 x3 x4 is a real
quadratic form in the four variables x1 , x2 , x3 and x4 .
Theorem: Every quadratic form over a field F in n variables x1 , x2 ,……, x n can be
expressed in the form X ′ BX where X = [ x1 , x2 ,……, x n ]T is a column vector and B is a
symmetric matrix of order n over the field F.
n n
Proof: Let Σ Σ a ij x i x j , …(1)
i =1 j =1

be a quadratic form over the field F in the n variables x1 , x2 ,……, x n .


In (1) it is assumed that x i x j = x j x i . Then the total coefficient of x i x j in (1) is
a ij + a ji . Let us assign half of this coefficient to x ij and half to x ji . Thus we define
1
another set of scalars b ij , such that b ii = a ii and b ij = b ji = (a ij + a ji ), i ≠ j. Then
2
n n n n
we have Σ Σ a ij x i x j = Σ Σ b ij x i x j .
i =1 j =1 i =1 j =1

Let B = [b ij ] n × n . Then B is a symmetric matrix of order n over the field F since


b ij = b ji .
 x1 
x 
 2
Let X =  ... ⋅ Then X T or X ′ = [ x1 x2 …… x n ].
 ...
 
 x n 
Now X T BX is a matrix of the type1 × 1. It can be easily seen that the single element
n n
of this matrix is Σ Σ b ij x i x j .
i =1 j =1

If we identify a1 × 1matrix with its single element i. e.,if we regard a1 × 1matrix equal
to its single element, then we have
n n n n
XT BX = Σ Σ b ij x i x j = Σ Σ a ij x i x j .
i =1 j =1 i =1 j =1

Hence the result.

8.6 Matrix of a Quadratic Form


n n
Definition: If φ = Σ Σ a ij x i x j is a quadratic form in n variables x1 , x2 ,……, x n ,
i =1 j =1

then there exists a unique symmetric matrix B of order n such that φ = XT BX where
X = [ x1 x2 …… x n ]T . The symmetric matrix B is called the matrix of the quadratic
n n
form Σ Σ a ij x i x j .
i =1 j =1

Since every quadratic form can always be so written that matrix of its coefficients is
a symmetric matrix, therefore we shall be considering quadratic forms which are so
adjusted that the coefficient matrix is symmetric.

8.7 Quadratic form Corresponding to a Symmetric


Matrix
Let A = [a ij ] n × n be a symmetric matrix over the field F and let
X = [ x1 x2 …… x n ]T
be a column vector. Then XT AX determines a unique quadratic form
n n
Σ Σ a ij x i x j in n variables x1 , x2 ,……, x n over the field F.
i =1 j =1

Thus we have seen that there exists a one-to-one correspondence between the set of
all quadratic forms in n variables over a field F and the set of all n-rowed symmetric
matrices over F.

8.8 Quadratic Form Corresponding to any Given


Square Matrix
 x1 
x 
 2
Definition: Let X =  ...  be any n-vector in the vector space Vn over a field F so
 ... 
 
 x n 
that X T = [ x1 x2 … x n ].

Let A = [a ij ] n × n be any given square matrix of order n over the field F. Then any polynomial
of the form
n n
q ( x1 , x2 , … , x n ) = X T AX = Σ Σ a ij x i x j
i =1 j =1

is called a quadratic form of order n over F in the n variables x1 , x2 , … , x n .


We can always find a unique symmetric matrix B = [b ij ] n × n of order n such that
1
X T AX = X T BX . We have b ij = b ji = (a ij + a ji ).
2
Discriminant of a quadratic form: Singular and Non-singular Quadratic
forms: By the discriminant of a quadratic form X T AX , we mean det A. The
quadratic form X T AX is said to be non-singular if det A ≠ 0, and it is said to be
singular if det A = 0.
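The passage from an arbitrary square matrix A to the symmetric matrix B = (A + A T )/2 of the same quadratic form can be checked numerically. The following sketch (not from the text, assuming NumPy; the matrix and vector are arbitrary illustrative values) verifies X ′AX = X ′BX.

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(3, 3)).astype(float)   # an arbitrary square matrix
B = (A + A.T) / 2                                     # b_ij = (a_ij + a_ji) / 2

X = rng.standard_normal(3)
print(np.isclose(X @ A @ X, X @ B @ X))               # True: X'AX = X'BX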

Example 6: Write down the matrix of each of the following quadratic forms and verify that
they can be written as matrix products XT AX :
(i) x12 − 18 x1 x2 + 5 x2 2 .
(ii) x12 + 2 x2 2 − 5 x3 2 − x1 x2 + 4 x2 x3 − 3 x3 x1 .
Solution: (i) The given quadratic form can be written as
x1 x1 − 9 x1 x2 − 9 x2 x1 + 5 x2 x2 .
 1 −9 
Let A be the matrix of this quadratic form. Then A =  ⋅
 −9 5 
 x1 
Let X =   ⋅ Then X′ = [ x1 x2 ] ⋅
 x2 
 1 −9 
We have X ′ A = [ x1 x2 ]   = [ x1 − 9 x2 − 9 x1 + 5 x2 ] .
 −9 5 
 x1 
∴ X ′ AX = [ x1 − 9 x2 − 9 x1 + 5 x2 ]  
 x2 
= x1 ( x1 − 9 x2 ) + x2 (−9 x1 + 5 x2 )
= x12 − 9 x1 x2 − 9 x2 x1 + 5 x2 2
= x12 − 18 x1 x2 + 5 x2 2 .
(ii) The given quadratic form can be written as
x1 x1 − (1/2) x1 x2 − (3/2) x1 x3 − (1/2) x2 x1 + 2 x2 x2 + 2 x2 x3 − (3/2) x3 x1
+ 2 x3 x2 − 5 x3 x3 .
Let A be the matrix of this quadratic form. Then
A = [   1    − 1/2   − 3/2 ]
    [ − 1/2     2       2  ] ⋅
    [ − 3/2     2     − 5  ]
Obviously A is a symmetric matrix.
Let X = [ x1  x2  x3 ]′ . Then X′ = [ x1  x2  x3 ].
We have X ′ A = [ x1  x2  x3 ] [   1    − 1/2   − 3/2 ]
                               [ − 1/2     2       2  ]
                               [ − 3/2     2     − 5  ]
= [ x1 − (1/2) x2 − (3/2) x3 ,   − (1/2) x1 + 2 x2 + 2 x3 ,   − (3/2) x1 + 2 x2 − 5 x3 ].
∴ X ′AX = [ x1 − (1/2) x2 − (3/2) x3 ,  − (1/2) x1 + 2 x2 + 2 x3 ,  − (3/2) x1 + 2 x2 − 5 x3 ] [ x1  x2  x3 ]′
= x1 ( x1 − (1/2) x2 − (3/2) x3 ) + x2 (− (1/2) x1 + 2 x2 + 2 x3 ) + x3 (− (3/2) x1 + 2 x2 − 5 x3 )
= x1² − (1/2) x1 x2 − (3/2) x1 x3 − (1/2) x2 x1 + 2 x2² + 2 x2 x3 − (3/2) x3 x1
+ 2 x3 x2 − 5 x3²
= x1² + 2 x2² − 5 x3² − x1 x2 + 4 x2 x3 − 3 x3 x1 .

Example 7: Obtain the matrices corresponding to the following quadratic forms :


(i) x 2 + 2 y 2 + 3z 2 + 4 xy + 5 yz + 6zx.
(ii) ax 2 + by 2 + cz 2 + 2 fyz + 2 gzx + 2hxy.
Solution: (i) The given quadratic form can be written as
x² + 2 xy + 3 xz + 2 yx + 2 y² + (5/2) yz + 3 zx + (5/2) zy + 3 z² .
∴ if A is the matrix of this quadratic form, then
A = [ 1   2    3  ]
    [ 2   2   5/2 ] ,  which is a symmetric matrix of order 3.
    [ 3  5/2   3  ]
(ii) The given quadratic form can be written as
ax 2 + hxy + gxz + hyx + by 2 + fyz + gzx + fzy + cz 2 .
∴ if A is the matrix of this quadratic form, then

a h g 
A=h b f ⋅
 
 g f c 

Example 8: Write down the quadratic forms corresponding to the following matrices :
(i) [  0   5  − 1 ]        (ii) [ 0  1  2  3 ]
    [  5   1    6 ]             [ 1  2  3  4 ]
    [ − 1  6    2 ]             [ 2  3  4  5 ] ⋅
                                [ 3  4  5  6 ]
Solution: (i) Let X = [ x1 x2 x3 ]T and A denote the given symmetric
matrix. Then XT AX is the quadratic form corresponding to this matrix. We have
 0 5 − 1
X A = [ x1 x2 x3 ]  5 1 6 
T
 
 −1 6 2 
= [5 x2 − x3 5 x1 + x2 + 6 x3 − x1 + 6 x2 + 2 x3 ].
∴ X T AX = x1 (5 x2 − x3 ) + x2 (5 x1 + x2 + 6 x3 ) + x3 (− x1 + 6 x2 + 2 x3 )
= x2 2 + 2 x3 2 + 10 x1 x2 − 2 x1 x3 + 12 x2 x3 .
(ii) Let X = [ x1 x2 x3 x4 ]T and A denote the given symmetric matrix.
Then X T AX is the quadratic form corresponding to this matrix. We have
0 1 2 3
 1 2 3 4
X T A = [ x1 x2 x3 x4 ]  
2 3 4 5
3 4 5 6
 
= [x2 + 2 x3 + 3 x4 x1 + 2 x2 + 3 x3 + 4 x4 2 x1 + 3 x2 + 4 x3 + 5 x4
3 x1 + 4 x2 + 5 x3 + 6 x4 ]
T
∴ X AX = x1 ( x2 + 2 x3 + 3 x4 ) + x2 ( x1 + 2 x2 + 3 x3 + 4 x4 )
+ x3 (2 x1 + 3 x2 + 4 x3 + 5 x4 ) + x4 (3 x1 + 4 x2 + 5 x3 + 6 x4 )
= 2 x2 2 + 4 x3 2 + 6 x4 2 + 2 x1 x2 + 4 x1 x3 + 6 x1 x4
+ 6 x2 x3 + 8 x2 x4 + 10 x3 x4 .

C omprehensive E xercise 1

1. Which of the following functions f , defined on vectors α = ( x1 , x2 ) and


β = ( y1 , y2 ) in R 2 , are bilinear forms ?
(i) f (α, β) = x1 y2 − x2 y1
(ii) f (α, β) = ( x1 − y1 )2 + x2 y2 ⋅

2. Find the matrix A of each of the following bilinear forms b (X, Y ) = X T AY .


(i) − 5 x1 y1 − x1 y2 + 2 x2 y1 − x3 y1 + 3 x3 y2
(ii) 4 x1 y1 + x1 y2 + x2 y1 − 2 x2 y2 − 4 x2 y3 − 4 x3 y2 + 7 x3 y3 ⋅
Which of the above forms is symmetric ?
3. Determine the transformation matrices P and Q so that the bilinear form
X T AY = x1 y1 + x1 y2 + 2 x1 y3 + x2 y1 + 2 x2 y2 + 3 x2 y3 − x3 y2 − x3 y3
is equivalent to a canonical form.
4. Obtain the matrices corresponding to the following quadratic forms
(i) ax 2 + 2hxy + by 2 (ii) 2 x1 x2 + 6 x1 x3 − 4 x2 x3 .
(iii) x12 + 5 x22 − 7 x3 2 (iv) 2 x12 − 7 x32 + 4 x1 x2 − 6 x2 x3 .
5. Obtain the matrices corresponding to the following quadratic forms :
(i) a11 x12 + a22 x22 + a33 x32 + 2a12 x1 x2 + 2a23 x2 x3 + 2a31 x3 x1 .
(ii) x12 − 2 x22 + 4 x32 − 4 x42 − 2 x1 x2 + 3 x1 x3 + 4 x2 x3 − 5 x3 x4 .
(iii) x1 x2 + x2 x3 + x3 x1 + x1 x4 + x2 x4 + x3 x4 .
(iv) x12 − 2 x2 x3 − x3 x4 .
(v) d1 x12 + d2 x22 + d3 x32 + d4 x42 + d5 x52 .
6. Find the matrix of the quadratic form
x12 − 2 x22 − 3 x32 + 4 x1 x2 + 6 x1 x3 − 8 x2 x3 .
and verify that it can be written as a matrix product X ′AX.
7. Write down the quadratic forms corresponding to the following symmetric
matrices :
1 2 3
(i)  2 0 3  (ii) diag. [λ 1 , λ 2 , ......, λ n ]
 
 3 3 1 
8. Write down the quadratic form corresponding to the matrix
0 a b c 
 a 0 l m
 ⋅
b l 0 p
c m p 0
 
9. Write down the quadratic form associated with the matrix
 2 1 2

B = − 3 − 3 − 1 .
 
 4 1 3 
Rewrite the matrix A of the form so that it is symmetric.

A nswers 1

1. (i) f is a bilinear form on R²
   (ii) f is not a bilinear form on R²
2. (i) A = [ − 5  − 1 ]
           [   2    0 ] ⋅  The given bilinear form is not symmetric
           [ − 1    3 ]
   (ii) A = [ 4    1    0 ]
            [ 1  − 2  − 4 ] ⋅  The given bilinear form is symmetric
            [ 0  − 4    7 ]
3. P = [ 1  − 1  − 1 ]          Q = [ 1  − 1  − 1 ]
       [ 0    1    1 ] ,            [ 0    1  − 1 ]
       [ 0    0    1 ]              [ 0    0    1 ]
4. (i) [ a  h ]        (ii) [ 0    1    3 ]
       [ h  b ]             [ 1    0  − 2 ]
                            [ 3  − 2    0 ]
   (iii) [ 1  0    0 ]     (iv) [ 2    2    0 ]
         [ 0  5    0 ]          [ 2    0  − 3 ]
         [ 0  0  − 7 ]          [ 0  − 3  − 7 ]
5. (i) [ a11  a12  a31 ]     (ii) [   1    − 1    3/2     0  ]
       [ a12  a22  a23 ]          [ − 1    − 2     2      0  ]
       [ a31  a23  a33 ]          [  3/2     2     4   − 5/2 ]
                                  [   0      0   − 5/2   − 4 ]
   (iii) [  0   1/2  1/2  1/2 ]   (iv) [ 1    0     0      0  ]
         [ 1/2   0   1/2  1/2 ]        [ 0    0   − 1      0  ]
         [ 1/2  1/2   0   1/2 ]        [ 0  − 1     0   − 1/2 ]
         [ 1/2  1/2  1/2   0  ]        [ 0    0  − 1/2     0  ]

(v) diag. [d1 , d2 , d3 , d4 , d5 ]


 1 2 3
6.  2 −2 −4 
 
 3 −4 −3 

7. (i) x12 + x32 + 4 x1 x2 + 6 x1 x3 + 6 x2 x3


(ii) λ 1 x12 + λ 2 x22 + ..... + λ n x 2n

8. 2ax1 x2 + 2bx1 x3 + 2cx1 x4 + 2lx2 x3 + 2mx2 x4 + 2 px3 x4


9. 2 x12 − 3 x22 + 3 x32 − 2 x1 x2 + 6 x1 x3
 2 −1 3 
A =  −1 −3 0 
 
 3 0 3 

8.9 Linear Transformations


Suppose Vn is the vector space of all ordered n-tuples of the elements of a field F and
let the vectors in Vn be written as column vectors. Let P be a matrix of order n over
the field F. If Y = [ y1 , y2 , … , y n ]T is a vector in Vn , then Y is a matrix of the type
n × 1.Obviously PYis a matrix of the type n × 1.Thus PYis also a vector in Vn .Let
PY = X = [ x1 , x2 , … , x n ]T .
The relation PY = X thus gives a mapping from Vn into Vn .
Since P (aY1 + bY2 ) = a (PY1 ) + b (PY2 ), therefore this mapping is a linear
transformation. If the matrix P is non-singular, then the linear transformation is
also said to be non-singular. Also if the matrix P is non-singular, then the mapping
PY = X is one-one onto as shown below :
Mapping P is one-one. We have PY1 = PY2
⇒ P −1 (PY1 ) = P −1 (PY2 ) ⇒ Y1 = Y2 .
Therefore the mapping P is one-one.
Mapping P is onto. Let Z be any vector in Vn . Then P −1 Z is also a vector of Vn
and we have P (P −1 Z) = (PP −1 ) Z = IZ = Z. Therefore the mapping P is onto.
If the linear transformation PY = X is non-singular, then PY = O if and only if
Y = O.
If Y = O then obviously PY = O.
Conversely, PY = O ⇒ P −1 (PY) = P −1 O ⇒ Y = O.

8.10 Congruence of Matrices


Definition:A square matrix B of order n over a field F is said to be congruent to another
square matrix A of order n over F, if there exists a non-singular matrix P over F such that
B = P T AP.
Theorem 1: The relation of ‘congruence of matrices’ is an equivalence relation in the set of
all n × n matrices over a field F.

Proof: Reflexivity: Let A be any n × n matrix over a field F. Then A = IT AI,


where I is unit matrix of order n over F. Since I is non-singular, therefore A is
congruent to itself.
Symmetry: Suppose A is congruent to B. Then A = P ′ BP, where P is
non-singular.
∴ (P ′ ) −1 AP −1 = (P ′ ) −1 P ′ BPP −1 = B
⇒ (P −1 ) ′ AP −1 = B [∵ (P ′ ) −1 = (P −1 ) ′ ]
⇒ B is congruent to A.
Transitivity: Suppose A is congruent to B and B is congruent to C. Then
A = P ′ BP, B = Q ′ CQ, where P and Q are non-singular. Therefore
A = P′ (Q′ CQ) P = (P′ Q′ ) CQP = (QP) ′ C (QP).
Since QP is also a non-singular matrix, therefore A is congruent to C.
Thus the relation of ‘congruence of matrices’ is reflexive, symmetric and transitive.
So it is an equivalence relation.
Theorem 2 : Every matrix congruent to a symmetric matrix is a symmetric matrix.
Proof : Let a matrix B be congruent to a symmetric matrix A.
Then there exists a non-singular matrix P such that B = P ′AP.
We have B′ = (P′AP) ′ = P′ A ′ (P′ ) ′ = P′ A ′ P
= P′AP [ ∵ A is symmetric ⇒ A ′ = A]
= B.
∴ B is also a symmetric matrix.
Congruence operation on a square matrix or Congruence transformations of
a square matrix.
A congruence operation of a square matrix is an operation of any one of the
following three types :
(i) Interchange of the i th and the j th rows as well as of the i th and the j th
columns. Both should be applied simultaneously. Thus the operation
Ri ↔ R j followed by Ci ↔ C j is a congruence operation.
(ii) Multiplication of the i th row as well as the i th column by a non-zero
number c i. e., Ri → cRi followed by Ci → cCi .
(iii) Ri → Ri + kR j followed by Ci → Ci + kC j .
Now we shall show that each congruent transformation of a matrix consists of a pair of
elementary transformations, one row and the other column, such that of the corresponding
elementary matrices each is the transpose of the other.
(a) Let E* , E be the elementary matrices corresponding to the elementary
transformations Ri ↔ R j and Ci ↔ C j respectively. Then
E* = E = E′ .

(b) Let E * , E be the elementary matrices corresponding to the elementary


transformations Ri → cRi and Ci → cCi respectively where c ≠ 0. Then
E* = E = E′ .
(c) Let E * , E be the elementary matrices corresponding to the elementary
transformations Ri → Ri + kR j and Ci → Ci + kC j respectively. Then
E* = E′ .
Now we know that every elementary row (column) transformation of a matrix can
be brought about by pre-multiplication (post-multiplication) with the
corresponding elementary matrix. Therefore if a matrix B has been obtained from A
by a finite chain of congruent operations applied on A, then there exist elementary
matrices E 1 , E 2 , ... , E s such that
B = Es ′ ... E2 ′ E1 ′ AE1 E2 ... Es
= (E 1 E 2 ... E s ) ′ A (E 1 E 2 ... E s )
= P ′ AP, where P = E 1 E 2 ... E s is a non-singular matrix.
Therefore B is congruent to A. Thus every matrix B obtained from any given matrix A by
subjecting A to a finite chain of congruent operations is congruent to B.
The converse is also true. If B is congruent to A, then
B = P′AP
where P is a non-singular matrix. Now every non-singular matrix can be expressed
as the product of elementary matrices. Therefore we can write P = E 1 E 2 ... E s
where E 1 , ... , E s are elementary matrices. Then
B = E s ′ . .. E 2 ′ E 1 ′ AE 1 E 2 . .. E s . Therefore B is obtained from A by a finite
chain of congruent operations applied on A.
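The pairing of a row operation with the matching column operation can be seen concretely. In the sketch below (an illustration, not from the text, assuming NumPy; the matrix and the scalar k are arbitrary), the congruent operation R2 → R2 + kR1 followed by C2 → C2 + kC1 is carried out as E′AE, where E is the elementary matrix of the column operation.

import numpy as np

A = np.array([[2., 1., 1.],
              [1., 3., 0.],
              [1., 0., 4.]])      # an arbitrary symmetric matrix
k = -0.5
E = np.eye(3)
E[0, 1] = k                       # post-multiplication by E performs C2 -> C2 + k*C1

B = E.T @ A @ E                   # E' performs the matching row operation
print(B)
print(np.allclose(B, B.T))        # True: the congruent image of a symmetric matrix is symmetric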

8.11 Congruence of Quadratic Forms or Equivalence of


Quadratic Forms
Definition: Two quadratic forms XT AX and YT BY over a field F are said to be
congruent or equivalent over F if their respective matrices A and B are congruent over F. Thus
XT AX is equivalent to YT BY if there exists a non-singular matrix P over F such that
PT AP = B.
Since congruence of matrices is an equivalence relation, therefore equivalence of
quadratic forms is also an equivalence relation.
Equivalence of Real Quadratic Forms:
Definition: Two real quadratic forms X T AX and Y T BY are said to be real equivalent,
orthogonally equivalent, or complex equivalent according as there exists a non-singular real,
orthogonal, or a non-singular complex matrix P such that
B = PT AP.

8.12 The Linear Transformation of a Quadratic Form


Consider a quadratic form
X T AX …(1)
and a non-singular linear transformation
X = PY …(2)
so that P is a non-singular matrix.
Putting X = PY in (1), we get
X T AX = (PY )T A (PY) = Y T P T APY
= Y T BY, where B = P T AP.
Since B is congruent to a symmetric matrix A, therefore B is also a symmetric
matrix. Thus Y T BY is a quadratic form. It is called a linear transform of the form
X T AX by the non-singular matrix P. The matrix of the quadratic form Y T BY is
B = P T AP.
Thus the quadratic form Y T BY is congruent to X T AX.
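A numerical illustration of this substitution (a sketch, not part of the text, assuming NumPy; the matrices and the vector are arbitrary illustrative values): with B = P ′AP and X = PY, the two forms take the same value.

import numpy as np

A = np.array([[1., 2.],
              [2., -3.]])          # an arbitrary symmetric matrix
P = np.array([[1., 1.],
              [0., 2.]])           # a non-singular transformation matrix
B = P.T @ A @ P

Y = np.array([0.7, -1.3])          # an arbitrary vector of new variables
X = P @ Y
print(np.isclose(X @ A @ X, Y @ B @ Y))   # True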
Theorem: The ranges of values of two congruent quadratic forms are the same.
Proof: Let φ = X ′ AX and ψ = Y ′ BY be two congruent quadratic forms. Then
there exists a non-singular matrix P such that B = P T AP.
Consider the linear transformation X = PY.
Let φ = p when X = X 1 . Then p = X 1 ′ AX 1 . The value of ψ when Y = P − 1 X 1 is
= (P −1 X 1 ) ′ B (P −1 X 1 ) = X 1 ′ (P −1 ) ′ P′ APP −1 X 1
= X 1 ′ (P′ ) −1 P′ AX 1 = X 1 ′ AX 1 = p.
Thus each value of φ is equal to some value of ψ.
Conversely let ψ = q when Y = Y1 . Then q = Y1 ′ BY1 . The value of φ when X = PY1
is = (PY1 ) ′ A (PY1 ) = Y1 ′ P′ APY1
= Y1 ′ BY1 = q.
Thus each value of ψ is equal to some value of φ.
Hence φ and ψ have the same ranges of values.
Corollary: If the quadratic form Y ′ BY is a linear transform of the quadratic form X ′ AX
by a non-singular matrix P, then the two forms are congruent and so have the same ranges of
values.

8.13 Congruent Reduction of a Symmetric Matrix


Theorem : If A be any n-rowed non-zero symmetric matrix of rank r, over a field F, then
there exists an n-rowed non-singular matrix P over F such that
P ′AP = [ A1  O ]
        [ O   O ]
where A1 is a non-singular diagonal matrix of order r over F and each O is a null matrix of
suitable size.
Or
Every symmetric matrix of rank r is congruent to a diagonal matrix exactly r of whose diagonal
elements are non-zero.
Proof : We shall prove the theorem by induction on n, the order of the given
matrix. If n = 1, the theorem is obviously true. Let us suppose that the theorem is
true for all symmetric matrices of order n − 1. Then we shall show that it is also true
for an n × n symmetric matrix A.
Let A = [ a ij ] n × n be a symmetric matrix of rank r over a field F. First we shall show
that there exists a matrix B = [ b ij ] n × n over F congruent to A such that b11 ≠ 0.
Case 1: If a11 ≠ 0, then we take B = A.
Case 2:If a11 = 0, but some diagonal element of A, say, a ii ≠ 0, then applying the
congruent operation Ri ↔ R1 , Ci ↔ C1 to A, we obtain a matrix B congruent
to A such that
b11 = a ii ≠ 0.
Case 3: Suppose that each diagonal element of A is 0. Since A is a non-zero
matrix, let a ij be a non-zero element of A. Then a ij = a ji ≠ 0.
Applying the congruent operation Ri → Ri + R j , Ci → Ci + C j to A, we obtain a
matrix D = [ dij ] n × n congruent to A such that dii = a ij + a ji = 2 a ij ≠ 0.
Now applying the congruent operation Ri ↔ R1 , Ci ↔ C1 to Dwe obtain a matrix
B = [ b ij ] n × n
congruent to D and, therefore, also congruent to A such that b11 = dii ≠ 0.
Thus there always exists a matrix B = [ b ij ] n × n
congruent to a symmetric matrix, such that the leading element of B is not zero.
Since B is congruent to a symmetric matrix, therefore B itself is a symmetric matrix.
Since b11 ≠ 0, therefore all elements in the first row and first column of B, except the
leading element, can be made 0 by suitable congruent operations. We thus have a matrix
C = [ b11  0  ...  0 ]
    [  0             ]
    [  ⋮       B1    ] ,
    [  0             ]
congruent to B and, therefore, also congruent to A such that B1 is a square matrix of
order n − 1. Since C is congruent to a symmetric matrix A, therefore C is also a
symmetric matrix and consequently B1 is also a symmetric matrix. Thus B1 is a
symmetric matrix of order n − 1.

Therefore by our induction hypothesis it can be reduced to a diagonal matrix by


congruent operations. If the congruent operations applied to B1 for this purpose be
applied to C, they will not affect the first row and the first column of C. So C can be
reduced to a diagonal matrix by congruent operations. Thus A is congruent to a
diagonal matrix, say diag [λ 1 , λ 2 , … , λ k , 0, 0, … , 0]. Thus there exists a
non-singular matrix P such that
P ′AP = diag [λ 1 ,… , λ k , 0, … , 0].
Since rank A = r and the rank of a matrix does not change on multiplication by a
non-singular matrix, therefore rank of the matrix
P ′ AP = diag. [λ 1 ,… , λ k , 0, … , 0] is also r. So precisely r elements of diag.
[λ 1 ,… , λ k , 0, … , 0] are non-zero. Therefore k=r and thus
P ′AP = diag. [λ 1 ,… , λ r , 0, … , 0].
Thus A can be reduced to diagonal form by congruent operations.
The proof is now complete by induction.
Corollary: Corresponding to every quadratic form X ′ AX over a field F, there exists a
non-singular linear transformation X = PY over F, such that the form X ′ AX transforms
to a sum of r, square terms λ 1 y12 + … + λ r y r 2 ,
where λ 1 , … , λ r belong to the field F and r is the rank of the matrix A.
Rank of a quadratic form: Definition : Let X ′ AX be a quadratic form over a field
F. The rank of the matrix A is called the rank of the quadratic form X ′ AX.
If X ′ AX is a quadratic form of rank r, then there exists a non-singular matrix P
which will reduce the form X ′ AX to a sum of r square terms.
Working Rule for Numerical Problems:
We should transform the given symmetric matrix A to diagonal form by applying
congruent operations. Then the application of corresponding column operations
to the unit matrix I n will give us a non-singular matrix P such that
P ′ AP = a diagonal matrix.
The whole process will be clear from the following examples.

Example 9: Determine a non-singular matrix P such that P′AP is a diagonal matrix,


where
 6 −2 2 
A =  − 2 3 − 1 ⋅
 
 2 −1 3 

Interpret the result in terms of quadratic forms.



Solution: We write A = IAI


 6 −2 2   1 0 0   1 0 0 
i. e.,  − 2 3 − 1 =  0 1 0  A  0 1 0  ⋅
     
 2 −1 3   0 0 1  0 0 1
We shall reduce A to diagonal form by applying congruent operations. On the right
hand side IAI, the corresponding row operations will be applied on prefactor I and
the column operations will be applied on the post-factor I. There is no need of
actually applying column operations on post-factor I because at any stage the
matrix obtained by applying column operations on post-factor I will be the
transpose of the matrix obtained by applying row operations on pre-factor I.
Performing the congruent operations
R2 → R2 + (1/3) R1 , C2 → C2 + (1/3) C1
and R3 → R3 − (1/3) R1 , C3 → C3 − (1/3) C1 ,
we have
[ 6     0      0  ]   [    1     0  0 ]     [ 1  1/3  − 1/3 ]
[ 0    7/3  − 1/3 ] = [  1/3     1  0 ]  A  [ 0   1      0  ] ⋅
[ 0  − 1/3   7/3  ]   [ − 1/3    0  1 ]     [ 0   0      1  ]
Now performing the congruent operation
R3 → R3 + (1/7) R2 , C3 → C3 + (1/7) C2 , we get
[ 6   0     0   ]   [    1      0   0 ]     [ 1  1/3  − 2/7 ]
[ 0  7/3    0   ] = [  1/3      1   0 ]  A  [ 0   1    1/7  ] ⋅
[ 0   0   16/7  ]   [ − 2/7    1/7  1 ]     [ 0   0     1   ]
Thus we obtain a non-singular matrix
P = [ 1  1/3  − 2/7 ]   such that   P ′AP = diag. [ 6 , 7/3 , 16/7 ] ⋅
    [ 0   1    1/7  ]
    [ 0   0     1   ]

The quadratic form corresponding to the matrix A is
X ′AX = 6 x12 + 3 x2 2 + 3 x3 2 − 4 x1 x2 − 2 x2 x3 + 4 x3 x1 . …(1)
The non-singular transformation corresponding to the matrix P is given by X = PY
i. e.,
[ x1 ]   [ 1  1/3  − 2/7 ] [ y1 ]
[ x2 ] = [ 0   1    1/7  ] [ y2 ]
[ x3 ]   [ 0   0     1   ] [ y3 ]

which is equivalent to
x1 = y1 + (1/3) y2 − (2/7) y3
x2 = y2 + (1/7) y3                 …(2)
x3 = y3 .

The transformation (2) will reduce the quadratic form (1) to the diagonal form
Y ′ P ′APY = 6 y1² + (7/3) y2² + (16/7) y3² .
The rank of the quadratic form X ′ AX is 3. So it has been reduced to a form which is
a sum of three squares.
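The working rule of 8.13 can also be mechanized. The sketch below (not part of the text, assuming NumPy, and assuming for simplicity that every pivot encountered is non-zero so that no interchange is needed; the helper name congruent_diagonalize is hypothetical) accumulates the column operations in P and reproduces the reduction carried out in Example 9 above.

import numpy as np

def congruent_diagonalize(A):
    # Return (D, P) with P'AP = D diagonal, using congruent operations.
    D = A.astype(float).copy()
    n = D.shape[0]
    P = np.eye(n)
    for i in range(n):
        if D[i, i] == 0:
            continue                    # the general case needs an interchange here
        for j in range(i + 1, n):
            k = -D[j, i] / D[i, i]
            D[j, :] += k * D[i, :]      # R_j -> R_j + k R_i
            D[:, j] += k * D[:, i]      # C_j -> C_j + k C_i
            P[:, j] += k * P[:, i]      # record the column operation in P
    return D, P

A = np.array([[6., -2., 2.],
              [-2., 3., -1.],
              [2., -1., 3.]])
D, P = congruent_diagonalize(A)
print(np.round(D, 10))                  # diag(6, 7/3, 16/7), as obtained above
print(np.allclose(P.T @ A @ P, D))      # True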

8.14 Reduction of a Real Quadratic Form


Definition: Diagonal and Unit Quadratic Forms: If the matrix of a quadratic
form is diagonal, then it is called a diagonal quadratic form.
For example, X T diag. [a1 , a2 , … , a n ] X = a1 x12 + a2 x2 2 + … + a n x n2 is a
diagonal quadratic form. In the diagonal quadratic form, some of the a i ’s may be
zero.
If the matrix of a quadratic form is unit, then it is called a unit quadratic form.
For example, X T I n X = x12 + x2 2 + … + x n2 is a unit quadratic form.
Theorem 1: If A be any n-rowed real symmetric matrix of rank r, then there exists a real
non-singular matrix P such that
P′ AP = diag. [1, 1, … , 1, − 1, − 1, … , − 1, 0, … , 0]
so that 1, appears p times and − 1 appears r − p times.
Proof: A is a real symmetric matrix of rank r.Therefore there exists a non-singular
real matrix Q such that Q ′AQ is a diagonal matrix D with precisely r non-zero
diagonal elements. Let
Q ′AQ = D = diag. [λ 1 , λ 2 , … , λ r , 0, … , 0].
Suppose that p of the non-zero diagonal elements are positive. Then r − p are
negative.
Since in a diagonal matrix the positions of the diagonal elements occurring in i th
and j th rows can be interchanged by applying the congruent operation
Ri ↔ R j , Ci ↔ C j , therefore without any loss of generality we can take
λ 1 , … , λ p to be positive and λ p + 1 , … , λ r to be negative.

Let S be the n × n (real) diagonal matrix with diagonal elements

1/√λ1 , … , 1/√λ p , 1/√(− λ p + 1 ) , … , 1/√(− λ r ) , 1, … , 1.
Then S = diag [ 1/√λ1 , … , 1/√λ p , 1/√(− λ p + 1 ) , … , 1/√(− λ r ) , 1, … , 1 ]
is a real non-singular diagonal matrix and S ′ = S.
If we take P = QS, then P is also real non-singular matrix and we have
P′ AP = (QS) ′ A (QS) = S ′ Q′ AQS = S ′ DS = SDS
= diag. [1, … , 1, − 1, … , − 1, 0, … , 0]
so that 1 and − 1 appear p and r − p times respectively.
Corollary: If X ′AX is a real quadratic form of rank r in n variables, then there exists a real
non-singular linear transformation X = PY which transforms X ′AX to the form
Y′ P′ APY = y12 + … + y p 2 − y p + 12 − … − y r 2 .
Canonical or Normal form of a real quadratic form: Definition: If X ′AX is
a real quadratic form in n variables, then there exists real non-singular linear transformation
X = PY which transforms X ′ AX to the form
y12 + … + y p 2 − y p + 12 − … − y r 2 .
In the new form the given quadratic form has been expressed as a sum and difference of the
squares of new variables. This latter expression is called the canonical form or normal form of
the given quadratic form.
If φ = X ′ AX is a real quadratic form of rank r, then A is a matrix of rank r. If the real
non-singular linear transformation X = PY reduces φ to normal form, then P′ AP is a
diagonal matrix having 1 and − 1as its non-zero diagonal elements. Since P′ AP is
also of rank r,therefore it will have precisely r non-zero diagonal elements. Thus the
number of terms in each normal form of a given real quadratic form is the same.
Now we shall prove that the number of positive terms in any two normal
reductions of a real quadratic form is the same.
Theorem 2:The number of positive terms in any two normal reductions of a real quadratic
form is the same.
Proof: Let φ = X ′ AX be a real quadratic form of rank r in n variables. Suppose the
real non-singular linear transformations
X = PY and X = QZ
transform φ to the normal forms
y12 + … + y p 2 − y p + 12 − … − y r 2 …(1)
2 2 2 2
and z1 + … + z q − z q +1 − … − z r …(2)
respectively.
To prove that p = q.
Let p < q. Obviously y1 , … , y n , z1 , … , z n are linear homogeneous functions of
x1 , … , x n . Since q > p, therefore q − p > 0. So n − (q − p) is less than n. Therefore
(n − q) + p is less than n.

Now y1 = 0, y2 = 0, … , y p = 0, z q + 1 = 0, z q + 2 = 0, … , z n = 0 are (n − q) + p
linear homogeneous equations in n unknowns x1 , … , x n . Since the number of
equations is less than the number of unknowns n, therefore these equations must
possess a non-zero solution. Let x1 = a1 , … , x n = a n be a non-zero solution of these
equations and let X1 = [a1 , … , a n ]′ . Let Y = [b1 , … , b n ]′ = Y1 and Z = [c1 , … , c n ]′
when X = X1 . Then b1 = 0, … , b p = 0 and c q + 1 = 0, … , c n = 0. Putting
Y = [b1 , … , b n ]′ in (1) and Z = [c1 , … , c n ]′ in (2), we get two values of φ when
X = X1 .
These must be equal. Therefore we have
− b p + 12 − … − b r 2 = c12 + … + c q 2
⇒ b p + 1 = 0, … , b r = 0
⇒ Y1 = O
⇒ P −1 X 1 = O [ ∵ X1 = PY1 ]
⇒ X1 = O,
which is a contradiction since X1 is a non-zero vector.
Thus we cannot have p < q. Similarly, we cannot have q < p. Hence we must have
p = q.
Corollary: The number of negative terms in any two normal reductions of a real quadratic
form is the same. Also the excess of the number of positive terms over the number of negative
terms in any two normal reductions of a real quadratic form is the same.
Signature and index of a real quadratic form:
Definition: Let y12 + … + y p 2 − y p + 12 − … − y r 2 be a normal form of a real
quadratic form X ′ AX of rank r. The number p of positive terms in a normal form of X ′ AX is
called the index of the quadratic form. The excess of the number of positive terms over the
number of negative terms in a normal form of
X ′AX i. e., p − (r − p) = 2 p − r
is called the signature of the quadratic form and is usually denoted by s.
Thus s = 2 p − r.
In terms of signature theorem 2 may be stated as follows :
Theorem 3: Sylvester’s Law of Inertia: The signature of a real quadratic form is
invariant for all normal reductions.
Proof: This follows at once from the definition of signature together with the proof of Theorem 2.
Theorem 4: Two real quadratic forms in n variables are real equivalent if and only if they
have the same rank and index (or signature).
Proof: Suppose X ′ AX and Y ′ BY are two real quadratic forms in the same
number of variables.
Let us first assume that the two forms are equivalent. Then there exists a real
non-singular linear transformation X = PY which transforms X ′ AX to Y′ BY i. e.,
B = P′ AP.

Now suppose the real non-singular linear transformation Y = QZ transforms Y ′ BY


to normal form Z′ CZ. Then C = Q ′ BQ. Since P and Q are real non-singular
matrices, therefore PQ is also a real non-singular matrix. The linear transformation
X = (PQ) Z will transform X ′ AX to the form
(PQZ) ′ A (PQZ) = Z′ Q′ P′ APQZ = Z′ Q′ BQZ .
Thus the two given quadratic forms have a common normal form. Hence they have
the same rank and same index (or signature).
Conversely, suppose that the two forms have the same rank r and the same
signature s. Then they have the same index p where 2p − r = s. So they can be
reduced to the same normal form
Z′ CZ = z12 + … + z p 2 − z p + 12 − … − z r 2
by real non-singular linear transformations, say, X = P Z and Y = QZ respectively.
Then P′ AP = C and Q′ BQ = C.
Therefore Q′ BQ = P′ AP. This gives
B = (Q′ ) −1 P′ APQ −1 = (Q −1 ) ′ P′ APQ −1 = (PQ −1 ) ′ A (PQ −1 ).
Therefore the real non-singular linear transformation X = (PQ -1 ) Y transforms
X ′AX to Y ′ BY. Hence the two given quadratic forms are real equivalent.
Reduction of a real quadratic form in the complex field:
Theorem 5: If A be any n-rowed real symmetric matrix of rank r, there exists a
non-singular matrix P whose elements may be any complex numbers such that
P′AP = diag. [1, 1, … , 1, 0, … , 0] where 1 appears r times.
Proof: A is a real symmetric matrix of rank r. Therefore there exists a
non-singular real matrix Q such that Q′ AQ is a diagonal matrix D with precisely r
non-zero diagonal elements. Let
Q′ AQ = D = diag. [λ 1 , … , λ r , 0, …, 0].
The real numbers λ 1 , … , λ r may be positive or negative or both.
Let S be the n × n (complex) diagonal matrix with diagonal elements
1/√λ1 , … , 1/√λ r , 1, … , 1.
Then S = diag. [ 1/√λ1 , … , 1/√λ r , 1, … , 1 ] is a complex non-singular diagonal
matrix and S ′ = S.
If we take P = QS, then P is also a complex non-singular matrix and we have
P′AP = (QS) ′ A (QS) = S ′ Q′ AQS = S ′ DS = SDS = diag. [1, 1, … , 1, 0, … , 0] so that 1
appears r times. Hence the result.

Corollary 1:Every real quadratic form X ′AX is complex-equivalent to the form


z12 + … + z r 2 where r is the rank of A.

Corollary 2:Two real quadratic forms in n variables are complex equivalent if and only if
they have the same rank.
Orthogonal reduction of a real quadratic form:
Theorem 6: If φ = X ′AX be a real quadratic form of rank r in n variables, then there exists
a real orthogonal transformation X = PY which transforms φ to the diagonal form
λ 1 y12 + … + λ r y r 2 ,
where λ 1 , … , λ r are the, r, non-zero eigenvalues of A, n − r eigenvalues of A being equal to
zero.
Proof: Since A is a real symmetric matrix, therefore there exists a real orthogonal
matrix P such that P −1 AP = D, where D is a diagonal matrix whose diagonal
elements are the eigenvalues of A.
Since A is of rank r, therefore P −1 AP = D is also of rank r. So D has precisely r
non-zero diagonal elements. Consequently A has exactly r non-zero eigenvalues,
the remaining n − r eigenvalues of A being zero. Let D = diag. [λ 1 , …, λ r , … 0, … 0].
Since P −1 = P ′, therefore P −1 AP = D
⇒ P ′ AP = D ⇒ A is congruent to D.
Now consider the real orthogonal transformation X = PY. We have
X ′ AX = (PY) ′ A (PY) = Y ′ P ′ AP Y = Y ′ DY
= λ 1 y12 + … + λ r y r 2 .
Hence the result.
Theorem 7: Every real quadratic form X ′ AX in n variables is real equivalent to the form
y12 + … + y p 2 − y p + 12 − … − y r 2 ,
where r is the rank of A and p is the number of positive eigenvalues of A.
Proof: A is a real symmetric matrix. Therefore there exists a real orthogonal
matrix Q such that Q −1 AQ = Q ′ AQ = D,
where D is a diagonal matrix whose diagonal elements are the eigenvalues of A.
Since A is of rank r, therefore D is also of rank r. So D has exactly r non-zero diagonal
elements. Consequently A has exactly r non-zero eigenvalues, the remaining n − r
eigenvalues of A being zero. Let D = diag. [λ 1 , λ 2 … , λ r , 0, … 0].
Let λ 1 , … , λ p be positive and λ p + 1 , … λ r be negative. Let S be the n × n real
diagonal matrix with diagonal elements
1/√λ1 , … , 1/√λ p , 1/√(− λ p + 1 ) , … , 1/√(− λ r ) , 1, … , 1.
Then S is non-singular and S ′ = S. If we take P = QS, then P is also a real
non-singular matrix and we have
P′ AP = (QS) ′ A (QS) = S ′ Q′ AQS = SDS
= diag. [1, … , 1, − 1, … , − 1, 0, … , 0]
so that 1 and − 1 appear p and r − p times respectively.

Now the real non-singular linear transformation X = PY reduces X ′ AX to the form


Y ′ P ′APY i. e.,
y12 + … + y p 2 − y p + 12 − … − y r 2 .
Hence the result.
Corollary: Two real quadratic forms X ′AX and Y ′ BY in the same number of variables
are real equivalent if and only if A and B have the same number of positive and negative
eigenvalues.
Important Note: If X ′AX is a real quadratic form, then the number of non-zero
eigenvalues of A is equal to the rank of X ′ AX and the number of positive
eigenvalues of A is equal to the index of X ′AX.
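This note gives a convenient computational route to rank, index and signature. The sketch below (not from the text, assuming NumPy) applies it to the matrix of Example 10 (i) below; the tolerance used to decide positivity is an illustrative choice.

import numpy as np

A = np.array([[2., 6., -2.],
              [6., 1., -4.],
              [-2., -4., -3.]])          # the matrix of Example 10 (i) below
eig = np.linalg.eigvalsh(A)

rank = int(np.sum(~np.isclose(eig, 0)))  # number of non-zero eigenvalues
index = int(np.sum(eig > 1e-10))         # number of positive eigenvalues
signature = 2 * index - rank             # s = 2p - r
print(rank, index, signature)            # 3 1 -1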
Theorem 8 :Two real quadratic forms X ′ AX and Y ′ BY are orthogonally equivalent if and
only if A and B have the same eigenvalues and these occur with the same multiplicities.
Proof: If A and B have eigenvalues λ 1 , λ 2 , … , λ n and D is a diagonal matrix with
λ 1 , λ 2 , … , λ n as diagonal elements, then there exist orthogonal matrices P and Q
such that P′AP = D = Q′ BQ.
Now Q′ BQ = P′AP
⇒ B = (Q′ ) −1 P′APQ −1
= (Q −1 ) ′ P′APQ −1
= (PQ −1 ) ′ A (PQ −1 ).
Since PQ −1 is an orthogonal matrix, therefore Y ′ BY is orthogonally equivalent to
X ′ AX.
Conversely, if the two forms are orthogonally equivalent, then there exists an
orthogonal matrix P such that B = P ′AP = P −1 AP. Therefore A and B are similar
matrices and so have the same eigenvalues with the same multiplicities.

Example 10: Reduce each of the following quadratic forms in three variables to real
canonical form and find its rank and signature. Also write in each case the linear
transformation which brings about the normal reduction.
(i) 2 x12 + x2 2 − 3 x3 2 − 8 x2 x3 − 4 x3 x1 + 12 x1 x2 .
(ii) 6 x12 + 3 x2 2 + 14 x3 2 + 4 x2 x3 + 18 x3 x1 + 4 x1 x2 .
Solution: (i) The matrix A of the given quadratic form is
 2 6 −2 
A = 6 1 −4  ⋅
 
 −2 −4 −3 

We write A = IAI i. e.,


 2 6 −2   1 0 0   1 0 0 
 6 1 −4  =  0 1 0  A  0 1 0  ⋅
     
 −2 −4 −3   0 0 1  0 0 1
Now we shall reduce A to diagonal form by applying congruence operations on it.
Performing R2 → R2 − 3 R1 , C2 → C2 − 3C1 and R3 → R3 + R1 , C3 → C3 + C1 ,
we get
2 0 0  1 0 0  1 − 3 1
 0 −17 2  =  −3 1 0  A  0 1 0⋅
     
 0 2 −5   1 0 1  0 0 1
[Note that we apply the row and column operations on A in two separate steps. But
in order to save labour we should apply them in one step. For this we should not
first write the first row. After changing R2 and R3 with the corresponding row
operations we should simply write 0 in the second and third places of the first row
and the first element of the first row should be kept unchanged].
Now performing R3 → R3 + (2/17) R2 , C3 → C3 + (2/17) C2 , we get
[ 2     0       0    ]   [    1       0     0 ]     [ 1  − 3  11/17 ]
[ 0  − 17       0    ] = [  − 3       1     0 ]  A  [ 0    1   2/17 ] ⋅
[ 0     0   − 81/17  ]   [ 11/17    2/17    1 ]     [ 0    0     1  ]
Performing R1 → (1/√2) R1 , C1 → (1/√2) C1 ; R2 → (1/√17) R2 , C2 → (1/√17) C2 ;
and R3 → √(17/81) R3 , C3 → √(17/81) C3 , we get
[ 1    0    0 ]   [     a         0      0 ]     [ a  − 3b  (11/17) c ]
[ 0  − 1    0 ] = [  − 3b         b      0 ]  A  [ 0    b    (2/17) c ] ,
[ 0    0  − 1 ]   [ (11/17) c  (2/17) c  c ]     [ 0    0        c    ]

where a = 1 / √ 2 , b = 1 / √ 17, c = √ (17 / 81).


Thus the linear transformation X = PY
where
P = [ a  − 3b  (11/17) c ]
    [ 0    b    (2/17) c ] ,   X = [ x1  x2  x3 ]′ ,   Y = [ y1  y2  y3 ]′ ,
    [ 0    0        c    ]

transforms the given quadratic form to the normal form
y12 − y2 2 − y3 2 . …(1)
The rank r of the given quadratic form = the number of non-zero terms in its normal
form (1) = 3.

The signature of the given quadratic form = the excess of the number of positive
terms over the number of negative terms in its normal form = 1 − 2 = − 1.
The index of the given quadratic form = the number of positive terms in its normal
form = 1.
The linear transformation X = PY which brings about this normal reduction is
given by
x1 = ay1 − 3by2 + (11/17) cy3 ,   x2 = by2 + (2/17) cy3 ,   x3 = cy3 .
(ii) The matrix of the given quadratic form is
6 2 9 
A = 2 3 2  ⋅
 
 9 2 14 

6 2 9   1 0 0   1 0 0 
We write  2 3 2  =  0 1 0  A  0 1 0  ⋅
     
 9 2 14   0 0 1   0 0 1 
Performing congruence operations
R2 → R2 − (1/3) R1 , C2 → C2 − (1/3) C1 ;
and R3 → R3 − (3/2) R1 , C3 → C3 − (3/2) C1 ,
we get
[ 6    0     0  ]   [    1     0  0 ]     [ 1  − 1/3  − 3/2 ]
[ 0   7/3  − 1  ] = [ − 1/3    1  0 ]  A  [ 0    1       0  ] ⋅
[ 0  − 1   1/2  ]   [ − 3/2    0  1 ]     [ 0    0       1  ]
Performing R3 → R3 + (3/7) R2 , C3 → C3 + (3/7) C2 , we get
[ 6   0     0   ]   [     1       0    0 ]     [ 1  − 1/3  − 23/14 ]
[ 0  7/3    0   ] = [  − 1/3      1    0 ]  A  [ 0    1      3/7   ] ⋅
[ 0   0   1/14  ]   [ − 23/14    3/7   1 ]     [ 0    0       1    ]
Performing R1 → (1/√6) R1 , C1 → (1/√6) C1 ; R2 → √(3/7) R2 , C2 → √(3/7) C2 ;
R3 → √14 R3 , C3 → √14 C3 , we get
[ 1  0  0 ]   [      a         0      0 ]     [ a  − (1/3) b  − (23/14) c ]
[ 0  1  0 ] = [  − (1/3) b     b      0 ]  A  [ 0      b        (3/7) c   ] ,
[ 0  0  1 ]   [ − (23/14) c  (3/7) c  c ]     [ 0      0           c      ]
where a = 1/√6 , b = √(3/7) , c = √14 ⋅
Thus the linear transformation X = PY where
P = [ a  − (1/3) b  − (23/14) c ]
    [ 0      b        (3/7) c   ]
    [ 0      0           c      ]

transforms the given quadratic form to the normal form
y12 + y22 + y32 .
The rank of the given quadratic form is 3 and its signature is
3 − 0 = 3.
Example 11: Find an orthogonal matrix P that will diagonalize the real symmetric matrix
0 1 1

A = 1 0 − 1 ⋅
 
 1 −1 0 

Interpret the result in terms of quadratic forms.


Solution: The characteristic equation of the given matrix is
| A − λI | = 0
i. e.,   | − λ    1      1  |
         |   1   − λ   − 1  | = 0   i. e.,   (λ − 1)² (λ + 2) = 0.
         |   1   − 1   − λ  |

∴ the eigenvalues of A are 1, 1, − 2.


Corresponding to the eigenvalue 1 we can find two mutually orthogonal
eigenvectors of A by solving
 −1 1 1  x1   0
(A − I) X =  1 −1 −1  x2  =  0
    
 1 −1 −1  x3   0

or − x1 + x2 + x3 = 0.
Two orthogonal solutions are
X1 = [1, 0, 1]′ and X 2 = [1, 2, − 1]′.
An eigenvector corresponding to the eigenvalue −2, is found by solving
2 x1 + x2 + x3 = 0, x1 + 2 x2 − x3 = 0
to be X 3 = [−1, 1, 1]′ .The required matrix P is therefore a matrix whose columns are
unit vectors which are scalar multiples of X1 , X 2 and X 3 .
 1/ √ 2 1/ √ 6 −1/ √ 3 
 
∴ P= 0 2 / √ 6 1/ √ 3 
 1/ √ 2 −1/ √ 6 1/ √ 3 
 
We have P ′ AP = diag. [1, 1, − 2]
The quadratic form corresponding to the symmetric matrix A is


φ = 2 x1 x2 + 2 x1 x3 − 2 x2 x3 .
The orthogonal linear transformation X = PY will transform it to the diagonal
form y12 + y22 − 2 y32 .
The rank of the quadratic form φ = the number of non-zero eigenvalues of its
matrix A = 3.
The signature of the quadratic form φ = the number of positive eigenvalues of A −
the number of negative eigenvalues of A = 2 − 1 = 1. The normal form is
z1² + z2² − z3² .
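Note (computational aside): such an orthogonal diagonalization is easy to check by machine. The following is a small illustrative sketch in Python using the numpy library (the library and the variable names are our own choices and are not part of the hand computation above); it verifies that P′AP = diag [1, 1, −2] for the matrix A of Example 11.

    import numpy as np

    A = np.array([[0.0, 1.0, 1.0],
                  [1.0, 0.0, -1.0],
                  [1.0, -1.0, 0.0]])

    # Columns of P are unit vectors along X1 = [1,0,1]', X2 = [1,2,-1]', X3 = [-1,1,1]'.
    P = np.column_stack([
        np.array([1.0, 0.0, 1.0]) / np.sqrt(2),
        np.array([1.0, 2.0, -1.0]) / np.sqrt(6),
        np.array([-1.0, 1.0, 1.0]) / np.sqrt(3),
    ])

    print(np.round(P.T @ A @ P, 10))   # prints diag(1, 1, -2)
    print(np.round(P.T @ P, 10))       # prints the identity matrix, since P is orthogonal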

Example 12: Reduce the quadratic form


6 x12 + 3 x2 2 + 3 x3 2 − 4 x1 x2 + 4 x1 x3 − 2 x2 x3
to the canonical form by an orthogonal transformation and hence find the rank and signature
of the given quadratic form.
Solution: The matrix A of the given quadratic form is
 6 −2 2 
A =  − 2 3 − 1 ⋅
 
 2 −1 3 
The characteristic equation of A is | A − λI | = 0
6−λ −2 2
or −2 3 − λ −1 = 0
2 −1 3 − λ
6−λ −2 0
or −2 3 − λ 2 − λ = 0, C3 → C3 + C2
2 −1 2−λ
6−λ −2 0
or (2 − λ ) −2 3 − λ 1 =0
2 −1 1
6−λ −2 0
or (2 − λ ) −4 4 − λ 0 = 0, R2 → R2 − R3
2 −1 1

or (2 − λ ) [(6 − λ ) (4 − λ ) − 8] = 0
or (2 − λ ) (λ2 − 10 λ + 16) = 0
or (2 − λ ) (λ − 2) (λ − 8) = 0.
∴ the eigenvalues of A are 2 , 2 , 8.
The eigenvalue 8 is of algebraic multiplicity 1. So there will be only one linearly
independent eigenvector corresponding to this value.
The eigenvectors corresponding to the eigenvalue 8 are given by the equation


(A − 8I ) X = O
 −2 −2 2   x1  0
or  −2 −5 −1  x  = 0 ⋅
   2  
 2 −1 −6   x3  0
Since these equations have only one linearly independent solution, therefore the
coefficient matrix of these equations is of rank 2 and its third row can be made zero
 x1 
by elementary row operations. So in order to find an eigenvector X =  x2 
 
 x3 
corresponding to the eigenvalue 8, it is sufficient to find x1 , x2 , x3 satisfying the
equations
− 2 x1 − 2 x2 + 2 x3 = 0 …(1)
and − 2 x1 − 5 x2 − x3 = 0 …(2)
Subtracting (2) from (1), we get
3 x2 + 3 x3 = 0 .
∴ x2 = 1, x3 = − 1, x1 = − 2 is a solution
−2
Thus X1 =  1 is an eigenvector of A corresponding to the eigenvalue 8.
 
 −1
The eigenvalue 2 is of algebraic multiplicity 2. So we are to find two mutually
orthogonal eigenvectors corresponding to it.
The eigenvectors X corresponding to the eigenvalue 2 are given by the equation
(A − 2 I ) X = O
 4 −2 2   x1  0
or  −2 1 −1  x2  = 0 ⋅
    
 2 −1 1  x3  0
Since these equations have two linearly independent solutions, therefore their
coefficient matrix is of rank 1 and its second and third rows can be made zero by
elementary row operations. So we should find two orthogonal solutions of the
equation
4 x1 − 2 x2 + 2 x3 = 0
i. e., 2 x1 − x2 + x3 = 0. …(3)
Obviously x1 = 0, x2 = 1, x3 = 1 is a solution of (3).
0
∴ X2 = 1 is an eigenvector of A corresponding to the eigenvalue 2.
 
1
 x
Let X3 =  y be another eigenvector of A corresponding to the eigenvalue 2
 
 z 
and let X 3 be orthogonal to X 2 .
Then 2x − y + z = 0 [ ∵ X 3 is a solution of (3)]
and 0 + y+ z =0 [ ∵ X 2 and X 3 are orthogonal]
Obviously y = 1, z = − 1, x = 1 is a solution.
 1
∴ X3 =  1 ⋅
 
−1

Lengths of the vectors X1 , X2 , X3 are √6, √2, √3 respectively.

∴  P = [ (1/√6) X1   (1/√2) X2   (1/√3) X3 ]

       [ −2/√6     0      1/√3 ]
     = [  1/√6   1/√2     1/√3 ]
       [ −1/√6   1/√2    −1/√3 ]

is the required orthogonal matrix that will diagonalize A. We have P⁻¹ = Pᵀ and

8 0 0
P −1
AP = P AP =  0 2 0  ⋅
T
 
 0 0 2 

The orthogonal linear transformation X = PY will transform the given quadratic


form to the diagonal form
8 y12 + 2 y2 2 + 2 y3 2 .
The canonical form of the given quadratic form is
z12 + z 2 2 + z 3 2 .
The rank of the given quadratic form = the number of non-zero eigenvalues of its
matrix A = 3.
The signature of the given quadratic form = the number of positive eigenvalues of
A − the number of negative eigenvalues of A
= 3 − 0 = 3.
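Note (computational aside): the result of Example 12 can be cross-checked numerically. The sketch below (Python/numpy; an illustration only, not the method of the text) confirms that the eigenvalues of A are 8, 2, 2 and that the orthogonal matrix P built from X1, X2, X3 diagonalizes A.

    import numpy as np

    A = np.array([[6.0, -2.0, 2.0],
                  [-2.0, 3.0, -1.0],
                  [2.0, -1.0, 3.0]])

    X1 = np.array([-2.0, 1.0, -1.0])   # eigenvector for the eigenvalue 8
    X2 = np.array([0.0, 1.0, 1.0])     # eigenvector for the eigenvalue 2
    X3 = np.array([1.0, 1.0, -1.0])    # eigenvector for 2, orthogonal to X2

    P = np.column_stack([X1 / np.linalg.norm(X1),
                         X2 / np.linalg.norm(X2),
                         X3 / np.linalg.norm(X3)])

    print(np.round(P.T @ A @ P, 10))            # prints diag(8, 2, 2)
    print(np.round(np.linalg.eigvalsh(A), 10))  # prints [2, 2, 8]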

8.15 Lagrange’s Reduction of a Real Quadratic Form


This method proceeds by forming squares successively by taking lesser and lesser
number of variables among x1 , x2 , … , x n . We shall illustrate this method by the
following examples.
Example 13: Reduce the quadratic form


q ( x1 , x2 , x3 ) = 4 x12 + 10 x2 2 + 11x3 2 − 4 x1 x2 + 12 x1 x3 − 12 x2 x3
to diagonal form by using Lagrange’s reduction.
Solution: We have
q ( x1 , x2 , x3 ) = 4 x12 + 10 x2 2 + 11x3 2 − 4 x1 x2 + 12 x1 x3 − 12 x2 x3 .
Collecting the terms containing x1 and factoring out x1 from the terms consisting of
x1 x2 and x1 x3 , we have
q = 4 { x12 − x1 ( x2 − 3 x3 ) } + 10 x2 2 + 11x3 2 − 12 x2 x3 .
Completing a square on the term x1 by adding and subtracting suitable terms as
needed, we get
q = 4 [ x1 − (1/2)(x2 − 3x3) ]² − (x2 − 3x3)² + 10 x2² + 11 x3² − 12 x2 x3
  = 4 [ x1 − (1/2)x2 + (3/2)x3 ]² + 9 x2² − 6 x2 x3 + 2 x3²
  = 4 [ x1 − (1/2)x2 + (3/2)x3 ]² + 9 [ x2² − (2/3) x2 x3 ] + 2 x3² .
Now completing the square on x2 , we have
q = 4 [ x1 − (1/2)x2 + (3/2)x3 ]² + 9 [ x2 − (1/3)x3 ]² − x3² + 2 x3²
  = 4 [ x1 − (1/2)x2 + (3/2)x3 ]² + 9 [ x2 − (1/3)x3 ]² + x3² .
Now we put
y1 = x1 − (1/2)x2 + (3/2)x3 ,   y2 = x2 − (1/3)x3 ,   y3 = x3 ,
which is of the form Y = PX with the transformation matrix

    [ 1   −1/2    3/2 ]
P = [ 0     1    −1/3 ]
    [ 0     0      1  ]
which is non-singular with determinant equal to 1.
With this transformation, the given quadratic form reduces to the diagonal form
q = 4 y12 + 9 y2 2 + y3 2 .
If we choose z1 = 2 y1 , z 2 = 3 y2 , z 3 = y3 , then we have q = z12 + z 2 2 + z 3 2 , so
that the given quadratic form is reduced to a unit quadratic form.
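Note (computational aside): a Lagrange reduction can be tested by evaluating both sides at a few points. The sketch below (Python/numpy; illustrative only) checks that q(x1, x2, x3) = 4y1² + 9y2² + y3² for the substitution obtained in Example 13.

    import numpy as np

    def q(x1, x2, x3):
        return (4*x1**2 + 10*x2**2 + 11*x3**2
                - 4*x1*x2 + 12*x1*x3 - 12*x2*x3)

    rng = np.random.default_rng(0)
    for _ in range(5):
        x1, x2, x3 = rng.standard_normal(3)
        y1 = x1 - 0.5*x2 + 1.5*x3   # y1 = x1 - (1/2)x2 + (3/2)x3
        y2 = x2 - x3/3              # y2 = x2 - (1/3)x3
        y3 = x3
        assert np.isclose(q(x1, x2, x3), 4*y1**2 + 9*y2**2 + y3**2)
    print("reduction of Example 13 verified at 5 random points")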
Example 14: Using Lagrange’s reduction reduce the quadratic form


q ( x1 , x2 , x3 ) = x1 x2 + 2 x1 x3 + 3 x2 x3
to a diagonal form.
Solution: Set
x1 = y1 , x2 = y1 + y2 , x3 = y3 .
1 0 0
The transformation X = PY = 1 1 0 Y is non-singular because
 
0 0 1

det P = 1 ≠ 0.
The quadratic form q now becomes
q = y1 (y1 + y2) + 2 y1 y3 + 3 (y1 + y2) y3
  = y1² + y1 y2 + 5 y1 y3 + 3 y2 y3
  = { y1² + y1 (y2 + 5y3) } + 3 y2 y3
  = { y1 + (1/2)(y2 + 5y3) }² − (1/4)(y2² + 10 y2 y3 + 25 y3²) + 3 y2 y3
  = [ y1 + (1/2)y2 + (5/2)y3 ]² − (1/4)(y2² + 10 y2 y3 − 12 y2 y3) − (25/4) y3²
  = [ y1 + (1/2)y2 + (5/2)y3 ]² − (1/4)(y2² − 2 y2 y3) − (25/4) y3²
  = [ y1 + (1/2)y2 + (5/2)y3 ]² − (1/4)(y2 − y3)² + (1/4) y3² − (25/4) y3²
  = [ y1 + (1/2)y2 + (5/2)y3 ]² − (1/4)(y2 − y3)² − 6 y3² .
Now put
z1 = y1 + (1/2)y2 + (5/2)y3 = (1/2)x1 + (1/2)x2 + (5/2)x3 ,
z2 = y2 − y3 = x2 − x1 − x3 ,
z3 = y3 = x3 ,
so that the non-singular transformation Z = QX , i.e.,

[ z1 ]   [ 1/2   1/2   5/2 ] [ x1 ]
[ z2 ] = [ −1     1    −1  ] [ x2 ]
[ z3 ]   [  0     0     1  ] [ x3 ]

with det Q = 1 will reduce the given quadratic form to the diagonal form
q = z1² − (1/4) z2² − 6 z3² .

Comprehensive Exercise 2

1. Write the matrix and find the rank of each of the following quadratic forms :
(i) x 12 − 2 x1 x2 + 2 x 22
(ii) 4 x 12 + x 22 − 8 x 32 + 4 x1 x2 − 4 x1 x3 + 8 x2 x3 .
2. Determine a non-singular matrix P such that P ′ AP is a diagonal matrix, where
0 1 2
A =  1 0 3 ⋅
 
 2 3 0 
3. Reduce each of the following quadratic forms in three variables to real
canonical form and find its rank and signature. Also write in each case the
linear transformation which brings about the normal reduction.
(i) x 2 − 2 y 2 + 3z 2 − 4 yz + 6zx.
(ii) x 2 + 2 y 2 + 2z 2 − 2 xy − 2 yz + zx.
4. Reduce the following quadratic form to canonical form and find its rank and
signature :
x 2 + 4 y 2 + 9z 2 + t 2 − 12 yz + 6zx − 4 xy − 2 xt − 6zt.
5. Using Lagrange’s reduction reduce the quadratic form
 5 −2 0   x1 
[ x1 x2 x3 ]  − 2 6 2   x2 
  
 0 2 7   x3 
to a diagonal form.
6. Using Lagrange’s reduction reduce the quadratic form
 3 1 0 0   x1 
 1 3 0 0  x 
[ x1 x2 x3 x4 ]    2
 0 0 3 − 1   x3 
 0 0 −1 3   x 
   4
to a diagonal form.
7. Using Lagrange’s reduction reduce the quadratic form
q ( x1 , x2 , x3 ) = ( x1 + x2 + x3 ) x2 to a diagonal form.
8. Using Lagrange’s reduction reduce the quadratic form
q ( x1 , x2 , x3 ) = x1 x2 + x2 x3 + x3 x1 to a diagonal form.
9. Using Lagrange’s reduction reduce the quadratic form
2 2 2
q ( x1 , x2 , x3 ) = x1 + 4 x2 + 16 x3 + 4 x1 x2 + 8 x1 x3 + 17 x2 x3
to a diagonal form.
Answers 2

1. (i)  [  1   −1 ]
        [ −1    2 ] ,  rank 2            (ii)  [  4    2   −2 ]
                                               [  2    1    4 ] ,  rank 3
                                               [ −2    4   −8 ]

2.  P = [ 1   −1/2   −3 ]
        [ 1    1/2   −2 ]
        [ 0     0     1 ]

3. (i) The linear transformation X = PY where

    [ 1     0     −3/2 ]
P = [ 0   1/√2   −1/2  ] ,   X = [ x  y  z ]′ ,   Y = [ y1  y2  y3 ]′
    [ 0     0     1/2  ]

transforms the given quadratic form to the normal form y1² − y2² − y3² .
The rank of the given quadratic form is 3 and its signature is −1.
(ii) The linear transformation X = PY where

    [ 1   1      0    ]
P = [ 0   1    1/√6   ]
    [ 0   0   √(2/3)  ]

transforms the given quadratic form to the normal form y1² + y2² + y3² .
The rank of the given quadratic form is 3 and its signature is 3.
4. y1² − y2² + y4² .
   The rank of the given quadratic form is 3 and its signature is 1.
5. (1/5) y1² + (10/13) y2² + (81/13) y3²        6. (1/3) y1² + (8/3) y2² + (1/3) y3² + (8/3) y4²
7. (1/4)(y1² − y2²)                             8. (1/4)(z1² − z2²) − z3²
9. z1² + (1/4)(z2² − z3²)

8.16 Value Class of a Real Quadratic Form. Definite,


Semi-definite and Indefinite Real Quadratic Forms
Definitions: Let φ = X ′ AX be a real quadratic form in n variables x1 , … , x n . The form
φ is said to be
(i) Positive Definite (PD) if φ ≥ 0 for all real values of the variables x1 , … , x n and
φ = 0 only if X = O i. e., φ = 0 ⇒ x1 = x2 = … = x n = 0.
For example the quadratic form x12 − 4 x1 x2 + 5 x2 2 in two variables is positive
definite because it can be written as
( x1 − 2 x2 )2 + x2 2 ,
which is ≥ 0 for all real values of x1 , x2 and
( x1 − 2 x2 )2 + x2 2 = 0 ⇒ x1 − 2 x2 = 0, x2 = 0
⇒ x1 = 0, x2 = 0.
Similarly the quadratic form x12 + x2 2 + x3 2 in three variables is a positive
definite form.
(ii) Negative definite (ND) if φ ≤ 0 for all real values of the variables x1 , … , x n and
φ = 0 only if x1 = x2 = … = x n = 0.
For example − x12 − x2 2 − x3 2 is a negative definite form in three variables.
(iii) Positive semi-definite (PSD) if φ ≥ 0 for all real values of the variables x1 , … , x n
and φ = 0 for some non-zero real vector X i. e., φ = 0 for some real values of the variables
x1 , x2 , … , x n not all zero.
For example the quadratic form
x12 + x2 2 + 2 x3 2 − 2 x1 x3 − 2 x2 x3
is positive semi-definite because it can be written in the form
( x1 − x3 )2 + ( x2 − x3 )2 ,
which is ≥ 0 for all real values of x1 , x2 , x3 but is zero for non-zero values also, for
example, x1 = x2 = x3 = 1.
Similarly the quadratic form x12 + x2 2 + 0 x3 2 in three variables x1 , x2 , x3 is
positive semi-definite. It is non-negative for all real values of x1 , x2 , x3 and it is zero
for values x1 = 0, x2 = 0, x3 = 2 which are not all zero.
(iv) Negative semi-definite (NSD) if φ ≤ 0 for all real values of the variables
x1 , … , x n and φ = 0 for some values of the variables x1 , … , x n not all zero.
For example the quadratic form − x12 − x2 2 − 0 x3 2 in three variables x1 , x2 , x3 is
negative semi-definite.
(v) Indefinite (I) if φ takes positive as well as negative values for real values of the variables
x1 , … , x n .
For example the quadratic form x12 − x2 2 + x3 2 in three variables is indefinite. It
takes positive value 1 when x1 = 1, x2 = 1, x3 = 1 and it takes negative value − 1
when x1 = 0, x2 = 1, x3 = 0.
Note 1: The above five classes of real quadratic forms are mutually exclusive and
are called value classes of real quadratic forms. Every real quadratic form must
belong to one and only one value class.
Note 2:A form which is positive definite or negative definite is called definite and a
form which is positive semi-definite or negative semi-definite is called semi-definite.
Non-negative definite quadratic form:
Definition: A real quadratic form φ = X ′AX in n variables x1 , … , x n , is said to be
non-negative definite if it takes only non-negative values for all real values of x1 , … , x n .
Thus φ is non-negative definite if φ ≥ 0 for all real values of x1 , … , x n . A
non-negative definite quadratic form may be positive definite or positive
semi-definite. It is positive definite if it takes the value 0 only when
x1 = x2 = … = x n = 0.
Classification of real-symmetric matrices:
Definite, semi-definite and indefinite real symmetric matrices:
Definition: A real symmetric matrix A is said to be definite, semi-definite or indefinite if
the corresponding quadratic form X ′AX is definite, semi-definite or indefinite respectively.
Positive definite real symmetric matrix:
Definition: A real symmetric matrix A is said to be positive definite if the corresponding
form X ′ AX is positive definite.
Non-negative definite real symmetric matrix:
Definition:A real symmetric matrix A is said to be non-negative definite if the associated
quadratic form X ′AX is non-negative definite.
Theorem 1: All real equivalent real quadratic forms have the same value class.
Proof: Let φ = X ′ AX and ψ = Y′ BY be two real equivalent real quadratic forms.
Then there exists a real non-singular matrix P such that P′AP = B and
(P −1 )′ BP −1 = A. The real non-singular linear transformation X = PY transforms
the quadratic form φ into the quadratic form ψ and the inverse transformation
Y = P −1 X transforms the quadratic form ψ into the quadratic form φ. The two
quadratic forms have the same ranges of values. The vectors X and Y for which φ and
ψ have the same value are connected by the relations X = PY and Y = P −1 X. Thus
the vector Y for which ψ has the same value as φ has for the vector X is given by
Y = P −1 X. Similarly the vector X for which φ has the same value as ψ has for the
vector Y is given by X = PY.
Now we shall discuss the five cases separately.
Case I: φ is positive definite if and only if ψ is positive definite.
Suppose φ is positive definite.
Then φ ≥ 0 and φ = 0 ⇒ X = O.
Since φ and ψ have same ranges of values, therefore
φ ≥ 0 ⇒ ψ ≥ 0.
Also ψ = 0 ⇒ Y′ BY = 0
⇒ (PY) ′ A(PY) = 0 [ ∵ φ has the same value for the vector PY as ψ
has for the vector Y]
⇒ PY = O [∵ φ is positive definite means X ′AX = 0 ⇒ X = O]
⇒ P −1 (PY) = P − 1 O
⇒ Y = O.
Thus ψ is also positive definite.
Conversely suppose that ψ is positive definite.
Then ψ ≥ 0 and ψ = 0 ⇒ Y = O.
Since φ and ψ have the same ranges of values, therefore
ψ ≥ 0 ⇒ φ ≥ 0.
Also φ = 0 ⇒ X ′ AX = 0
⇒ (P −1 X) ′ B (P −1 X) = 0 [ ∵ ψ has the same value for the vector
P −1 X as φ has for the vector X]
⇒ P −1 X = O [ ∵ ψ is positive definite]
⇒ P (P −1 X) = PO
⇒ X = O.
Thus φ is also positive definite.
Case II: φ is negative definite if and only if ψ is negative definite.
The proof is the same as in case I.
The only difference is that we are to replace the expressions φ ≥ 0, ψ ≥ 0 by the
expressions φ ≤ 0, ψ ≤ 0.
Case III: φ is positive semi-definite if and only if ψ is positive semi-definite.
Since φ and ψ have the same ranges of values, therefore φ ≥ 0 if and only if ψ ≥ 0.
Further since P is non-singular, therefore
X ≠ O ⇒ Y = P −1 X ≠ O
and Y ≠ O ⇒ X = PY ≠ O.
Also the vectors X and Y for which φ and ψ have the same values are connected by
the relations X = PY and Y = P −1 X. Therefore φ = 0 for some non-zero vector X if
and only if ψ = 0 for some non-zero vector Y.
Hence φ is positive semi-definite if and only if ψ is positive semi-definite.
Case IV: φ is negative semi-definite if and only if ψ is negative semi-definite.
For proof replace the expressions φ ≥ 0, ψ ≥ 0 in case III by the expressions
φ ≤ 0, ψ ≤ 0.
Case V: φ is indefinite if and only if ψ is indefinite. Since φ and ψ have the same ranges
of values, therefore the result follows immediately.
Thus the proof of the theorem is complete.
Criterion for the value of a real quadratic form in terms of its rank and
signature:
Theorem 2: Suppose r is the rank and s is the signature of a real quadratic form
φ = X ′ AX in n variables. Then φ is (i) positive definite if and only if s = r = n, (ii) negative
definite if and only if − s = r = n, (iii) positive semi-definite if and only if s = r < n, (iv)
negative semi-definite if and only if − s = r < n ; and (v) indefinite if and only if | s | ≠ r.
Proof: Let ψ = y1² + … + yp² − y_(p+1)² − … − yr²                    …(1)
be the real canonical form of the real quadratic form φ of rank r and signature s.
Then s = 2 p − r. Since φ and ψ are real equivalent real quadratic forms, therefore
they have the same value class.
(i) Suppose s = r = n. Then p = n and the real canonical form of φ becomes
y12 + … + y n2 . But this is a positive definite quadratic form. So φ is also
positive definite.
Conversely suppose that φ is positive definite. Then ψ is also a positive
definite form in n variables. So we must have ψ = y12 + … + y n2 .
Hence r = n, p = n, 2p − r = s = n.
(ii) Suppose − s = r = n. Then s = 2 p − r gives p = 0. The real canonical form of φ
becomes − y12 − … − y n2 which is negative definite and so φ is also
negative definite.
Conversely if φ is negative definite, then ψ is also negative definite and
so we must have ψ = − y12 − … − y n2 .
Hence r = n, p = 0, 2 p − r = s = − n i. e., − s = n.
(iii) Suppose s = r < n. Then s = 2 p − r gives p = r and the real canonical form of φ
becomes y12 + … + y r 2 where r < n. But this is a positive semi-definite form
in n variables. So φ is also positive semi-definite.
Conversely if φ is positive semi-definite, then ψ is also a positive
semi-definite form in n variables. So we must have ψ = y12 + … + y r 2
where r < n.
Therefore p = r < n and s = 2 p − r = r. Thus s = r < n.
(iv) Suppose − s = r < n. Then s = 2 p − r gives p = 0 and the real canonical form
of φ becomes − y12 − … − y r 2 where r < n. This is a negative semi-definite
form in n variables. So φ is also negative semi-definite.
Conversely if φ is negative semi-definite, then ψ is also a negative
semi-definite form in n variables. So we must have
ψ = − y12 − … − y r 2 , where r < n.
Therefore p = 0 and s = 2 p − r = − r. Thus − s = r < n.
(v) Suppose | s | ≠ r. Then | 2p − r | ≠ r. Therefore p ≠ 0 and p ≠ r and so
0 < p < r. Then in this case the canonical form of φ has positive as well as
negative terms and so it is an indefinite form. Consequently φ is also
indefinite.
Conversely if φ is indefinite, then ψ is also indefinite. So there must be positive as


well as negative terms in ψ. Therefore | s | ≠ r.
Criterion for the value class of a real quadratic form in terms of the
eigenvalues of its matrix:
Suppose φ = X ′AX is a real quadratic form in n variables. Then A is a real symmetric
matrix of order n. Suppose r is the number of non-zero eigenvalues of A. Then
r = rank of the quadratic form φ. Further if s = the number of positive eigenvalues
of A − the number of negative eigenvalues of A, then s = signature of φ. Hence with
the help of theorem 2, we arrive at the following conclusion.
Theorem 3:A real quadratic form φ = X ′AX in n variables is
(i) positive definite if and only if all the eigenvalues of A are positive.
(ii) negative definite if and only if all the eigenvalues of A are negative.
(iii) positive semi-definite if and only if all the eigenvalues of A are ≥ 0 and at least one
eigenvalue is 0.
(iv) negative semi-definite if and only if all the eigenvalues of A are ≤ 0 and at least one
eigenvalue of A is 0.
(v) indefinite if and only if A has positive as well as negative eigenvalues.
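Note (computational aside): Theorem 3 translates directly into a small numerical test. The sketch below (Python/numpy; the function name and the tolerance are our own choices) classifies a real quadratic form by the signs of the eigenvalues of its matrix.

    import numpy as np

    def value_class(A, tol=1e-10):
        """Classify the real quadratic form X'AX by the signs of the eigenvalues of A."""
        lam = np.linalg.eigvalsh(np.asarray(A, dtype=float))
        pos, neg = np.sum(lam > tol), np.sum(lam < -tol)
        zero = len(lam) - pos - neg
        if pos and neg:
            return "indefinite"
        if neg == 0:
            return "positive definite" if zero == 0 else "positive semi-definite"
        return "negative definite" if zero == 0 else "negative semi-definite"

    # The matrix of Example 12 has eigenvalues 8, 2, 2, hence is positive definite.
    print(value_class([[6, -2, 2], [-2, 3, -1], [2, -1, 3]]))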
On account of its importance we shall give an independent proof of case (i).
Theorem 4: A real symmetric matrix is positive definite if and only if all its eigenvalues
are positive.
Proof:Let A be a real symmetric matrix of order n. Then there exists an orthogonal
matrix P such that
P −1 AP = P ′AP = D = diag [λ 1 , λ 2 , … , λ n ]
where λ 1 , λ 2 , … , λ n are the eigenvalues of A.
Let X ′ AX be the real quadratic form corresponding to the matrix A. Let us
transform this quadratic form by the real non-singular linear transformation
X = PY where Y = [ y1 , y2 , … , y n ]′ . Then
X ′AX = (PY) ′A (PY) = Y′ P′APY = Y′ DY.
Therefore X ′AX = λ 1 y12 + λ 2 y2 2 + … + λ n y n2 . …(1)
Now suppose that λ 1 , λ 2 , … , λ n are all positive. Then the right hand side of (1)
ensures that X ′AX ≥ 0 for all real vectors X. Also
X ′AX = 0
⇒ λ 1 y12 + … + λ n y n2 = 0
⇒ y1 = y2 = … = y n = 0 [ ∵ λ 1 , … , λ n are all positive]
⇒ Y =O
⇒ P −1 X = O [ ∵ X = PY ⇒ Y = P − 1 X]
⇒ P (P −1 X) = PO
⇒ X = O.
Thus if λ 1 , λ 2 , … , λ n are all positive, then X ′ AX is positive definite and so the


matrix A is positive definite.
Conversely suppose that A is a positive definite matrix. Then the quadratic form
X ′ AX is positive definite. So
X ′AX ≥ 0 for all real vectors X
⇒ λ 1 y12 + … + λ n y n2 ≥ 0 for all real vectors Y
⇒ λ 1 , … , λ n are all ≥ 0.
Also X ′AX = 0 only if X = O
⇒ λ 1 y12 + … + λ n y n2 = 0 only if PY = O
⇒ λ 1 y12 + … + λ n y n2 = 0 only if Y = O
[ ∵ P is non-singular means PY = O only if Y = O]
⇒ λ 1 , … , λ n are all not equal to zero.
Therefore if A is positive definite, then λ 1 , λ 2 , … , λ n are all > 0.
This completes the proof of the theorem.
Corollary 1: A positive definite real symmetric matrix is non-singular.
Proof: Suppose A is a positive definite real symmetric matrix. Then the
eigenvalues of A are all positive. Also there exists an orthogonal matrix P such that
P −1 AP = D,
where D is a diagonal matrix having the eigenvalues of A as its diagonal elements.
So all diagonal elements of D are positive and thus D is non-singular.
Now A = PDP −1 ⇒ A is non-singular.
Corollary 2: If the real quadratic form XT AX is positive definite, then there exists a
non-singular transformation X = PY such that XT AX = YT Y.
Theorem 5: A real matrix A is symmetric and positive definite if and only if PT AP is
symmetric and positive definite for any real and non-singular P.
Proof: The condition is necessary. Suppose that A is real, symmetric and
positive definite.
This implies that the real quadratic form XT AX is positive definite. Put X = PY,
where Y ≠ O if X ≠ O because P is non-singular. Hence from
XT AX = (PY) T A (PY) = YT (PT AP) Y
we conclude that the right hand side is positive for all non-zero values of Y.
So the quadratic form YT (PT AP) Y is positive definite and hence the matrix PT AP
is positive definite.
Also PT AP = PT AT P [ ∵ A is symmetric ⇒ AT = A ]
= (PT AP ) T .
∴ PT AP is symmetric.
Hence PT AP is symmetric and positive definite.
The condition is sufficient: Now suppose that PT AP is symmetric and positive


definite for any real and non-singular P. Putting P = I, we get
PT AP = IT AI = IAI = A.
Since PT AP is symmetric and positive definite, therefore A is symmetric and
positive definite.
Theorem 6:A real symmetric matrix A is positive definite if and only if there exists a
non-singular matrix Q such that
A = Q ′ Q.
Proof: Suppose A is positive definite. Then all the eigenvalues of A are positive
and we can find an orthogonal matrix P such that
P⁻¹AP = P′AP = D = diag. [λ1 , … , λn]

where each λ i > 0. Let D1 = diag. [√ λ 1 , … , √ λ n ]. Then D1 2 = D and D1 ′ = D1 . We


have
A = PDP −1 = PD12 P −1 = PD1 D1P ′
= (PD1 ) (PD1 ) ′ = Q ′ Q where Q = (PD1 ) ′ .
Clearly, Q is non-singular since P and D1 are non-singular.
Conversely suppose that A = Q′ Q where Q is non-singular. We have for all real
vectors X,
X ′ AX = X ′ Q′ QX = (QX) ′ (QX)
= Y′ Y, where Y = QX is a real n-vector
≥ 0.
Also X ′ AX = 0 ⇒ Y′ Y = 0
⇒ Y = O ⇒ QX = O [ ∵ Y = QX ]
⇒ Q −1 (QX) = Q −1 O ⇒ X = O.
∴ X ′ AX is a positive definite real quadratic form and so the symmetric matrix A is
positive definite.
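Note (computational aside): a concrete Q in Theorem 6 can be obtained in several ways; the Cholesky factorization is one convenient choice (this is our illustrative choice, whereas the proof above constructs Q = (PD1)′). A minimal sketch in Python/numpy:

    import numpy as np

    A = np.array([[6.0, -2.0, 2.0],
                  [-2.0, 3.0, -1.0],
                  [2.0, -1.0, 3.0]])   # positive definite (the matrix of Example 12)

    L = np.linalg.cholesky(A)   # lower triangular L with A = L L'
    Q = L.T                     # then Q is non-singular and A = Q' Q
    print(np.allclose(Q.T @ Q, A))   # prints True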
Theorem 7: Every real non-singular matrix A can be written as a product A = PS, where
S is a positive definite symmetric matrix and P is orthogonal.
Proof: Since A is non-singular, therefore by theorem 6, A ′A is a positive definite
real symmetric matrix. Let Q be an orthogonal matrix such that
Q −1 (A ′A) Q = Q′ (A ′A) Q = D = diag. [λ 1 , … , λ n ],
where λ 1 , … , λ n are the positive real eigenvalues of A ′A. Let
D1 = diag. [√ λ 1 , … , √ λ n ].
Then D1 2 = D and D1 ′ = D1 .
Now let S = QD1 Q′ . Clearly S ′ = S and so S is symmetric. Moreover S is positive
definite because it is similar to D1 which has positive eigenvalues.
Also S 2 = QD1 Q′ QD1 Q′ = QD1 Q −1 QD1 Q′
= QD1 2 Q −1 = QDQ −1 = A ′A.
Now let P = AS −1 . Then P is orthogonal because


P′ P = (AS −1 ) ′ AS −1 = (S −1 ) ′ A ′AS −1
= (S −1 ) ′ S 2 S −1 [ ∵ A ′A = S 2 ]
= (S −1 ) ′ S SS −1 = (S −1 ) ′ S
= (S ′ ) −1 S = S −1 S [ ∵ S is symmetric]
= I.
Thus S = QD1 Q′ is a positive definite real symmetric matrix and P = AS −1 is an
orthogonal matrix and we have
PS = AS −1 S = A.
Hence the result.
Note: The decomposition A = PS obtained in theorem 7 is called the polar
factorization of A.
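Note (computational aside): the construction used in the proof of Theorem 7 can be carried out numerically step by step. The sketch below (Python/numpy; an illustration under the assumption that A is non-singular) builds S as the positive definite square root of A′A and then takes P = AS⁻¹.

    import numpy as np

    def polar_factorization(A):
        # Returns (P, S) with A = P S, P orthogonal, S symmetric positive definite.
        A = np.asarray(A, dtype=float)
        lam, Q = np.linalg.eigh(A.T @ A)      # A'A = Q diag(lam) Q', lam > 0 when A is non-singular
        S = Q @ np.diag(np.sqrt(lam)) @ Q.T   # S = Q D1 Q', the square root of A'A
        P = A @ np.linalg.inv(S)              # P = A S^(-1)
        return P, S

    A = np.array([[2.0, 1.0], [0.0, 3.0]])
    P, S = polar_factorization(A)
    print(np.allclose(P @ S, A), np.allclose(P.T @ P, np.eye(2)))   # True True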

8.17 Criterion for Positive-Definiteness of a Quadratic


Form in Terms of Leading Principal Minors of its Matrix
Leading principal minors of a matrix. Definition : Let
A = [ a ij ] n × n
be a square matrix of order n. Then
 a11 a12 
A1 = a11 , A2 =  ,
 a21 a22 
 a11 a12 a13   a11 … a1n 
A3 =  a21 a22 a23  , … , An = … … … 
   
 a31 a32 a33   a n1 … a nn 
are called the leading principal minors of A.
Before stating the main theorem we shall prove the following Lemma.
Lemma : If A is the matrix of a positive definite form, then
| A | > 0.
Proof: If X ′AX is a positive definite real quadratic form, then there exists a real
non-singular matrix P such that
P ′AP = I.
∴ | P ′AP | = | I | = 1
or | P ′ || A || P | = 1 or | A | = 1/| P |2
[ ∵ | P | = | P ′ | ≠ 0]
Therefore | A | is positive.
Now we shall state and prove the main theorem.
Theorem: Sylvester’s Criterion for Positive Definiteness:A necessary and


sufficient condition for a real quadratic form X ′ AX to be positive definite is, that the leading
principal minors of A are all positive.
Proof: The condition is necessary: Suppose X ′AX is a positive definite
quadratic form in n variables. Let k be any natural number such that k ≤ n. Putting
x k + 1 = 0, … , x n = 0 in the positive definite form X ′AX, we get a positive definite
form in k variables x1 , … , x k . The determinant of the matrix of this new quadratic
form is the leading principal minor of order k of A and is positive by virtue of the
lemma we have just proved. Thus every leading principal minor of the matrix of a
positive definite quadratic form is positive.
The condition is sufficient: Now it is given that the leading principal minors of
A are all positive and we are to prove that the form X ′AX is positive definite. Here
we shall use the principle of mathematical induction.
The result is true for quadratic forms in one variable since a11 x 2 is positive definite
when a11 is positive.
Assume as our induction hypothesis that the theorem is true for quadratic forms in
m variables. Then we shall prove that it is also true for quadratic forms in (m + 1)
variables.
Now let S be any real symmetric matrix of order (m + 1) and let the leading principal
minors of S be all positive. We partition S as follows :
S =  [ B     B1 ]
     [ B1′    λ ] ,
where B is a real symmetric matrix of order m and B1 is an m × 1 column matrix.
By hypothesis the leading principal minors of S are all positive. Therefore| S | and
the leading principal minors of B are all positive. Thus B is a real symmetric matrix
of order m having all its leading principal minors positive. So by induction
hypothesis the quadratic form corresponding to B is positive definite. Therefore
there exists a non-singular matrix P of order m such that P′ BP = I m .
Since | B | > 0, therefore B is non-singular. Let C = − B −1 B 1 . Then C is an m × 1
column matrix. Also
C ′ = − (B −1 B1 ) ′ = − B1 ′ (B −1 ) ′ = − B1 ′ (B′ ) −1 = − B1 ′ B −1 ,
since B′ = B, B being symmetric. We have

[ P′  O ]     [ P  C ]   [ P′  O ] [ B     B1 ] [ P  C ]
[ C′  1 ]  S  [ O  1 ] = [ C′  1 ] [ B1′    λ ] [ O  1 ]

  = [ P′B            P′B1     ] [ P  C ]
    [ C′B + B1′    C′B1 + λ   ] [ O  1 ]

  = [ P′BP               P′BC + P′B1                ]
    [ C′BP + B1′P      C′BC + B1′C + C′B1 + λ       ]

  = [ Im    O          ]            [∵ P′BP = Im , C = −B⁻¹B1 , C′ = −B1′B⁻¹]
    [ O     B1′C + λ   ]

Thus   [ P′  O ] S [ P  C ]  =  [ Im     O        ] ⋅
       [ C′  1 ]   [ O  1 ]     [ O    B1′C + λ   ]
Taking determinants of both sides, we get
| P′ | . | S | . | P | = | I m | . | B 1 ′ C + λ | = B 1 ′ C + λ
because B1 ′ C + λ is an 1 × 1 matrix.
∴ | P |2 .| S | = B 1 ′ C + λ [ ∵ | P | = | P ′ |].
Since | S | > 0 and | P | ≠ 0, therefore B1 ′ C + λ is positive. Let B1 ′ C + λ = α 2 ,
where α is real. Then
[ P′  O ]     [ P  C ]    [ Im    O  ]
[ C′  1 ]  S  [ O  1 ] =  [ O     α² ] ⋅
Pre-multiplying and post-multiplying both sides with
[ Im    O  ]
[ O    α⁻¹ ] ,  we get

[ Im   O  ] [ P′  O ]     [ P  C ] [ Im   O  ]
[ O   α⁻¹ ] [ C′  1 ]  S  [ O  1 ] [ O   α⁻¹ ]  =  Im+1 .

Now let   Q = [ P  C ] [ Im   O  ] ⋅
              [ O  1 ] [ O   α⁻¹ ]
Then Q is non-singular as it is the product of two non-singular matrices. Also
Q′ = [ Im   O  ] [ P′  O ] ⋅
     [ O   α⁻¹ ] [ C′  1 ]
Therefore, we have
Q ′ SQ = I m + 1 .
Thus the real symmetric matrix S of order m + 1 is congruent to I m + 1 . So the
quadratic form corresponding to S is positive definite.
The proof is now complete by induction.
Corollary: The real quadratic form
q (x1 , … , xn) = Σ (i=1 to n) Σ (j=1 to n) aij xi xj
is negative definite if and only if
a11 < 0 ,    | a11   a12 |            | a11   a12   a13 |
             | a21   a22 |  > 0 ,     | a21   a22   a23 |  < 0 ,   … .
                                      | a31   a32   a33 |
Proof: For q ( x1 , … , x n ) to be negative definite, − q ( x1 , … , x n ) must be positive
definite.
Applying Sylvester’s criterion for positive definiteness of −q, we get successively
−a11 > 0 ,    | −a11   −a12 |            | −a11   −a12   −a13 |
              | −a21   −a22 |  > 0 ,     | −a21   −a22   −a23 |  > 0 ,   … .
                                         | −a31   −a32   −a33 |
From this the conditions stated in the corollary follow immediately.
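Note (computational aside): Sylvester's criterion is equally easy to apply by machine. The sketch below (Python/numpy; the names are our own) computes the leading principal minors A1, …, An and reports whether all of them are positive.

    import numpy as np

    def leading_principal_minors(A):
        A = np.asarray(A, dtype=float)
        return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

    # Matrix of Example 15 below: the minors 6, 14, 32 are all positive, so the form is positive definite.
    A = np.array([[6.0, -2.0, 2.0],
                  [-2.0, 3.0, -1.0],
                  [2.0, -1.0, 3.0]])
    minors = leading_principal_minors(A)
    print(np.round(minors, 6), all(m > 0 for m in minors))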

8.18 Some More Criteria to Check the Value Class of


a Real Quadratic Form
Principal minors of a matrix:
Definition: Let A = [ a ij ] n × n be a square matrix of order n.Any minors of A obtained by
deleting the corresponding rows and columns of A are called principal minors of A.
Consider the square matrix
 a11 a12 a13 
A =  a21 a22 a23 ⋅
 
 a31 a32 a33 
The principal minors of A of order 1 are a11 , a22 , a33 .
The principal minors of A of order 2 are
| a11   a12 |      | a11   a13 |      | a22   a23 |
| a21   a22 | ,    | a31   a33 | ,    | a32   a33 | ⋅
These are obtained respectively by deleting the third row and the third column,
the second row and the second column, and the first row and the first column.
The principal minor of A of order 3 is
 a11 a12 a13 
 a21 a22 a23 ⋅
 
 a31 a32 a33 
Now we give some theorems without proof.
Theorem 1:A real quadratic form XT A X is positive definite if and only if all the principal
minors of A are positive.
Theorem 2: A real quadratic form XT A X is negative definite if and only if all the
principal minors of A of even order are positive and those of odd order are negative.
Theorem 3: A real quadratic form XT A X is positive semi-definite if and only if A is
singular and all its principal minors are non-negative.
Theorem 4: A real quadratic form XT A X is negative semi-definite if and only if A is
singular and all principal minors of even order of A are non-negative while those of odd order
are non-positive.
Theorem 5:A real quadratic form XT A X is positive semi-definite if the leading principal
minors A1 , … , An − 1 of A are positive and det A = 0.
Theorem 6:A real symmetric matrix A is indefinite if and only if at least one of the following
conditions is satisfied :
(a) A has a negative principal minor of even order.
(b) A has a positive principal minor of odd order and a negative principal minor of odd order.

Example 15: Prove that the quadratic form


6 x1² + 3 x2² + 3 x3² − 4 x1 x2 − 2 x2 x3 + 4 x3 x1

in three variables is positive definite.


Solution: The matrix A of the given quadratic form is
 6 −2 2 
A =  − 2 3 − 1 ⋅
 
 2 −1 3 
The leading principal minors of A are
A1 = 6 ,   A2 = |  6   −2 | = 18 − 4 = 14 ,
                | −2    3 |

     |  6   −2    2 |   | 0    1   −7 |
A3 = | −2    3   −1 | = | 0    2    2 | ,   by R2 → R2 + R3 , R1 → R1 − 3R3
     |  2   −1    3 |   | 2   −1    3 |

   = 2 (2 + 14) = 32 > 0 .
Since the leading principal minors of A are all positive, therefore the given
quadratic form is positive definite.
Example 16: Write the matrix A of the quadratic form
6 x 2 + 35 y 2 + 11z 2 + 4 zx.
Find the eigenvalues of A and hence determine the value class of the given quadratic form.
Solution: The matrix A of the given quadratic form is
 6 0 2
A =  0 35 0  ⋅
 
 2 0 11
The characteristic equation of A is
6−λ 0 2
0 35 − λ 0 =0
2 0 11 − λ
or (6 − λ ) (35 − λ ) (11 − λ ) − 2 × 2 (35 − λ ) = 0


or (35 − λ ) [(6 − λ ) (11 − λ ) − 4] = 0
or (35 − λ ) [λ2 − 17λ + 62] = 0
or (35 − λ ) = 0, (λ2 − 17λ + 62) = 0.
∴ the eigenvalues of A are 35 and (17 ± √41)/2 ⋅
Since the eigenvalues of A are all positive, therefore the quadratic form is positive
definite.
Example 17: Show that the quadratic form
5 x12 + 26 x2 2 + 10 x3 2 + 4 x2 x3 + 14 x3 x1 + 6 x1 x2
in three variables is positive semi-definite and find a non-zero set of values of x1 , x2 , x3 which
makes the form zero.
Solution: The matrix of the given form is
5 3 7
A =  3 26 2  ⋅
 
 7 2 10 
5 3 7  1 0 0   1 0 0 
We write  3 26 2  =  0 1 0  A  0 1 0 
     
 7 2 10   0 0 1  0 0 1 
Now we shall reduce A to diagonal form by applying congruence operations on it.
Performing
R2 → R2 − (3/5) R1 , C2 → C2 − (3/5) C1 ;  R3 → R3 − (7/5) R1 , C3 → C3 − (7/5) C1 ,
we get
[ 5      0       0   ]   [   1     0   0 ]     [ 1   −3/5   −7/5 ]
[ 0    121/5   −11/5 ] = [ −3/5    1   0 ]  A  [ 0     1      0  ] ⋅
[ 0   −11/5     1/5  ]   [ −7/5    0   1 ]     [ 0     0      1  ]

Performing R3 → R3 + (1/11) R2 , C3 → C3 + (1/11) C2 , we get
[ 5      0      0 ]   [    1       0     0 ]     [ 1   −3/5   −16/11 ]
[ 0    121/5    0 ] = [  −3/5      1     0 ]  A  [ 0     1      1/11 ]
[ 0      0      0 ]   [ −16/11   1/11    1 ]     [ 0     0        1  ]
Therefore the linear transformation

[ x1 ]   [ 1   −3/5   −16/11 ] [ y1 ]
[ x2 ] = [ 0     1      1/11 ] [ y2 ]
[ x3 ]   [ 0     0        1  ] [ y3 ]

i.e.,   x1 = y1 − (3/5) y2 − (16/11) y3 ,   x2 = y2 + (1/11) y3 ,   x3 = y3
transforms the given form to the diagonal form


5 y1² + (121/5) y2² + 0·y3² .                                        …(1)
But the quadratic form (1) in 3 variables is positive semi-definite and equivalent
quadratic forms have the same value class. Therefore the given quadratic form is
positive semi-definite.
The set of values y1 = 0, y2 = 0, y3 = 1 makes (1) zero. Corresponding to this set
of values, we have x3 = 1, x2 = 1/11, x1 = −16/11 ⋅ This is a non-zero set of values of
x1 , x2 , x3 which makes the given quadratic form zero.
Example 18: Show that the form
x12 + 2 x2 2 + 3 x3 2 + 2 x2 x3 − 2 x3 x1 + 2 x1 x2
in three variables is indefinite and find two sets of values of x1 , x2 , x3 for which the form
assumes positive and negative values.
Solution: The matrix of the given quadratic form is
 1 1 − 1
A =  1 2 1 ⋅
 
 −1 1 3 
 1 1 − 1  1 0 0   1 0 0
   
We write 1 2 1 = 0 1 0 A  0 1 0  ⋅
     
 −1 1 3   0 0 1  0 0 1
Now we shall reduce A to diagonal form by applying congruent operations on it.
Performing
R2 → R2 − R1 , C2 → C2 − C1 ; and R3 → R3 + R1 , C3 → C3 + C1 , we get
 1 0 0   1 0 0   1 −1 1
 0 1 2  =  −1 1 0  A 0 1 0 ⋅
     
 0 2 2   1 0 1 0 0 1
Performing R3 → R3 − 2 R2 , C3 → C3 − 2C2 , we get
 1 0 0  1 0 0  1 −1 3
0 1 0 = −1 1 0 A 0 1 −2.
     
0 0 −2  3 −2 1 0 0 1
∴ the linear transformation
 x1   1 −1 3   y1 
x  =  0 1 − 2   y2 
 2   
 x3   0 0 1  y3 
x1 = y1 − y2 + 3 y3 

i. e., x2 = y2 − 2 y3  …(A)
x3 = y3 

transforms the given form to the diagonal form


y12 + y2 2 − 2 y3 2 . …(1)
The form (1) is indefinite and so the given quadratic form is also indefinite.
Obviously y1 = 0, y2 = 0, y3 = 1makes the form (1) negative and y1 = 0, y2 = 1,
y3 = 0 makes the form (1) positive. Substituting these values in the relations (A),
we see that the sets of values x1 = 3, x2 = − 2 , x3 = 1 ; x1 = − 1, x2 = 1, x3 = 0
respectively make the given form negative and positive.
Example 19: Show that every real non-singular matrix A can be expressed as
A = QDR ,
where Q and R are orthogonal and D is real diagonal.
Solution:Since A is a real non-singular matrix, therefore A ′A is a positive defi-
nite real symmetric matrix. Let P be an orthogonal matrix such that
P −1 (A ′A) P = P ′ (A ′A) P = diag. [λ 1 , … , λ n ],
where λ 1 , … , λ n are the positive real eigenvalues of the positive definite matrix
A ′ A.
Let D = diag. [√ λ 1 , … , √ λ n ]. Then D is a real diagonal matrix and D ′ = D.
We have D ′ D = D 2 = diag. [λ 1 , … , λ n ]
⇒ D ′ D = P ′A ′AP
⇒ (P ′ ) −1 D ′ DP −1 = (P ′ ) −1 P ′A ′APP −1
⇒ (P ′ ) −1 D ′ DP −1 = A ′A
⇒ (A ′ ) −1 (P ′ ) −1 D ′ DP −1 A −1 = I
⇒ (A −1 ) ′ PD ′ DP ′ A −1 = I [ ∵ P is orthogonal → P ′ = P −1 ]
⇒ (DP ′A −1 ) ′ (DP ′A −1 ) = I
⇒ DP ′A −1 is orthogonal.
Let S = DP ′ A −1 .
Then S is an orthogonal matrix.
Now let Q = S −1 ; then Q is an orthogonal matrix. Also let R = P ′ . Then R is an
orthogonal matrix.
We have QDR = (DP ′A −1 ) −1 DP ′ = A (P ′ ) −1 D−1 DP ′ = A (P ′ ) −1 P ′ = A.
Hence the result.
Example 20:If A is a positive definite real symmetric matrix, show that there exists a
positive definite real symmetric matrix B such that B 2 = A.

Solution: Since A is a positive definite real symmetric matrix, therefore the eigenvalues
λ 1 , … , λ n of A are all real and positive. Also there exists an orthogonal matrix P such that
P −1 AP = D = diag. [λ 1 , … , λ n ].
Let D1 = diag. [√ λ 1 , … , √ λ n ].Then D1 2 = D, D1 ′ = D1 , and the eigenvalues of D1


are all positive.
Now suppose that B = P D1 P −1 = P D1 P ′ .
We have B′ = (PD1 P ′ ) ′ = PD1 ′ P ′ = PD1 P ′ = B.
∴ B is a real symmetric matrix.
Also B = PD1 P −1 ⇒ B is similar to D1 . So B and D1 have the same eigenvalues.
Therefore the eigenvalues of B are all positive. So B is a positive definite real
symmetric matrix.
Finally, we have
B 2 = (PD1 P −1 ) 2 = PD1 P −1 PD1 P −1 = PD1 2 P −1 = PDP −1 = A.
Hence the result.
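Note (computational aside): the construction B = PD1P′ of Example 20 can be mirrored numerically. The sketch below (Python/numpy; illustrative only) computes the positive definite square root of a symmetric positive definite matrix and checks that B² = A.

    import numpy as np

    def sqrt_spd(A):
        # Positive definite square root of a symmetric positive definite matrix A.
        lam, P = np.linalg.eigh(np.asarray(A, dtype=float))   # A = P diag(lam) P'
        return P @ np.diag(np.sqrt(lam)) @ P.T                # B = P D1 P'

    A = np.array([[6.0, -2.0, 2.0],
                  [-2.0, 3.0, -1.0],
                  [2.0, -1.0, 3.0]])
    B = sqrt_spd(A)
    print(np.allclose(B @ B, A), np.allclose(B, B.T))   # True True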
Example 21: Determine the definiteness of the following quadratic form :
 2 0 −2   x1 
q ( x1 , x2 , x3 ) = [ x1 , x2 , x3 ]  1 5 2   x2  ⋅
  
 −1 1 1   x3 

Solution: The given quadratic form is


q = 2 x12 − 2 x1 x3 + x2 x1 + 5 x2 2 + 2 x2 x3 − x3 x1 + x3 x2 + x3 2
= 2 x1 x1 + (1/2) x1 x2 − (3/2) x1 x3 + (1/2) x2 x1 + 5 x2 x2
   + (3/2) x2 x3 − (3/2) x3 x1 + (3/2) x3 x2 + x3 x3 .
The real symmetric matrix associated with the above quadratic form is
     [  2      1/2    −3/2 ]
A =  [ 1/2      5      3/2 ] ⋅
     [ −3/2    3/2      1  ]
The principal minors of A of order 1 are 2, 5, 1 ; those of order 2 are 39/4 , −1/4 and 11/4 ;
the principal minor of order 3 is −33/4 ⋅
We see that A has a negative principal minor of even order, namely of order 2.
Hence, by theorem 6 of 8.18, the matrix A is indefinite. Therefore, the given
quadratic form is indefinite.
Example 22: Determine the value class of the real quadratic form
− x12 − 2 x2 2 − 2 x3 2 + 2 x1 x2 + 2 x2 x3
in three variables.
Solution: The given quadratic form is
q ( x1 , x2 , x3 ) = − x12 − 2 x2 2 − 2 x3 2 + 2 x1 x2 + 2 x2 x3
= − x1 x1 + x1 x2 + 0 x1 x3 + x2 x1 − 2 x2 x2 + x2 x3
+ 0 x3 x1 + x3 x2 − 2 x3 x3
− 1 1 0
T 
= X AX, where A = 1 − 2 1 is a symmetric matrix.
 
 0 1 − 2
The leading principal minors of A are
−1 1
A1 = − 1, A2 = = 2 − 1 = 1,
1 −2

− 1 1 0  − 1 0 0
A3 =  1 − 2 1 =  1 − 1 1 , C2 → C2 + C1
   
 0 1 − 2  0 1 − 2

= − 1 (2 − 1) = − 1.
We see that A1 < 0, A2 > 0, A3 < 0.
So the given quadratic form is negative definite.

Comprehensive Exercise 3

1. Prove that the quadratic form


6 x1² + 3 x2² + 14 x3² + 4 x2 x3 + 18 x3 x1 + 4 x1 x2
in three variables is positive definite.
2. Prove that the quadratic form
2 x1² + x2² − 3 x3² − 8 x2 x3 − 4 x3 x1 + 12 x1 x2
in three variables is indefinite.
3. Prove that the quadratic form 6 x 2 + 49 y 2 + 51z 2 − 82 yz + 20 zx − 4 xy
in three variables is positive definite.
4. Show that the quadratic form 6 x 2 + 17 y 2 + 3z 2 − 20 xy − 14 yz + 8zx
in three variables is positive semi-definite and find a non-zero set of values of
x, y, z which makes the form zero.
5. Classify the following forms in three variables as definite, semi-definite and
indefinite
(i) 2 x 2 + 2 y 2 + 3z 2 − 4 yz − 4zx + 2 xy
(ii) 26 x 2 + 20 y 2 + 10 z 2 − 4 yz − 16zx − 36 xy
(iii) x1² + 4 x2² + x3² − 4 x2 x3 + 2 x3 x1 − 4 x1 x2 .

6. Show that a real symmetric matrix is positive definite if and only if all its
characteristic roots are positive.
7. Show that a real symmetric matrix A is positive definite iff A −1 exists and is
positive definite.
8. Show that the quadratic form 2 x 2 − 4 xy + 3 xz + 6 y 2 + 6zy + 8z 2


in three variables is positive definite.
9. Show that the quadratic form y 2 + 2z 2 − 2 yz + 2zx − 2 xy
in three variables is indefinite.
10. Determine the value class of the real quadratic form in three variables
x1² + 3 x2² + 6 x3² − 2 x1 x2 + 4 x1 x3 .
11. Determine the value class of the real quadratic form in three variables
−2 x1² − 2 x2² − 2 x3² + 4 x2 x3 .
12. Determine the definiteness of the real quadratic form in three variables
4 x1² + x2² + 9 x3² − 4 x1 x2 + 12 x1 x3 .

Answers 3
10. positive semi-definite. 11. negative semi-definite.
12. indefinite.

8.19 Hermitian Forms


Hermitian form: Definition :
Let X = [ x1 , x2 , … , xn ]′ be any n-vector in the unitary space Cⁿ and let H = [hij] n × n be a
Hermitian matrix of order n over the complex field C. Then the expression
h (x1 , x2 , … , xn) = X^θ H X = Σ (i=1 to n) Σ (j=1 to n) hij x̄i xj   (where the bar denotes complex conjugation)

is called a Hermitian form of order n in the n complex variables x1, x2 , … , xn. The Hermitian
matrix H is called the matrix of this Hermitian form.
If H is real, then a Hermitian form is called the real Hermitian form. Also, det H is
defined as the discriminant of the Hermitian form, and a Hermitian form is called
singular if its determinant is zero, otherwise it is called non-singular.
Note: A matrix H over the complex field C is called a Hermitian matrix if H^θ = H,
where H^θ denotes the conjugate transpose of the matrix H, i.e.,
H^θ = (H̄)′ = (H̄)ᵀ, the transpose of the conjugate of H (equivalently, the conjugate of H′).
Some authors denote the conjugate transpose of a matrix H by the symbol H * . A
real symmetric matrix A is always a Hermitian matrix.
Theorem 1: A Hermitian form X θ H X assumes only real values for all complex
n-vectors X.
Proof: Suppose X^θ H X is a Hermitian form. Then H is a Hermitian matrix and so H^θ = H.
Since X^θ H X is a 1 × 1 matrix, it is symmetric and so (X^θ H X)′ = X^θ H X.
Now the conjugate of X^θ H X = the conjugate of (X^θ H X)′ = (X^θ H X)^θ
= X^θ H^θ (X^θ)^θ = X^θ H X.
Thus X θ H X and its conjugate are equal.
∴ X θ H X is a real 1 × 1 matrix.
Hence, X θ H X has only real values.
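Note (computational aside): Theorem 1 is easy to observe numerically; for a Hermitian H and any complex vector X the number X^θ H X has (up to rounding) zero imaginary part. A small sketch in Python/numpy (illustrative only; the matrix chosen is the one that appears in Example 25 below):

    import numpy as np

    H = np.array([[2, 1 - 2j],
                  [1 + 2j, -2]])
    assert np.allclose(H, H.conj().T)   # H is Hermitian

    rng = np.random.default_rng(1)
    X = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    value = X.conj() @ H @ X                # this is X^theta H X
    print(value, abs(value.imag) < 1e-12)   # the imaginary part is numerically zero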
Theorem 2 : The determinant and every leading principal minor of a Hermitian matrix H
are real.
Proof: We have the conjugate of | H | = | H̄ | = | (H̄)′ | = | H^θ | = | H | .   [∵ H^θ = H, H being Hermitian]
∴ | H | is real.
Since every leading principal sub-matrix of a Hermitian matrix is Hermitian, we
conclude that every leading principal minor of a Hermitian matrix is real.
This completes the proof of the theorem.
Theorem 3 : A Hermitian form X θ H X remains Hermitian by a non-singular linear
transformation of coordinates defined by
X= PY
where P is a non-singular matrix over the complex field C.
Proof: Substituting X = PY in h = X^θ H X, we get
h = (PY)^θ H (PY) = Y^θ (P^θ H P) Y = Y^θ Q Y ,   where Q = P^θ H P.
We have   Q^θ = (P^θ H P)^θ = P^θ H^θ (P^θ)^θ = P^θ H P = Q.
∴ Q is Hermitian.
Hence, the transformed form Y θ Q Y is Hermitian.

Remark: It can be easily seen that under this non-singular transformation, the rank
of a Hermitian form remains invariant. We know that the rank of a matrix does not
alter by pre-multiplication or post-multiplication by a non-singular matrix. Since P
and P θ are both non-singular, therefore rank H = rank ( P θ HP) = rank Q.
Hence rank X^θ H X = rank Y^θ Q Y.
8.20 Unitary Reduction of a Hermitian Form


Diagonal and Unit Hermitian forms: Definition :
A Hermitian form represented by
h = c1 x1 x1 + c 2 x2 x2 + … + c n x n x n
where the coefficients c i ’s are real as h is real, is referred to as diagonal Hermitian form.
If all c i ’s can be reduced to one, then a Hermitian form given by
h = x1 x1 + x2 x2 + … + x n x n
is called a unit Hermitian form.
Theorem 1 : Every Hermitian form X θ H X of order n is unitarily equivalent (under a
transformation X = P Y, P unitary) to the diagonal form
λ 1 y1 y1 + … + λ n y n y n
where λ 1 , … , λ n are eigenvalues of H.
Proof: Since H is a Hermitian matrix, therefore there exists a unitary matrix P
such that
P⁻¹HP = P^θ H P = D = diag. [λ1 , … , λn]
where λ1 , … , λn are the eigenvalues of H.
Consider the unitary linear transformation X = PY. We have
X^θ H X = (PY)^θ H (PY) = Y^θ P^θ H P Y = Y^θ D Y = λ1 y1 y1 + … + λn yn yn .
Hence, the result.
Remark: If the rank of the Hermitian form X θ H X is r, then only r of its n
eigenvalues λ 1 , … , λ n will be non-zero and consequently only r terms in the above
diagonal form will be non-zero.

8.21 Value Class of a Hermitian Form. Definite,


Semi-definite and Indefinite Hermitian Forms
Definitions : A Hermitian form X θ H X is said to be
(i) Positive definite if X θ H X ≥ 0 for all complex n-vectors X and X θ H X = 0
only if X = O i. e., x1 = x2 = … = x n = 0.
(ii) Negative definite if X θ H X ≤ 0 for all complex n-vectors X and X θ H X = 0
only if X = O i. e., x1 = x2 = … = x n = 0.
(iii) Positive semi-definite if X θ H X ≥ 0 for all complex n-vectors X and
X θ H X = 0 for some non-zero complex n-vector X i. e., X θ H X = 0 for some
complex values of the variables x1 , x2 , … , x n not all zero.
(iv) Negative semi-definite if X θ H X ≤ 0 for all complex n-vectors X and


X θ H X = 0 for some non-zero complex n-vector X i. e., X θ H X = 0 for some
complex values of the variables x1 , x2 , … , x n not all zero.
(v) Indefinite if X θ H X takes positive as well as negative values for some complex
values of the variables x1 , x2 , … , x n .
(vi) Non-negative definite if X θ H X ≥ 0 for all complex n-vectors X.
Non-negative definite and positive definite Hermitian matrices. Definition.
A Hermitian matrix H is called non-negative definite if the Hermitian form X θ H X is
non-negative definite . A Hermitian matrix H is called positive definite if the Hermitian form
X θ H X is positive definite.
Similar definitions are for other cases.

8.22 Some Results for Hermitian Forms Parallel to the


Corresponding Results for Real Quadratic Forms
Index and Signature of a Hermitian form:
Definition : Let X θ H X be a Hermitian form of order n. Let
λ 1 z1 z1 + λ 2 z 2 z 2 + … + λ p z p z p − λ p + 1 z p + 1 z p + 1 − … − λ r z r z r
be its diagonal form where λ 1 , λ 2 , … , λ p , λ p + 1 , … , λ r are all positive.
Here, r is the rank of the Hermitian form X θ H X.
The number p of positive terms in this diagonal form is called the index of the
Hermitian form X θ H X. The excess of the number of positive terms over the number
of negative terms in this diagonal form i. e., p − (r − p) = 2 p − r is called the signature of
the Hermitian form X θ H X and is denoted by s.

Theorem 1 : Sylvester’s law of Inertia for Hermitian forms:


Under all non-singular linear transformations the rank r and the index p are invariant of a
Hermitian form and consequently the signature s of a Hermitian form is also invariant under all
non-singular linear transformations.
Equivalent Hermitian forms: Definition: Two Hermitian forms X θ H X and
Yθ Q Y are said to be equivalent over the complex field C if one can be obtained from
the other by a non-singular transformation over C. Thus X θ H X and Yθ Q Y are
equivalent if and only if there exists a non-singular transformation X = P Y such that
X θ H X = ( PY) θ H ( PY) = Yθ ( P θ
HP) Y
= Yθ Q Y, where Q = P θ
HP.
Theorem 2 : Equivalence theorem of Hermitian forms: Two Hermitian forms are
equivalent if and only if they have the same rank and the same index.
Theorem 3 : Let X θ H X be a Hermitian form of order n, rank r, and signature s. Then


X θ H X is

(i) positive definite if and only if s = r = n.


(ii) negative definite if and only if − s = r = n.
(iii) positive semi-definite if and only if s = r < n.
(iv) negative semi-definite if and only if − s = r < n.
(v) indefinite if and only if | s | ≠ r i. e., if and only if | s | < r.
Theorem 4 : If the Hermitian form X θ H X is positive definite, then there exists a
non-singular transformation X = P Y such that X θ H X = Yθ Y.
Theorem 5 : Sylvester’s criterion for positive definiteness of Hermitian
forms:
A Hermitian form X θ H X , where H = [ hij ] n × n , is positive definite if and only if all the
leading principal minors of H are positive i. e., iff
h11 ,    | h11   h12 |             | h11   …   h1n |
         | h21   h22 | ,   … ,     |  …    …    …  |   are all positive.
                                   | hn1   …   hnn |
Theorem 6 : A Hermitian form X θ H X is positive definite if and only if all the principal
minors of H are positive.
Theorem 7 : The Hermitian form
h (x1 , … , xn) = X^θ H X = Σ (i=1 to n) Σ (j=1 to n) hij x̄i xj

is negative definite if and only if the leading principal minors of H are alternately negative and
positive i. e., iff
h11 < 0 ,    | h11   h12 |             | h11   h12   h13 |
             | h21   h22 |  > 0 ,      | h21   h22   h23 |  < 0 ,   … .
                                       | h31   h32   h33 |
Theorem 8 : A Hermitian form X θ H X is negative definite if and only if all the
principal minors of H of even order are positive and those of odd order are negative.
Theorem 9 : A Hermitian form X θ H X is
(i) Positive definite if and only if all the eigenvalues of H are positive.
(ii) Negative definite if and only if all the eigenvalues of H are negative.
(iii) Positive semi-definite iff all the eigenvalues of H are ≥ 0 with at least one zero eigenvalue.
(iv) Negative semi-definite if and only if all the eigenvalues of H are ≤ 0 with at least one zero
eigenvalue.
(v) Indefinite iff H has at least one positive eigenvalue and one negative eigenvalue.
(vi) Non-negative definite iff the eigenvalues of H are all non-negative.


Theorem 10 : A Hermitian form X θ H X is positive semi-definite iff H is singular and
all its principal minors are non-negative.
Theorem 11 : A Hermitian form X θ H X is negative semi-definite iff H is singular and
the principal minors of even order of H are non-negative while those of odd order are
non-positive.
Theorem 12 : A Hermitian matrix H is indefinite iff at least one of the following
conditions is satisfied :
(i) H has a negative principal minor of even order.
(ii) H has a positive principal minor of odd order and a negative principal minor of odd order.
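Note (computational aside): Theorem 9 is the criterion normally used in practice, since the eigenvalues of a Hermitian matrix are real and readily computed. A sketch in Python/numpy (the function name and the tolerance are our own choices), applied to the matrix of part (ii) of Example 23 below:

    import numpy as np

    def hermitian_value_class(H, tol=1e-10):
        lam = np.linalg.eigvalsh(np.asarray(H, dtype=complex))   # real eigenvalues of H
        pos, neg = np.sum(lam > tol), np.sum(lam < -tol)
        zero = len(lam) - pos - neg
        if pos and neg:
            return "indefinite"
        if neg == 0:
            return "positive definite" if zero == 0 else "positive semi-definite"
        return "negative definite" if zero == 0 else "negative semi-definite"

    H = np.array([[-2, 1 + 2j, 0],
                  [1 - 2j, -4, 0],
                  [0, 0, 0]])
    print(hermitian_value_class(H))   # prints: negative semi-definite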

Example 23 : Determine the definiteness of the following Hermitian forms X θ H X in C3


where
0 − i − i  − 2 1 + 2 i 0
(i) H = i 0 − i (ii) H = 1 − 2 i − 4 0 ⋅
   
 i i 0  0 0 0
Solution : (i) The characteristic equation of the matrix H is
| H − λI | = 0
0 − λ −i −i − λ −i − i
i. e.,  i 0−λ − i = 0 or  i −λ − i = 0
   
 i i 0 − λ  i i − λ
or − λ (λ2 + i2 ) + i (− λi + i2 ) − i (i2 + λi) = 0
or − λ (λ2 − 1) − λi2 + i3 − i3 − λi2 = 0
or − λ (λ2 − 1) + λ + λ = 0
or λ [− (λ2 − 1) + 2] = 0 or λ (λ2 − 3) = 0.
∴ the eigenvalues of H are 0, √ 3, − √ 3.
Since the given Hermitian matrix H has a positive eigenvalue as well as a negative
eigenvalue, therefore the given Hermitian form X θ H X is indefinite.
Remark: The given Hermitian form reduces to the diagonal form
√ 3 y1 y1 − √ 3 y2 y2 .
Its canonical form is z1 z1 − z 2 z 2 .
The order of the given Hermitian form is 3. The rank r of the given Hermitian form
= the number of non-zero terms in its diagonal form = 2.
The index p of the given Hermitian form = the number of positive terms in its
diagonal form = 1.
The signature s of the given Hermitian form = the number of positive terms in its
diagonal form − the number of negative terms in its diagonal form = 1 − 1 = 0.
(ii) The characteristic equation of the matrix H of the given Hermitian form
X θ H X is | H − λI | = 0
− 2 − λ 1+ 2 i 0
or 1 − 2 i −4−λ 0 = 0
 
 0 0 − λ
or − λ [(− 2 − λ ) (− 4 − λ ) − (1 − 2 i) (1 + 2 i)] = 0
or − λ [(λ + 2) (λ + 4) − (1 + 4)] = 0
or − λ (λ2 + 6 λ + 8 − 5) = 0 or λ (λ2 + 6 λ + 3) = 0.
∴ λ = 0 ,  (−6 ± √(36 − 12))/2 ,  i.e.,  (−6 ± √24)/2 ,  i.e.,  −3 ± √6 .
The eigenvalues −3 + √6 and −3 − √6 are both negative.
Thus the eigenvalues of the matrix H are all ≤ 0 and at least one of them is zero.
Hence the given Hermitian form X θ H X is negative semi-definite.
Remark: Order of the given Hermitian form = order of the matrix H = 3.
Rank r of the given Hermitian form = 2.
Index p of the given Hermitian form = 0.
Signature s of the given Hermitian form = 0 − 2 = − 2.
A diagonal form of the given Hermitian form
= −(3 − √6) y1 y1 − (3 + √6) y2 y2 .
Its canonical form or normal form is
− z1 z1 − z 2 z 2 .
Example 24 : Determine whether or not the following Hermitian forms in C2 are
equivalent.
(i) 2 x1 x1 + 3i x2 x1 − 3i x1 x2 − x2 x2 , (2 + i) x2 x1 + (2 − i) x1 x2
(ii) x1 x1 + 2 x2 x2 + (1 + 2 i) x1 x2 + (1 − 2 i) x2 x1 , i x1 x2 − i x2 x1 .
Solution : (i) Out of the two given Hermitian forms the first Hermitian form is
2 x1 x1 − 3i x1 x2 + 3i x2 x1 − x2 x2
 2 − 3i 
= X θ H1 X, where H1 =  is a Hermitian matrix.
 3i − 1 
The characteristic equation of H1 is | H1 − λI | = 0
 2−λ − 3i 
i. e.,  = 0
 3i −1− λ
or (2 − λ ) (− 1 − λ ) + 9i2 = 0
or − 2 − λ + λ2 − 9 = 0
or λ2 − λ − 11 = 0.
∴ the eigenvalues of H1 are
(1 ± √(1 + 44))/2 ,   i.e.,   (1 ± √45)/2 ,   i.e.,   (1 ± 3√5)/2 ⋅
The eigenvalue (1 + 3√5)/2 is positive and the eigenvalue (1 − 3√5)/2 is negative.
The rank r of the Hermitian form X θ H1 X = the number of non-zero eigenvalues
of H1 = 2 , the index p of the Hermitian form X θ H1 X = the number of positive
eigenvalues of H1 = 1.
The second Hermitian form is
0 x1 x1 + (2 − i) x1 x2 + (2 + i) x2 x1 + 0 x2 x2
 0 2− i
= Xθ H2 X, where H2 =  is a Hermitian matrix.
2+i 0 
The characteristic equation of H2 is
 −λ 2 − i
| H2 − λI | = 0 i. e.,  = 0
2 + i −λ

or λ2 − (2 + i) (2 − i) = 0 or λ2 − 5 = 0
∴ λ² = 5 or λ = ± √5.
∴ the eigenvalues of H2 are + √5 , − √5.
The rank r of the Hermitian form X θ H2 X = the number of non-zero eigenvalues
of H2 = 2 , the index p of the Hermitian form X θ H2 X = the number of positive
eigenvalues of H2 = 1.
Thus the Hermitian forms X θ H1 X and X θ H2 X have the same rank and the
same index. Hence, they are equivalent.
(ii) Out of the two given Hermitian forms the first Hermitian form is
x1 x1 + (1 + 2 i) x1 x2 + (1 − 2 i) x2 x1 + 2 x2 x2
 1 1+ 2 i
= Xθ H1 X, where H1 =   is a Hermitian matrix.
 1 − 2 i 2 
The characteristic equation of H1 is | H1 − λI | = 0
 1− λ 1 + 2 i
i. e.,  = 0
1 − 2 i 2−λ

or (1 − λ ) (2 − λ ) − (1 − 2 i) (1 + 2 i) = 0
or λ2 − 3 λ + 2 − 5 = 0 or λ2 − 3 λ − 3 = 0 .
∴ λ = (3 ± √(9 + 12))/2 = (3 ± √21)/2 ⋅
∴ the eigenvalues of H1 are (3 + √21)/2 and (3 − √21)/2 ⋅
The eigenvalue (3 + √21)/2 is positive and the eigenvalue (3 − √21)/2 is negative.
The rank r of the Hermitian form X θ H1 X = the number of non-zero eigenvalues
of H1 = 2 ,
the index pof this Hermitian form = the number of positive eigenvalues of H1 = 1.
The second Hermitian form is
0 x1 x1 + i x1 x2 − i x2 x1 + 0 x2 x2
 0 i 
= X θ H2 X, where H2 =   is a Hermitian matrix.
 − i 0 
The characteristic equation of H2 is | H2 − λI | = 0
 −λ i 
i. e.,  = 0 or λ2 + i2 = 0 or λ2 − 1 = 0 .
 − i − λ 
∴ λ2 = 1 or λ = ± 1.
∴ the eigenvalues of H2 are 1, − 1.
The rank r of the Hermitian form X θ H2 X = the number of non-zero eigenvalues
of H2 = 2 and its index p = the number of positive eigenvalues of H2 = 1.
Thus the Hermitian forms X θ H1 X and X θ H2 X have the same rank and the
same index. Hence, they are equivalent.
Example 25 : Reduce the Hermitian form X θ H X, where
 2 1− 2 i 
H= ,
 1+ 2 i − 2 
to the canonical form by a unitary transformation and hence find the rank and signature of the
given Hermitian form.
Solution : The characteristic equation of H is
 2−λ 1− 2 i 
| H − λI | = 0 i. e.,  = 0
1 + 2 i − 2 − λ

or (λ − 2) (λ + 2) − (1 + 2 i) (1 − 2 i) = 0
or λ2 − 4 − (1 + 4) = 0 or λ2 − 9 = 0 .
∴ the eigenvalues of H are − 3, 3.
 x1 
The eigenvector X =   corresponding to the eigenvalue − 3 is given by the
 x2 
equation
 5 1− 2 i   x1   0 
( H + 3I ) X = O or  1+ 2 i  x =0 
 1  2   
i. e., 5 x1 + (1 − 2 i) x2 = 0, and (1 + 2 i) x1 + x2 = 0.
Obviously x1 = 1 − 2 i, x2 = − 5 is a solution.
 1− 2 i 
∴ X1 =   is an eigenvector corresponding to the eigenvalue λ = − 3.
 −5
Corresponding to the eigenvalue 3, the eigenvector is given by the equation
( H − 3I ) X = O
 −1 1 − 2 i   x1   0 
or  1+ 2 i =
 − 5   x2   0 

i. e., − x1 + (1 − 2 i) x2 = 0,
and (1 + 2 i) x1 − 5 x2 = 0.
Obviously x1 = 5, x2 = 1 + 2 i is a solution.
 5 
∴ X2 =   is an eigenvector corresponding to λ = 3.
 1+ 2 i 
Length of the vector X1 = √ [|1 − 2 i|2 + | − 5 |2 ] = √ (5 + 25) = √30
and the length of the vector X2 = √ [| 5 |2 + |1 + 2 i|2 ] = √ (25 + 5) = √30.
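Here |1 − 2 i|2 = 1 + 4 = 5 and |1 + 2 i|2 = 1 + 4 = 5, which is where the 5 in each of these computations comes from.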
∴ the unitary matrix P that will transform H to diagonal form is
 1 1 
P= X1 , X2 
 30 30 
 (1 − 2 i) / 30 5 / 30 
= ⋅
 − 5 / 30 (1 + 2 i) / 30 
Also P^{-1} = P^θ
and P^{-1} H P = P^θ H P = \begin{bmatrix} -3 & 0 \\ 0 & 3 \end{bmatrix} = diag. [− 3, 3].
The unitary transformation X = P Y will transform the given Hermitian form to
the equivalent diagonal form
 −3 0
Y θ diag.  Y = − 3 y1 y1 + 3 y2 y2 .
 0 3 
The canonical form of the given Hermitian form is
− z1 z1 + z 2 z 2 .
The rank of the given Hermitian form = the number of non-zero eigenvalues of its
matrix H = 2 .
The signature of the given Hermitian form = the number of positive eigenvalues of
H − the number of negative eigenvalues of H = 1 − 1 = 0.
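Equivalently, the signature equals 2 p − r, where p = 1 is the index and r = 2 is the rank, so that the signature = 2 (1) − 2 = 0.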
Comprehensive Exercise 4
1. Determine the definiteness of the following Hermitian forms X θ H X in C3
where
(i) H = \begin{bmatrix} 2 & 1-i & 2 \\ 1+i & 2 & 1+i \\ 2 & 1-i & 2 \end{bmatrix}   (ii) H = \begin{bmatrix} -2 & 1-i & i \\ 1+i & -2 & 2+i \\ -i & 2-i & -6 \end{bmatrix}
(iii) H = \begin{bmatrix} 2 & 1-i & i \\ 1+i & 4 & 2+i \\ -i & 2-i & 8 \end{bmatrix}.
2. Reduce the Hermitian form X θ H X, where
H = \begin{bmatrix} 2 & i & 0 \\ -i & 1 & -i \\ 0 & i & 2 \end{bmatrix},
to the equivalent diagonal form by a unitary transformation and hence find
the rank and signature of the given Hermitian form.
3. Show that a Hermitian matrix H is positive definite if and only if its
eigenvalues are all positive and is non-negative definite if and only if its
eigenvalues are all non-negative.
4. Show that a Hermitian matrix H is positive definite if and only if there
exists a non-singular matrix Q such that H = Q θ Q.
5. If H is a positive definite or a positive semi-definite Hermitian matrix, show
that there exists a Hermitian matrix B such that B2 = H.
Answers 4
1. (i) Positive semi-definite. (ii) Negative definite.
(iii) Positive definite.
2. The unitary matrix P that will transform H to diagonal form is
P = \begin{bmatrix} -i/\sqrt{6} & 1/\sqrt{2} & i/\sqrt{3} \\ 2/\sqrt{6} & 0 & 1/\sqrt{3} \\ -i/\sqrt{6} & -1/\sqrt{2} & i/\sqrt{3} \end{bmatrix}.
The unitary transformation X = P Y will transform the given Hermitian
form to the equivalent diagonal form 2 y2 y2 + 3 y3 y3 .
The rank of the given Hermitian form is 2 and its signature is 2.
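As a check, the eigenvalues of this H are 0, 2 and 3, since | H − λI | = (2 − λ) λ (λ − 3); this is why the reduced diagonal form has coefficients 0, 2 and 3, two of which are non-zero and both of which are positive.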
Objective Type Questions
Multiple Choice Questions
Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
1. A real quadratic form XT A X in three variables is equivalent to the diagonal form 3 y12 − 4 y2 2 + 5 y3 2 . Then the quadratic form XT A X is
(a) Positive definite (b) Negative definite
(c) Indefinite (d) Positive semi-definite.
2. A real quadratic form XT A X in three variables is equivalent to the normal form y12 − y2 2 − y3 2 . If s is the signature of this quadratic form, then
(a) s = 1 (b) s = − 1
(c) s = 2 (d) s = 3.
3. A real quadratic form XT A X is positive semi-definite if and only if
(a) the eigenvalues of the matrix A are all ≤ 0
(b) the eigenvalues of the matrix A are all > 0
(c) the eigenvalues of the matrix A are all negative
(d) the eigenvalues of the matrix A are all ≥ 0 and at least one eigenvalue of A
is zero.
4. If n is the order, r is the rank and s is the signature of a real quadratic form in n
variables, then the quadratic form is negative semi-definite if
(a) s = r = n (b) − s = r = n
(c) s = r < n (d) − s = r < n.
5. A real quadratic form in three variables is equivalent to the diagonal form
6 y12 + 3 y2 2 + 0 y3 2 .
Then the quadratic form is
(a) Positive definite (b) Indefinite
(c) Positive semi-definite (d) Negative.
Fill in the Blank(s)
Fill in the blanks ‘‘……’’ so that the following statements are complete and correct.
1. If the bilinear form
b ( X , Y ) = X T AY = 2 x1 y1 + x1 y2 − 2 x2 y1 + 3 x2 y2 − 3 x1 y3 ,
then the matrix A = …… .
2. The bilinear form b ( X , Y ) = X T AY is said to be a symmetric bilinear form
if the matrix A is a …… matrix.
3. An expression of the form \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j , where a ij ’s are elements of a field F, is called a …… in the n variables x1 , x2 , … , x n over the field F.
4. There exists a one-to-one correspondence between the set of all quadratic
forms in n variables over a field F and the set of all n-rowed …… matrices over
F.
5. The matrix A corresponding to the quadratic form
d1 x12 + d2 x2 2 + d3 x3 2 + d4 x4 2 + d5 x5 2
in five variables x1 , x2 , … , x5 is A = …… .
6. The quadratic form corresponding to the symmetric matrix diag.
[λ 1 , λ 2 , … , λ n ] is …… .
7. The quadratic form corresponding to the matrix
2 1 5
 1 3 −2  is ...... .
 
 5 −2 4 
8. Two real quadratic forms in n variables are equivalent if and only if they have
the same …… and the same index.
9. The number of positive terms in any two normal reductions of a real
quadratic form is the …… .
10. The rank of a real quadratic form XT A X is equal to the number of ……
eigenvalues of the symmetric matrix A.
11. The real quadratic form XT A X is positive definite if and only if the
eigenvalues of the matrix A are all …… .
12. The real quadratic form XT A X is positive definite if XT A X …… 0, for all
non-zero vectors X.
13. The eigenvalues of a Hermitian matrix are always …… .
14. A Hermitian form X θ H X always takes …… values for all complex
n-vectors X.
15. A Hermitian form X θ H X is negative definite iff the eigenvalues of the
matrix H are all …… .
True or False
Write ‘T’ for true and ‘F’ for false statement.
1. The bilinear form
3 x1 y1 + x1 y2 + x2 y1 − 2 x2 y2 − 4 x2 y3 − 4 x3 y2 + 3 x3 y3
is symmetric.
2. The matrix corresponding to the quadratic form
d1 x12 + d2 x2 2 + d3 x3 2 + d4 x4 2
is a diagonal matrix.
3. Every real quadratic form over a field F in n variables x1 , x2 , … , x n can be
expressed in the form X ′B X where X′ = [ x1 , x2 , … , x n ]′ is a column vector
and B is a skew-symmetric matrix of order n over the field F.
4. A real quadratic form in three variables is equivalent to the diagonal form
6 y12 + (7/3) y2 2 − (16/7) y3 2 .
Then the rank of the quadratic form is 2.
5. A real quadratic form XT AX is positive definite iff the leading principal
minors of the matrix A are all positive.
6. A real quadratic form XT A X is negative definite iff the leading principal
minors of the matrix A are all negative.
7. A real quadratic form in three variables is equivalent to the diagonal form
4 y12 + 3 y2 2 .
Then the quadratic form is positive definite.
8. A real quadratic form in four variables is equivalent to the normal form
y12 + y2 2 − y3 2 − y4 2 .
Then the signature of the quadratic form is 0.
9. A Hermitian form X θ H X in three variables is equivalent to the diagonal
form
4 y1 y1 + 5 y2 y2 + 7 y3 y3 .
Then the Hermitian form is positive definite.
10. A Hermitian form X θ H X in three variables is equivalent to the diagonal
form
(3/2) y1 y1 + 5 y2 y2 .
Then the Hermitian form is positive definite.
Answers
Multiple Choice Questions
1. (c) 2. (b) 3. (d)
4. (d) 5. (c)
Fill in the Blank(s)
 2 1 −3 
1.  −2 3 0 
 
2. symmetric 3. quadratic form
4. symmetric 5. diag. [d1 , d2 , d3 , d4 , d5 ]
6. λ 1 x12 + λ 2 x2 2 + … + λ n x n2
7. 2 x12 + 3 x2 2 + 4 x3 2 + 2 x1 x2 + 10 x1 x3 − 4 x2 x3
8. rank 9. same 10. non-zero
11. positive 12. > 13. real
14. real 15. negative
True or False
1. T 2. T 3. F
4. F 5. T 6. F
7. F 8. T 9. T
10. F