
Advanced Engineering Mathematics

Prof. Pratima Panigrahi


Department of Mathematics
Indian Institute of Technology, Kharagpur
Lecture No. # 01
Review Groups, Fields & Matrices

(Refer Slide Time: 00:24)

We learn the topic Linear Algebra. Linear algebra is an important, basic topic of mathematics. It is also an important tool in engineering and the sciences; it has lots of applications in almost all areas of engineering and also in the science disciplines. The topic linear algebra is based on an important algebraic structure called a vector space, or linear space.

In a vector space, we deal with two types of elements, that is, vectors and scalars. So, first let us know what scalars are. To know scalars, we have to recall two important basic algebraic structures called groups and fields. So, we will start with groups; this is a very basic topic in abstract algebra.

So, a group G is a non-empty set together with a binary operation *. This may be represented as the pair (G, *), and it satisfies the following conditions.

The first one is called the closure property: for any two elements a and b in G, a * b belongs to G. The second: for any three elements a, b, c in G, (a * b) * c = a * (b * c). This is called associativity. And the third property is this: there exists an element e in G such that a * e = e * a = a, for all a in G. This element e is called the identity element of G.
(Refer Slide Time: 04:54)

And the fourth property is this: for every element a in G, there exists b in G such that a * b = b * a = e. Here, a and b are called inverses of each other. Again, we say that a group G is commutative if, for every pair of elements a and b in G, a * b = b * a.
Next, let us see some examples of groups. The first one is very natural: the set of all real numbers with the usual addition operation is a commutative group. Similarly, we can also see that the set of all integers with the addition operation is a commutative group. However, the set N of all natural numbers with respect to addition is not a group.
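As a small aside (not part of the lecture), the four group axioms can be checked mechanically on a finite set. The sketch below is a brute-force check in Python; it is run on the integers modulo 6 under addition modulo 6, since an exhaustive check is only possible for a finite set. The choice of set and operation here is an illustrative assumption.

```python
# Brute-force check of the four group axioms on a finite set G with operation op.
# Illustrative sketch only; an exhaustive check is feasible only for finite G.

def is_group(G, op):
    # Closure: a op b must stay inside G.
    if any(op(a, b) not in G for a in G for b in G):
        return False
    # Associativity: (a op b) op c == a op (b op c).
    if any(op(op(a, b), c) != op(a, op(b, c)) for a in G for b in G for c in G):
        return False
    # Identity: some e with a op e == e op a == a for all a.
    identities = [e for e in G if all(op(a, e) == a and op(e, a) == a for a in G)]
    if not identities:
        return False
    e = identities[0]
    # Inverses: every a has some b with a op b == b op a == e.
    return all(any(op(a, b) == e and op(b, a) == e for b in G) for a in G)

Z6 = set(range(6))                      # {0, 1, 2, 3, 4, 5}
add_mod_6 = lambda a, b: (a + b) % 6    # addition modulo 6
print(is_group(Z6, add_mod_6))          # True: a commutative group
```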

(Refer Slide Time: 08:05)

So, next we shall see the definition of a field, which is very useful in defining a vector space. We need this algebraic structure called a field. A field F is a non-empty set together with two binary operations, + and ·. This may be represented as the triplet (F, +, ·), and it satisfies the following conditions, or axioms.

The first one is: (F, +) is a commutative group. The second condition is: F minus {0}, with respect to the dot operation, is a commutative group, where 0 is the additive identity of F, that is, the identity element of the group (F, +). The third condition is also called the compatibility condition. In any structure, whenever we have more than one operation, we should have this compatibility property, which is also called the distributive property: for any three elements a, b, c in F, a · (b + c) = a · b + a · c. In other words, the dot operation is distributive over addition. This is called the distributive law.

(Refer Slide Time: 11:45)

So, here are some terminologies, or definitions, that we will refer to again and again; we give them as a set of remarks. The first remark is this: in a field (F, +, ·), the operation + is called addition and · is called multiplication. Of course, they are not necessarily the addition and multiplication operations on the set of real numbers. The symbols + and · just represent two binary operations, and their names have been given as addition and multiplication; they need not be the usual addition and multiplication operations on the set of real numbers. They may be any binary operations.

The second remark: the identity element of (F, +) is called the additive identity and is denoted by 0. The third: the identity element of (F minus {0}, ·) is called the multiplicative identity and is denoted by 1. The fourth remark is like this: it is known that the identity element of a group is unique, so 0 and 1 are also unique. The fifth observation is like this: it is also known that the inverse of an element in a group is unique.

(Refer Slide Time: 15:49)

Therefore, for any element a in F, the additive inverse of a is denoted by -a. Next, we shall see some examples of fields. A very natural example is this: the set of all real numbers with the usual addition and multiplication is a field. The second one: the set of all complex numbers with respect to addition and multiplication is also a field. We will see another example: the set of all integers with respect to addition and multiplication is not a field, because we do not get multiplicative inverses of elements in Z. The fourth example is an example of a finite field; all the examples we have given so far are examples of infinite fields. So, let us see one example of a finite field.

For any prime p, Z_p is a field, where Z_p is the set of integers modulo p; in other words, Z_p is the set of remainders of integers when divided by p.

(Refer Slide Time: 19:44)

Of course, the operations with respect to which Z_p is a field are addition modulo p and multiplication modulo p. For example, consider Z_5. Z_5 will be the set of remainders {0, 1, 2, 3, 4}. On Z_5, addition and multiplication modulo 5 work like this: 3 + 4 = 2 (mod 5), and 3 · 2 = 1 (mod 5).
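This modular arithmetic is easy to check directly. The short sketch below (an illustration, not from the lecture) computes the two examples above and also lists a multiplicative inverse for every non-zero element of Z_5, which is exactly what fails in Z and in Z_n for composite n.

```python
p = 5
Z5 = list(range(p))                       # remainders {0, 1, 2, 3, 4}

print((3 + 4) % p)                        # 2, i.e. 3 + 4 = 2 (mod 5)
print((3 * 2) % p)                        # 1, i.e. 3 . 2 = 1 (mod 5)

# Every non-zero element of Z_p (p prime) has a multiplicative inverse.
for a in Z5[1:]:
    inverse = next(b for b in Z5 if (a * b) % p == 1)
    print(a, inverse)                     # (1,1), (2,3), (3,2), (4,4)
```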
Regarding these fields, we can note that the fields (R, +, ·) and (C, +, ·) are called the real field and the complex field respectively.
So, now we are ready to give the definition of a vector space, but before that we would like to recall another important kind of object that we need in linear algebra, namely matrices. Matrices play an important role in linear algebra in representing various concepts. So, let us see matrices. As you know, a matrix A of size m by n, denoted by [a_ij] of size m by n, is a rectangular array of m rows and n columns and is enclosed within brackets.

(Refer Slide Time: 24:24)

Here, the entries a_ij of A may come from a field F, and in this case A is called a matrix over F. If m is equal to n, then A is called a square matrix. For a square matrix [a_ij] of size n by n, the entries a_11, a_22, ..., a_nn are called the main, or principal, diagonal. Again, if all the entries of A are equal to 0 (this 0 being the 0 of the field, of course), then A is called a zero matrix, and it is sometimes denoted by 0.
(Refer Slide Time: 27:18)

Here, we will also have diagonal matrices. A square matrix A is called a diagonal matrix if all the off-diagonal entries are 0; that is, if the entries of A are a_ij, then a_ij = 0 for i not equal to j.

Now we can also say when two matrices are equal. That is the equality of matrices: two matrices A and B of the same size m by n are called equal if a_ij = b_ij for all i and j.

Now we have an algebra of matrices: we can add two matrices, we can subtract two matrices, we can take scalar multiples of matrices, and we also have the product, or multiplication, of matrices. So, let us see this algebra of matrices. First we shall see addition of matrices. Suppose we have two matrices A and B of the same size: A is a matrix of size m by n and B is a matrix of the same size m by n, over the same field F.
(Refer Slide Time: 31:02)

Then the addition of A and B, denoted by A + B, is the matrix C whose entries are c_ij, where c_ij = a_ij + b_ij for all i and j. We can also have scalar multiplication of matrices: for any matrix A with entries a_ij, of size m by n over a field F, and any element alpha in F, alpha A is the matrix with entries alpha a_ij; that is, alpha A = [alpha a_ij] of size m by n, and it is called a scalar multiple of A.
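Both operations are entrywise, so they are easy to illustrate numerically. The small sketch below (the matrices and the scalar are made-up examples, not from the lecture) uses NumPy over the real field.

```python
import numpy as np

# Two matrices of the same size m x n over the real field.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[6.0, 5.0, 4.0],
              [3.0, 2.0, 1.0]])

C = A + B          # c_ij = a_ij + b_ij
alpha = 2.0
D = alpha * A      # entries alpha * a_ij, a scalar multiple of A

print(C)           # [[7. 7. 7.], [7. 7. 7.]]
print(D)           # [[2. 4. 6.], [8. 10. 12.]]
```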

(Refer Slide Time: 33:57)

So, next we will have some properties of addition and scalar multiplication of matrices, which are very useful. The first property is A + B = B + A; that is, matrix addition is commutative. Here A and B are any matrices over the same field F and of the same size.

We also have that (A + B) + C = A + (B + C); that is the associative property. Here also the matrices A, B, C are over the same field F and of the same size. Third, if we add the zero matrix, A + 0 = A for every matrix A; of course, A and 0 are of the same size. The fourth property is that we can also take the negative of a matrix: A + (-A) = 0, where A = [a_ij] implies -A = [-a_ij].

(Refer Slide Time: 37:24)

So, next we will have some properties of scalar multiplication. The fifth property: elements of the field F are called scalars, and for scalars alpha, beta in F and a matrix A over F, (alpha + beta) A = alpha A + beta A. The sixth: for alpha in F and matrices A and B over F of the same size, alpha (A + B) = alpha A + alpha B; in other words, scalar multiplication is distributive over matrix addition.

Also, we have this: for scalars alpha, beta in F and a matrix A over F, alpha (beta A) = (alpha beta) A. So, next we will see the multiplication of two matrices, also called the product of matrices. Matrix multiplication is defined like this: if A = [a_ij] is a matrix of size m by n and B = [b_ij] is a matrix of size n by p, both over a field F, then their multiplication, or product, is the matrix

(Refer Slide Time: 40:49)

C with entries c_ij, and it will be of size m by p. The entries are given by c_ij = sum over k from 1 to n of a_ik b_kj.
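This defining sum can be written out directly as a triple loop. The sketch below (a made-up 2 by 3 times 3 by 2 example, not from the lecture) computes the product entry by entry and compares it with NumPy's built-in product.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # size m x n = 2 x 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])         # size n x p = 3 x 2

m, n = A.shape
p = B.shape[1]
C = np.zeros((m, p), dtype=int)  # product has size m x p
for i in range(m):
    for j in range(p):
        # c_ij = sum over k = 1..n of a_ik * b_kj
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))

print(C)                         # [[ 58  64], [139 154]]
print(np.array_equal(C, A @ B))  # True
```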
Next, we will see some properties of matrix multiplication.
So, one important property of matrix multiplication is that it need not be commutative; that is, there exist matrices A and B such that A B is not equal to B A. However, matrix multiplication is associative: for matrices A = [a_ij] of size m by n, B of size n by p, and C of size p by q, we have (A B) C = A (B C). That is associativity.
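A quick numerical illustration of both statements (the matrices are chosen arbitrarily): A B differs from B A, while (A B) C equals A (B C).

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
C = np.array([[2, 0],
              [1, 1]])

print(np.array_equal(A @ B, B @ A))              # False: AB != BA in general
print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True: multiplication is associative
```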
Also, matrix multiplication is distributive over addition. For matrices of suitable sizes we have A · (B + C) = A · B + A · C; this property is called left distributivity. Similarly, the right distributive property also holds for matrix multiplication.

(Refer Slide Time: 44:29)

So, that is, for matrices of suitable sizes, (A + B) · C = A · C + B · C. This is called right distributivity. Then we have some special matrices; but before that, we will see the transpose of a matrix, which is very useful throughout linear algebra. If A = [a_ij] is a matrix of size m by n over F, the transpose of A, denoted by A^T, is the matrix A^T = [a_ji] of size n by m. In other words, we get the transpose of A by writing the rows of A as the columns of A^T, in order.
So, we can look at the properties of the transpose of a matrix. The first property is like this: for any matrix A, if we take its double transpose, that is, the transpose of A^T, we get the same matrix back; (A^T)^T = A. The second property is: for matrices A and B over F of the same size, we have (A + B)^T = A^T + B^T.

(Refer Slide Time: 48:32)

So, another property we have is this: for matrices A and B over F of sizes m by n and n by p respectively, the transpose of the product of A and B is equal to the product of the transpose of B and the transpose of A; that is, (A B)^T = B^T A^T. So, the product will be in reverse order.
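All three transpose properties can be checked numerically. In the sketch below (the example matrices are assumptions for illustration) A is 2 by 3 and B is 3 by 2, so both A B and B^T A^T are defined.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])      # 3 x 2
D = np.array([[9, 8, 7],
              [6, 5, 4]])   # same size as A

print(np.array_equal(A.T.T, A))              # (A^T)^T = A
print(np.array_equal((A + D).T, A.T + D.T))  # (A + D)^T = A^T + D^T
print(np.array_equal((A @ B).T, B.T @ A.T))  # (AB)^T = B^T A^T, reverse order
```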
So, next we will see some special matrices. The first ones are symmetric matrices; they are also a very important type of matrix. A matrix A, basically a square matrix of size n by n, is called symmetric if A^T = A; or, in other words, a_ij = a_ji for all i and j. Here the entries a_ij can be elements of any field, of course. Some people consider the entries of a symmetric matrix to be real numbers, but in general we can take the entries to be elements of an arbitrary field.
Then, we say that a matrix A is skew-symmetric. A matrix A with entries a_ij (this is also a square matrix) is called skew-symmetric if A^T = -A; that is, it satisfies the condition a_ij = -a_ji for all i and j.
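Here is a small sketch (the example matrices are assumptions for illustration) checking the two conditions A^T = A and A^T = -A entrywise with NumPy.

```python
import numpy as np

S = np.array([[1, 2, 3],
              [2, 5, 6],
              [3, 6, 9]])        # a_ij = a_ji

K = np.array([[ 0,  2, -1],
              [-2,  0,  4],
              [ 1, -4,  0]])     # a_ij = -a_ji, so the diagonal is 0

print(np.array_equal(S.T, S))    # True: S is symmetric
print(np.array_equal(K.T, -K))   # True: K is skew-symmetric
```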

(Refer Slide Time: 52:45)

So, another important kind of matrix is the Hermitian matrix, and these are defined over the complex field only. So, let A be a square matrix over the complex field C. Then A is called Hermitian if the conjugate transpose of A is equal to A; here the conjugate of A is the matrix whose entries are the complex conjugates of the entries a_ij.
Here, we also define skew-Hermitian matrices. We say that a complex matrix A with entries a_ij (of course, this is a square matrix) is called skew-Hermitian if the conjugate transpose of A is equal to minus A.

(Refer Slide Time: 55:53)

So, quickly we can state some properties of these special matrices. The first property is that symmetric and Hermitian matrices agree over the real field. The second property is that for Hermitian matrices all diagonal entries are real, and for skew-Hermitian matrices the diagonal entries are either 0 or purely imaginary. So, next we will discuss further.
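Both definitions and these diagonal-entry properties can be checked on small complex examples (the matrices below are illustrative assumptions); in NumPy the conjugate transpose of A is A.conj().T.

```python
import numpy as np

H = np.array([[2.0,      1 + 1j],
              [1 - 1j,   3.0   ]])      # Hermitian: conj(A)^T = A
K = np.array([[1j,       2 + 1j],
              [-2 + 1j,  0.0   ]])      # skew-Hermitian: conj(A)^T = -A

print(np.array_equal(H.conj().T, H))    # True
print(np.array_equal(K.conj().T, -K))   # True

print(np.diag(H))   # [2.+0.j  3.+0.j]  -> diagonal entries are real
print(np.diag(K))   # [0.+1.j  0.+0.j]  -> diagonal entries are 0 or purely imaginary
```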
