Inverse of a Matrix
Consider a system of m linear equations in n variables with m × n coefficient matrix A, and let X denote the n × 1 column vector of the variables. Then
$$AX = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \ldots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \ldots + a_{mn}x_n \end{bmatrix}$$
and we see that AX is a column vector whose entries are the left hand sides
of the equations. Thus, the matrix equation AX = B simply says that the
ith entry of column vector AX must be equal to the ith entry of column vec-
tor B, i.e., that for each equation in the system, the left hand side of the
equation must equal the right hand side of the equation.
We see that AX = B is a matrix representation of the SLE. The aug-
mented matrix (A|B), i.e., the coefficient matrix for the system with the
column vector of right hand side values appended as an extra column, is just
a form of short-hand for this matrix equation AX = B.
Because an n × 1 column vector can also be written as an n-vector, we
often give column vectors names in vector notation, rather than in matrix
notation. That is, rather than talking about the column vectors X and B,
we can refer to these as the vectors ~x and ~b. Thus, we usually write the
matrix form of an SLE as A~x = ~b, rather than as AX = B. This is the
convention which we will use from now on. However, you should bear in
mind that when we write A~x = ~b for a system of m equations in n variables,
since A is (as always) the m × n coefficient matrix, ~x, the vector of variables,
is used here as an n × 1 column vector, and ~b, the vector of right hand side
values, is actually an m × 1 column vector, so that this equation makes sense
mathematically.
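To make the matrix form concrete, here is a minimal NumPy sketch (the small system, its candidate solution, and all names are illustrative, not taken from the notes) showing that A~x reproduces the left hand sides of the equations, and how the augmented matrix (A|~b) is formed:

    import numpy as np

    # An illustrative system of two equations in two variables:
    #     x1 + 2*x2 = 5
    #   3*x1 + 4*x2 = 11
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])        # m x n coefficient matrix
    b = np.array([5.0, 11.0])         # right hand side values

    x = np.array([1.0, 2.0])          # a candidate solution vector

    # A @ x has the left hand side of each equation as its entries,
    # so the system holds exactly when A @ x equals b entry by entry.
    print(A @ x)                      # [ 5. 11.]
    print(np.allclose(A @ x, b))      # True

    # The augmented matrix (A|b): b appended as an extra column.
    print(np.column_stack([A, b]))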
Definition: If A is a square matrix and there exists a matrix B such that
$$AB = BA = I$$
then we say that A is invertible (or nonsingular) and that B is the inverse of A, written B = A^{-1}. If A has no inverse (i.e., if no such matrix B exists), then A is said to be noninvertible (or singular).
Example 1. For $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}$, show that the matrix B is the inverse of matrix A.

Solution: We compute both products AB and BA:
$$AB = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix} = \begin{bmatrix} (1)(-2)+(2)(3/2) & (1)(1)+(2)(-1/2) \\ (3)(-2)+(4)(3/2) & (3)(1)+(4)(-1/2) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I$$
$$BA = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} (-2)(1)+(1)(3) & (-2)(2)+(1)(4) \\ (3/2)(1)+(-1/2)(3) & (3/2)(2)+(-1/2)(4) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I$$
Since AB = BA = I, B is indeed the inverse of A, i.e., B = A^{-1}.
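This arithmetic is easy to verify numerically. Here is a quick NumPy check of Example 1 (a sketch assuming NumPy is available; 3/2 and −1/2 are entered as decimals):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[-2.0,  1.0],
                  [ 1.5, -0.5]])      # the claimed inverse of A

    I = np.eye(2)
    # Both products should be the 2 x 2 identity matrix.
    print(np.allclose(A @ B, I))      # True
    print(np.allclose(B @ A, I))      # True
    # NumPy's built-in inverse agrees with B:
    print(np.allclose(np.linalg.inv(A), B))  # True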
Theorem: Any invertible matrix has a unique inverse.

Proof: Suppose that a square matrix A has two inverses, say B and C. We
show that it must be true that B = C.
Let n be the order of the square matrix A. Then B and C must also be
square matrices of order n. Of course, the n × n matrix B can be multiplied
by the identity matrix of order n. This matrix multiplication leaves the
matrix unchanged. That is, we have:
$$B = BI = B(AC) = (BA)C = IC = C,$$
where we used I = AC (since C is an inverse of A), the associativity of matrix multiplication, and BA = I (since B is an inverse of A),
and we see that, in fact, B = C. That is, we see that in order for B and C
to both be inverses of A, they must be the same matrix, so any invertible
matrix has a unique inverse.
What this theorem tells us is that, for instance, we only really needed to compute one of AB or BA in Example 1 to prove that B is the inverse of A.
To find the inverse of a square matrix A, we augment A with the identity matrix I of the same order, rather than with a single column of right hand side values. The procedure of transforming the augmented matrix to RREF is not affected by this change, though.

Example 2. Find the inverse, if it exists, of each of the following matrices:
$$\text{(a) } A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 2 & 1 & 2 \end{bmatrix} \qquad \text{(b) } A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 2 & 3 & 4 \end{bmatrix}$$

Solution:
(a) We wish to find the inverse of a 3 × 3 matrix, so we start by forming
the augmented matrix obtained by appending the 3 × 3 identity matrix. We
then row reduce this augmented matrix to obtain RREF.
$$[A|I] = \left[\begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 2 & 3 & 0 & 1 & 0 \\ 2 & 1 & 2 & 0 & 0 & 1 \end{array}\right] \xrightarrow[R3 \to R3 - 2R1]{R2 \to R2 - R1} \left[\begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 2 & -1 & 1 & 0 \\ 0 & -1 & 0 & -2 & 0 & 1 \end{array}\right]$$
$$\xrightarrow[R3 \to R3 + R2]{R1 \to R1 - R2} \left[\begin{array}{ccc|ccc} 1 & 0 & -1 & 2 & -1 & 0 \\ 0 & 1 & 2 & -1 & 1 & 0 \\ 0 & 0 & 2 & -3 & 1 & 1 \end{array}\right] \xrightarrow{R3 \to \frac{1}{2}R3} \left[\begin{array}{ccc|ccc} 1 & 0 & -1 & 2 & -1 & 0 \\ 0 & 1 & 2 & -1 & 1 & 0 \\ 0 & 0 & 1 & -\frac{3}{2} & \frac{1}{2} & \frac{1}{2} \end{array}\right]$$
$$\xrightarrow[R2 \to R2 - 2R3]{R1 \to R1 + R3} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\ 0 & 1 & 0 & 2 & 0 & -1 \\ 0 & 0 & 1 & -\frac{3}{2} & \frac{1}{2} & \frac{1}{2} \end{array}\right]$$
This matrix is now in RREF. We see that the original matrix (i.e. the left
side of the augmented matrix) has been transformed into the 3 × 3 identity
matrix. This tells us that A is invertible and that the columns on the right
side of the RREF augmented matrix are the columns of A^{-1}. Thus we see that
$$A^{-1} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\ 2 & 0 & -1 \\ -\frac{3}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix}$$
Check:
$$AA^{-1} = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 2 & 1 & 2 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\ 2 & 0 & -1 \\ -\frac{3}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} + 2 - \frac{3}{2} & -\frac{1}{2} + 0 + \frac{1}{2} & \frac{1}{2} - 1 + \frac{1}{2} \\ \frac{1}{2} + 4 - \frac{9}{2} & -\frac{1}{2} + 0 + \frac{3}{2} & \frac{1}{2} - 2 + \frac{3}{2} \\ 1 + 2 - 3 & -1 + 0 + 1 & 1 - 1 + 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
(b) Again we form the augmented matrix [A|I] and row reduce:
$$[A|I] = \left[\begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 2 & 3 & 0 & 1 & 0 \\ 2 & 3 & 4 & 0 & 0 & 1 \end{array}\right] \xrightarrow{\text{RREF}} \left[\begin{array}{ccc|ccc} 1 & 0 & -1 & 0 & -3 & 2 \\ 0 & 1 & 2 & 0 & 2 & -1 \\ 0 & 0 & 0 & 1 & 1 & -1 \end{array}\right]$$
Since the matrix on the left is not an identity matrix, we see that the matrix
A is singular, i.e. has no inverse.
Notice: During the process of row reducing the matrix, as soon as the bottom
row becomes all 0’s in the left part of the augmented matrix, we can tell that
the original matrix is not going to be transformed into an identity matrix.
Therefore we could stop at that point. There is no need to continue reducing
the matrix to RREF.
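The procedure used in Example 2 is mechanical enough to implement directly. The following Python sketch (a from-scratch illustration written for these notes, not a library routine) row reduces [A|I] and, as suggested in the Notice above, reports failure as soon as no pivot can be found in a column:

    import numpy as np

    def gauss_jordan_inverse(A, tol=1e-12):
        """Return the inverse of a square matrix A by row reducing
        [A|I] to RREF, or None if A is singular."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        M = np.hstack([A, np.eye(n)])    # the augmented matrix [A|I]
        for col in range(n):
            # Find the row at or below 'col' with the largest pivot entry.
            pivot = max(range(col, n), key=lambda r: abs(M[r, col]))
            if abs(M[pivot, col]) < tol:
                return None              # no pivot here: A is singular
            M[[col, pivot]] = M[[pivot, col]]   # swap rows
            M[col] /= M[col, col]                # scale pivot row to 1
            for r in range(n):
                if r != col:
                    M[r] -= M[r, col] * M[col]   # clear the column
        return M[:, n:]                  # right half of RREF is A^{-1}

    A = np.array([[1, 1, 1], [1, 2, 3], [2, 1, 2]])
    print(gauss_jordan_inverse(A))       # matches Example 2(a)

    A_singular = np.array([[1, 1, 1], [1, 2, 3], [2, 3, 4]])
    print(gauss_jordan_inverse(A_singular))  # None, as in Example 2(b)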
The Method of Inverses
Returning to the problem of solving the linear system A~x = ~b, we see
that if A is a square invertible matrix then we have
$$A\vec{x} = \vec{b} \;\Rightarrow\; A^{-1}(A\vec{x}) = A^{-1}\vec{b} \;\Rightarrow\; I\vec{x} = A^{-1}\vec{b} \;\Rightarrow\; \vec{x} = A^{-1}\vec{b}$$
Therefore, if we can find A^{-1}, we are able to solve the system simply by multiplying A^{-1} times ~b.
Example 3. Use the method of inverses to solve the system:
$$\begin{aligned} x + y + z &= 4 \\ x + 2y + 3z &= 6 \\ 2x + y + 2z &= 5 \end{aligned}$$
Solution:
The matrix form of the system is
$$A\vec{x} = \vec{b}: \quad \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 2 & 1 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 4 \\ 6 \\ 5 \end{bmatrix}$$
We saw in Example 2(a) that for this matrix A, we have
$$A^{-1} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\ 2 & 0 & -1 \\ -\frac{3}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix}$$
Therefore,
$$\vec{x} = A^{-1}\vec{b} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\ 2 & 0 & -1 \\ -\frac{3}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 4 \\ 6 \\ 5 \end{bmatrix} = \begin{bmatrix} \frac{3}{2} \\ 3 \\ -\frac{1}{2} \end{bmatrix}$$
so $\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \frac{3}{2} \\ 3 \\ -\frac{1}{2} \end{bmatrix}$, i.e., $(x, y, z) = \left(\frac{3}{2}, 3, -\frac{1}{2}\right)$ is the unique solution to this system.
Notice: For any SLE for which the coefficient matrix A is square and invertible, A^{-1} is unique and thus so is A^{-1}~b. Therefore any system for which the method of inverses can be used has a unique solution.
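As a numerical cross-check, Example 3 can be reproduced with NumPy (a sketch, assuming NumPy is available):

    import numpy as np

    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, 3.0],
                  [2.0, 1.0, 2.0]])
    b = np.array([4.0, 6.0, 5.0])

    x = np.linalg.inv(A) @ b        # the method of inverses
    print(x)                        # [ 1.5  3.  -0.5]

    # np.linalg.solve finds the same unique solution without
    # explicitly forming the inverse.
    print(np.allclose(np.linalg.solve(A, b), x))  # True

In floating-point practice np.linalg.solve is preferred over explicitly forming A^{-1}, but for a small example like this one both give the same unique solution.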
Example 4. Show that the following system has a unique solution, no matter what the values of b1, b2 and b3 are:
$$\begin{aligned} x + y + z &= b_1 \\ x + 2y + 3z &= b_2 \\ x + 2y + 4z &= b_3 \end{aligned}$$

Solution:
The coefficient matrix for this system is:
$$A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 2 & 4 \end{bmatrix}$$
Row reducing the augmented matrix [A|I], we find that A is invertible, with
$$A^{-1} = \begin{bmatrix} 2 & -2 & 1 \\ -1 & 3 & -2 \\ 0 & -1 & 1 \end{bmatrix}$$
But then, no matter what the vector ~b = (b1 , b2 , b3 ) is, we can find the unique
solution to the system as:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = A^{-1}\vec{b} = \begin{bmatrix} 2 & -2 & 1 \\ -1 & 3 & -2 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = \begin{bmatrix} 2b_1 - 2b_2 + b_3 \\ -b_1 + 3b_2 - 2b_3 \\ -b_2 + b_3 \end{bmatrix}$$
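The closed-form solution for an arbitrary right hand side can also be reproduced symbolically; the sketch below uses SymPy (an assumption: SymPy is installed), which computes the inverse exactly over the rationals:

    import sympy as sp

    b1, b2, b3 = sp.symbols('b1 b2 b3')
    A = sp.Matrix([[1, 1, 1],
                   [1, 2, 3],
                   [1, 2, 4]])
    b = sp.Matrix([b1, b2, b3])

    A_inv = A.inv()                 # exact inverse
    print(A_inv)     # Matrix([[2, -2, 1], [-1, 3, -2], [0, -1, 1]])
    print(A_inv * b)                # the solution formula above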