Numerical Methods for Engineers
EE0903301
Lecture 14: Matrix Inverse and Condition
Dr. Yanal Faouri
The Matrix Inverse

• If a matrix [A] is square and nonsingular (det ≠ 0), there is another matrix [A]−1, called the inverse of [A], for
which [A][A]−1 = [A]−1[A] = [I].

• Calculating the Inverse


• The inverse can be computed in a column-by-column fashion by generating solutions with unit vectors as the right-hand-side constants, i.e., {b}T = [1 0 0], {b}T = [0 1 0], and {b}T = [0 0 1].

• The best way to implement such a calculation is with LU factorization. Recall that one of the great strengths of LU
factorization is that it provides a very efficient means to evaluate multiple right-hand-side vectors. Thus, it is
ideal for evaluating the multiple unit vectors needed to compute the inverse.
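• As a rough sketch of this procedure in MATLAB (the matrix below is an arbitrary nonsingular example, not one from the lecture):

% Column-by-column inverse via LU factorization (sketch with an arbitrary matrix)
A = [10 2 -1; -3 -6 2; 1 1 5];      % any nonsingular matrix
n = size(A, 1);
[L, U, P] = lu(A);                  % MATLAB's lu returns P*A = L*U
Ainv = zeros(n);
for j = 1:n
    b = zeros(n, 1);
    b(j) = 1;                       % unit vector with 1 in row j
    d = L \ (P * b);                % forward substitution: [L]{d} = {b}
    Ainv(:, j) = U \ d;             % back substitution:    [U]{x} = {d}
end
% Ainv now agrees with inv(A) to within roundoff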
Example: Matrix Inversion

• Problem Statement.
• Employ LU factorization to determine the matrix inverse for the following matrix:

• [A] has an LU factorization as:


Example: Matrix Inversion

• Solution. The first column of the matrix inverse can be determined by performing the forward-substitution
solution procedure with a unit vector (with 1 in the first row) as the right-hand-side vector. Thus, the lower
triangular system can be set up as in [L]{d} = {b}:

• This vector can then be used as the right-hand side of the upper triangular system [U]{x} = {d}:

• which can be solved by back substitution for {x}T = ⌊0.33249 −0.00518 −0.01008⌋, which is the first column of the
matrix inverse:
Example: Matrix Inversion

• To determine the second column of [A]−1, the vector {b} is rewritten with 1 in the second row; [L]{d} = {b} is then solved for {d}, and [U]{x} = {d} for {x}. The values of {x} form the second column of [A]−1:

• Repeat with {b}T = [0 0 1]:


Stimulus-Response Computations

• For balance equations, the terms of [A]{x} = {b} have a definite physical interpretation: {x} represents the system’s state or response, which is the quantity to be determined.
• The right-hand-side vector {b} contains those elements of the balance that are independent of behavior of the
system—that is, they are constants. In many problems, they represent the forcing functions or external stimuli
that drive the system.
• Finally, the matrix of coefficients [A] usually contains the parameters that express how the parts of the system
interact or are coupled. Consequently, Eq. [A]{x} = {b} might be re-expressed as:
[Interactions]{response} = {stimuli}
• There are a variety of ways to solve [A]{x} = {b}. One of them is to use the matrix inverse, in which case the formal solution can be expressed as {x} = [A]−1{b}.
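• As a minimal sketch (with a hypothetical 2 × 2 system, since no specific numbers are given here), the formal solution via the inverse and the usual direct solution look like:

>> A = [4 1; 1 3];        % hypothetical interaction matrix
>> b = [1; 2];            % hypothetical stimuli
>> x = inv(A) * b         % formal solution {x} = [A]^-1 {b}
>> x = A \ b              % same result; in practice the backslash is preferred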
Stimulus-Response Computations

• Thus, we find that the inverted matrix itself, aside from providing a solution, has extremely useful properties.
That is, each of its elements represents the response of a single part of the system to a unit stimulus of any other
part of the system.
• Notice that these formulations are linear and, therefore, superposition and proportionality hold.
• Superposition means that if a system is subject to several different stimuli (the b’s), the responses can be
computed individually, and the results summed to obtain a total response.
• Proportionality means that multiplying the stimuli by a quantity results in the response to those stimuli being
multiplied by the same quantity. Thus, the coefficient a11−1 is a proportionality constant that gives the value of x1
due to a unit level of b1. This result is independent of the effects of b2 and b3 on x1, which are reflected in the
coefficients a12−1 and a13−1 , respectively. Therefore, we can draw the general conclusion that the element aij−1 of
the inverted matrix represents the value of xi due to a unit quantity of bj.
• Using the example of the structure, element aij−1 of the matrix inverse would represent the force in member i
due to a unit external force at node j. Even for small systems, such behavior of individual stimulus-response
interactions would not be intuitively obvious. As such, the matrix inverse provides a powerful technique for
understanding the interrelationships of component parts of complicated systems.
Example: Analyzing the Bungee Jumper Problem

• Problem Statement.
• Recall the example of three individuals suspended vertically and connected by bungee cords. A system of linear algebraic equations was derived based on force balances for each jumper:

• Use MATLAB to compute the matrix inverse and interpret what it means.
Example: Analyzing the Bungee Jumper Problem

• Solution. Start up MATLAB and enter the coefficient matrix:


>> K = [150 -100 0; -100 150 -50; 0 -50 50];
• The inverse can then be computed as
>> KI = inv(K)
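• The slide’s output is not reproduced in this text; recomputing inv(K) for the K entered above gives the values assumed in the interpretation that follows:

➔ KI =
     0.0200    0.0200    0.0200
     0.0200    0.0300    0.0300
     0.0200    0.0300    0.0500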

• Each element kij−1 of the inverted matrix represents the vertical change in position (in meters) of jumper i due to a unit change in force (in newtons) applied to jumper j.
• First, observe that the numbers in the first column (j = 1) indicate that the position of all three jumpers would
increase by 0.02 m if the force on the first jumper was increased by 1 N. This makes sense, because the
additional force would only extend the first cord by that amount.
Example: Analyzing the Bungee Jumper Problem

• In contrast, the numbers in the second column (j = 2) indicate that applying a force of 1 N to the second jumper
would move the first jumper down by 0.02 m, but the second and third by 0.03 m. The 0.02-m extension of the
first jumper makes sense because the first cord is subject to an extra 1 N regardless of whether the force is
applied to the first or second jumper.
• However, for the second jumper the elongation is now 0.03 m because along with the first cord, the second cord
also elongates due to the additional force. And of course, the third jumper shows the identical translation as the
second jumper as there is no additional force on the third cord that connects them.
• As expected, the third column (j = 3) indicates that applying a force of 1 N to the third jumper results in the first
and second jumpers moving the same distances as occurred when the force was applied to the second jumper.
However, now because of the additional elongation of the third cord, the third jumper is moved farther
downward.
• Superposition and proportionality can be demonstrated by using the inverse to determine how much farther the
third jumper would move downward if additional forces of 10, 50, and 20 N were applied to the first, second,
and third jumpers, respectively. This can be done simply by using the appropriate elements of the third row of
the inverse to compute,
Δx3 = k31−1 ΔF1 + k32−1 ΔF2 + k33−1 ΔF3 = 0.02(10) + 0.03(50) + 0.05(20) = 2.7 m.
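• This calculation can be checked directly against the third row of the inverse computed above:

>> dF = [10; 50; 20];          % additional forces on jumpers 1, 2, and 3 (N)
>> dx3 = KI(3, :) * dF
➔ dx3 = 2.7000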
Error Analysis and System Condition

• Aside from its engineering and scientific applications, the inverse also provides a means to distinguish whether
systems are ill-conditioned. Three direct methods can be devised for this purpose:
1. Scale the matrix of coefficients [A] so that the largest element in each row is 1. Invert the scaled matrix and if
there are elements of [A]−1 that are several orders of magnitude greater than one, it is likely that the system is ill-
conditioned.
2. Multiply the inverse by the original coefficient matrix and assess whether the result is close to the identity
matrix. If not, it indicates ill-conditioning.
3. Invert the inverted matrix and assess whether the result is sufficiently close to the original coefficient matrix. If
not, it again indicates that the system is ill-conditioned.
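• A minimal MATLAB sketch of these three checks, using a hypothetical nearly singular matrix for illustration:

% Three direct checks for ill-conditioning (hypothetical 2x2 example)
A = [1 2; 2 4.0001];                        % nearly singular, so clearly ill-conditioned
As = diag(1 ./ max(abs(A), [], 2)) * A;     % 1. scale each row so its largest element is 1
max(max(abs(inv(As))))                      %    inverse elements far larger than 1 -> suspect
norm(A * inv(A) - eye(2))                   % 2. [A][A]^-1 should be close to [I]
norm(inv(inv(A)) - A)                       % 3. re-inverting the inverse should recover [A]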
• Although these methods can indicate ill-conditioning, it would be preferable to obtain a single number that
could serve as an indicator of the problem.
• Attempts to formulate such a matrix condition number are based on the mathematical concept of the norm.
Vector and Matrix Norms

• A norm is a real-valued function that provides a measure of the size or “length” of multicomponent mathematical entities such as vectors and matrices.
• A simple example is a vector in three-dimensional Euclidean space that can be
represented as:
⌊F⌋ = ⌊a b c⌋
• Where a, b, and c are the distances along the x, y, and z axes, respectively. The length of this vector (the distance from the coordinate (0, 0, 0) to (a, b, c)) can be simply computed as:
∥F∥e = √(a² + b² + c²)
• where the nomenclature ∥F∥e indicates that this length is referred to as the Euclidean
norm of [F].
• Similarly, for an n-dimensional vector ⌊X⌋ = ⌊x1 x2 … xn⌋, a Euclidean norm would be computed as:
∥X∥e = √(x1² + x2² + ⋯ + xn²)
Vector and Matrix Norms

• The concept can be extended further to a matrix [A], as in:
∥A∥f = √( Σi=1..n Σj=1..n aij² )
• Which is given a special name, the Frobenius norm. It provides a single value to quantify the “size” of [A].
• It should be noted that there are alternatives to the Euclidean and Frobenius norms. For vectors, there are alternatives called p norms that can be represented generally by:
∥X∥p = ( Σi=1..n |xi|^p )^(1/p)
• It can be seen that the Euclidean norm and the 2 norm, ∥X∥2, are identical for vectors.
• Another important example is the 1 norm (p = 1):
∥X∥1 = Σi=1..n |xi|
• Which represents the norm as the sum of the absolute values of the elements.
Vector and Matrix Norms

• Another is the maximum-magnitude or uniform-vector norm (p = ∞):
∥X∥∞ = max 1≤i≤n |xi|
• Which defines the norm as the element with the largest absolute value.
• Using a similar approach, norms can be developed for matrices. For example,
∥A∥1 = max 1≤j≤n Σi=1..n |aij|
• That is, a summation of the absolute values of the coefficients is performed for each column, and the largest of these summations is taken as the norm. This is called the column-sum norm.
• A similar determination can be made for the rows, resulting in a uniform-matrix or row-sum norm:
∥A∥∞ = max 1≤i≤n Σj=1..n |aij|
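• A quick MATLAB check of these two matrix norms against the built-in norm function, using a small hypothetical matrix:

% Column-sum and row-sum norms by hand vs. MATLAB's norm (hypothetical matrix)
A = [1 -2; 3 4];
max(sum(abs(A), 1))     % column-sum norm: 6, same as norm(A, 1)
max(sum(abs(A), 2))     % row-sum norm:    7, same as norm(A, inf)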
Vector and Matrix Norms

• It should be noted that, in contrast to vectors, the 2 norm and the Frobenius norm for a matrix are not the same, since the matrix 2 norm ∥A∥2 is calculated as:
∥A∥2 = (μmax)^(1/2)

• Where μmax is the largest eigenvalue of [A]T[A] while the Frobenius norm is calculated from the ∥A∥f
equation.
• For the time being, the important point is that the ∥A∥2, or spectral norm, is the minimum norm and,
therefore, provides the tightest measure of size.
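• A short sketch of the spectral norm computed from the eigenvalues of [A]T[A], compared with MATLAB’s built-in norms (same hypothetical matrix as above):

% Spectral norm from the largest eigenvalue of [A]'*[A]
A = [1 -2; 3 4];                % hypothetical matrix
mu = eig(A' * A);               % eigenvalues of [A]^T [A]
sqrt(max(mu))                   % ||A||_2 = (mu_max)^(1/2), about 5.12 here
norm(A, 2)                      % MATLAB's 2-norm, should agree
norm(A, 'fro')                  % Frobenius norm, about 5.48, larger as expected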
Matrix Condition Number

• The concept of the norm can be used to define:
Cond[A] = ∥A∥ · ∥A−1∥
• Where Cond[A] is called the matrix condition number.
• Note that for a matrix [A], this number will be greater than or equal to 1.
• It can be shown that:
∥ΔX∥ / ∥X∥ ≤ Cond[A] · ∥ΔA∥ / ∥A∥
➔ The relative error of the norm of the computed solution can be as large as the relative error of the norm of the
coefficients of [A] multiplied by the condition number.
• For example, if the coefficients of [A] are known to t-digit precision (i.e., rounding errors are on the order of 10−t) and Cond[A] = 10c, the solution {x} may be valid to only t − c digits (rounding errors ≈ 10c−t).
Example: Matrix Condition Evaluation

• Problem Statement.
• The Hilbert matrix, which is notoriously ill-conditioned, can be represented generally as:

• Use the row-sum norm to estimate the matrix condition number for the 3 × 3 Hilbert matrix:
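• The matrices are not reproduced in this text. In MATLAB, the 3 × 3 Hilbert matrix and the row-scaled form used in the solution below can be generated as:

>> H = hilb(3)                              % [1 1/2 1/3; 1/2 1/3 1/4; 1/3 1/4 1/5]
>> A = diag(1 ./ max(abs(H), [], 2)) * H    % scale each row so its largest element is 1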
Norms and Condition Number in MATLAB

• MATLAB has built-in functions to compute both norms and condition numbers:
>> norm (X,p)
• and
>> cond (X,p)
• Where X is the vector or matrix and p designates the type of norm or condition
number (1, 2, inf, or 'fro'). Note that the cond function is equivalent to:
>> norm (X,p) * norm (inv(X),p)
• Also, note that if p is omitted, it is automatically set to 2.
Example: Matrix Condition Evaluation

• Solution. First, the matrix can be normalized so that the maximum element in each row is 1:
[A] = [1 1/2 1/3; 1 2/3 1/2; 1 3/4 3/5]
• Summing the absolute values of the elements in each row gives 1.833, 2.1667, and 2.35. Thus, the third row has the largest sum, and the row-sum norm is:
∥A∥∞ = 2.35
• The inverse of the scaled matrix can be computed as:
[A]−1 = [9 −18 10; −36 96 −60; 30 −90 60]
• Summing the absolute values of the elements in each row of the inverse gives 37, 192, and 180, so ∥A−1∥∞ = 192 and
➔ Cond[A] = 2.35(192) = 451.2
• The fact that the condition number is much greater than unity suggests that the system is ill-conditioned. The extent of the ill-conditioning can be quantified by calculating c = log 451.2 = 2.65. Hence, the last three significant digits of the solution could exhibit rounding errors.
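• A short MATLAB check of the hand calculation above (the scaled matrix is the one entered in the next example):

>> A = [1 1/2 1/3; 1 2/3 1/2; 1 3/4 3/5];
>> Ainv = inv(A)                           % approximately [9 -18 10; -36 96 -60; 30 -90 60]
>> norm(Ainv, inf)                         % row-sum norm of the inverse, 192
>> log10(norm(A, inf) * norm(Ainv, inf))   % c = log10(451.2), about 2.65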
Example: Matrix Condition Evaluation with MATLAB
• Problem Statement.
• Use MATLAB to evaluate both the norms and condition numbers for the scaled Hilbert matrix described in the
example:

• (a) First compute the row-sum versions (p = inf).
• (b) Also compute the Frobenius (p = 'fro') and the spectral (p = 2) condition numbers.
• Solution:
• (a) First, enter the matrix:
>> A = [1 1/2 1/3;1 2/3 1/2;1 3/4 3/5];
• Then, the row-sum norm and condition number can be computed as:
Example: Matrix Condition Evaluation with MATLAB
>> norm (A,inf)
➔ ans = 2.3500
>> cond (A,inf)
➔ ans = 451.2000

• (b) The condition numbers based on the Frobenius and spectral norms are
>> cond(A,'fro')
➔ ans = 368.0866
>> cond(A)
➔ ans = 366.3503
