Iterative Methods for Solving Ax = b - Jacobi's Method
The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along its main diagonal
(Bronshtein and Semendyayev 1997, p. 892). Each diagonal element is solved for, and an approximate value
is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi
transformation method of matrix diagonalization.
The Jacobi method is easily derived by examining each of the equations in the linear system $Ax = b$
in isolation. If, in the $i$th equation

$$\sum_{j=1}^{n} a_{ij} x_j = b_i, \qquad (1)$$

we solve for the value of $x_i$ while assuming the other entries of $x$ remain fixed, we obtain

$$x_i^{(k)} = \frac{b_i - \sum_{j \neq i} a_{ij} x_j^{(k-1)}}{a_{ii}}. \qquad (2)$$
In this method, the order in which the equations are examined is irrelevant, since the Jacobi method treats them
independently. The definition of the Jacobi method can be expressed with matrices as

$$x^{(k)} = D^{-1}\bigl(b - (L + U)\,x^{(k-1)}\bigr), \qquad (3)$$

where the matrices $D$, $L$, and $U$ represent the diagonal, strictly lower triangular, and strictly upper triangular parts
of $A$, respectively.
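To make the matrix form (3) concrete, here is a minimal Python sketch; the function name, tolerance, and iteration cap are illustrative choices, not part of the original text:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    """Iterate x(k) = D^{-1} (b - (L + U) x(k-1)), as in equation (3)."""
    d = np.diag(A)                # diagonal of A; must contain no zeros
    LU = A - np.diag(d)           # strictly lower plus strictly upper parts
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - LU @ x) / d  # applying D^{-1} is an elementwise division
        if np.linalg.norm(x_new - x) < tol:  # stop when successive iterates agree
            return x_new
        x = x_new
    return x                      # may not have converged within max_iter
```

The stopping test is one common choice among several; recall that the iteration is guaranteed to converge from any initial guess when $A$ is strictly diagonally dominant, but not in general.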
Author(s):
David M. Strong
Perhaps the simplest iterative method for solving Ax = b is Jacobi’s Method. Note that the simplicity of
this method is both good and bad: good, because it is relatively easy to understand and thus is a good
first taste of iterative methods; bad, because it is not typically used in practice (although its potential
usefulness has been reconsidered with the advent of parallel computing). Still, it is a good starting point
for learning about more useful, but more complicated, iterative methods.
Given a current approximation

$$x^{(k)} = \bigl(x_1^{(k)}, x_2^{(k)}, \ldots, x_n^{(k)}\bigr)$$

for the solution $x$, the strategy of Jacobi's Method is to solve the $i$th equation for the $i$th unknown, using the current values of the other unknowns, so that

$$x_i^{(k+1)} = \frac{b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)}}{a_{ii}}, \qquad i = 1, 2, \ldots, n.$$
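A minimal componentwise sketch of this update in plain Python (the function name and structure are our own illustrative choices):

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: solve the i-th equation for the i-th unknown,
    holding all other unknowns at their current values x[j]."""
    n = len(b)
    x_new = [0.0] * n
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x_new[i] = (b[i] - s) / A[i][i]   # requires A[i][i] != 0
    return x_new
```

Because each x_new[i] depends only on the previous iterate x, the n updates are independent of one another, which is exactly the property that makes the method attractive for parallel computing.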
Example 1
Let's apply Jacobi's Method to the system

$$\begin{aligned} 4x_1 - x_2 - x_3 &= 3,\\ -2x_1 + 6x_2 + x_3 &= 9,\\ -x_1 + x_2 + 7x_3 &= -6. \end{aligned}$$

Solving the first equation for $x_1$, the second for $x_2$, and the third for $x_3$ gives

$$x_1 = \tfrac{1}{4}(3 + x_2 + x_3), \qquad x_2 = \tfrac{1}{6}(9 + 2x_1 - x_3), \qquad x_3 = \tfrac{1}{7}(-6 + x_1 - x_2).$$
So, if our initial guess x(0) = (x1(0), x2(0), x3(0)) is the zero vector 0 = (0, 0, 0) (a common initial guess unless
we have some additional information that leads us to choose another), then we find x(1) =
(x1(1), x2(1), x3(1)) by solving

$$x_1^{(1)} = \tfrac{1}{4}\bigl(3 + x_2^{(0)} + x_3^{(0)}\bigr) = \tfrac{3}{4}, \qquad x_2^{(1)} = \tfrac{1}{6}\bigl(9 + 2x_1^{(0)} - x_3^{(0)}\bigr) = \tfrac{9}{6}, \qquad x_3^{(1)} = \tfrac{1}{7}\bigl(-6 + x_1^{(0)} - x_2^{(0)}\bigr) = -\tfrac{6}{7}.$$
So x(1) = (x1(1), x2(1), x3(1)) = (3/4, 9/6, −6/7) ≈ (0.750, 1.500, −0.857). We iterate this process to find a
sequence of increasingly better approximations x(0), x(1), x(2), … . We show the results in the table below,
with all values rounded to 3 decimal places.
We are interested in the error at each iteration, namely the difference between the true solution x and the
approximation x(k): e(k) = x − x(k). Obviously, we don't usually know the true solution x. However, to better
understand the behavior of an iterative method, it is enlightening to use the method to solve a
system Ax = b for which we do know the true solution and analyze how quickly the approximations are
converging to the true solution. For this example, the true solution is x = (1, 2, −1).
The norm of a vector ||x|| tells us how big the vector is as a whole (as opposed to how large each element
of the vector is). The vector norm most commonly used in linear algebra is the l2 norm:

$$\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.$$

In this module, we will always use the l2 norm (including for matrix norms in subsequent tutorials), so that
||·|| always signifies ||·||2.
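As a quick check against the table below, for the first error vector e(1) = (0.250, 0.500, −0.143) we get

$$\|e^{(1)}\| = \sqrt{0.250^2 + 0.500^2 + (-0.143)^2} = \sqrt{0.333} \approx 0.577.$$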
For our purposes, we observe that ||x|| will be small exactly when each of the elements x1, x2, …, xn in x =
(x1, x2, …, xn ) is small. In the following table, the norm of the error becomes progressively smaller as the
error in each of the three elements x1, x2, x3 becomes smaller, or in other words, as the approximations
become progressively better.
k | x(k) | x(k) − x(k−1) | e(k) = x − x(k) | ||e(k)||
--|------|----------------|------------------|---------
1 | (0.750, 1.500, −0.857) | (0.750, 1.500, −0.857) | (0.250, 0.500, −0.143) | 0.577
2 | (0.911, 1.893, −0.964) | (0.161, 0.393, −0.107) | (0.089, 0.107, −0.036) | 0.144
3 | (0.982, 1.964, −0.997) | (0.071, 0.071, −0.033) | (0.018, 0.036, −0.003) | 0.040
4 | (0.992, 1.994, −0.997) | (0.010, 0.029, 0.000) | (0.008, 0.006, −0.003) | 0.011
5 | (0.999, 1.997, −1.000) | (0.007, 0.003, −0.003) | (0.001, 0.003, 0.000) | 0.003
6 | (0.999, 2.000, −1.000) | (0.000, 0.003, 0.001) | (0.000, 0.001, 0.000) | 0.001
7 | (1.000, 2.000, −1.000) | (0.001, 0.000, 0.000) | (0.000, 0.000, 0.000) | 0.000
8 | (1.000, 2.000, −1.000) | (0.000, 0.000, 0.000) | (0.000, 0.000, 0.000) | 0.000
For this example, we stop iterating once all three ways of measuring the current error (the change x(k) − x(k−1)
from the previous iterate, the error e(k) = x − x(k), and its norm ||e(k)||) are zero to the three decimal places shown.
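The table can be reproduced with a short script; this is a sketch in which the variable names, and the use of NumPy's np.linalg.norm (the l2 norm by default), are our own choices:

```python
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  6.0,  1.0],
              [-1.0,  1.0,  7.0]])
b = np.array([3.0, 9.0, -6.0])
x_true = np.array([1.0, 2.0, -1.0])   # known true solution

x = np.zeros(3)                        # initial guess x(0) = 0
d = np.diag(A)
LU = A - np.diag(d)
for k in range(1, 9):
    x_new = (b - LU @ x) / d           # one Jacobi iteration
    diff = x_new - x                   # x(k) - x(k-1)
    err = x_true - x_new               # e(k) = x - x(k)
    print(k, np.round(x_new, 3), np.round(diff, 3),
          np.round(err, 3), round(float(np.linalg.norm(err)), 3))
    x = x_new
```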