May 8, 2003
Finding an Eigenvector and Eigenvalue with Newton's Method for Solving Systems of Nonlinear Equations
Report
Harald Röck
hroeck@cosy.sbg.ac.at
Introduction
Let A be an n×n matrix. We wish to find a nonzero vector x and a scalar λ such that Ax = λx. Such a scalar λ is called an eigenvalue, and x is a corresponding eigenvector.

Newton's method for solving systems of nonlinear equations is a fixed point iteration algorithm for finding roots of a function. A root of a function f: R^m → R^n is a vector x ∈ R^m such that f(x) = 0. Such a root is not unique; for Newton's method the initial seed determines which root will be found. Newton's method is based on the truncated Taylor series of a differentiable function f: R^n → R^n:

    f(x + h) ≈ f(x) + J_f(x) h

where J_f(x) is the Jacobian matrix of f, {J_f(x)}_ij = ∂f_i(x)/∂x_j.

If h solves J_f(x) h = −f(x), then f(x + h) is taken as an approximate zero, and x + h is an approximation of a root of the function. It follows that we have replaced the system of nonlinear equations with a system of linear equations, which has to be solved at each iteration step.
1.1 Algorithm
Algorithm [Newton's Method]:
    x_0 = initial guess
    for k = 0, 1, 2, ...
        calculate J_f(x_k) and f(x_k)
        solve J_f(x_k) h_k = −f(x_k) for h_k
        x_{k+1} = x_k + h_k
    end
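The generic scheme above can be sketched in a few lines of Python/NumPy (a sketch following the algorithm above; the function names and the example system are mine, not part of the report's Matlab code):

```python
import numpy as np

def newton_system(f, jacobian, x0, eps=1e-3, max_steps=100):
    """Newton's method for f(x) = 0 with f: R^n -> R^n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        # solve J_f(x) h = -f(x) for the update h
        h = np.linalg.solve(jacobian(x), -f(x))
        x = x + h
        if np.linalg.norm(h, np.inf) < eps:
            break
    return x

# Example: intersect the circle x^2 + y^2 = 4 with the line y = x
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [-1.0, 1.0]])
root = newton_system(f, J, [1.0, 0.5])  # converges to (sqrt(2), sqrt(2))
```

Which of the two intersection points is found depends on the initial seed, exactly as noted above.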
Harald Röck
Eigenvalue Problems
Solution
How can we use this algorithm to calculate an eigenvalue λ and a corresponding eigenvector x of an n×n matrix A?

    A := ( a_{1,1}  a_{1,2}  ...  a_{1,n} )        x := ( x_1 )
         ( a_{2,1}  a_{2,2}  ...  a_{2,n} )             ( x_2 )
         (   ...      ...    ...    ...   )             ( ... )
         ( a_{n,1}  a_{n,2}  ...  a_{n,n} )             ( x_n )
2.1 A New Function
We introduce a new function f: R^{n+1} → R^{n+1},

    f(x, λ) := ( Ax − λx  )
               ( xᵀx − 1  )

We see that f(x, λ) = 0 if and only if Ax = λx and xᵀx = 1. This is only satisfied if x is a normalized eigenvector and λ its corresponding eigenvalue. To use Newton's method we need the Jacobian matrix J_f of f. We derive the Jacobian matrix as follows. If we write f componentwise we get

    f_1     = a_{1,1} x_1 + a_{1,2} x_2 + ... + a_{1,n} x_n − λ x_1
    f_2     = a_{2,1} x_1 + a_{2,2} x_2 + ... + a_{2,n} x_n − λ x_2
    ...
    f_n     = a_{n,1} x_1 + a_{n,2} x_2 + ... + a_{n,n} x_n − λ x_n
    f_{n+1} = x_1² + x_2² + ... + x_n² − 1

The Jacobian matrix is defined as

    J_f(x, λ) := ( ∂f_1/∂x_1      ...  ∂f_1/∂x_n      ∂f_1/∂λ     )
                 (    ...         ...     ...            ...      )
                 ( ∂f_{n+1}/∂x_1  ...  ∂f_{n+1}/∂x_n  ∂f_{n+1}/∂λ )

and evaluating the partial derivatives yields

    J_f(x, λ) = ( A − λI_n   −x )
                ( 2xᵀ         0 )
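The extended function f and its Jacobian translate directly into code. A sketch in Python/NumPy (the function names are mine; the report's own implementation is in Matlab):

```python
import numpy as np

def f_eig(A, x, lam):
    """f(x, lambda) = [A x - lambda x ; x^T x - 1], a map R^{n+1} -> R^{n+1}."""
    return np.concatenate([A @ x - lam * x, [x @ x - 1.0]])

def jacobian_eig(A, x, lam):
    """J_f(x, lambda) = [[A - lambda*I, -x], [2 x^T, 0]]."""
    n = A.shape[0]
    top = np.hstack([A - lam * np.eye(n), -x.reshape(n, 1)])
    bottom = np.concatenate([2.0 * x, [0.0]])
    return np.vstack([top, bottom])
```

For a known eigenpair, e.g. A = diag(2, 5) with λ = 2 and x = (1, 0)ᵀ, f_eig returns the zero vector, as expected from the derivation above.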
2.2 Iteration
Now we have everything we need for the Newton iteration. In the k-th iteration step we have to solve the following system of linear equations for [h_k δ_k]ᵀ:

    ( A − λ_k I_n   −x_k ) ( h_k )      ( A x_k − λ_k x_k )
    ( 2 x_kᵀ          0  ) ( δ_k )  = − ( x_kᵀ x_k − 1    )
At this point we have to talk about a stopping criterion and the initial guess for the iteration. For my implementation I used only a simple check of the size of the update vector [h_k δ_k]ᵀ as stopping criterion: the algorithm stops if ||[h_k δ_k]ᵀ||_∞ < ε, where ε can be defined by the user and defaults to ε = 0.001. It will also stop if a maximal number of iteration steps is reached; the default value for this parameter is 100, but it can be changed by the user. To start the iteration we need an initial guess for x_0 and λ_0. For x_0 we use a random vector and normalize it, and for λ_0 we use λ_0 = x_0ᵀ A x_0.
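Putting the iteration, the stopping criterion, and the initial guess together, the whole scheme can be sketched in Python/NumPy (an illustration of the method described above, not the report's actual my_eig implementation; names are mine):

```python
import numpy as np

def newton_eig(A, eps=1e-3, max_steps=100, seed=0):
    """Newton iteration for one (eigenvalue, eigenvector) pair of a real matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x = x / np.linalg.norm(x)        # random normalized start vector x_0
    lam = x @ A @ x                  # lambda_0 = x_0^T A x_0
    for _ in range(max_steps):
        J = np.vstack([np.hstack([A - lam * np.eye(n), -x.reshape(n, 1)]),
                       np.concatenate([2.0 * x, [0.0]])])
        F = np.concatenate([A @ x - lam * x, [x @ x - 1.0]])
        s = np.linalg.solve(J, -F)   # one Newton step
        x, lam = x + s[:n], lam + s[n]
        if np.linalg.norm(s, np.inf) < eps:
            break                    # stopping criterion ||s||_inf < eps
    return lam, x
```

As in the Matlab version, which eigenpair is found depends on the random start vector, and only real pairs can be reached.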
Results
Below you find some testing results.

        ( 2.9766  0.3945  0.4198  1.1159 )
    A = ( 0.3945  2.7328  0.3097  0.1129 )      λ = 4,       x = (0.7606, 0.1850, 0.3890, 0.4858)ᵀ
        ( 0.4198  0.3097  2.5675  0.6079 )
        ( 1.1159  0.1129  0.6079  1.7231 )

        ( 4    0.5  0   )
    A = ( 0.5  5    0.6 )                       λ = 3.1357,  x = (0.1498, 0.2589, 0.9542)ᵀ
        ( 0    0.6  3   )
I also tested the program with some random matrices of size 100×100 and 1000×1000 and compared the results with the built-in Matlab function eig. The result of the function my_eig is in most cases the largest real eigenvalue.
Figure 1: The graphs show the time used to calculate eigenvalues/eigenvectors for increasing matrix dimensions
Complexity
The complexity of this algorithm depends on the algorithm used for the solution of the linear equations. The Matlab profiler confirms this: for small matrices the profiler identified the construction of the different matrices as the most expensive part, but for larger systems it is the part where the linear equations are solved.

I did some timing measurements and compared the results with the built-in function of Matlab. The built-in function eig calculates all real and complex eigenvalues of a matrix. This function is surely implemented very efficiently and chooses an algorithm depending on the input matrix, i.e. it implicitly uses different algorithms for different matrices. Matlab internally uses LAPACK routines to compute eigenvalues and eigenvectors. The method my_eig is implemented only in the Matlab script language and not in binary form like the algorithm used by Matlab. It calculates only one eigenvector and eigenvalue, and this is always a real eigenvalue. If the matrix has no real eigenvalue the output is not correct, i.e. the maximal number of iterations will be reached and a warning is printed that the result may not be correct. Therefore the function eig is faster although it calculates all eigenvectors and eigenvalues. But for big matrices, i.e. matrices with more than 100×100 elements, my implementation is faster in most cases. In Figure 1 you can find the results of my timing measurements: the y-axis is the time in seconds and on the x-axis you find the matrix size. The biggest matrix was of size 2000×2000; I tested some bigger matrices, but Matlab ran out of memory.
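Such timing measurements can be repeated outside of Matlab with a minimal harness like the following (a sketch in Python/NumPy; numpy.linalg.eigvals stands in for Matlab's eig, and one linear solve approximates the dominant cost of a single Newton step):

```python
import time
import numpy as np

def time_it(fn, *args, repeats=3):
    """Best wall-clock time of fn(*args) over a few repeats."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

rng = np.random.default_rng(1)
for n in (100, 200, 400):
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t_all = time_it(np.linalg.eigvals, A)   # all eigenvalues (LAPACK)
    t_one = time_it(np.linalg.solve, A, b)  # one linear solve = one Newton step
    print(f"n={n:4d}: eigvals {t_all:.4f}s, single solve {t_one:.4f}s")
```

Taking the best of a few repeats reduces the influence of other processes on the measurement; absolute numbers of course depend on the machine and the underlying LAPACK build.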
Comments
The method for calculating one eigenvector/eigenvalue pair described above is just one way to find such a pair. This method also finds only one pair, and only a real one. But sometimes eigenvalues/eigenvectors are complex. It is not easy to find out whether a matrix has a real eigenvalue/eigenvector or not; if not, this method is not usable. We know the eigenvalues of special forms of matrices, but not of general matrices. Note that a matrix with real entries can have only complex eigenvalues/eigenvectors.

For some reason it may be necessary to know more or even all eigenvalues of a matrix. For this problem we can use this method in combination with deflation (cf. Scientific Computing, M. Heath). But this approach is less than ideal; the better way would be a simultaneous iteration, as we know it from our lecture.

The method described in this paper is also an example of how numerical analysis works. The basic general strategy for finding a solution to a problem is to replace the problem by an already known one with a similar solution. In this special case we used an algorithm for solving nonlinear equations to find an eigenvalue. But this algorithm, known as Newton's method, is just an iteration of an algorithm for solving linear equations. To solve the linear equations we also use an approximation. We see that we are using two steps of approximation:

    Eigenvalue Problem → Nonlinear Equations → Linear Equations

Although we are using more steps, the solution can be very accurate. This strategy is the usual process for finding a new algorithm.
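The mentioned deflation idea can be sketched for symmetric matrices: once a normalized eigenpair (λ, x) has been found, Hotelling deflation subtracts λ·x·xᵀ from A, which replaces the found eigenvalue by zero and leaves the remaining eigenpairs unchanged (a sketch in Python/NumPy, assuming a symmetric A; not part of my_eig):

```python
import numpy as np

def deflate(A, lam, x):
    """Hotelling deflation for a symmetric A and a normalized eigenvector x:
    A - lam * x x^T keeps the remaining eigenvalues and maps lam to 0."""
    x = x / np.linalg.norm(x)
    return A - lam * np.outer(x, x)

# Example: remove the eigenvalue 3 of A = diag(1, 3)
A = np.diag([1.0, 3.0])
A2 = deflate(A, 3.0, np.array([0.0, 1.0]))
# the eigenvalues of A2 are now {1, 0}
```

Repeating "find one pair, then deflate" yields further eigenvalues, but as noted above this accumulates rounding errors, which is why a simultaneous iteration is preferable.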
Code
function [val, vec] = my_eig(A, eps, max, debug)
% MY_EIG  find one eigenvalue (and eigenvector) of A with Newton's method
if nargin < 2, eps = 0.001; end    % default tolerance
if nargin < 3, max = 100;   end    % default maximal number of iterations
if nargin < 4, debug = 0;   end
n = length(A) + 1;                 % size of the extended system
% random normalized start vector
x = rand(n-1, 1);
x = x / norm(x);
% lambda will be the eigenvalue for x
lambda = x'*A*x;
% start the iteration
for j = 1:max
    if debug
        disp(['Step: ', int2str(j)])
    end
    % create the Jacobian matrix M
    M = [A -x; 2*x' 0];
    for i = 1:n-1
        M(i,i) = M(i,i) - lambda;
    end
    % calculate the function vector
    B = [A*x - lambda*x; x'*x - 1];
    % Newton step
    s = M \ -B;
    x = x + s(1:n-1);
    lambda = lambda + s(n);
    % stop if the maximal difference between two steps
    % is smaller than eps
    if norm(s, inf) < eps
        break;
    end
end
if j == max
    warning(['Maximal iteration count was reached, the solution may ', ...
             'not be correct. Please repeat with a higher value of "max".'])
end
% return values
val = lambda;
if nargout == 2    % check number of outputs
    vec = x;
end
return