MATLAB Linear Algebra Functions


The MATLAB matfun directory contains linear algebra functions. For a
complete list, brief descriptions, and links to reference pages, type:
help matfun

The following table lists the MATLAB linear algebra functions by category.
Function Summary

Category                   Function     Description
Matrix analysis            norm         Matrix or vector norm
                           normest      Estimate the matrix 2-norm
                           rank         Matrix rank
                           det          Determinant
                           trace        Sum of diagonal elements
                           null         Null space
                           orth         Orthogonalization
                           rref         Reduced row echelon form
                           subspace     Angle between two subspaces
Linear equations           \ and /      Linear equation solution
                           inv          Matrix inverse
                           cond         Condition number for inversion
                           condest      1-norm condition number estimate
                           chol         Cholesky factorization
                           ichol        Incomplete Cholesky factorization
                           linsolve     Solve a system of linear equations
                           lu           LU factorization
                           ilu          Incomplete LU factorization
                           qr           Orthogonal-triangular decomposition
                           lsqnonneg    Nonnegative least-squares
                           pinv         Pseudoinverse
                           lscov        Least squares with known covariance
Eigenvalues and            eig          Eigenvalues and eigenvectors
singular values            svd          Singular value decomposition
                           eigs         A few eigenvalues
                           svds         A few singular values
                           poly         Characteristic polynomial
                           polyeig      Polynomial eigenvalue problem
                           condeig      Condition number for eigenvalues
                           hess         Hessenberg form
                           qz           QZ factorization
                           schur        Schur decomposition
Matrix functions           expm         Matrix exponential
                           logm         Matrix logarithm
                           sqrtm        Matrix square root
                           funm         Evaluate general matrix function
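To give a flavor of the matrix analysis group, the short sketch below calls several of these functions; the test matrix pascal(4) is an arbitrary illustrative choice, not an example from the table.

% Illustrative sketch: a few of the matrix analysis functions applied to
% an arbitrary test matrix.
A  = pascal(4);       % 4-by-4 Pascal matrix, nonsingular
n  = norm(A);         % matrix 2-norm (largest singular value)
r  = rank(A);         % numerical rank (4 here)
d  = det(A);          % determinant (1 for Pascal matrices)
tr = trace(A);        % sum of the diagonal elements
Z  = null(A);         % null space basis (empty, since A is nonsingular)
Q  = orth(A);         % orthonormal basis for the range of A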

Matrices in the MATLAB Environment

Adding and Subtracting Matrices


Addition and subtraction of matrices is defined just as it is for arrays, element
by element. Adding A to B and then subtracting A from the result recovers B:
A = pascal(3);
B = magic(3);
X = A + B
X =
     9     2     7
     4     7    10
     5    12     8

Y = X - A
Y =
     8     1     6
     3     5     7
     4     9     2

Addition and subtraction require both matrices to have the same dimensions,
or one of them must be a scalar. If the dimensions are incompatible, an error
results:
C = fix(10*rand(3,2))
X = A + C
Error using plus
Matrix dimensions must agree.

Adding a scalar adds it to every element of the other operand. For example,
with the scalar s = 7 and the row vector v = [2 0 -1]:
w = v + s
w =
     9     7     6

Vector Products and Transpose


A row vector and a column vector of the same length can be multiplied in
either order. The result is either a scalar, the inner product, or a matrix,
the outer product:
u = [3; 1; 4];
v = [2 0 -1];
x = v*u
x =
     2

X = u*v
X =
     6     0    -3
     2     0    -1
     8     0    -4

For real matrices, the transpose operation interchanges a_ij and a_ji. MATLAB
uses the apostrophe operator (') to perform a complex conjugate transpose,
and uses the dot-apostrophe operator (.') to transpose without conjugation.
For matrices containing all real elements, the two operators return the same
result.
The example matrix A is symmetric, so A' is equal to A. But B is not symmetric:
B = magic(3);
X = B'
X =
     8     3     4
     1     5     9
     6     7     2

Transposition turns a row vector into a column vector:


x = v'
x =
2
0
-1

If x and y are both real column vectors, the product x*y is not defined, but
the two products
x'*y

and
y'*x

are the same scalar. This quantity is used so frequently, it has three different
names: inner product, scalar product, or dot product.
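For instance, using u and the column form of v from the example above, all three formulations return the same scalar; a small illustrative sketch:

x = [3; 1; 4];     % u from the example above
y = [2; 0; -1];    % v', the column form of v
p1 = x'*y;         % inner product written as a matrix product
p2 = y'*x;         % the same scalar with the factors reversed
p3 = dot(x, y);    % the built-in dot product
% p1, p2, and p3 all equal 2 for these vectors.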
For a complex vector or matrix, z, the quantity z' not only transposes the
vector or matrix, but also converts each complex element to its complex
conjugate. That is, the sign of the imaginary part of each complex element
changes. So if
z = [1+2i 7-3i 3+4i; 6-2i 9i 4+7i]
z =
   1.0000 + 2.0000i   7.0000 - 3.0000i   3.0000 + 4.0000i
   6.0000 - 2.0000i        0 + 9.0000i   4.0000 + 7.0000i

then

z'
ans =
   1.0000 - 2.0000i   6.0000 + 2.0000i
   7.0000 + 3.0000i        0 - 9.0000i
   3.0000 - 4.0000i   4.0000 - 7.0000i

The unconjugated complex transpose, where the complex part of each element
retains its sign, is denoted by z.':
z.'
ans =
   1.0000 + 2.0000i   6.0000 - 2.0000i
   7.0000 - 3.0000i        0 + 9.0000i
   3.0000 + 4.0000i   4.0000 + 7.0000i

For complex vectors, the two scalar products x'*y and y'*x are complex
conjugates of each other, and the scalar product x'*x of a complex vector
with itself is real.
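A small illustrative sketch with two arbitrary complex vectors (not from the text) makes the point concrete:

x = [1+1i; 2];
y = [3; 4-2i];
p = x'*y;     % 11 - 7i
q = y'*x;     % 11 + 7i, the complex conjugate of p
r = x'*x;     % 6, a real number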

Multiplying Matrices
Multiplication of matrices is defined in a way that reflects composition of
the underlying linear transformations and allows compact representation of
systems of simultaneous linear equations. The matrix product C = AB is
defined when the column dimension of A is equal to the row dimension of B,
or when one of them is a scalar. If A is m-by-p and B is p-by-n, their product
C is m-by-n. The product can actually be defined using MATLAB for loops,
colon notation, and vector dot products:
A = pascal(3);
B = magic(3);
m = 3; n = 3;
for i = 1:m
   for j = 1:n
      C(i,j) = A(i,:)*B(:,j);
   end
end

MATLAB uses a single asterisk to denote matrix multiplication. The next two
examples illustrate the fact that matrix multiplication is not commutative;
AB is usually not equal to BA:
X = A*B
X =
    15    15    15
    26    38    26
    41    70    39

Y = B*A
Y =
    15    28    47
    15    34    60
    15    28    43

A matrix can be multiplied on the right by a column vector and on the left
by a row vector:
u = [3; 1; 4];
x = A*u
x =
     8
    17
    30

v = [2 0 -1];
y = v*B
y =
    12    -7    10

Rectangular matrix multiplications must satisfy the dimension compatibility
conditions:
C = fix(10*rand(3,2));
X = A*C
X =
    17    19
    31    41
    51    70

Y = C*A
Error using mtimes
Inner matrix dimensions must agree.

Anything can be multiplied by a scalar:

s = 7;
w = s*v
w =
    14     0    -7

Identity Matrix
Generally accepted mathematical notation uses the capital letter I to denote
identity matrices, matrices of various sizes with ones on the main diagonal
and zeros elsewhere. These matrices have the property that AI = A and IA = A
whenever the dimensions are compatible. The original version of MATLAB
could not use I for this purpose because it did not distinguish between
uppercase and lowercase letters, and i already served as a subscript and as
the complex unit. Instead, the function eye(m,n) returns an m-by-n identity
matrix, and eye(n) returns an n-by-n square identity matrix.
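A minimal check of the identity-matrix property, reusing the pascal(3) matrix from the earlier examples:

A = pascal(3);
I = eye(3);            % 3-by-3 identity matrix
isequal(A*I, A)        % returns logical 1 (true)
isequal(I*A, A)        % returns logical 1 (true)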

Systems of Linear Equations


In this section...
Computational Considerations
The mldivide Algorithm
General Solution
Square Systems
Overdetermined Systems
Using Multithreaded Computation with Systems of Linear Equations
Iterative Methods for Solving Systems of Linear Equations

Computational Considerations
One of the most important problems in technical computing is the solution of
systems of simultaneous linear equations.
In matrix notation, the general problem takes the following form: Given two
matrices A and B, does there exist a unique matrix X so that AX = B or XA = B?
It is instructive to consider a 1-by-1 example. For example, does the equation
7x = 21
have a unique solution?
The answer, of course, is yes. The equation has the unique solution x = 3. The
solution is easily obtained by division:
x = 21/7 = 3.
The solution is not ordinarily obtained by computing the inverse of 7, that is,
7^(-1) = 0.142857..., and then multiplying 7^(-1) by 21. This would be more work
and, if 7^(-1) is represented to a finite number of digits, less accurate. Similar
considerations apply to sets of linear equations with more than one unknown;
the MATLAB software solves such equations without computing the inverse
of the matrix.
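As a rough sketch of this point (the matrix and right-hand side below are arbitrary illustrations), compare the backslash solution with the explicit-inverse approach:

A = pascal(3);
b = [3; 1; 4];
x1 = A\b;          % preferred: solves the system directly
x2 = inv(A)*b;     % also works, but is generally slower and less accurate
% For a well-conditioned A the two agree closely; norm(x1 - x2) is tiny.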
Although it is not standard mathematical notation, MATLAB uses the
division terminology familiar in the scalar case to describe the solution of a
general system of simultaneous equations. The two division symbols, slash, /,
and backslash, \, correspond to the two MATLAB functions mrdivide and
mldivide. These two functions are used for the two situations where the
unknown matrix appears on the left or right of the coefficient matrix:
X = B/A    Denotes the solution to the matrix equation XA = B.

X = A\B    Denotes the solution to the matrix equation AX = B.

Think of dividing both sides of the equation AX = B or XA = B by A. The
coefficient matrix A is always in the denominator.
The dimension compatibility conditions for X = A\B require the two matrices
A and B to have the same number of rows. The solution X then has the
same number of columns as B and its row dimension is equal to the column
dimension of A. For X = B/A, the roles of rows and columns are interchanged.
In practice, linear equations of the form AX = B occur more frequently than
those of the form XA = B. Consequently, the backslash is used far more
frequently than the slash. The remainder of this section concentrates on the
backslash operator; the corresponding properties of the slash operator can
be inferred from the identity:
(B/A)' = (A'\B')
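A quick numerical check of this identity, using arbitrary nonsingular example matrices:

A = pascal(3);
B = magic(3);
lhs = (B/A)';
rhs = A'\B';
norm(lhs - rhs)    % essentially zero, up to roundoff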

The coefficient matrix A need not be square. If A is m-by-n, there are three
cases:

m = n    Square system. Seek an exact solution.

m > n    Overdetermined system. Find a least-squares solution.

m < n    Underdetermined system. Find a basic solution with at most
         m nonzero components.
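The same backslash operator handles all three cases; a brief sketch with arbitrary example data:

A1 = magic(3);           x1 = A1\[1; 2; 3];   % square: exact solution
A2 = [1 1; 1 2; 1 3];    x2 = A2\[2; 3; 5];   % overdetermined: least-squares solution
A3 = [1 2 3; 4 5 6];     x3 = A3\[1; 2];      % underdetermined: basic solution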

The mldivide Algorithm


The mldivide operator employs different algorithms to handle different kinds
of coefficient matrices. The various cases are diagnosed automatically by
examining the coefficient matrix.

Permutations of Triangular Matrices


mldivide checks for triangularity by testing for zero elements. If a matrix A
is triangular, MATLAB software uses a substitution to compute the solution
vector x. If A is a permutation of a triangular matrix, MATLAB software uses
a permuted substitution algorithm.

Square Matrices
If A is symmetric and has real, positive diagonal elements, MATLAB attempts
a Cholesky factorization. If the Cholesky factorization fails, MATLAB
performs a symmetric, indefinite factorization. If A is upper Hessenberg,
MATLAB uses Gaussian elimination to reduce the system to a triangular
matrix. If A is square but is not permuted triangular, symmetric and positive
definite, or Hessenberg, MATLAB performs a general triangular factorization
using LU factorization with partial pivoting (see lu).

Rectangular Matrices
If A is rectangular, mldivide returns a least-squares solution. MATLAB
solves overdetermined systems with QR factorization (see qr). For an
underdetermined system, MATLAB returns the solution with the maximum
number of zero elements.
The mldivide function reference page contains a more detailed description
of the algorithm.
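When you already know the structure of the coefficient matrix, you can bypass this diagnosis with linsolve and its options structure; a minimal sketch, assuming a lower triangular system:

L = tril(magic(3));         % a lower triangular example matrix
b = [1; 2; 3];
opts.LT = true;             % assert that L is lower triangular
x = linsolve(L, b, opts);   % goes straight to forward substitution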


General Solution
The general solution to a system of linear equations AX = b describes all
possible solutions. You can find the general solution by:
1 Solving the corresponding homogeneous system AX = 0. Do this using the
  null command, by typing null(A). This returns a basis for the solution
  space to AX = 0. Any solution is a linear combination of basis vectors.

2 Finding a particular solution to the nonhomogeneous system AX = b.

You can then write any solution to AX = b as the sum of the particular
solution to AX = b, from step 2, plus a linear combination of the basis vectors
from step 1.
The rest of this section describes how to use MATLAB to find a particular
solution to AX = b, as in step 2.
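Putting the two steps together, a minimal sketch (it reuses the singular matrix A and the consistent right-hand side b that appear later in this section):

A = [1 3 7; -1 4 4; 1 10 18];   % singular example matrix (rank 2)
b = [5; 2; 12];                 % right-hand side for which a solution exists
N  = null(A);                   % step 1: basis for the null space of A
xp = pinv(A)*b;                 % step 2: one particular solution
alpha = 0.5;                    % any coefficient (one per basis vector)
x = xp + N*alpha;               % another solution; A*x still equals b, up to roundoff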

Square Systems
The most common situation involves a square coefficient matrix A and a single
right-hand side column vector b.

Nonsingular Coefficient Matrix


If the matrix A is nonsingular, the solution, x = A\b, is then the same size as
b. For example:
A = pascal(3);
u = [3; 1; 4];
x = A\u
x =
10
-12
5

It can be confirmed that A*x is exactly equal to u.


If A and B are square and the same size, X = A\B is also that size:
B = magic(3);

X = A\B
X =
    19    -3    -1
   -17     4    13
     6     0    -6

It can be confirmed that A*X is exactly equal to B.


Both of these examples have exact, integer solutions. This is because the
coefficient matrix was chosen to be pascal(3), which has a determinant
equal to 1.

Singular Coefficient Matrix


A square matrix A is singular if it does not have linearly independent
columns. If A is singular, the solution to AX = B either does not exist, or is not
unique. The backslash operator, A\B, issues a warning if A is nearly singular
and raises an error condition if it detects exact singularity.
If A is singular and AX = b has a solution, you can find a particular solution
that is not unique, by typing
P = pinv(A)*b
pinv(A) is a pseudoinverse of A. If AX = b does not have an exact solution,
pinv(A) returns a least-squares solution.


For example:
A = [ 1     3     7
     -1     4     4
      1    10    18 ]

is singular, as you can verify by typing

det(A)
ans =
     0


Note  For information about using pinv to solve systems with rectangular
coefficient matrices, see Pseudoinverses.

Exact Solutions. For b = [5;2;12], the equation AX = b has an exact
solution, given by
pinv(A)*b
ans =
0.3850
-0.1103
0.7066

Verify that pinv(A)*b is an exact solution by typing


A*pinv(A)*b
ans =
5.0000
2.0000
12.0000

Least-Squares Solutions. On the other hand, if b = [3;6;0], AX = b does
not have an exact solution. In this case, pinv(A)*b returns a least-squares
solution. If you type
A*pinv(A)*b
ans =
-1.0000
4.0000
2.0000

you do not get back the original vector b.


You can determine whether AX = b has an exact solution by finding the
reduced row echelon form of the augmented matrix [A b]. To do so for this
example, enter

rref([A b])
ans =
    1.0000         0    2.2857         0
         0    1.0000    1.5714         0
         0         0         0    1.0000

Since the bottom row contains all zeros except for the last entry, the equation
does not have a solution. In this case, pinv(A) returns a least-squares
solution.
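Equivalently, you can compare matrix ranks; a sketch of that check for this example:

A = [1 3 7; -1 4 4; 1 10 18];
b = [3; 6; 0];
rank(A)          % 2
rank([A b])      % 3, so AX = b has no exact solution for this b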

Overdetermined Systems
Overdetermined systems of simultaneous linear equations are often
encountered in various kinds of curve fitting to experimental data. Here is a
hypothetical example. A quantity y is measured at several different values
of time, t, to produce the following observations:
      t        y
    0.0     0.82
    0.3     0.72
    0.8     0.63
    1.1     0.60
    1.6     0.55
    2.3     0.50

Enter the data into MATLAB with the statements


t = [0 .3 .8 1.1 1.6 2.3]';
y = [.82 .72 .63 .60 .55 .50]';

Try modeling the data with a decaying exponential function:

y(t) = c1 + c2*e^(-t)

The preceding equation says that the vector y should be approximated by a
linear combination of two other vectors, one the constant vector containing all
ones and the other the vector with components e^(-t). The unknown coefficients,
c1 and c2, can be obtained by performing a least-squares fit, which minimizes
the sum of squares of the deviations of the data from the model.
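A minimal sketch of how this fit can be set up with the backslash operator (the design-matrix name E is an illustrative choice, not notation from the text):

t = [0 .3 .8 1.1 1.6 2.3]';
y = [.82 .72 .63 .60 .55 .50]';
E = [ones(size(t)) exp(-t)];   % 6-by-2 matrix: [1, e^(-t)] at each time
c = E\y;                       % least-squares estimates of c1 and c2
yfit = E*c;                    % fitted values at the observed times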
