
Chapter 6

Systems of Linear Differential Equations

6.1 Introduction

Up to this point the entries in a vector or matrix have been real numbers. In this section,
and in the following sections, we will be dealing with vectors and matrices whose entries are
functions. A vector whose components are functions is called a vector-valued function or
vector function. Similarly, a matrix whose entries are functions is called a matrix function.

The operations of vector and matrix addition, multiplication by a number and matrix
multiplication for vector and matrix functions are exactly as defined in Chapter 5 so there
is nothing new in terms of arithmetic. However, for the purposes of this and the following
sections, there are operations on functions other than arithmetic operations that we need
to define for vector and matrix functions, namely the operations from calculus (limits,
differentiation, integration). The operations from calculus are defined in a natural way.

Calculus of vector functions: Let v(t) = (f1(t), f2(t), . . . , fn(t)) be a vector function whose components are defined on an interval I.

Limit: Let c ∈ I. If lim_{t→c} fi(t) = αi exists for i = 1, 2, . . . , n, then

lim_{t→c} v(t) = ( lim_{t→c} f1(t), lim_{t→c} f2(t), . . . , lim_{t→c} fn(t) ) = (α1, α2, . . . , αn).

Limits of vector functions are calculated “component-wise.”

Derivative: If f1, f2, . . . , fn are differentiable on I, then v is differentiable on I, and

v′(t) = (f1′(t), f2′(t), . . . , fn′(t)).

Thus v′ is the vector function whose components are the derivatives of the components of v.

Integral: Since differentiation of vector functions is done component-wise, integration must also be component-wise. That is,

∫ v(t) dt = ( ∫ f1(t) dt, ∫ f2(t) dt, . . . , ∫ fn(t) dt ).

Calculus of matrix functions: Limits, differentiation and integration of matrix functions are done in exactly the same way — component-wise.

Converting a Linear Differential Equation to a System of Differential Equations

Here we show how to convert a linear differential equation into a system of first order
equations.

Consider the second order linear differential equation

y′′ + p(t)y′ + q(t)y = f(t)

where p, q, f are continuous functions on some interval I. Solving this equation for y′′, we get

y′′ = −q(t)y − p(t)y′ + f(t).

Now introduce new dependent variables x1, x2 as follows:

x1 = y
x2 = x1′ (= y′)

Then

x2′ = y′′ = −q(t)x1 − p(t)x2 + f(t)

and the second order equation can be written equivalently as a system of two first order equations

x1′ = x2
x2′ = −q(t)x1 − p(t)x2 + f(t).

Note that this system is just a special case of the “general” system of two first-order differential equations:

x1′ = a11(t)x1 + a12(t)x2 + b1(t)
x2′ = a21(t)x1 + a22(t)x2 + b2(t).
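This conversion is exactly how a second order equation is prepared for numerical solution: a solver only needs the right-hand side of the equivalent first-order system. Below is a minimal sketch in Python; the coefficient functions p, q, f and the step size are hypothetical choices made only for illustration (they are not from the text), and the integrator is a standard fourth-order Runge-Kutta step.

```python
import math

# Hypothetical coefficients p(t), q(t), f(t), chosen only for illustration
def p(t): return 0.5
def q(t): return 2.0
def f(t): return math.sin(t)

# Right-hand side of the equivalent first-order system:
#   x1' = x2,   x2' = -q(t) x1 - p(t) x2 + f(t)
def rhs(t, x):
    x1, x2 = x
    return (x2, -q(t) * x1 - p(t) * x2 + f(t))

# One step of the classical fourth-order Runge-Kutta method
def rk4_step(t, x, h):
    k1 = rhs(t, x)
    k2 = rhs(t + h / 2, tuple(xi + h / 2 * ki for xi, ki in zip(x, k1)))
    k3 = rhs(t + h / 2, tuple(xi + h / 2 * ki for xi, ki in zip(x, k2)))
    k4 = rhs(t + h, tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + h / 6 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

# Integrate from t = 0 with y(0) = 1, y'(0) = 0
t, x, h = 0.0, (1.0, 0.0), 0.01
while t < 5.0:
    x = rk4_step(t, x, h)
    t += h
print(x)  # approximately (y(5), y'(5))
```

The same right-hand-side function, extended to n components, handles the higher-order conversions described below in exactly the same way.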

In a similar manner, consider the third-order linear equation

y′′′ + p(t)y′′ + q(t)y′ + r(t)y = f(t)

where p, q, r, f are continuous functions on some interval I. Solving the equation for y′′′, we get

y′′′ = −r(t)y − q(t)y′ − p(t)y′′ + f(t).

Introduce new dependent variables x1, x2, x3, as follows:

x1 = y
x2 = x1′ (= y′)
x3 = x2′ (= y′′)

Then

y′′′ = x3′ = −r(t)x1 − q(t)x2 − p(t)x3 + f(t)

and the third-order equation can be written equivalently as the system of three first-order equations:

x1′ = x2
x2′ = x3
x3′ = −r(t)x1 − q(t)x2 − p(t)x3 + f(t).

Here we note that this system is just a very special case of the “general” system of three first-order differential equations:

x1′ = a11(t)x1 + a12(t)x2 + a13(t)x3 + b1(t)
x2′ = a21(t)x1 + a22(t)x2 + a23(t)x3 + b2(t)
x3′ = a31(t)x1 + a32(t)x2 + a33(t)x3 + b3(t).

Example 1. (a) Consider the second-order homogeneous equation

t²y′′ − ty′ − 3y = 0 on I = (0, ∞).

Solving this equation for y′′, we get

y′′ = (3/t²)y + (1/t)y′.

To convert this equation to an equivalent system, we let x1 = y, x1′ = x2 (= y′). Then x2′ = y′′ and we have

x1′ = x2
x2′ = (3/t²)x1 + (1/t)x2.

(b) Consider the third-order nonhomogeneous equation

y′′′ − y′′ − 8y′ + 12y = 2e^t on I = (−∞, ∞).

Solving the equation for y′′′, we have

y′′′ = −12y + 8y′ + y′′ + 2e^t.

Let x1 = y, x1′ = x2 (= y′), x2′ = x3 (= y′′). Then

y′′′ = x3′ = −12x1 + 8x2 + x3 + 2e^t

and the equation converts to the equivalent system:

x1′ = x2
x2′ = x3
x3′ = −12x1 + 8x2 + x3 + 2e^t.

Systems of Linear Differential Equations; General Theory

Let a11(t), a12(t), . . . , a1n(t), a21(t), . . . , ann(t), b1(t), b2(t), . . . , bn(t) be continuous functions on some interval I. The system of n first-order differential equations

x1′ = a11(t)x1 + a12(t)x2 + ··· + a1n(t)xn + b1(t)
x2′ = a21(t)x1 + a22(t)x2 + ··· + a2n(t)xn + b2(t)          (S)
  ⋮
xn′ = an1(t)x1 + an2(t)x2 + ··· + ann(t)xn + bn(t)

is called a first-order linear differential system.

The system (S) is homogeneous if

b1(t) ≡ b2(t) ≡ ··· ≡ bn(t) ≡ 0 on I.

System (S) is nonhomogeneous if the functions bi(t) are not all identically zero on I. That is, (S) is nonhomogeneous if there is at least one point a ∈ I and at least one function bi(t) such that bi(a) ≠ 0.

Let A(t) be the n × n matrix

       [ a11(t)  a12(t)  ···  a1n(t) ]
A(t) = [ a21(t)  a22(t)  ···  a2n(t) ]
       [   ⋮       ⋮             ⋮   ]
       [ an1(t)  an2(t)  ···  ann(t) ]

and let x and b(t) be the vectors

x = [ x1 ],        b(t) = [ b1(t) ].
    [ x2 ]                [ b2(t) ]
    [ ⋮  ]                [   ⋮   ]
    [ xn ]                [ bn(t) ]

Then (S) can be written in the vector-matrix form

x′ = A(t)x + b(t).   (S)

The matrix A(t) is called the matrix of coefficients or the coefficient matrix of the system.

Example 2. The vector-matrix form of the system in Example 1(a) is:

x′ = [ 0     1   ] x + [ 0 ] = [ 0     1   ] x,   where  x = [ x1 ],
     [ 3/t²  1/t ]     [ 0 ]   [ 3/t²  1/t ]                 [ x2 ]

a homogeneous system.

The vector-matrix form of the system in Example 1(b) is:

x′ = [   0  1  0 ]     [  0   ]               [ x1 ]
     [   0  0  1 ] x + [  0   ],   where  x = [ x2 ],
     [ −12  8  1 ]     [ 2e^t ]               [ x3 ]

a nonhomogeneous system.

In general, the vector-matrix form of the second order equation

y′′ + p(t)y′ + q(t)y = f(t)

is

x′ = [   0      1   ] x + [  0   ],   where  x = [ x1 ],
     [ −q(t)  −p(t) ]     [ f(t) ]               [ x2 ]

and the vector-matrix form of y′′′ + p(t)y′′ + q(t)y′ + r(t)y = f(t) is:

x′ = [   0      1      0   ]     [  0   ]               [ x1 ]
     [   0      0      1   ] x + [  0   ],   where  x = [ x2 ]. ∎
     [ −r(t)  −q(t)  −p(t) ]     [ f(t) ]               [ x3 ]

A solution of the linear differential system (S) is a differentiable vector function

x(t) = [ x1(t) ]
       [ x2(t) ]
       [   ⋮   ]
       [ xn(t) ]

that satisfies (S) on the interval I.

Example 3. Verify that

x(t) = [ t³  ]
       [ 3t² ]

is a solution of the homogeneous system

x′ = [ 0     1   ] x,   I = (0, ∞),
     [ 3/t²  1/t ]

of Example 1(a).

SOLUTION. Here

x′(t) = [ t³  ]′ = [ 3t² ],
        [ 3t² ]    [ 6t  ]

and

[ 0     1   ] [ t³  ]   [ 3t² ]
[ 3/t²  1/t ] [ 3t² ] = [ 6t  ],

so the two sides agree, and x(t) is a solution. ∎
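The verification in Example 3 can also be carried out symbolically. A minimal sketch, assuming the SymPy library is available:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
# Candidate solution and coefficient matrix from Example 3
x = sp.Matrix([t**3, 3 * t**2])
A = sp.Matrix([[0, 1],
               [3 / t**2, 1 / t]])

# x is a solution exactly when x' - A x vanishes identically on I
residual = sp.simplify(x.diff(t) - A * x)
print(residual)  # the zero vector
```

The same pattern (differentiate the candidate, subtract A(t)x + b(t), simplify) checks any of the verification exercises in this section.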


Example 4. Verify that

x(t) = [ e^{2t}  ]   [ (1/2)e^t ]
       [ 2e^{2t} ] + [ (1/2)e^t ]
       [ 4e^{2t} ]   [ (1/2)e^t ]

is a solution of the nonhomogeneous system

x′ = [   0  1  0 ]     [  0   ]
     [   0  0  1 ] x + [  0   ],   I = (−∞, ∞),
     [ −12  8  1 ]     [ 2e^t ]

of Example 1(b).

SOLUTION. Differentiating,

x′(t) = [ 2e^{2t} ]   [ (1/2)e^t ]
        [ 4e^{2t} ] + [ (1/2)e^t ].
        [ 8e^{2t} ]   [ (1/2)e^t ]

On the other hand,

[   0  1  0 ] [ e^{2t} + (1/2)e^t  ]   [  0   ]   [ 2e^{2t} + (1/2)e^t ]   [  0   ]
[   0  0  1 ] [ 2e^{2t} + (1/2)e^t ] + [  0   ] = [ 4e^{2t} + (1/2)e^t ] + [  0   ]
[ −12  8  1 ] [ 4e^{2t} + (1/2)e^t ]   [ 2e^t ]   [ 8e^{2t} − (3/2)e^t ]   [ 2e^t ]

   = [ 2e^{2t} + (1/2)e^t ]
     [ 4e^{2t} + (1/2)e^t ] = x′(t);
     [ 8e^{2t} + (1/2)e^t ]

x(t) is a solution. ∎

THEOREM 1. (Existence and Uniqueness Theorem) Let a be any point on the interval I, and let α1, α2, . . . , αn be any n real numbers. Then the initial-value problem

x′ = A(t)x + b(t),   x(a) = [ α1 ]
                            [ α2 ]
                            [ ⋮  ]
                            [ αn ]

has a unique solution. ∎

Exercises 6.1

Convert the differential equation into a system of first-order equations.

1. y′′ − ty′ + 3y = sin 2t.

2. y′′ + y = 2e^{−2t}.

3. y′′′ − y′′ + y = e^t.

4. my′′ + cy′ + ky = cos λt, where m, c, k, λ are constants.

In Exercises 5 - 8 a matrix function A and a vector function b are given. Write the system of equations corresponding to x′ = A(t)x + b(t).

5. A(t) = [ 2  −1 ],   b(t) = [ e^{2t}  ].
          [ 3   0 ]           [ 2e^{−t} ]

6. A(t) = [ t³     t ],   b(t) = [ t − 1 ].
          [ cos t  2 ]           [   2   ]

7. A(t) = [  2  3  −1 ],   b(t) = [ e^t     ].
          [ −2  0   1 ]           [ 2e^{−t} ]
          [  2  3   0 ]           [ e^{2t}  ]

8. A(t) = [ t²    3t    t − 1 ],   b(t) = [ 0 ].
          [ −2  t − 2     t   ]           [ 0 ]
          [ 2t    3       t   ]           [ 1 ]

Write the system in vector-matrix form.

9.  x1′ = −2x1 + x2 + sin t
    x2′ = x1 − 3x2 − 2 cos t

10. x1′ = e^t x1 − e^{2t} x2
    x2′ = e^{−t} x1 − 3e^t x2

11. x1′ = 2x1 + x2 + 3x3 + 3e^{2t}
    x2′ = x1 − 3x2 − 2 cos t
    x3′ = 2x1 − x2 + 4x3 + t

12. x1′ = t² x1 + x2 − t x3 + 3
    x2′ = −3e^t x2 + 2x3 − 2e^{−2t}
    x3′ = 2x1 + t² x2 + 4x3
13. Verify that u(t) = [ t^{−1}  ] is a solution of the system in Example 1(a).
                       [ −t^{−2} ]
   1 
t
e−3t 2e
   
14. Verify that u(t) =  −3e−3t  +  12 et  is a solution of the system in Example
1 t
9e−3t 2e
1 (a).
 
15. Verify that w(t) = [ te^{2t}            ] is a solution of the homogeneous system associated with the system in Example 1(b).
                       [ e^{2t} + 2te^{2t}  ]
                       [ 4e^{2t} + 4te^{2t} ]
16. Verify that v(t) = [ − sin t           ] is a solution of the system
                       [ − cos t − 2 sin t ]

    x′ = [ −2  1 ] x + [    0    ].
         [ −3  2 ]     [ 2 sin t ]

 
17. Verify that v(t) = [ −2e^{−2t} ] is a solution of the system
                       [     0     ]
                       [ 3e^{−2t}  ]

    x′ = [ 1  −3   2 ]
         [ 0  −1   0 ] x.
         [ 0  −1  −2 ]

6.2 Homogeneous Systems

In this section we give the basic theory for linear homogeneous systems. This “theory” is simply a repetition of the results given in Sections 3.2 and 3.7, phrased this time in terms of the system

x1′ = a11(t)x1 + a12(t)x2 + ··· + a1n(t)xn
x2′ = a21(t)x1 + a22(t)x2 + ··· + a2n(t)xn          (H)
  ⋮
xn′ = an1(t)x1 + an2(t)x2 + ··· + ann(t)xn

or in vector-matrix form

x′ = A(t)x.   (H)
 
Note first that the zero vector z(t) ≡ 0, each of whose components is identically zero on I, is a solution of (H). As before, this solution is called the trivial solution. Of course, we are interested in finding nontrivial solutions.

THEOREM 1. If x1 = x1 (t) and x2 = x2 (t) are solutions of (H), then

u(t) = x1 (t) + x2 (t)

is also a solution of (H); the sum of any two solutions of (H) is a solution of (H). 

THEOREM 2. If x = x(t) is a solution of (H) and α is any real number, then


u(t) = αx(t) is also a solution of (H); any constant multiple of a solution of (H) is a
solution of (H). 

These two theorems can be combined and extended to:

THEOREM 3. If x1 = x1 (t), x2 = x2 (t), . . . , xk = xk (t) are solutions of (H), and if


c1 , c2 , . . . , ck are real numbers, then

v(t) = c1 x1 (t) + c2 x2 (t) + · · · + ck xk (t)

is a solution of (H); any linear combination of solutions of (H) is also a solution of (H).


Note that these are precisely the results given in Sections 3.2 and 3.7 for second order
and higher order linear equations.

Linear Dependence and Linear Independence of Vector Functions

The notions of linear dependence and linear independence of vectors are of fundamental importance in linear algebra. See Chapter 5, Section 5.7.

In this subsection we will look at linear dependence/independence of vector functions
in general. This is an extension of the material in Section 5.8. We will return to linear
differential systems after we treat the general case.

DEFINITION 1. Let

v1(t) = [ v11(t) ],   v2(t) = [ v12(t) ],   . . . ,   vk(t) = [ v1k(t) ]
        [ v21(t) ]            [ v22(t) ]                      [ v2k(t) ]
        [   ⋮    ]            [   ⋮    ]                      [   ⋮    ]
        [ vn1(t) ]            [ vn2(t) ]                      [ vnk(t) ]

be n-component vector functions defined on some interval I. The vectors are linearly dependent on I if there exist k real numbers c1, c2, . . . , ck, not all zero, such that

c1 v1(t) + c2 v2(t) + ··· + ck vk(t) ≡ 0 on I.

Otherwise the vectors are linearly independent on I. ∎

THEOREM 4. Let v1(t), v2(t), . . . , vn(t) be n n-component vector functions defined on an interval I. If the vectors are linearly dependent, then

| v11(t)  v12(t)  ···  v1n(t) |
| v21(t)  v22(t)  ···  v2n(t) |
|   ⋮       ⋮            ⋮   |  ≡ 0 on I.
| vn1(t)  vn2(t)  ···  vnn(t) |

Proof: See the proof of Theorem 1 in Section 5.8. ∎

The determinant in Theorem 4 is called the Wronskian of the vector functions v1, v2, . . . , vn. We will let W(v1, v2, . . . , vn)(t), or simply W(t), denote the Wronskian.

COROLLARY. Let v1(t), v2(t), . . . , vn(t) be n n-component vector functions defined on an interval I, and let W(t) be their Wronskian. If W(t) ≠ 0 for at least one t ∈ I, then the vector functions are linearly independent on I. ∎

It is important to understand that in this general case, W(t) ≡ 0 does not imply that the vector functions are linearly dependent. An example is given in Section 5.8.

Example 1. (a) The Wronskian of the vector functions

u(t) = [ t³  ]   and   v(t) = [ t^{−1}  ],   t > 0,
       [ 3t² ]                [ −t^{−2} ]

is:

W(t) = | t³   t^{−1}  | = −4t ≠ 0.
       | 3t²  −t^{−2} |

Therefore u and v are linearly independent. (Note: u and v are solutions of the homogeneous system in Example 3, Section 6.1.)

(b) The Wronskian of the vector functions

v1(t) = [ e^{2t}  ],   v2(t) = [ e^{−3t}   ],   v3(t) = [ te^{2t}            ]
        [ 2e^{2t} ]            [ −3e^{−3t} ]            [ e^{2t} + 2te^{2t}  ]
        [ 4e^{2t} ]            [ 9e^{−3t}  ]            [ 4e^{2t} + 4te^{2t} ]

is:

W(t) = | e^{2t}   e^{−3t}    te^{2t}            |
       | 2e^{2t}  −3e^{−3t}  e^{2t} + 2te^{2t}  |.
       | 4e^{2t}  9e^{−3t}   4e^{2t} + 4te^{2t} |

A lengthy calculation will show that W(t) = −25e^t. However, by the Corollary to Theorem 4, it is enough to show that W(t) ≠ 0 for some t. A good choice for t here is t = 0:

W(0) = | 1   1  0 |
       | 2  −3  1 | = −25 ≠ 0.
       | 4   9  4 |

Therefore, the vector functions are linearly independent. ∎
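The evaluation at t = 0 is easy to reproduce numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# The three vector functions of Example 1(b), evaluated at t = 0,
# placed as the columns of a matrix
M = np.array([[1.0, 1.0, 0.0],
              [2.0, -3.0, 1.0],
              [4.0, 9.0, 4.0]])

# W(0) is the determinant of this matrix
W0 = np.linalg.det(M)
print(round(float(W0)))  # -25
```

Since W(0) ≠ 0, the Corollary gives linear independence without evaluating the full symbolic determinant.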

Back to Linear Differential Systems

When the vector functions x1, x2, . . . , xn are n solutions of the homogeneous system (H) we get a much stronger version of Theorem 4.

THEOREM 5. Let x1(t), x2(t), . . . , xn(t) be n solutions of (H). Exactly one of the following holds:

1. W(x1, x2, . . . , xn)(t) ≡ 0 on I and the solutions are linearly dependent.

2. W(x1, x2, . . . , xn)(t) ≠ 0 for all t ∈ I and the solutions are linearly independent.

Compare this result with Theorem 4, Section 3.2, and Theorem 5, Section 3.7.

It is easy to construct sets of n linearly independent solutions of (H). Simply pick any point a ∈ I and any nonsingular n × n matrix A. Let a1 be the first column of A, a2 the second column of A, and so on. Then let x1(t) be the solution of (H) such that x1(a) = a1, let x2(t) be the solution of (H) such that x2(a) = a2, . . . , and let xn(t) be the solution of (H) such that xn(a) = an. The existence and uniqueness theorem guarantees the existence of these solutions. Now

W(x1, x2, . . . , xn)(a) = det A ≠ 0.

Therefore, W(t) ≠ 0 for all t ∈ I and the solutions are linearly independent. A particularly nice set of n linearly independent solutions is obtained by choosing A = In, the identity matrix.

THEOREM 6. Let x1 (t), x2 (t), . . . , xn (t) be n linearly independent solutions of (H).


Let u(t) be any solution of (H). Then there exists a unique set of constants C1 , C2 , . . . , Cn
such that
u(t) = C1 x1 (t) + C2 x2 (t) + · · · + Cn xn (t).
That is, every solution of (H) can be written as a unique linear combination of x1 , x2 , . . . , xn .


DEFINITION 2. A set {x1 , x2 , . . . , xn } of n linearly independent solutions of (H) is


called a fundamental set of solutions. A fundamental set of solutions is also called a solution
basis for (H).

If {x1, x2, . . . , xn} is a fundamental set of solutions of (H), then the n × n matrix

       [ x11(t)  x12(t)  ···  x1n(t) ]
X(t) = [ x21(t)  x22(t)  ···  x2n(t) ]
       [   ⋮       ⋮             ⋮   ]
       [ xn1(t)  xn2(t)  ···  xnn(t) ]

(the vectors x1, x2, . . . , xn are the columns of X) is called a fundamental matrix for (H).

DEFINITION 3. Let {x1(t), x2(t), . . . , xn(t)} be a fundamental set of solutions of (H). Then

x(t) = C1 x1(t) + C2 x2(t) + ··· + Cn xn(t),

where C1, C2, . . . , Cn are arbitrary constants, is the general solution of (H). ∎

Note that the general solution can also be written in terms of the fundamental matrix:

                                       [ x11(t)  x12(t)  ···  x1n(t) ] [ C1 ]
C1 x1(t) + C2 x2(t) + ··· + Cn xn(t) = [ x21(t)  x22(t)  ···  x2n(t) ] [ C2 ] = X(t)C.
                                       [   ⋮       ⋮             ⋮   ] [ ⋮  ]
                                       [ xn1(t)  xn2(t)  ···  xnn(t) ] [ Cn ]
Example 2. The vectors

u(t) = [ t³  ]   and   v(t) = [ t^{−1}  ]
       [ 3t² ]                [ −t^{−2} ]

form a fundamental set of solutions of

x′ = [ 0     1   ] x.
     [ 3/t²  1/t ]

The matrix

X(t) = [ t³   t^{−1}  ]
       [ 3t²  −t^{−2} ]

is a fundamental matrix for the system and

x(t) = C1 [ t³  ] + C2 [ t^{−1}  ] = [ t³   t^{−1}  ] [ C1 ]
          [ 3t² ]      [ −t^{−2} ]   [ 3t²  −t^{−2} ] [ C2 ]

is the general solution of the system. ∎
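The defining property of a fundamental matrix, X′ = A(t)X (this is Exercise 19 below), can be checked symbolically for Example 2. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
# Fundamental matrix from Example 2: columns u(t) and v(t)
X = sp.Matrix([[t**3, 1 / t],
               [3 * t**2, -1 / t**2]])
A = sp.Matrix([[0, 1],
               [3 / t**2, 1 / t]])

# A fundamental matrix satisfies the matrix equation X' = A(t) X
residual = sp.simplify(X.diff(t) - A * X)
print(residual)  # the 2 x 2 zero matrix
```

Checking the matrix equation column by column is exactly checking that each column of X is a solution of x′ = A(t)x.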

Exercises 6.2

Determine whether or not the vector functions are linearly dependent.

1. u = [ 2t − 1 ],   v = [ −t + 1 ]
       [   −t   ]        [   2t   ]

2. u = [ cos t ],   v = [ sin t ]
       [ sin t ]        [ cos t ]

3. u = [ t − t² ],   v = [ −2t + 4t² ]
       [   −t   ]        [    2t     ]

4. u = [ te^t ],   v = [ e^t ]
       [  t   ]        [  1  ]

5. u = [ 2 − t ],   v = [  t ],   w = [ 2 + t ]
       [   t   ]        [ −1 ]        [ t − 2 ]
       [  −2   ]        [  2 ]        [   2   ]

6. u = [ cos t ],   v = [ cos t ],   w = [   0   ]
       [ sin t ]        [   0   ]        [ cos t ]
       [   0   ]        [ sin t ]        [ sin t ]

7. u = [  e^t ],   v = [ −e^t ],   w = [  0  ]
       [ −e^t ]        [ 2e^t ]        [ e^t ]
       [  e^t ]        [ −e^t ]        [  0  ]

8. u = [ 2 − t ],   v = [ t + 1 ],   w = [   t   ]
       [   t   ]        [  −2   ]        [ t + 2 ]

9. u = [ e^t ],   v = [ 0 ],   w = [  0  ]
       [  0  ]        [ 0 ]        [ e^t ]

     
10. u = [ cos(t + π/4) ],   v = [ cos t ],   w = [ sin t ]
        [      0       ]        [   0   ]        [   0   ]
        [      0       ]        [   0   ]        [   0   ]
        [      0       ]        [  e^t  ]        [  e^t  ]

11. Given the linear differential system

    x′ = [ 5  −3 ] x.
         [ 2   0 ]

    Let

    x1 = [ e^{2t} ]   and   x2 = [ 3e^{3t} ].
         [ e^{2t} ]              [ 2e^{3t} ]

    (a) Show that x1, x2 are a fundamental set of solutions of the system.

    (b) Let X be the corresponding fundamental matrix. Show that X′ = AX.

    (c) Give the general solution of the system.

    (d) Find the solution of the system that satisfies x(0) = [ 1 ].
                                                              [ 0 ]

12. Let X be the matrix function

    X(t) = [ cos 2t   sin 2t ].
           [ sin 2t  −cos 2t ]

    (a) Verify that X is a fundamental matrix for the system

        x′ = [ 0  −2 ] x.
             [ 2   0 ]

    (b) Find the solution of the system that satisfies x(0) = [ 2 ].
                                                              [ 3 ]

13. Let X be the matrix function

    X(t) = [ 0  4te^{−t}  e^{−t} ].
           [ 1   e^{−t}     0    ]
           [ 1     0        0    ]

    (a) Verify that X is a fundamental matrix for the system

        x′ = [ −1  4  −4 ]
             [  0  −1  1 ] x.
             [  0   0  0 ]

    (b) Find the solution of the system that satisfies x(0) = [ 0 ].
                                                              [ 1 ]
                                                              [ 2 ]
14. The linear differential system equivalent to the equation

    y′′′ + p(t)y′′ + q(t)y′ + r(t)y = 0

    is:

    [ x1′ ]   [   0      1      0   ] [ x1 ]
    [ x2′ ] = [   0      0      1   ] [ x2 ].
    [ x3′ ]   [ −r(t)  −q(t)  −p(t) ] [ x3 ]

    (See Example 2, Section 6.1.) Show that if y = y(t) is a solution of the equation, then

    x(t) = [ y(t)   ]
           [ y′(t)  ]
           [ y′′(t) ]

    is a solution of the system.

    NOTE: This result holds for linear equations of all orders. However, it is important to understand that solutions of systems which are not converted from equations do not have this special form.

15. Find three linearly independent solutions of

    x′ = [ 0  1   0 ]
         [ 0  0   1 ] x.
         [ 4  4  −1 ]

16. Find three linearly independent solutions of

    x′ = [  0   1  0 ]
         [  0   0  1 ] x.
         [ −18  3  4 ]

17. Find two linearly independent solutions of

    x′ = [   0      1  ] x.
         [ −6/t²  −6/t ]

18. Find two linearly independent solutions of

    x′ = [   0     1  ] x.
         [ −4/t²  3/t ]

19. Let {x1(t), x2(t), . . . , xn(t)} be a fundamental set of solutions of (H), and let X(t) be the corresponding fundamental matrix. Show that X satisfies the matrix differential equation

    X′ = A(t)X.

6.3 Homogeneous Systems with Constant Coefficients, Part I

As emphasized in Section 3.2 (and 3.7), there are no general methods for solving a linear
homogeneous differential equation (as opposed to first order linear equations). Therefore
it follows that there are no general methods for solving linear homogeneous differential
systems. However, just as with linear differential equations (see Section 3.3), there is a
method for solving homogeneous systems with constant coefficients.

A linear homogeneous system with constant coefficients is a differential system having the form

x1′ = a11 x1 + a12 x2 + ··· + a1n xn
x2′ = a21 x1 + a22 x2 + ··· + a2n xn
  ⋮
xn′ = an1 x1 + an2 x2 + ··· + ann xn

where a11, a12, . . . , ann are constants. The system in vector-matrix form is

[ x1′ ]   [ a11  a12  ···  a1n ] [ x1 ]
[ x2′ ] = [ a21  a22  ···  a2n ] [ x2 ]        or        x′ = Ax.   (1)
[  ⋮  ]   [  ⋮    ⋮          ⋮ ] [ ⋮  ]
[ xn′ ]   [ an1  an2  ···  ann ] [ xn ]

Example 1. Consider the 3rd order linear homogeneous differential equation

y′′′ + 2y′′ − 5y′ − 6y = 0.

The characteristic equation is:

r³ + 2r² − 5r − 6 = (r − 2)(r + 1)(r + 3) = 0

and {e^{2t}, e^{−t}, e^{−3t}} is a solution basis for the equation.

The corresponding linear homogeneous system is

x′ = [ 0  1   0 ]
     [ 0  0   1 ] x
     [ 6  5  −2 ]

and

x1(t) = [ e^{2t}  ]          [ 1 ]
        [ 2e^{2t} ] = e^{2t} [ 2 ]
        [ 4e^{2t} ]          [ 4 ]

is a solution vector (see Problem 14, Exercises 6.2):

x1′ = [ 2e^{2t} ]
      [ 4e^{2t} ]
      [ 8e^{2t} ]

and

[ 0  1   0 ] [ e^{2t}  ]   [ 2e^{2t} ]
[ 0  0   1 ] [ 2e^{2t} ] = [ 4e^{2t} ] = x1′.
[ 6  5  −2 ] [ 4e^{2t} ]   [ 8e^{2t} ]

Similarly,

x2(t) = [ e^{−t}  ]          [  1 ]             x3(t) = [ e^{−3t}   ]           [  1 ]
        [ −e^{−t} ] = e^{−t} [ −1 ]     and             [ −3e^{−3t} ] = e^{−3t} [ −3 ]
        [ e^{−t}  ]          [  1 ]                     [ 9e^{−3t}  ]           [  9 ]

are solution vectors of the system. ∎

Solutions: Eigenvalues and Eigenvectors

Example 1 suggests that homogeneous systems with constant coefficients might have solution vectors of the form x(t) = e^{λt} v, for some number λ and some constant vector v. Indeed, this is the case.

THEOREM 1. Given the linear differential system x′ = Ax where A is an n × n constant matrix. If λ is an eigenvalue of A with corresponding eigenvector v, then x = e^{λt} v is a solution of the system.

Proof. Given that Av = λv, set x(t) = e^{λt} v. Then

x′(t) = λe^{λt} v   and   Ax = A(e^{λt} v) = e^{λt} Av = e^{λt} λv = λe^{λt} v.

Thus, x′ = Ax and x = e^{λt} v is a solution of the system. ∎

Example 2. Returning to Example 1, note that

[ 0  1   0 ] [ 1 ]     [ 1 ]       [ 0  1   0 ] [  1 ]   [ −1 ]        [  1 ]
[ 0  0   1 ] [ 2 ] = 2 [ 2 ],      [ 0  0   1 ] [ −1 ] = [  1 ] = −1 [ −1 ],
[ 6  5  −2 ] [ 4 ]     [ 4 ]       [ 6  5  −2 ] [  1 ]   [ −1 ]        [  1 ]

and

[ 0  1   0 ] [  1 ]      [  1 ]
[ 0  0   1 ] [ −3 ] = −3 [ −3 ].
[ 6  5  −2 ] [  9 ]      [  9 ]

Thus, 2 is an eigenvalue of

A = [ 0  1   0 ]
    [ 0  0   1 ]
    [ 6  5  −2 ]

with corresponding eigenvector (1, 2, 4) (written as a column), −1 is an eigenvalue of A with corresponding eigenvector (1, −1, 1), and −3 is an eigenvalue of A with corresponding eigenvector (1, −3, 9). ∎

Now that we know how to find solutions of x0 = Ax when A is an n × n constant matrix,


we need to consider the question of the linear independence/dependence of the solutions.
The following theorem gives the basic result.

THEOREM 2. Given the linear differential system x′ = Ax where A is an n × n constant matrix. If A has n distinct eigenvalues λ1, λ2, . . . , λn with corresponding eigenvectors v1, v2, . . . , vn, then the solutions

x1 = e^{λ1 t} v1,   x2 = e^{λ2 t} v2,   . . . ,   xn = e^{λn t} vn

are linearly independent and therefore constitute a fundamental set of solutions for x′ = Ax.

Proof. We will prove the theorem in the case n = 3 to reduce the notation involved. The proof of the general case is exactly the same.

Suppose that the 3 × 3 constant matrix A has distinct eigenvalues λ1, λ2, λ3 with corresponding eigenvectors

v1 = [ v11 ],   v2 = [ v12 ],   v3 = [ v13 ].
     [ v21 ]         [ v22 ]         [ v23 ]
     [ v31 ]         [ v32 ]         [ v33 ]

To test the solutions x1 = e^{λ1 t} v1, x2 = e^{λ2 t} v2, x3 = e^{λ3 t} v3 for independence, we calculate their Wronskian W(t). By Property 2, Properties of Determinants, in Section 5.6,

       | e^{λ1 t} v11  e^{λ2 t} v12  e^{λ3 t} v13 |                    | v11  v12  v13 |
W(t) = | e^{λ1 t} v21  e^{λ2 t} v22  e^{λ3 t} v23 | = e^{(λ1+λ2+λ3)t} | v21  v22  v23 |
       | e^{λ1 t} v31  e^{λ2 t} v32  e^{λ3 t} v33 |                    | v31  v32  v33 |

In Section 5.7, Theorem 1, we showed that eigenvectors corresponding to distinct eigenvalues are linearly independent. Therefore, the determinant on the right-hand side of this equation is nonzero, and so W(t) ≠ 0 and the solutions are linearly independent. ∎

The case where A does not have distinct eigenvalues, that is, where A has at least one eigenvalue with multiplicity greater than 1, will be taken up in the next section.

Example 3. Find a fundamental set of solution vectors of

x′ = [ 1  5 ] x
     [ 3  3 ]

and give the general solution of the system.

SOLUTION. First we find the eigenvalues:

det(A − λI) = | 1 − λ    5   | = (λ − 6)(λ + 2).
              |   3    3 − λ |

The eigenvalues are λ1 = 6 and λ2 = −2.

Next, we find corresponding eigenvectors. For λ1 = 6 we have:

(A − 6I)x = [ −5   5 ] [ x1 ] = [ 0 ]   which implies x1 = x2, x2 arbitrary.
            [  3  −3 ] [ x2 ]   [ 0 ]

Setting x2 = 1, we get the eigenvector v1 = [ 1 ].
                                            [ 1 ]

Repeating the process for λ2 = −2, we get the eigenvector v2 = [  5 ].
                                                               [ −3 ]

Thus

x1(t) = e^{6t} [ 1 ]   and   x2(t) = e^{−2t} [  5 ]
               [ 1 ]                         [ −3 ]

are solution vectors of the system.

By Theorem 2, x1 and x2 are linearly independent and they constitute a fundamental set of solutions. The general solution of the system is

x(t) = C1 e^{6t} [ 1 ] + C2 e^{−2t} [  5 ].
                 [ 1 ]              [ −3 ]

If we let the independent variable t denote time, then we can view the solutions

x(t) = [ x1(t) ]
       [ x2(t) ]

as points (x1(t), x2(t)) moving with time in the x1, x2-plane. Graphs of solutions are shown in the following figure.

[Figure 1: solution curves of the system in the x1, x2-plane]

Note that the eigenvector directions v1 = (1, 1) and v2 = (5, −3) are clearly visible in the figure.

Example 4. Find a fundamental set of solution vectors of

x′ = [   3  −1  −1 ]
     [ −12   0   5 ] x
     [   4  −2  −1 ]

and find the solution that satisfies the initial condition x(0) = [ 1 ].
                                                                  [ 0 ]
                                                                  [ 1 ]

SOLUTION.

det(A − λI) = | 3 − λ   −1     −1   |
              |  −12    −λ      5   | = −λ³ + 2λ² + λ − 2.
              |   4     −2   −1 − λ |

Now

det(A − λI) = 0 implies λ³ − 2λ² − λ + 2 = (λ − 2)(λ − 1)(λ + 1) = 0.

The eigenvalues are λ1 = 2, λ2 = 1, λ3 = −1.

As you can check, corresponding eigenvectors are:

v1 = [  1 ],   v2 = [  3 ],   v3 = [ 1 ].
     [ −1 ]         [ −1 ]         [ 2 ]
     [  2 ]         [  7 ]         [ 2 ]

By Theorem 2, a fundamental set of solution vectors is:

x1(t) = e^{2t} [  1 ],   x2(t) = e^t [  3 ],   x3(t) = e^{−t} [ 1 ].
               [ −1 ]                [ −1 ]                   [ 2 ]
               [  2 ]                [  7 ]                   [ 2 ]

Thus, the general solution of the system is

x(t) = C1 e^{2t} [  1 ] + C2 e^t [  3 ] + C3 e^{−t} [ 1 ].
                 [ −1 ]          [ −1 ]             [ 2 ]
                 [  2 ]          [  7 ]             [ 2 ]

To find the solution vector satisfying the initial condition, solve

C1 x1(0) + C2 x2(0) + C3 x3(0) = [ 1 ]
                                 [ 0 ]
                                 [ 1 ]

which is:

C1 [  1 ] + C2 [  3 ] + C3 [ 1 ] = [ 1 ]
   [ −1 ]      [ −1 ]      [ 2 ]   [ 0 ]
   [  2 ]      [  7 ]      [ 2 ]   [ 1 ]

or

[  1   3  1 ] [ C1 ]   [ 1 ]
[ −1  −1  2 ] [ C2 ] = [ 0 ].
[  2   7  2 ] [ C3 ]   [ 1 ]

NOTE: The matrix of coefficients here is the fundamental matrix evaluated at t = 0.

Using the solution method of your choice (row reduction, inverse, Cramer’s rule), the solution is: C1 = 3, C2 = −1, C3 = 1, and

x(t) = 3e^{2t} [  1 ] − e^t [  3 ] + e^{−t} [ 1 ]
               [ −1 ]       [ −1 ]          [ 2 ]
               [  2 ]       [  7 ]          [ 2 ]

is the solution of the initial-value problem. ∎
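The linear system X(0)C = x(0) at the end of Example 4 can be handed to any linear-algebra routine; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Fundamental matrix of Example 4 evaluated at t = 0: columns are v1, v2, v3
X0 = np.array([[1.0, 3.0, 1.0],
               [-1.0, -1.0, 2.0],
               [2.0, 7.0, 2.0]])
x0 = np.array([1.0, 0.0, 1.0])  # initial condition x(0)

# Solve X(0) C = x(0) for the constants C1, C2, C3
C = np.linalg.solve(X0, x0)
print(C)  # approximately [3, -1, 1]
```

Because X(0) is nonsingular (the columns are linearly independent eigenvectors), the solve succeeds for any choice of initial condition.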

Exercises 6.3

Find the general solution of the system x′ = Ax where A is the given matrix. If an initial condition is given, also find the solution that satisfies the condition.

1. A = [ −2  4 ].
       [  1  1 ]

2. A = [ −3   2 ].
       [  1  −2 ]

3. A = [ 2  −1 ],   x(0) = [ 1 ].
       [ 0   3 ]           [ 3 ]

4. A = [ 1  2 ].
       [ 3  2 ]

5. A = [ 1  4 ].
       [ 2  3 ]

6. A = [ −1  1 ],   x(0) = [ −1 ].
       [  4  2 ]           [  1 ]

7. A = [  3   2  −2 ].   Hint: 1 is an eigenvalue.
       [ −3  −1   3 ]
       [  1   2   0 ]

8. A = [ 15  7  −7 ].   Hint: 2 is an eigenvalue.
       [ −1  1   1 ]
       [ 13  7  −5 ]

9. A = [  3  0  −1 ],   x(0) = [ −1 ].   Hint: 2 is an eigenvalue.
       [ −2  2   1 ]           [  2 ]
       [  8  0  −3 ]           [ −8 ]

10. A = [ −2   2   1 ].   Hint: 0 is an eigenvalue.
        [  0  −1   0 ]
        [  2  −2  −1 ]

11. A = [ 8  2  1 ].   Hint: 5 is an eigenvalue.
        [ 1  7  3 ]
        [ 1  1  6 ]

12. A = [  1  −3   1 ],   x(0) = [  1 ].   Hint: 2 is an eigenvalue.
        [ −1   1   1 ]           [ −2 ]
        [  3  −3  −1 ]           [  1 ]
13. Given the second order, homogeneous equation, with constant coefficients

    y′′ + ay′ + by = 0.

    (a) Transform the equation into a system of first order equations by setting x1 = y, x2 = y′. Then write your system in the vector-matrix form x′ = Ax.

    (b) Find the characteristic equation for the coefficient matrix you found in part (a).

    (c) Compare your result in (b) with the characteristic equation for y′′ + ay′ + by = 0.

14. Given the third order, homogeneous equation, with constant coefficients

    y′′′ + ay′′ + by′ + cy = 0.

    (a) Transform the equation into a system of first order equations by setting x1 = y, x2 = y′, x3 = y′′. Then write your system in the vector-matrix form x′ = Ax.

    (b) Find the characteristic equation for the coefficient matrix you found in part (a).

    (c) Compare your result in (b) with the characteristic equation for y′′′ + ay′′ + by′ + cy = 0.

6.4 Homogeneous Systems with Constant Coefficients, Part II

There are two difficulties that can arise in solving a linear homogeneous system with constant coefficients. We will treat these in this section.

1. A has complex eigenvalues.

If λ = a + bi is a complex eigenvalue of A with corresponding (complex) eigenvector u + iv, then λ̄ = a − bi (the complex conjugate of λ) is also an eigenvalue of A and u − iv is a corresponding eigenvector. The corresponding linearly independent complex solutions of x′ = Ax are:

w1(t) = e^{(a+bi)t}(u + iv) = e^{at}(cos bt + i sin bt)(u + iv)
      = e^{at}[(cos bt u − sin bt v) + i(cos bt v + sin bt u)]

w2(t) = e^{(a−bi)t}(u − iv) = e^{at}(cos bt − i sin bt)(u − iv)
      = e^{at}[(cos bt u − sin bt v) − i(cos bt v + sin bt u)]

Now

x1(t) = (1/2)[w1(t) + w2(t)] = e^{at}(cos bt u − sin bt v)

and

x2(t) = (1/2i)[w1(t) − w2(t)] = e^{at}(cos bt v + sin bt u)

are linearly independent solutions of the system, and they are real-valued vector functions. Note that x1 and x2 are simply the real and imaginary parts of w1 (or of w2).

(Review Section 3.3, where you were shown how to convert complex exponential solutions into real-valued solutions involving sine and cosine.)

Example 1. Find the general solution of

x′ = [ 2  −5 ] x.
     [ 1   0 ]

SOLUTION.

det(A − λI) = | 2 − λ  −5 | = λ² − 2λ + 5.
              |   1    −λ |

The eigenvalues are: λ1 = 1 + 2i, λ2 = 1 − 2i. Next, we find the eigenvectors; it is enough to find an eigenvector for λ1 = 1 + 2i. We have

[A − (1 + 2i)I] v = [ 1 − 2i     −5    ] [ x1 ].
                    [   1     −1 − 2i  ] [ x2 ]

The augmented matrix is

[ 1 − 2i     −5     0 ]
[   1     −1 − 2i   0 ]

which row reduces to

[ 1  −1 − 2i  0 ]
[ 0     0     0 ].

Thus, x1 = (1 + 2i)x2, x2 arbitrary. Setting x2 = 1 gives the eigenvector

v1 = [ 1 + 2i ] = [ 1 ] + i [ 2 ]
     [   1    ]   [ 1 ]     [ 0 ]

for λ1 = 1 + 2i, and it follows immediately that

v2 = [ 1 − 2i ] = [ 1 ] − i [ 2 ]
     [   1    ]   [ 1 ]     [ 0 ]

is an eigenvector for λ2 = 1 − 2i.

Now

e^{(1+2i)t} ( [ 1 ] + i [ 2 ] ) = e^t (cos 2t + i sin 2t) ( [ 1 ] + i [ 2 ] )
            ( [ 1 ]     [ 0 ] )                           ( [ 1 ]     [ 0 ] )

  = e^t ( cos 2t [ 1 ] − sin 2t [ 2 ] ) + i e^t ( cos 2t [ 2 ] + sin 2t [ 1 ] )
        (        [ 1 ]          [ 0 ] )         (        [ 0 ]          [ 1 ] )

and, by our discussion above, the real and imaginary parts of this vector form a fundamental set of solution vectors for the system:

x1(t) = e^t ( cos 2t [ 1 ] − sin 2t [ 2 ] ),   x2(t) = e^t ( cos 2t [ 2 ] + sin 2t [ 1 ] ).
            (        [ 1 ]          [ 0 ] )                (        [ 0 ]          [ 1 ] )

The general solution of the system is

x(t) = C1 e^t ( cos 2t [ 1 ] − sin 2t [ 2 ] ) + C2 e^t ( cos 2t [ 2 ] + sin 2t [ 1 ] ).
              (        [ 1 ]          [ 0 ] )          (        [ 0 ]          [ 1 ] )

As in Example 3 of the preceding section, we can give a geometric interpretation of the solutions by regarding the solutions

x(t) = [ x1(t) ]
       [ x2(t) ]

as points (x1(t), x2(t)) moving with time in the x1, x2-plane. Graphs of solutions are shown in the following figure.

[Figure 1: solution curves spiraling outward from the origin]

The sine and cosine terms generate the spiraling motion, and the positive exponential factor e^t causes the solutions to spiral “out,” that is, move away from the origin as t increases.
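The passage from a complex eigenpair to real solutions can be sanity-checked numerically on the matrix of Example 1 above; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Coefficient matrix from Example 1
A = np.array([[2.0, -5.0],
              [1.0, 0.0]])
vals, vecs = np.linalg.eig(A)
print(np.sort_complex(vals))  # the conjugate pair 1 - 2i, 1 + 2i

# Pick the eigenvalue with positive imaginary part and its eigenvector
k = int(np.argmax(vals.imag))
lam, v = vals[k], vecs[:, k]

# At a few sample times, check that the real part of w(t) = e^{lam t} v
# satisfies x' = A x, approximating the derivative by a centered difference.
h = 1e-6
for t in (0.0, 0.5, 1.0):
    w_plus = np.exp(lam * (t + h)) * v
    w_minus = np.exp(lam * (t - h)) * v
    deriv_real = ((w_plus - w_minus) / (2 * h)).real
    x1 = (np.exp(lam * t) * v).real
    assert np.allclose(deriv_real, A @ x1, atol=1e-4)
print("real part of e^{lam t} v solves x' = A x")
```

Since A is real, d/dt Re(w) = Re(w′) = Re(Aw) = A Re(w), which is exactly what the finite-difference check confirms; the imaginary part works the same way.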


Example 2. Find the general solution of

x′ = [ −3  −5 ] x.
     [  2  −1 ]

SOLUTION.

det(A − λI) = | −3 − λ    −5   | = λ² + 4λ + 13.
              |    2    −1 − λ |

The eigenvalues are: λ1 = −2 + 3i, λ2 = −2 − 3i. As you can check, the corresponding eigenvectors are:

v1 = [ −1 + 3i ] = [ −1 ] + i [ 3 ],   v2 = [ −1 − 3i ] = [ −1 ] − i [ 3 ].
     [    2    ]   [  2 ]     [ 0 ]         [    2    ]   [  2 ]     [ 0 ]

Now

e^{(−2+3i)t} ( [ −1 ] + i [ 3 ] ) = e^{−2t} (cos 3t + i sin 3t) ( [ −1 ] + i [ 3 ] )
             ( [  2 ]     [ 0 ] )                               ( [  2 ]     [ 0 ] )

  = e^{−2t} ( cos 3t [ −1 ] − sin 3t [ 3 ] ) + i e^{−2t} ( cos 3t [ 3 ] + sin 3t [ −1 ] ).
            (        [  2 ]          [ 0 ] )             (        [ 0 ]          [  2 ] )

The real and imaginary parts give a fundamental set of solution vectors for the system:

x1(t) = e^{−2t} ( cos 3t [ −1 ] − sin 3t [ 3 ] ),   x2(t) = e^{−2t} ( cos 3t [ 3 ] + sin 3t [ −1 ] ),
                (        [  2 ]          [ 0 ] )                    (        [ 0 ]          [  2 ] )

and the general solution of the system is

x(t) = C1 e^{−2t} ( cos 3t [ −1 ] − sin 3t [ 3 ] ) + C2 e^{−2t} ( cos 3t [ 3 ] + sin 3t [ −1 ] ).
                  (        [  2 ]          [ 0 ] )               (        [ 0 ]          [  2 ] )

As in Example 1, we give a geometric interpretation of the solutions by regarding the solutions

x(t) = [ x1(t) ]
       [ x2(t) ]

as points (x1(t), x2(t)) moving with time in the x1, x2-plane. Graphs of solutions are shown in the figure below.

[Figure 2: solution curves spiraling inward toward the origin in the x_1,x_2-plane]

As above, the sine and cosine terms generate the spiraling motion. Here, the negative
exponential factor e−2t means that the solutions spiral “in” toward the origin as t increases.


Example 3. Determine a fundamental set of solution vectors of


 
x' = \begin{pmatrix} 1 & -4 & -1\\ 3 & 2 & 3\\ 1 & 1 & 3 \end{pmatrix} x.

SOLUTION.

\det(A-\lambda I) = \begin{vmatrix} 1-\lambda & -4 & -1\\ 3 & 2-\lambda & 3\\ 1 & 1 & 3-\lambda \end{vmatrix} = -\lambda^3 + 6\lambda^2 - 21\lambda + 26 = -(\lambda-2)(\lambda^2-4\lambda+13).

The eigenvalues are: λ1 = 2, λ2 = 2 + 3i, λ3 = 2 − 3i. The corresponding eigenvectors


are:

v_1 = \begin{pmatrix} 1\\ 0\\ -1\end{pmatrix},\qquad v_2 = \begin{pmatrix} -5+3i\\ 3+3i\\ 2\end{pmatrix} = \begin{pmatrix} -5\\ 3\\ 2\end{pmatrix} + i\begin{pmatrix} 3\\ 3\\ 0\end{pmatrix},\qquad v_3 = \begin{pmatrix} -5-3i\\ 3-3i\\ 2\end{pmatrix} = \begin{pmatrix} -5\\ 3\\ 2\end{pmatrix} - i\begin{pmatrix} 3\\ 3\\ 0\end{pmatrix}.

Now

e^{(2+3i)t}\left[\begin{pmatrix}-5\\3\\2\end{pmatrix}+i\begin{pmatrix}3\\3\\0\end{pmatrix}\right] = e^{2t}(\cos 3t+i\sin 3t)\left[\begin{pmatrix}-5\\3\\2\end{pmatrix}+i\begin{pmatrix}3\\3\\0\end{pmatrix}\right]

= e^{2t}\left[\cos 3t\begin{pmatrix}-5\\3\\2\end{pmatrix}-\sin 3t\begin{pmatrix}3\\3\\0\end{pmatrix}\right] + i\,e^{2t}\left[\cos 3t\begin{pmatrix}3\\3\\0\end{pmatrix}+\sin 3t\begin{pmatrix}-5\\3\\2\end{pmatrix}\right].

A fundamental set of solution vectors for the system is:

x_1(t) = e^{2t}\begin{pmatrix}1\\0\\-1\end{pmatrix},\qquad x_2(t) = e^{2t}\left[\cos 3t\begin{pmatrix}-5\\3\\2\end{pmatrix}-\sin 3t\begin{pmatrix}3\\3\\0\end{pmatrix}\right],\qquad x_3(t) = e^{2t}\left[\cos 3t\begin{pmatrix}3\\3\\0\end{pmatrix}+\sin 3t\begin{pmatrix}-5\\3\\2\end{pmatrix}\right]. 

2. A has an eigenvalue of multiplicity greater than 1

We will treat the case where A has an eigenvalue of multiplicity 2. At the end of the
section we indicate the possibilities when A has an eigenvalue of multiplicity 3. You will
see that the complexity increases with the multiplicity.

Example 4. Determine a fundamental set of solution vectors of

x' = \begin{pmatrix} 1 & -3 & 3\\ 3 & -5 & 3\\ 6 & -6 & 4 \end{pmatrix} x.

SOLUTION.

\det(A-\lambda I) = \begin{vmatrix} 1-\lambda & -3 & 3\\ 3 & -5-\lambda & 3\\ 6 & -6 & 4-\lambda \end{vmatrix} = -\lambda^3 + 12\lambda + 16 = -(\lambda-4)(\lambda+2)^2.

The eigenvalues are: λ1 = 4, λ2 = λ3 = −2.


 
As you can check, an eigenvector corresponding to \lambda_1 = 4 is v_1 = \begin{pmatrix}1\\1\\2\end{pmatrix}.

We'll carry out the details involved in finding an eigenvector corresponding to the "double" eigenvalue -2.

[A-(-2)I]v = \begin{pmatrix} 3 & -3 & 3\\ 3 & -3 & 3\\ 6 & -6 & 6\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}.

The augmented matrix for this system of equations is

\begin{pmatrix} 3 & -3 & 3 & 0\\ 3 & -3 & 3 & 0\\ 6 & -6 & 6 & 0\end{pmatrix} \quad\text{which row reduces to}\quad \begin{pmatrix} 1 & -1 & 1 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}.

The solutions of this system are: x_1 = x_2 - x_3, with x_2, x_3 arbitrary. We can assign values to
x_2 and x_3 independently and obtain two linearly independent eigenvectors. For example,
setting x_2 = 1, x_3 = 0, we get the eigenvector v_2 = \begin{pmatrix}1\\1\\0\end{pmatrix}. Reversing these values, we
set x_2 = 0, x_3 = 1 to get the eigenvector v_3 = \begin{pmatrix}-1\\0\\1\end{pmatrix}. Clearly v_2 and v_3 are linearly
independent. You should understand that there is nothing magic about our two choices for
x_2, x_3; any choice which produces two independent vectors will do.

The important thing to note here is that this eigenvalue of multiplicity 2 produced two
independent eigenvectors.

It now follows that a fundamental set of solutions for the differential system

x' = \begin{pmatrix} 1 & -3 & 3\\ 3 & -5 & 3\\ 6 & -6 & 4\end{pmatrix} x

is

x_1(t) = e^{4t}\begin{pmatrix}1\\1\\2\end{pmatrix},\qquad x_2(t) = e^{-2t}\begin{pmatrix}1\\1\\0\end{pmatrix},\qquad x_3(t) = e^{-2t}\begin{pmatrix}-1\\0\\1\end{pmatrix}. 
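The key fact in Example 4, that the double eigenvalue -2 carries a two-dimensional eigenspace, can be checked by a rank computation; the sketch below (assuming NumPy) also verifies all three eigenvectors directly.

```python
import numpy as np

# Coefficient matrix from Example 4
A = np.array([[1.0, -3.0, 3.0],
              [3.0, -5.0, 3.0],
              [6.0, -6.0, 4.0]])

# A + 2I has rank 1, so its null space (the eigenspace for -2) is 2-dimensional
assert np.linalg.matrix_rank(A + 2*np.eye(3)) == 1

# the two eigenvectors found for -2, and the eigenvector for lambda = 4
for lam, v in [(-2.0, np.array([1.0, 1.0, 0.0])),
               (-2.0, np.array([-1.0, 0.0, 1.0])),
               ( 4.0, np.array([1.0, 1.0, 2.0]))]:
    assert np.allclose(A @ v, lam * v)
```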

Remark. Example 4 illustrates an important fact: Linear differential systems are more
general than linear differential equations. Every linear differential equation can be re-written
equivalently as a linear differential system, but the converse is not true. If the system in
Example 4 were equivalent to a third order linear differential equation

y''' + ay'' + by' + cy = 0,

then the solutions corresponding to the double root \lambda = -2 would be y_1(t) = e^{-2t} and
y_2(t) = te^{-2t}. The solution of the system which corresponds to y_2 = te^{-2t} is

x(t) = \begin{pmatrix} y\\ y'\\ y''\end{pmatrix} = \begin{pmatrix} te^{-2t}\\ e^{-2t}-2te^{-2t}\\ -4e^{-2t}+4te^{-2t}\end{pmatrix} = e^{-2t}\begin{pmatrix}0\\1\\-4\end{pmatrix} + te^{-2t}\begin{pmatrix}1\\-2\\4\end{pmatrix}.

But this solution vector is clearly not a linear combination of the solutions x_1, x_2, x_3 found
in the Example. Therefore, the system in Example 4 is not equivalent to any third order
linear equation with constant coefficients. 
Example 5. Find a fundamental set of solution vectors for x' = \begin{pmatrix} 1 & -1\\ 1 & 3\end{pmatrix} x.

SOLUTION.

\det(A-\lambda I) = \begin{vmatrix} 1-\lambda & -1\\ 1 & 3-\lambda\end{vmatrix} = \lambda^2 - 4\lambda + 4 = (\lambda-2)^2.

Eigenvalues: \lambda_1 = \lambda_2 = 2.

Eigenvectors:

(A-2I)v = \begin{pmatrix} -1 & -1\\ 1 & 1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}.

The augmented matrix is

\begin{pmatrix} -1 & -1 & 0\\ 1 & 1 & 0\end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0\\ 0 & 0 & 0\end{pmatrix}.

The solutions are: x_1 = -x_2, x_2 arbitrary; there is only one independent eigenvector. Setting x_2 = -1,
we get v = \begin{pmatrix}1\\-1\end{pmatrix}.

The vector x_1(t) = e^{2t}\begin{pmatrix}1\\-1\end{pmatrix} is a solution of the system, but we have a problem:
the double eigenvalue produced only one solution of the system. To have a fundamental set
of solutions we need a second solution which is independent of x_1. 

Here is another example which illustrates this problem.


 
Example 6. Let A = \begin{pmatrix} 3 & 1 & -1\\ 2 & 2 & -1\\ 2 & 2 & 0\end{pmatrix}. Find linearly independent solutions of

x' = Ax.

SOLUTION.

\det(A-\lambda I) = \begin{vmatrix} 3-\lambda & 1 & -1\\ 2 & 2-\lambda & -1\\ 2 & 2 & -\lambda\end{vmatrix} = -\lambda^3 + 5\lambda^2 - 8\lambda + 4 = -(\lambda-1)(\lambda-2)^2.

The eigenvalues are: λ1 = 1, λ2 = λ3 = 2.




An eigenvector corresponding to \lambda_1 = 1 is v_1 = \begin{pmatrix}1\\0\\2\end{pmatrix} (verify this).

We show the details involved in finding an eigenvector corresponding to the "double" eigenvalue 2.

(A-2I)v = \begin{pmatrix} 1 & 1 & -1\\ 2 & 0 & -1\\ 2 & 2 & -2\end{pmatrix}\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}.
The augmented matrix for this system of equations is

\begin{pmatrix} 1 & 1 & -1 & 0\\ 2 & 0 & -1 & 0\\ 2 & 2 & -2 & 0\end{pmatrix} \quad\text{which row reduces to}\quad \begin{pmatrix} 1 & 1 & -1 & 0\\ 0 & -2 & 1 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}.

The solutions of this system are: v_3 = 2v_2, v_1 = -v_2 + v_3 = v_2, with v_2 arbitrary. There is only
one independent eigenvector corresponding to the eigenvalue 2. Setting v_2 = 1, we get v_2 = \begin{pmatrix}1\\1\\2\end{pmatrix}.

Thus, two independent solutions of the given linear differential system are

x_1 = e^{t}\begin{pmatrix}1\\0\\2\end{pmatrix},\qquad x_2 = e^{2t}\begin{pmatrix}1\\1\\2\end{pmatrix}.

We need another solution corresponding to the eigenvalue 2, one which is independent of x_2. 

Examples 4-6 illustrate the problem presented by having an eigenvalue of multiplicity 2:
there may or may not be enough eigenvectors to generate a fundamental set of solutions.

In the next example we use a third order equation with a "double" root to see how to
construct a solution which is independent of the eigenvalue/eigenvector solution in the case
where the eigenvalue has only one eigenvector.

Example 7. Consider the third order differential equation

y''' + y'' - 8y' - 12y = 0.

The characteristic equation is

r^3 + r^2 - 8r - 12 = (r-3)(r+2)^2 = 0.

The roots are: r_1 = 3, r_2 = r_3 = -2, and a fundamental set of solutions is \{y_1 = e^{3t},\; y_2 = e^{-2t},\; y_3 = te^{-2t}\}.

The linear differential system that is equivalent to the given equation is


 
x' = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 12 & 8 & -1\end{pmatrix} x,

\det(A-\lambda I) = \begin{vmatrix} -\lambda & 1 & 0\\ 0 & -\lambda & 1\\ 12 & 8 & -1-\lambda\end{vmatrix} = -\lambda^3 - \lambda^2 + 8\lambda + 12 = -(\lambda-3)(\lambda+2)^2,
and, as we already know, the eigenvalues are: λ1 = 3, λ2 = λ3 = −2.

The correspondence between the solutions y_1 and y_2 of the equation and solution vectors
of the system is:

y_1 = e^{3t} \;\to\; x_1 = e^{3t}\begin{pmatrix}1\\3\\9\end{pmatrix},\qquad y_2 = e^{-2t} \;\to\; x_2 = e^{-2t}\begin{pmatrix}1\\-2\\4\end{pmatrix}.

It is the third solution y_3 = te^{-2t} that we are interested in. This solution of the equation
produces the solution vector

x_3(t) = \begin{pmatrix} y_3(t)\\ y_3'(t)\\ y_3''(t)\end{pmatrix} = \begin{pmatrix} te^{-2t}\\ e^{-2t}-2te^{-2t}\\ -4e^{-2t}+4te^{-2t}\end{pmatrix} = e^{-2t}\begin{pmatrix}0\\1\\-4\end{pmatrix} + te^{-2t}\begin{pmatrix}1\\-2\\4\end{pmatrix}

of the corresponding system. You can check that x_3 is independent of x_1 and x_2.
Therefore, the solution vectors x_1, x_2, x_3 are a fundamental set of solutions of the system.

The question is: What is the significance of the vector w = \begin{pmatrix}0\\1\\-4\end{pmatrix}? How is it related
to the eigenvalue -2 which generated it, and to the corresponding eigenvector?

We look at [A-(-2)I]w = [A+2I]w:

[A+2I]w = \begin{pmatrix} 2 & 1 & 0\\ 0 & 2 & 1\\ 12 & 8 & 1\end{pmatrix}\begin{pmatrix}0\\1\\-4\end{pmatrix} = \begin{pmatrix}1\\-2\\4\end{pmatrix} = v_2;

A-(-2)I "maps" w onto the eigenvector v_2. The corresponding solution of the system
has the form

x_3(t) = e^{-2t}w + te^{-2t}v_2

where v2 is the eigenvector corresponding to −2 and w satisfies

[A − (−2)I]w = v2 . 
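The relations just derived for Example 7 can be verified numerically; the following sketch (assuming NumPy) checks both the generalized-eigenvector equation and the fact that v_2 itself is killed by A + 2I.

```python
import numpy as np

# Companion matrix of y''' + y'' - 8y' - 12y = 0 (Example 7)
A = np.array([[ 0.0, 1.0,  0.0],
              [ 0.0, 0.0,  1.0],
              [12.0, 8.0, -1.0]])

v2 = np.array([1.0, -2.0, 4.0])   # eigenvector for lambda = -2
w  = np.array([0.0,  1.0, -4.0])  # generalized eigenvector

M = A + 2*np.eye(3)
assert np.allclose(M @ w, v2)     # (A + 2I)w = v2
assert np.allclose(M @ v2, 0.0)   # (A + 2I)v2 = 0, i.e. v2 is an eigenvector
```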

An Eigenvalue of Multiplicity 2: The General Result

Given the linear differential system x' = Ax. Suppose that A has an eigenvalue λ of
multiplicity 2. Then exactly one of the following holds:

1. λ has two linearly independent eigenvectors, v_1 and v_2. Corresponding linearly
independent solution vectors of the differential system are x_1(t) = e^{\lambda t}v_1 and x_2(t) = e^{\lambda t}v_2.

2. λ has only one eigenvector v. Then a linearly independent pair of solution vectors
corresponding to λ are:

x_1(t) = e^{\lambda t}v \quad\text{and}\quad x_2(t) = e^{\lambda t}w + te^{\lambda t}v

where w is a vector that satisfies (A - \lambda I)w = v. The vector w is called a
generalized eigenvector corresponding to the eigenvalue λ.
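In practice the generalized eigenvector can be computed by solving the singular system (A - λI)w = v in the least-squares sense. The sketch below (assuming NumPy; the data are the matrix and eigenvector of Example 5) uses `numpy.linalg.lstsq`, which returns the minimum-norm solution for a rank-deficient system; note that this w will generally differ from any particular w chosen by hand, but any solution of (A - λI)w = v works.

```python
import numpy as np

def generalized_eigenvector(A, lam, v):
    """Solve (A - lam*I) w = v; lstsq returns the minimum-norm solution."""
    M = A - lam * np.eye(A.shape[0])
    w, *_ = np.linalg.lstsq(M, v, rcond=None)
    return w

# Matrix and eigenvector from Example 5 (double eigenvalue 2)
A = np.array([[1.0, -1.0],
              [1.0,  3.0]])
v = np.array([1.0, -1.0])

w = generalized_eigenvector(A, 2.0, v)
assert np.allclose((A - 2*np.eye(2)) @ w, v)   # w is a generalized eigenvector
```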

We can now finish Examples 5 and 6.
Example 5. (continued) Find a fundamental set of solution vectors for x' = \begin{pmatrix} 1 & -1\\ 1 & 3\end{pmatrix} x
and give the general solution.

SOLUTION. We found one solution

x_1 = e^{2t}\begin{pmatrix}1\\-1\end{pmatrix}.

By our work above, a second solution, independent of x_1, is x_2(t) = e^{2t}w + te^{2t}v where
w satisfies (A-2I)w = v:

(A-2I)w = \begin{pmatrix} -1 & -1\\ 1 & 1\end{pmatrix}\begin{pmatrix}w_1\\w_2\end{pmatrix} = \begin{pmatrix}1\\-1\end{pmatrix}.

The augmented matrix for this system is

\begin{pmatrix} -1 & -1 & 1\\ 1 & 1 & -1\end{pmatrix} \quad\text{which row reduces to}\quad \begin{pmatrix} 1 & 1 & -1\\ 0 & 0 & 0\end{pmatrix}.

The solutions of this system are w_1 = -1 - w_2, w_2 arbitrary. If we choose w_2 = 0 (any
choice for w_2 will do), we get w_1 = -1 and w = \begin{pmatrix}-1\\0\end{pmatrix}. Thus

x_2(t) = e^{2t}\begin{pmatrix}-1\\0\end{pmatrix} + te^{2t}\begin{pmatrix}1\\-1\end{pmatrix}

is a solution of the system independent of x_1. The solutions

x_1(t) = e^{2t}\begin{pmatrix}1\\-1\end{pmatrix},\qquad x_2(t) = e^{2t}\begin{pmatrix}-1\\0\end{pmatrix} + te^{2t}\begin{pmatrix}1\\-1\end{pmatrix}

are a fundamental set of solutions for the given system, and the general solution is

x(t) = C_1 e^{2t}\begin{pmatrix}1\\-1\end{pmatrix} + C_2\left[e^{2t}\begin{pmatrix}-1\\0\end{pmatrix} + te^{2t}\begin{pmatrix}1\\-1\end{pmatrix}\right]. 

Graphs of solutions are shown in the figure below.

[Figure 3]

Note that the eigenvector v_1 = \begin{pmatrix}1\\-1\end{pmatrix} is clearly visible in the figure. 


Example 6. (continued) Let A = \begin{pmatrix} 3 & 1 & -1\\ 2 & 2 & -1\\ 2 & 2 & 0\end{pmatrix}. Find a fundamental set of solutions of x' = Ax.

SOLUTION. Previously we found the solution vectors

x_1 = e^{t}\begin{pmatrix}1\\0\\2\end{pmatrix},\qquad x_2 = e^{2t}\begin{pmatrix}1\\1\\2\end{pmatrix}.

We now know that a third solution x_3 has the form

x_3(t) = e^{2t}w + te^{2t}v_2

where w satisfies (A-2I)w = v_2. That is:

\begin{pmatrix} 1 & 1 & -1\\ 2 & 0 & -1\\ 2 & 2 & -2\end{pmatrix}\begin{pmatrix}w_1\\w_2\\w_3\end{pmatrix} = \begin{pmatrix}1\\1\\2\end{pmatrix}.

The augmented matrix is

\begin{pmatrix} 1 & 1 & -1 & 1\\ 2 & 0 & -1 & 1\\ 2 & 2 & -2 & 2\end{pmatrix} \quad\text{which row reduces to}\quad \begin{pmatrix} 1 & 1 & -1 & 1\\ 0 & -2 & 1 & -1\\ 0 & 0 & 0 & 0\end{pmatrix}.

The solutions of this system are

w_3 = -1 + 2w_2,\qquad w_1 = 1 - w_2 + w_3 = 1 - w_2 + (-1 + 2w_2) = w_2,\qquad w_2 arbitrary.

If we choose w_2 = 0 (any choice for w_2 will do), we get w_1 = 0, w_2 = 0, w_3 = -1 and
w = \begin{pmatrix}0\\0\\-1\end{pmatrix}. Thus

x_3 = e^{2t}\begin{pmatrix}0\\0\\-1\end{pmatrix} + te^{2t}\begin{pmatrix}1\\1\\2\end{pmatrix}

is a solution of the system independent of x_2 (and of x_1). The solutions

x_1 = e^{t}\begin{pmatrix}1\\0\\2\end{pmatrix},\qquad x_2 = e^{2t}\begin{pmatrix}1\\1\\2\end{pmatrix},\qquad x_3 = e^{2t}\begin{pmatrix}0\\0\\-1\end{pmatrix} + te^{2t}\begin{pmatrix}1\\1\\2\end{pmatrix}

are a fundamental set of solutions of the system. 

Eigenvalues of Multiplicity 3.

Given the differential system x' = Ax. Suppose that λ is an eigenvalue of A of
multiplicity 3. Then exactly one of the following holds:

1. λ has three linearly independent eigenvectors v_1, v_2, v_3. Then three linearly
independent solution vectors of the system corresponding to λ are:

x_1(t) = e^{\lambda t}v_1,\qquad x_2(t) = e^{\lambda t}v_2,\qquad x_3(t) = e^{\lambda t}v_3.

2. λ has two linearly independent eigenvectors v_1, v_2. Then two linearly independent
solutions of the system corresponding to λ are:

x_1(t) = e^{\lambda t}v_1,\qquad x_2(t) = e^{\lambda t}v_2.

A third solution, independent of x_1 and x_2, has the form

x_3(t) = e^{\lambda t}w + te^{\lambda t}v

where v is an eigenvector corresponding to λ and (A - \lambda I)w = v.

3. λ has only one (independent) eigenvector v. Then three linearly independent solutions
of the system will have the form:

x_1 = e^{\lambda t}v,\qquad x_2 = e^{\lambda t}w + te^{\lambda t}v,\qquad x_3(t) = e^{\lambda t}z + te^{\lambda t}w + \tfrac{t^2}{2}e^{\lambda t}v

where (A - \lambda I)w = v and (A - \lambda I)z = w.

Exercises 6.4

Find the general solution of the system x' = Ax where A is the given matrix. If an
initial condition is given, also find the solution that satisfies the condition.

1. \begin{pmatrix} 2 & 4\\ -2 & -2\end{pmatrix}.

2. \begin{pmatrix} -1 & 2\\ -1 & -3\end{pmatrix},\quad x(0) = \begin{pmatrix}1\\3\end{pmatrix}.

3. \begin{pmatrix} -1 & 1\\ -4 & 3\end{pmatrix}.

4. \begin{pmatrix} 5 & 2\\ -2 & 1\end{pmatrix}.

5. \begin{pmatrix} 3 & 2\\ -8 & -5\end{pmatrix},\quad x(0) = \begin{pmatrix}3\\-2\end{pmatrix}.

6. \begin{pmatrix} -1 & 1\\ -4 & -5\end{pmatrix}.

7. \begin{pmatrix} 3 & -4 & 4\\ 4 & -5 & 4\\ 4 & -4 & 3\end{pmatrix},\quad x(0) = \begin{pmatrix}2\\1\\-1\end{pmatrix}. Hint: 3 is an eigenvalue.

8. \begin{pmatrix} -3 & 0 & -3\\ 1 & -2 & 3\\ 1 & 0 & 1\end{pmatrix}. Hint: -2 is an eigenvalue.

9. \begin{pmatrix} 0 & 4 & 0\\ -1 & 0 & 0\\ 1 & 4 & -1\end{pmatrix}. Hint: -1 is an eigenvalue.

10. \begin{pmatrix} 5 & -5 & -5\\ -1 & 4 & 2\\ 3 & -5 & -3\end{pmatrix}. Hint: 2 is an eigenvalue.

11. \begin{pmatrix} 1 & 1 & 2\\ 0 & 1 & 0\\ 0 & 1 & 3\end{pmatrix},\quad x(0) = \begin{pmatrix}-1\\3\\2\end{pmatrix}. Hint: 3 is an eigenvalue.

12. \begin{pmatrix} -3 & 1 & -1\\ -7 & 5 & -1\\ -6 & 6 & -2\end{pmatrix},\quad x(0) = \begin{pmatrix}1\\0\\-1\end{pmatrix}. Hint: 4 is an eigenvalue.

13. \begin{pmatrix} 0 & 1 & 1\\ 1 & 1 & -1\\ -2 & 1 & 3\end{pmatrix}. Hint: 2 is an eigenvalue.

14. \begin{pmatrix} 0 & 0 & -2\\ 1 & 2 & 1\\ 1 & 0 & 3\end{pmatrix}. Hint: 2 is an eigenvalue.

15. \begin{pmatrix} 2 & -1 & -1\\ 2 & 1 & -1\\ 0 & -1 & 1\end{pmatrix},\quad x(0) = \begin{pmatrix}1\\-2\\0\end{pmatrix}. Hint: 2 is an eigenvalue.

16. \begin{pmatrix} -2 & 1 & -1\\ 3 & -3 & 4\\ 3 & -1 & 2\end{pmatrix}. Hint: 1 is an eigenvalue.

17. \begin{pmatrix} 2 & 2 & -6\\ 2 & -1 & -3\\ -2 & -1 & 1\end{pmatrix}. Hint: 6 is an eigenvalue.

18. \begin{pmatrix} 8 & -6 & 1\\ 10 & -9 & 2\\ 10 & -7 & 0\end{pmatrix}. Hint: 3 is an eigenvalue.

6.5 Nonhomogeneous Systems

The treatment in this section parallels exactly the treatment of linear nonhomogeneous
equations in Sections 3.4 and 3.7.

Recall from Section 6.1 that a linear nonhomogeneous differential system is a system of
the form

x_1' = a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n + b_1(t)
x_2' = a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n + b_2(t)
  \vdots                                                      (N)
x_n' = a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n + b_n(t)

where a_{11}(t), a_{12}(t), \ldots, a_{1n}(t), a_{21}(t), \ldots, a_{nn}(t), b_1(t), b_2(t), \ldots, b_n(t) are continuous
functions on some interval I and the functions b_i(t) are not all identically zero on I; that
is, there is at least one point a \in I and at least one function b_i(t) such that b_i(a) \neq 0.

Let A(t) be the n \times n matrix

A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t)\\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t)\\ \vdots & \vdots & & \vdots\\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t)\end{pmatrix}

and let x and b(t) be the vectors

x = \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{pmatrix},\qquad b(t) = \begin{pmatrix} b_1(t)\\ b_2(t)\\ \vdots\\ b_n(t)\end{pmatrix}.

Then (N) can be written in the vector-matrix form

x' = A(t)x + b(t). (N)

The corresponding linear homogeneous system

x' = A(t)x (H)

is called the reduced system of (N).

THEOREM 1. If z_1(t) and z_2(t) are solutions of (N), then

x(t) = z_1(t) - z_2(t)

is a solution of (H). (Cf. Theorem 1, Section 3.4.)

Proof: Since z_1 and z_2 are solutions of (N),

z_1'(t) = A(t)z_1(t) + b(t) \quad\text{and}\quad z_2'(t) = A(t)z_2(t) + b(t).

Let x(t) = z_1(t) - z_2(t). Then

x'(t) = z_1'(t) - z_2'(t) = [A(t)z_1(t) + b(t)] - [A(t)z_2(t) + b(t)] = A(t)[z_1(t) - z_2(t)] = A(t)x(t).

Thus, x(t) = z_1(t) - z_2(t) is a solution of (H). 

Our next theorem gives the “structure” of the set of solutions of (N).

THEOREM 2. Let x_1(t), x_2(t), \ldots, x_n(t) be a fundamental set of solutions of the reduced
system (H) and let z = z(t) be a particular solution of (N). If u = u(t) is any solution
of (N), then there exist constants c_1, c_2, \ldots, c_n such that

u(t) = c_1x_1(t) + c_2x_2(t) + \cdots + c_nx_n(t) + z(t).

(Cf. Theorem 2, Section 3.4.)

Proof: Let u = u(t) be any solution of (N). By Theorem 1, u(t) − z(t) is a solution of
the reduced system (H). Since x1 (t), x2 (t), . . . , xn (t) are n linearly independent solutions
of (H), there exist constants c1 , c2 , . . . , cn such that

u(t) − z(t) = c1 x1 (t) + c2 x2 (t) + · · · + cn xn (t).

Therefore
u(t) = c1 x1 (t) + c2 x2 (t) + · · · + cn xn (t) + z(t). 

According to Theorem 2, if x1 (t), x2 (t), . . . , xn(t) are linearly independent solutions


of the reduced system (H) and z = z(t) is a particular solution of (N), then

x(t) = C1 x1 (t) + C2 x2 (t) + · · · + Cn xn (t) + z(t) (1)

represents the set of all solutions of (N). That is, (1) is the general solution of (N). Another
way to look at (1) is: The general solution of (N) consists of the general solution of the
reduced equation (H) plus a particular solution of (N):

\underbrace{x}_{\text{general solution of (N)}} = \underbrace{C_1x_1(t) + C_2x_2(t) + \cdots + C_nx_n(t)}_{\text{general solution of (H)}} + \underbrace{z(t)}_{\text{particular solution of (N)}}.

Variation of Parameters

Let x1 (t), x2 (t), . . . , xn(t) be a fundamental set of solutions of (H) and let X(t) be the cor-
responding fundamental matrix (X is the n×n matrix whose columns are x1 , x2 , . . . , xn ).
Then, as we saw in Section 6.3, the general solution of (H) can be written
 
x(t) = X(t)C \quad\text{where}\quad C = \begin{pmatrix} C_1\\ C_2\\ \vdots\\ C_n\end{pmatrix}.

In Exercises 6.3, Problem 19, you were asked to show that X satisfies the matrix differential
system

X' = A(t)X.

That is, X'(t) = A(t)X(t).

We replace the constant vector C by a vector function u(t) which is to be determined
so that

z(t) = X(t)u(t)

is a solution of (N). Differentiating z, we get

z'(t) = [X(t)u(t)]' = X(t)u'(t) + X'(t)u(t) = X(t)u'(t) + A(t)X(t)u(t).

Since z is to satisfy (N), we have

z'(t) = A(t)z(t) + b(t) = A(t)X(t)u(t) + b(t).

Therefore

X(t)u'(t) + A(t)X(t)u(t) = A(t)X(t)u(t) + b(t),

from which it follows that

X(t)u'(t) = b(t).

Since X is a fundamental matrix, it is nonsingular, and so we can solve for u':

u'(t) = X^{-1}(t)b(t) \quad\text{which implies}\quad u(t) = \int X^{-1}(t)b(t)\,dt.

Finally,

z(t) = X(t)\int X^{-1}(t)b(t)\,dt \tag{2}

is a solution of (N).

By Theorem 2, the general solution of (N) is given by

x(t) = X(t)C + X(t)\int X^{-1}(t)b(t)\,dt. \tag{3}

Compare this result with the general solution of the first order linear differential equation given
by equation (2) in Section 2.1.
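Formula (2) can be exercised numerically. The sketch below (assuming NumPy; the matrix, forcing term, and fundamental matrix are taken from Example 1 below) approximates the integral with the trapezoid rule and then checks the defining property z' = Az + b by a central difference.

```python
import numpy as np

# Data from Example 1 below: x' = Ax + b, fundamental matrix X(t)
A = np.array([[2.0, 2.0],
              [2.0, -1.0]])

def b(t):
    return np.array([np.exp(t), 2.0 * np.exp(t)])

def X(t):
    return np.array([[np.exp(-2*t), 2.0 * np.exp(3*t)],
                     [-2.0 * np.exp(-2*t), np.exp(3*t)]])

def particular(t, n=4000):
    """z(t) = X(t) * integral_0^t X(s)^{-1} b(s) ds, by the trapezoid rule."""
    s = np.linspace(0.0, t, n)
    vals = np.array([np.linalg.solve(X(si), b(si)) for si in s])
    integral = ((vals[:-1] + vals[1:]) * 0.5 * np.diff(s)[:, None]).sum(axis=0)
    return X(t) @ integral

# z should satisfy z' = A z + b; check with a central difference at t = 0.5
t, h = 0.5, 1e-4
deriv = (particular(t + h) - particular(t - h)) / (2.0 * h)
assert np.allclose(deriv, A @ particular(t) + b(t), atol=1e-2)
```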

Example 1. Find the general solution of the nonhomogeneous linear differential system
x' = \begin{pmatrix} 2 & 2\\ 2 & -1\end{pmatrix} x + \begin{pmatrix} e^{t}\\ 2e^{t}\end{pmatrix}.

SOLUTION. First we find a fundamental set of solutions of the reduced system

x' = \begin{pmatrix} 2 & 2\\ 2 & -1\end{pmatrix} x

by calculating the eigenvalues and eigenvectors of \begin{pmatrix} 2 & 2\\ 2 & -1\end{pmatrix}.

\det(A-\lambda I) = \begin{vmatrix} 2-\lambda & 2\\ 2 & -1-\lambda\end{vmatrix} = \lambda^2 - \lambda - 6 = (\lambda+2)(\lambda-3).

Thus, the eigenvalues of A are λ1 = −2 and λ2 = 3.

As you can check, corresponding eigenvectors are

v_1 = \begin{pmatrix}1\\-2\end{pmatrix} \quad\text{and}\quad v_2 = \begin{pmatrix}2\\1\end{pmatrix},

and

x_1 = e^{-2t}\begin{pmatrix}1\\-2\end{pmatrix} = \begin{pmatrix}e^{-2t}\\-2e^{-2t}\end{pmatrix} \quad\text{and}\quad x_2 = e^{3t}\begin{pmatrix}2\\1\end{pmatrix} = \begin{pmatrix}2e^{3t}\\e^{3t}\end{pmatrix}

is a fundamental set of solutions. The corresponding fundamental matrix is

X(t) = \begin{pmatrix} e^{-2t} & 2e^{3t}\\ -2e^{-2t} & e^{3t}\end{pmatrix}.

The inverse of X is given by

X^{-1} = \begin{pmatrix} \frac{1}{5}e^{2t} & -\frac{2}{5}e^{2t}\\ \frac{2}{5}e^{-3t} & \frac{1}{5}e^{-3t}\end{pmatrix} = \frac{1}{5}\begin{pmatrix} e^{2t} & -2e^{2t}\\ 2e^{-3t} & e^{-3t}\end{pmatrix}.

We now use equation (2) to calculate a particular solution z of the given nonhomogeneous
system:

z = \begin{pmatrix} e^{-2t} & 2e^{3t}\\ -2e^{-2t} & e^{3t}\end{pmatrix}\int \frac{1}{5}\begin{pmatrix} e^{2t} & -2e^{2t}\\ 2e^{-3t} & e^{-3t}\end{pmatrix}\begin{pmatrix} e^{t}\\ 2e^{t}\end{pmatrix} dt

= \frac{1}{5}\begin{pmatrix} e^{-2t} & 2e^{3t}\\ -2e^{-2t} & e^{3t}\end{pmatrix}\int \begin{pmatrix} -3e^{3t}\\ 4e^{-2t}\end{pmatrix} dt

= \frac{1}{5}\begin{pmatrix} e^{-2t} & 2e^{3t}\\ -2e^{-2t} & e^{3t}\end{pmatrix}\begin{pmatrix} -e^{3t}\\ -2e^{-2t}\end{pmatrix} = \frac{1}{5}\begin{pmatrix} -5e^{t}\\ 0\end{pmatrix} = \begin{pmatrix} -e^{t}\\ 0\end{pmatrix}.
Thus, z = \begin{pmatrix} -e^{t}\\ 0\end{pmatrix} is a particular solution of the given system, and the general solution
of the system is:

x(t) = C_1 e^{-2t}\begin{pmatrix}1\\-2\end{pmatrix} + C_2 e^{3t}\begin{pmatrix}2\\1\end{pmatrix} + \begin{pmatrix}-e^{t}\\0\end{pmatrix}. 
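The particular solution found above can be verified directly: its derivative is again (-e^t, 0), so it must equal Az + b. A minimal sketch (assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [2.0, -1.0]])

for t in (0.0, 0.5, 1.0):
    z  = np.array([-np.exp(t), 0.0])
    dz = np.array([-np.exp(t), 0.0])        # derivative of (-e^t, 0)
    b  = np.array([np.exp(t), 2.0 * np.exp(t)])
    assert np.allclose(dz, A @ z + b)       # z' = A z + b
```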

Example 2. Find the general solution of the nonhomogeneous linear differential system
x' = \begin{pmatrix} 0 & 1\\ -2t^{-2} & 2t^{-1}\end{pmatrix} x + \begin{pmatrix} 2\\ 3t^{-1}\end{pmatrix}.

SOLUTION. First, we consider the reduced system

x' = \begin{pmatrix} 0 & 1\\ -2t^{-2} & 2t^{-1}\end{pmatrix} x.

The form of the coefficient matrix indicates that this system is equivalent to the second
order equation

y'' - \frac{2}{t}\,y' + \frac{2}{t^2}\,y = 0 \quad\text{(see Section 6.1)}

which has solutions of the form y = t^r. As you can verify, y_1 = t and y_2 = t^2 are a funda-
mental set of solutions of the differential equation. These solutions convert to corresponding
solutions of the reduced system

y_1 = t \;\longrightarrow\; x_1 = \begin{pmatrix}t\\1\end{pmatrix},\qquad y_2 = t^2 \;\longrightarrow\; x_2 = \begin{pmatrix}t^2\\2t\end{pmatrix},

from which it follows that

X(t) = \begin{pmatrix} t & t^2\\ 1 & 2t\end{pmatrix}
is a fundamental matrix for the reduced system. The inverse of X is

X^{-1}(t) = \begin{pmatrix} 2t^{-1} & -1\\ -t^{-2} & t^{-1}\end{pmatrix}.

We are now ready to calculate z using equation (2):

z = \begin{pmatrix} t & t^2\\ 1 & 2t\end{pmatrix}\int \begin{pmatrix} 2t^{-1} & -1\\ -t^{-2} & t^{-1}\end{pmatrix}\begin{pmatrix} 2\\ 3t^{-1}\end{pmatrix} dt

= \begin{pmatrix} t & t^2\\ 1 & 2t\end{pmatrix}\int \begin{pmatrix} t^{-1}\\ t^{-2}\end{pmatrix} dt

= \begin{pmatrix} t & t^2\\ 1 & 2t\end{pmatrix}\begin{pmatrix} \ln t\\ -t^{-1}\end{pmatrix} = \begin{pmatrix} t\ln t - t\\ \ln t - 2\end{pmatrix}.

Therefore, the general solution of the given nonhomogeneous system is

x(t) = \begin{pmatrix} t & t^2\\ 1 & 2t\end{pmatrix}\begin{pmatrix}C_1\\C_2\end{pmatrix} + \begin{pmatrix} t\ln t - t\\ \ln t - 2\end{pmatrix} = C_1\begin{pmatrix}t\\1\end{pmatrix} + C_2\begin{pmatrix}t^2\\2t\end{pmatrix} + \begin{pmatrix} t\ln t - t\\ \ln t - 2\end{pmatrix}. 

Initial-Value Problems

Example 3. Find the solution of the initial-value problem


x' = \begin{pmatrix} 2 & 2\\ 2 & -1\end{pmatrix} x + \begin{pmatrix} e^{t}\\ 2e^{t}\end{pmatrix},\qquad x(0) = \begin{pmatrix}2\\4\end{pmatrix}.

SOLUTION. We found the general solution of the differential system in Example 1:

x(t) = C_1 e^{-2t}\begin{pmatrix}1\\-2\end{pmatrix} + C_2 e^{3t}\begin{pmatrix}2\\1\end{pmatrix} + \begin{pmatrix}-e^{t}\\0\end{pmatrix}.

To solve the initial-value problem, we put t = 0 and set the result equal to \begin{pmatrix}2\\4\end{pmatrix}. This
gives

C_1\begin{pmatrix}1\\-2\end{pmatrix} + C_2\begin{pmatrix}2\\1\end{pmatrix} + \begin{pmatrix}-1\\0\end{pmatrix} = \begin{pmatrix}2\\4\end{pmatrix} \quad\text{and so}\quad C_1\begin{pmatrix}1\\-2\end{pmatrix} + C_2\begin{pmatrix}2\\1\end{pmatrix} = \begin{pmatrix}3\\4\end{pmatrix}.

The corresponding system of equations is:

C_1 + 2C_2 = 3
-2C_1 + C_2 = 4

and, as you can check, the solution set is C_1 = -1, C_2 = 2. Thus, the solution of the
initial-value problem is:

x(t) = -e^{-2t}\begin{pmatrix}1\\-2\end{pmatrix} + 2e^{3t}\begin{pmatrix}2\\1\end{pmatrix} + \begin{pmatrix}-e^{t}\\0\end{pmatrix}. 
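The answer to Example 3 can be checked directly. The sketch below (assuming NumPy) verifies the initial condition exactly and the differential equation via a central difference.

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [2.0, -1.0]])

def x(t):
    """Solution of the initial-value problem in Example 3."""
    return (-np.exp(-2*t) * np.array([1.0, -2.0])
            + 2.0 * np.exp(3*t) * np.array([2.0, 1.0])
            + np.array([-np.exp(t), 0.0]))

assert np.allclose(x(0.0), [2.0, 4.0])      # x(0) = (2, 4)

t, h = 0.3, 1e-6
deriv = (x(t + h) - x(t - h)) / (2.0 * h)
b = np.array([np.exp(t), 2.0 * np.exp(t)])
assert np.allclose(deriv, A @ x(t) + b, atol=1e-4)   # x' = Ax + b
```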

By fixing a point a on the interval I, the general solution of (N) given by (3) can be
written as

x(t) = X(t)C + X(t)\int_a^t X^{-1}(s)b(s)\,ds,\qquad t \in I.

This form is useful in solving system (N) subject to an initial condition x(a) = x_0. Substi-
tuting t = a gives

x_0 = X(a)C \quad\text{which implies}\quad C = X^{-1}(a)x_0.

Therefore the solution of the initial-value problem

x' = A(t)x + b(t),\qquad x(a) = x_0

is given by

x(t) = X(t)X^{-1}(a)x_0 + X(t)\int_a^t X^{-1}(s)b(s)\,ds. \tag{4}

Exercises 6.5

Find the general solution of the system x' = A(t)x + b(t) where A and b are given.

1. A(t) = \begin{pmatrix} 4 & -1\\ 5 & -2\end{pmatrix},\quad b(t) = \begin{pmatrix} e^{-t}\\ 2e^{-t}\end{pmatrix}

2. A(t) = \begin{pmatrix} 2 & -1\\ 3 & -2\end{pmatrix},\quad b(t) = \begin{pmatrix} 0\\ 4t\end{pmatrix}

3. A(t) = \begin{pmatrix} 2 & 2\\ -3 & -3\end{pmatrix},\quad b(t) = \begin{pmatrix} 1\\ 2t\end{pmatrix}

4. A(t) = \begin{pmatrix} 3 & 2\\ -4 & -3\end{pmatrix},\quad b(t) = \begin{pmatrix} 2\cos t\\ 2\sin t\end{pmatrix}

5. A(t) = \begin{pmatrix} -3 & 1\\ 2 & -4\end{pmatrix},\quad b(t) = \begin{pmatrix} 3t\\ e^{-t}\end{pmatrix}

6. A(t) = \begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix},\quad b(t) = \begin{pmatrix} \sec t\\ 0\end{pmatrix}

7. A(t) = \begin{pmatrix} 1 & -1\\ 1 & 1\end{pmatrix},\quad b(t) = \begin{pmatrix} e^{t}\cos t\\ e^{t}\sin t\end{pmatrix}

8. A(t) = \begin{pmatrix} 3t^2 & t\\ 0 & t^{-1}\end{pmatrix},\quad b(t) = \begin{pmatrix} 4t^2\\ 1\end{pmatrix}

9. A(t) = \begin{pmatrix} 1 & 1 & 0\\ 1 & 1 & 0\\ 0 & 0 & 3\end{pmatrix},\quad b(t) = \begin{pmatrix} e^{t}\\ e^{2t}\\ te^{3t}\end{pmatrix}

10. A(t) = \begin{pmatrix} 1 & -1 & 1\\ 0 & 0 & 1\\ 0 & -1 & 2\end{pmatrix},\quad b(t) = \begin{pmatrix} 0\\ e^{t}\\ e^{t}\end{pmatrix}

Solve the initial-value problem.

11. x' = \begin{pmatrix} 3 & -1\\ -1 & 3\end{pmatrix} x + \begin{pmatrix} 4e^{2t}\\ 4e^{4t}\end{pmatrix},\qquad x(0) = \begin{pmatrix}1\\1\end{pmatrix}

12. x' = \begin{pmatrix} 3 & -2\\ 1 & 0\end{pmatrix} x + \begin{pmatrix} -2e^{-t}\\ -2e^{-t}\end{pmatrix},\qquad x(0) = \begin{pmatrix}2\\-1\end{pmatrix}

6.6 Direction Fields and Phase Planes

There are many types of differential equations which do not have solutions which can
be easily written in terms of elementary functions such as exponentials, sines and cosines,
or even as integrals of such functions. Fortunately, when these equations are of first or
second order one can still gain a good understanding of the behavior of their solutions
using geometric methods. In this section we discuss the basics of phase plane analysis, an
extension of the method of slope fields discussed in Section 2.8.

Let us consider the differential equation

y' = f(y),

and think about it geometrically. The equality implies that the graph of a solution of this
equation in the xy-plane must have slope equal to f(y) at the point (x, y). For instance, for
the differential equation

y' = 2y(1 - y)

the slope of the solution equals 0 at all points for which y = 1. Indeed, since the solution
satisfying the initial condition y(0) = 1 is the constant solution y(x) ≡ 1, this is what we
expect. The following figure shows this solution (in red), along with the solution satisfying
y(0) = 0.1.
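The behavior just described can be reproduced numerically. The sketch below uses plain Python with a hand-rolled classical Runge-Kutta step (an arbitrary choice standing in for whatever solver produced the figure): the constant solution stays at 1, and the solution with y(0) = 0.1 approaches 1.

```python
def rk4(f, y0, t0, t1, n=1000):
    """Classical fourth-order Runge-Kutta for a scalar equation y' = f(y)."""
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + h * k1 / 2)
        k3 = f(y + h * k2 / 2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return y

f = lambda y: 2.0 * y * (1.0 - y)

# the constant solution y = 1 stays at 1 (the slope there is 0) ...
assert abs(rk4(f, 1.0, 0.0, 5.0) - 1.0) < 1e-9
# ... and the solution with y(0) = 0.1 approaches 1
assert abs(rk4(f, 0.1, 0.0, 10.0) - 1.0) < 1e-3
```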

Let us now turn to autonomous differential equations in two variables:

x_1' = f(x_1, x_2)
x_2' = g(x_1, x_2).

Drawing slope fields for x_1 and x_2 separately will not work, since each slope field depends
on both variables. However, note that any solution of this system, (x_1(t), x_2(t)), para-
metrically defines a curve in the x_1,x_2-plane. Indeed, the vector (x_1'(t_0), x_2'(t_0)) is the
tangent vector to this parametric curve at the point (x_1(t_0), x_2(t_0)). Therefore, we can
sketch the solutions of the differential system by selecting a number of points in the plane.

At each point in the collection we draw a vector emanating from the point, so that the
vector (f (x1 , x2 ), g(x1, x2 )) emanates from the point (x1 , x2). The collection of these vec-
tors is called a vector field. In practice we may have to scale the length of the vectors by a
constant factor.

Example 1. Let us start with the constant coefficient system

x_1' = x_1 + 5x_2
x_2' = 3x_1 + 3x_2

considered in Example 3 of Section 5.3. The figure below shows vectors attached to points
spaced 0.1 units apart in the horizontal and vertical direction. For instance, to the point
with coordinates x_1 = 0.3 and x_2 = 0.5 we attach the vector with components x_1 + 5x_2 =
0.3 + 5 × 0.5 = 2.8 in the horizontal and 3x_1 + 3x_2 = 3 × 0.3 + 3 × 0.5 = 2.4 in the vertical
direction. Similarly, the vector (−3.2, −3.6) emanates from the point (−0.7, −0.5). The
length of all vectors is scaled by an equal factor so that they all fit in the figure.
[Figure: the vector field for the system, with two solution curves]

Also shown are two solutions of the differential system, one with initial condition
(x1 (0), x2(0)) = (−0.5, 0.25), and the other with (x1 (0), x2(0)) = (0.2, −0.1). The arrows
point in the direction in which the solutions are traversed. You can also see that the solu-
tions diverge from the origin in the direction of the eigenvector (1, 1) corresponding to the
positive eigenvalue.
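The arithmetic for the field vectors above is easy to automate; a small sketch:

```python
def field(x1, x2):
    """Right-hand side of x1' = x1 + 5*x2, x2' = 3*x1 + 3*x2."""
    return (x1 + 5*x2, 3*x1 + 3*x2)

# the two vectors computed in Example 1
assert all(abs(a - b) < 1e-12 for a, b in zip(field(0.3, 0.5), (2.8, 2.4)))
assert all(abs(a - b) < 1e-12 for a, b in zip(field(-0.7, -0.5), (-3.2, -3.6)))
```

A full picture would attach such a vector to each grid point (e.g. with matplotlib's `quiver`), but the computation per point is just this one function call.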

Let us next consider the equation

y'' + \varepsilon y' + y = 0.

This equation is known as the linear damped pendulum. If we think of y as the angular
displacement from the resting position and y' as the angular velocity of the pendulum, then
the solutions of the equation will describe its oscillation around the equilibrium position at
y = 0. As we will see shortly, \varepsilon measures the amount of damping.

If we let x_1 = y and x_2 = y', then we obtain the following pair of equations:

x_1' = x_2 (1)
x_2' = -\varepsilon x_2 - x_1. (2)

This is a linear system, and we could solve it using the methods of Sections 6.3 and 6.4. Instead,
let us look at the phase plane and see what happens as we vary \varepsilon.

In the figure below you will see the vector field and solutions with initial condition
x_1 = x_2 = 0.8 in both cases. In the left figure \varepsilon = 0.1, while on the right \varepsilon = 0.2. As you
could guess, a pendulum that is subject to more damping will oscillate fewer times before
reaching the equilibrium at the origin.
reaching the equilibrium at the origin.
[Figures: phase portraits with \varepsilon = 0.1 (left) and \varepsilon = 0.2 (right)]

The linear system (1)-(2) only describes the behavior of the pendulum accurately when
the displacement from the rest position is small. For larger displacements it is necessary to
use the nonlinear equations

x_1' = x_2 (3)
x_2' = -\varepsilon x_2 - \sin(x_1). (4)

Although this system may not look much more complicated than the previous one, it is
much more difficult to solve. However, a phase plane analysis can be easily performed in this
case as well. In the figure below \varepsilon = 0.05, and the initial conditions are x_1 = 0, x_2 = 2\pi/3.

[Figure: phase portrait of the damped nonlinear pendulum]
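One way to quantify the damping in the nonlinear model is the energy E = x_2^2/2 + (1 - \cos x_1), which satisfies dE/dt = -\varepsilon x_2^2 \le 0 along solutions of (3)-(4). The sketch below (plain Python; the RK4 stepper, step size, and time horizon are arbitrary choices, not from the text) integrates the system with the figure's data and checks that the energy has decreased.

```python
import math

EPS = 0.05  # damping coefficient used in the figure

def f(state):
    x1, x2 = state
    return (x2, -EPS * x2 - math.sin(x1))

def rk4_step(state, h):
    def shift(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = f(state)
    k2 = f(shift(state, k1, h / 2))
    k3 = f(shift(state, k2, h / 2))
    k4 = f(shift(state, k3, h))
    return (state[0] + h * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6,
            state[1] + h * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6)

def energy(state):
    x1, x2 = state
    return 0.5 * x2**2 + (1.0 - math.cos(x1))  # dE/dt = -EPS * x2^2 <= 0

state = (0.0, 2.0 * math.pi / 3.0)  # initial conditions from the figure
e0 = energy(state)
for _ in range(5000):                # integrate to t = 50
    state = rk4_step(state, 0.01)
assert energy(state) < e0            # damping has dissipated energy
```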

Exercises 6.6

1. Sketch the phase plane for the following systems:

   (a) x' = 1,  y' = y        (b) x' = x,  y' = y        (c) x' = x^2 - 1,  y' = x - y

2. Solve the system for the damped pendulum (1)-(2) when \varepsilon = 0. Sketch the vector field
   and the solutions. What happens to the amplitude of the solutions as the pendulum
   oscillates in this case?

   Solution: In this case the solutions have the form

   x_1(t) = C_1\cos t + C_2\sin t,\qquad x_2(t) = C_2\cos t - C_1\sin t.

   They oscillate forever with constant amplitude.

   [Figure: circular solution curves around the origin]

3. Solve the equation for the damped linear pendulum (1)-(2). Show that when \varepsilon > 0
   the solutions oscillate with diminishing amplitude.
