Chapter 6
6.1 Introduction
Up to this point the entries in a vector or matrix have been real numbers. In this section,
and in the following sections, we will be dealing with vectors and matrices whose entries are
functions. A vector whose components are functions is called a vector-valued function or
vector function. Similarly, a matrix whose entries are functions is called a matrix function.
The operations of vector and matrix addition, multiplication by a number and matrix
multiplication for vector and matrix functions are exactly as defined in Chapter 5 so there
is nothing new in terms of arithmetic. However, for the purposes of this and the following
sections, there are operations on functions other than arithmetic operations that we need
to define for vector and matrix functions, namely the operations from calculus (limits,
differentiation, integration). The operations from calculus are defined in a natural way.
Integral: Since differentiation of vector functions is done component-wise, integration must also be component-wise. That is, if v(t) = (f1(t), f2(t), . . . , fn(t)), then
∫ v(t) dt = ( ∫ f1(t) dt, ∫ f2(t) dt, . . . , ∫ fn(t) dt ).
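For readers who want to experiment, the component-wise rule is easy to check with a computer algebra system. The following sketch uses the SymPy library; the particular vector function chosen here is arbitrary and only for illustration.

    import sympy as sp

    t = sp.symbols('t')
    # an arbitrary vector function v(t) = (t^3, sin t, e^(2t))
    v = sp.Matrix([t**3, sp.sin(t), sp.exp(2*t)])

    dv = v.diff(t)                                  # differentiate each component
    V = v.applyfunc(lambda f: sp.integrate(f, t))   # antidifferentiate each component

    print(dv)   # Matrix([[3*t**2], [cos(t)], [2*exp(2*t)]])
    print(V)    # Matrix([[t**4/4], [-cos(t)], [exp(2*t)/2]])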
Here we show how to convert a linear differential equation into a system of first order
equations.
Consider the second-order linear equation
y'' + p(t)y' + q(t)y = f(t),
where p, q, f are continuous functions on some interval I. Solving this equation for y'', we get
y'' = −q(t)y − p(t)y' + f(t).
Introduce new dependent variables x1, x2 by setting
x1 = y
x2 = x1'  (= y').
Then
x2' = y'' = −q(t)x1 − p(t)x2 + f(t)
and the second-order equation can be written equivalently as a system of two first-order equations
x1' = x2
x2' = −q(t)x1 − p(t)x2 + f(t).
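This conversion is exactly what numerical ODE solvers expect: they integrate first-order systems, so a second-order equation is handed to them in the form above. A minimal sketch using SciPy's solve_ivp; the particular choices p(t) = 0, q(t) = 1, f(t) = cos 2t and the initial data are illustrative only.

    import numpy as np
    from scipy.integrate import solve_ivp

    # y'' + p(t) y' + q(t) y = f(t) rewritten as x1' = x2, x2' = -q x1 - p x2 + f
    p = lambda t: 0.0
    q = lambda t: 1.0
    f = lambda t: np.cos(2 * t)

    def rhs(t, x):
        x1, x2 = x
        return [x2, -q(t) * x1 - p(t) * x2 + f(t)]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
    print(sol.y[0, -1])   # y(10), the first component of the system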
Note that this system is just a special case of the "general" system of two first-order differential equations:
x1' = a11(t)x1 + a12(t)x2 + b1(t)
x2' = a21(t)x1 + a22(t)x2 + b2(t).
Similarly, consider the third-order linear equation
y''' + p(t)y'' + q(t)y' + r(t)y = f(t),
where p, q, r, f are continuous functions on some interval I. Solving the equation for y''', we get
y''' = −r(t)y − q(t)y' − p(t)y'' + f(t).
Introduce new dependent variables x1, x2, x3 by setting
x1 = y
x2 = x1'  (= y')
x3 = x2'  (= y'').
Then
y''' = x3' = −r(t)x1 − q(t)x2 − p(t)x3 + f(t)
and the third-order equation can be written equivalently as the system of three first-order equations:
x1' = x2
x2' = x3
x3' = −r(t)x1 − q(t)x2 − p(t)x3 + f(t).
Here we note that this system is just a very special case of the "general" system of three first-order differential equations:
x1' = a11(t)x1 + a12(t)x2 + a13(t)x3 + b1(t)
x2' = a21(t)x1 + a22(t)x2 + a23(t)x3 + b2(t)
x3' = a31(t)x1 + a32(t)x2 + a33(t)x3 + b3(t).

Example 1. (a) Consider the second-order equation
t^2 y'' − ty' − 3y = 0 on I = (0, ∞).
Solving for y'' gives y'' = (3/t^2)y + (1/t)y'. Setting x1 = y, x2 = y', the equivalent system is
x1' = x2
x2' = (3/t^2)x1 + (1/t)x2.
(b) Consider the third-order nonhomogeneous equation
y''' − y'' − 8y' + 12y = 2e^t.
Solving for y''' and setting x1 = y, x2 = y', x3 = y'', the equivalent system is
x1' = x2
x2' = x3
x3' = −12x1 + 8x2 + x3 + 2e^t.
Let a11(t), a12(t), . . . , a1n(t), a21(t), . . . , ann(t), b1(t), b2(t), . . . , bn(t) be continuous functions on some interval I. The system of n first-order differential equations
x1' = a11(t)x1 + a12(t)x2 + · · · + a1n(t)xn + b1(t)
x2' = a21(t)x1 + a22(t)x2 + · · · + a2n(t)xn + b2(t)
. . .
xn' = an1(t)x1 + an2(t)x2 + · · · + ann(t)xn + bn(t)          (S)
is called a linear differential system. System (S) is nonhomogeneous if the functions bi(t) are not all identically zero on I. That is, (S) is nonhomogeneous if there is at least one point a ∈ I and at least one function bi(t) such that bi(a) ≠ 0.
Let A(t) be the n × n matrix
A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{pmatrix}
and let x and b(t) be the vectors
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix},   b(t) = \begin{pmatrix} b_1(t) \\ b_2(t) \\ \vdots \\ b_n(t) \end{pmatrix}.
Then (S) can be written in the vector-matrix form
x' = A(t)x + b(t).
The matrix A(t) is called the matrix of coefficients or the coefficient matrix of the system.
Example 2. The vector-matrix form of y'' + p(t)y' + q(t)y = f(t) is
x' = \begin{pmatrix} 0 & 1 \\ -q(t) & -p(t) \end{pmatrix} x + \begin{pmatrix} 0 \\ f(t) \end{pmatrix},  where x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},
and the vector-matrix form of y''' + p(t)y'' + q(t)y' + r(t)y = f(t) is
x' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -r(t) & -q(t) & -p(t) \end{pmatrix} x + \begin{pmatrix} 0 \\ 0 \\ f(t) \end{pmatrix},  where x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.
Example 3. Verify that x(t) = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix} is a solution of the homogeneous system
x' = \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} x,   I = (0, ∞),
of Example 1 (a).
SOLUTION.
x'(t) = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix}' = \begin{pmatrix} 3t^2 \\ 6t \end{pmatrix}
and
\begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix} = \begin{pmatrix} 3t^2 \\ 6t \end{pmatrix} = x'(t),
so x(t) satisfies the system.
Example 4. Verify that x(t) = \begin{pmatrix} e^{2t} + \tfrac{1}{2}e^t \\ 2e^{2t} + \tfrac{1}{2}e^t \\ 4e^{2t} + \tfrac{1}{2}e^t \end{pmatrix} is a solution of the nonhomogeneous system
x' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} x + \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix}
of Example 1 (b).
SOLUTION.
x'(t) = \begin{pmatrix} 2e^{2t} + \tfrac{1}{2}e^t \\ 4e^{2t} + \tfrac{1}{2}e^t \\ 8e^{2t} + \tfrac{1}{2}e^t \end{pmatrix}
and
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \begin{pmatrix} e^{2t} + \tfrac{1}{2}e^t \\ 2e^{2t} + \tfrac{1}{2}e^t \\ 4e^{2t} + \tfrac{1}{2}e^t \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix} = \begin{pmatrix} 2e^{2t} + \tfrac{1}{2}e^t \\ 4e^{2t} + \tfrac{1}{2}e^t \\ 8e^{2t} + \tfrac{1}{2}e^t \end{pmatrix} = x'(t);
x(t) is a solution.
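A quick numerical spot check of verifications like the one above can be made by comparing x'(t) with A x(t) + b(t) at a few sample points. The sketch below (NumPy, illustrative only) does this for Example 4; the derivative is entered analytically.

    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-12.0, 8.0, 1.0]])

    def x(t):       # the claimed solution
        return np.array([np.exp(2*t) + 0.5*np.exp(t),
                         2*np.exp(2*t) + 0.5*np.exp(t),
                         4*np.exp(2*t) + 0.5*np.exp(t)])

    def xprime(t):  # its derivative, computed by hand
        return np.array([2*np.exp(2*t) + 0.5*np.exp(t),
                         4*np.exp(2*t) + 0.5*np.exp(t),
                         8*np.exp(2*t) + 0.5*np.exp(t)])

    def b(t):
        return np.array([0.0, 0.0, 2*np.exp(t)])

    for t in [0.0, 0.5, 1.0]:
        residual = xprime(t) - (A @ x(t) + b(t))
        print(t, np.max(np.abs(residual)))   # should print values near 0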
THEOREM 1. (Existence and Uniqueness Theorem) Let a be any point on the interval I, and let α1, α2, . . . , αn be any n real numbers. Then the initial-value problem
x' = A(t)x + b(t),   x(a) = \begin{pmatrix} α_1 \\ α_2 \\ \vdots \\ α_n \end{pmatrix}
has a unique solution on I.
Exercises 6.1
1. y'' − ty' + 3y = sin 2t.
2. y'' + y = 2e^{−2t}.
3. y''' − y'' + y = e^t.
10.  x1' = e^t x1 − e^{2t} x2
     x2' = e^{−t} x1 − 3e^t x2
11.  x1' = 2x1 + x2 + 3x3 + 3e^{2t}
     x2' = x1 − 3x2 − 2 cos t
     x3' = 2x1 − x2 + 4x3 + t

12.  x1' = t^2 x1 + x2 − t x3 + 3
     x2' = −3e^t x2 + 2x3 − 2e^{−2t}
     x3' = 2x1 + t^2 x2 + 4x3
13. Verify that u(t) = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix} is a solution of the system in Example 1 (a).

14. Verify that u(t) = \begin{pmatrix} e^{-3t} + \tfrac{1}{2}e^t \\ -3e^{-3t} + \tfrac{1}{2}e^t \\ 9e^{-3t} + \tfrac{1}{2}e^t \end{pmatrix} is a solution of the system in Example 1 (b).

15. Verify that w(t) = \begin{pmatrix} te^{2t} \\ e^{2t} + 2te^{2t} \\ 4e^{2t} + 4te^{2t} \end{pmatrix} is a solution of the homogeneous system associated with the system in Example 1 (b).

16. Verify that v(t) = \begin{pmatrix} -\sin t \\ -\cos t - 2\sin t \end{pmatrix} is a solution of the system
x' = \begin{pmatrix} -2 & 1 \\ -3 & 2 \end{pmatrix} x + \begin{pmatrix} 0 \\ 2\sin t \end{pmatrix}.

17. Verify that v(t) = \begin{pmatrix} -2e^{-2t} \\ 0 \\ 3e^{-2t} \end{pmatrix} is a solution of the system
x' = \begin{pmatrix} 1 & -3 & 2 \\ 0 & -1 & 0 \\ 0 & -1 & -2 \end{pmatrix} x.
6.2 Homogeneous Systems

In this section we give the basic theory for linear homogeneous systems. This "theory" is simply a repetition of the results given in Sections 3.2 and 3.7, phrased this time in terms
of the system
x1' = a11(t)x1 + a12(t)x2 + · · · + a1n(t)xn
x2' = a21(t)x1 + a22(t)x2 + · · · + a2n(t)xn
. . .                                                    (H)
xn' = an1(t)x1 + an2(t)x2 + · · · + ann(t)xn
or in vector-matrix form
x0 = A(t)x. (H)
Note first that the zero vector z(t) ≡ 0 = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} is a solution of (H). As before, this solution is called the trivial solution. Of course, we are interested in finding nontrivial solutions.
If x1(t) and x2(t) are solutions of (H), then x1(t) + x2(t) is also a solution of (H); the sum of any two solutions of (H) is a solution of (H).
If x1(t), x2(t), . . . , xk(t) are solutions of (H) and c1, c2, . . . , ck are any real numbers, then c1x1(t) + c2x2(t) + · · · + ckxk(t) is a solution of (H); any linear combination of solutions of (H) is also a solution of (H).
Note that these are precisely the results given in Sections 3.2 and 3.7 for second order
and higher order linear equations.
In this subsection we will look at linear dependence/independence of vector functions
in general. This is an extension of the material in Section 5.8. We will return to linear
differential systems after we treat the general case.
DEFINITION 1. Let
v1(t) = \begin{pmatrix} v_{11}(t) \\ v_{21}(t) \\ \vdots \\ v_{n1}(t) \end{pmatrix},   v2(t) = \begin{pmatrix} v_{12}(t) \\ v_{22}(t) \\ \vdots \\ v_{n2}(t) \end{pmatrix},  . . . ,  vk(t) = \begin{pmatrix} v_{1k}(t) \\ v_{2k}(t) \\ \vdots \\ v_{nk}(t) \end{pmatrix}
be n-component vector functions defined on some interval I. The vectors are linearly dependent on I if there exist k real numbers c1, c2, . . . , ck, not all zero, such that
c1 v1(t) + c2 v2(t) + · · · + ck vk(t) = 0 for all t ∈ I.
Otherwise, the vectors are linearly independent on I.
It is important to understand that in this general case, W (t) ≡ 0 does not imply that
the vector functions are linearly dependent. An example is given in Section 5.8.
For example, the Wronskian of the vector functions u(t) = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix} and v(t) = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix} is
W(t) = \begin{vmatrix} t^3 & t^{-1} \\ 3t^2 & -t^{-2} \end{vmatrix} = -t - 3t = -4t ≠ 0.
Therefore u and v are linearly independent. (Note: u and v are solutions of the homogeneous system in Example 3, Section 6.1.)
W(0) = \begin{vmatrix} 1 & 1 & 0 \\ 2 & -3 & 1 \\ 4 & 9 & 4 \end{vmatrix} = -25 ≠ 0.
Therefore, the vector functions are linearly independent.
THEOREM 5. Let x1(t), x2(t), . . . , xn(t) be n solutions of (H). Exactly one of the following holds:
1. W(x1, x2, . . . , xn)(t) = 0 for all t ∈ I and the solutions are linearly dependent;
2. W(x1, x2, . . . , xn)(t) ≠ 0 for all t ∈ I and the solutions are linearly independent.
Compare this result with Theorem 4, Section 3.2, and Theorem 5, Section 3.7.
It is easy to construct sets of n linearly independent solutions of (H). Simply pick any
point a ∈ I and any nonsingular n × n matrix A. Let a1 be the first column of
A, a2 the second column of A, and so on. Then let x1 (t) be the solution of (H) such
that x1 (a) = a1 , let x2 (t) be the solution of (H) such that x2 (a) = a2 , . . . , and let
xn (t) be the solution of (H) such that xn (a) = an . The existence and uniqueness theorem
guarantees the existence of these solutions. Now
355
Therefore, W (t) 6= 0 for all t ∈ I and the solutions are linearly independent. A
particularly nice set of n linearly independent solutions is obtained by choosing A = In ,
the identity matrix.
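Numerically, this recipe amounts to solving n initial-value problems whose initial vectors are the columns of a nonsingular matrix, the identity matrix being the natural choice. A sketch (SciPy, illustrative only) for the 2 × 2 system x' = [[0, 1], [3/t^2, 1/t]] x on (0, ∞) used throughout this chapter; the base point a and end time are arbitrary.

    import numpy as np
    from scipy.integrate import solve_ivp

    def A(t):
        return np.array([[0.0, 1.0],
                         [3.0 / t**2, 1.0 / t]])

    def rhs(t, x):
        return A(t) @ x

    a, tend = 1.0, 2.0                  # pick a point a in I = (0, infinity)
    cols = []
    for e in np.eye(2):                 # initial conditions = columns of the identity
        sol = solve_ivp(rhs, (a, tend), e, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    X_end = np.column_stack(cols)       # fundamental matrix evaluated at t = tend
    print(np.linalg.det(X_end))         # nonzero, so the two solutions are independent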
The matrix
X(t) = \begin{pmatrix} x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t) \\ x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t) \\ \vdots & \vdots & & \vdots \\ x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t) \end{pmatrix}
(the vectors x1, x2, . . . , xn are the columns of X) is called a fundamental matrix for (H).
Note that the general solution can also be written in terms of the fundamental matrix:
C1 x1(t) + C2 x2(t) + · · · + Cn xn(t) = \begin{pmatrix} x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t) \\ x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t) \\ \vdots & \vdots & & \vdots \\ x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t) \end{pmatrix} \begin{pmatrix} C_1 \\ C_2 \\ \vdots \\ C_n \end{pmatrix} = X(t)C.
Example 2. The vectors
u(t) = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix}  and  v(t) = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix}
form a fundamental set of solutions of
x' = \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} x.
The matrix
X(t) = \begin{pmatrix} t^3 & t^{-1} \\ 3t^2 & -t^{-2} \end{pmatrix}
is a fundamental matrix for the system, and
x(t) = C1 \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix} + C2 \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix} = \begin{pmatrix} t^3 & t^{-1} \\ 3t^2 & -t^{-2} \end{pmatrix} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = X(t)C
is the general solution.
Exercises 6.2
Determine whether the given vector functions are linearly dependent or linearly independent.

1. u = \begin{pmatrix} 2t-1 \\ -t \end{pmatrix},  v = \begin{pmatrix} -t+1 \\ 2t \end{pmatrix}
2. u = \begin{pmatrix} \cos t \\ \sin t \end{pmatrix},  v = \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}
3. u = \begin{pmatrix} t - t^2 \\ -t \end{pmatrix},  v = \begin{pmatrix} -2t + 4t^2 \\ 2t \end{pmatrix}
4. u = \begin{pmatrix} te^t \\ t \end{pmatrix},  v = \begin{pmatrix} e^t \\ 1 \end{pmatrix}
5. u = \begin{pmatrix} 2-t \\ t \\ -2 \end{pmatrix},  v = \begin{pmatrix} t \\ -1 \\ 2 \end{pmatrix},  w = \begin{pmatrix} 2+t \\ t-2 \\ 2 \end{pmatrix}
6. u = \begin{pmatrix} \cos t \\ \sin t \\ 0 \end{pmatrix},  v = \begin{pmatrix} \cos t \\ 0 \\ \sin t \end{pmatrix},  w = \begin{pmatrix} 0 \\ \cos t \\ \sin t \end{pmatrix}
7. u = \begin{pmatrix} e^t \\ -e^t \\ e^t \end{pmatrix},  v = \begin{pmatrix} -e^t \\ 2e^t \\ -e^t \end{pmatrix},  w = \begin{pmatrix} 0 \\ e^t \\ 0 \end{pmatrix}
8. u = \begin{pmatrix} 2-t \\ t \end{pmatrix},  v = \begin{pmatrix} t+1 \\ -2 \end{pmatrix},  w = \begin{pmatrix} t \\ t+2 \end{pmatrix}
9. u = \begin{pmatrix} e^t \\ 0 \end{pmatrix},  v = \begin{pmatrix} 0 \\ 0 \end{pmatrix},  w = \begin{pmatrix} 0 \\ e^t \end{pmatrix}
10. u = \begin{pmatrix} \cos(t + \pi/4) \\ 0 \\ 0 \\ 0 \end{pmatrix},  v = \begin{pmatrix} \cos t \\ 0 \\ 0 \\ e^t \end{pmatrix},  w = \begin{pmatrix} \sin t \\ 0 \\ 0 \\ e^t \end{pmatrix}
Let
x1 = \begin{pmatrix} e^{2t} \\ e^{2t} \end{pmatrix}  and  x2 = \begin{pmatrix} 3e^{3t} \\ 2e^{3t} \end{pmatrix}.
X' = AX.
(b) Find the solution of the system that satisfies x(0) = \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}.
14. The linear differential system equivalent to the equation
y''' + p(t)y'' + q(t)y' + r(t)y = 0
is
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -r(t) & -q(t) & -p(t) \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.
(See Example 2, Section 6.1.) Show that if y = y(t) is a solution of the equation, then
x(t) = \begin{pmatrix} y(t) \\ y'(t) \\ y''(t) \end{pmatrix}
is a solution of the system.
NOTE: This result holds for linear equations of all orders. However, it is important
to understand that solutions of systems which are not converted from equations do
not have this special form.
19. Let {x1(t), x2(t), . . . , xn(t)} be a fundamental set of solutions of (H), and let X(t) be the corresponding fundamental matrix. Show that X satisfies the matrix differential equation
X' = A(t)X.
6.3 Homogeneous Systems with Constant Coefficients, Part I
As emphasized in Section 3.2 (and 3.7), there are no general methods for solving a linear
homogeneous differential equation (as opposed to first order linear equations). Therefore
it follows that there are no general methods for solving linear homogeneous differential
systems. However, just as with linear differential equations (see Section 3.3), there is a
method for solving homogeneous systems with constant coefficients.
Example 1. Consider the third-order homogeneous equation
y''' + 2y'' − 5y' − 6y = 0.
Its characteristic equation is
r^3 + 2r^2 − 5r − 6 = (r − 2)(r + 1)(r + 3) = 0,
so y1 = e^{2t}, y2 = e^{−t}, y3 = e^{−3t} are solutions. The equivalent system is x' = Ax with
A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix},
and
x1(t) = \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} = e^{2t} \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}
is a solution vector (see Problem 14, Exercises 6.2):
x1'(t) = \begin{pmatrix} 2e^{2t} \\ 4e^{2t} \\ 8e^{2t} \end{pmatrix}
and
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix} \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} = \begin{pmatrix} 2e^{2t} \\ 4e^{2t} \\ 8e^{2t} \end{pmatrix} = x1'(t).
Similarly,
x2(t) = \begin{pmatrix} e^{-t} \\ -e^{-t} \\ e^{-t} \end{pmatrix} = e^{-t} \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}  and  x3(t) = \begin{pmatrix} e^{-3t} \\ -3e^{-3t} \\ 9e^{-3t} \end{pmatrix} = e^{-3t} \begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix}
are solution vectors.
Example 1 suggests that homogeneous systems with constant coefficients might have solu-
tion vectors of the form x(t) = eλt v, for some number λ and some constant vector v.
Indeed, this is the case.
and
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix} \begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix} = -3 \begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix}.
Thus, 2 is an eigenvalue of A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix} with corresponding eigenvector \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}, −1 is an eigenvalue of A with corresponding eigenvector \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, and −3 is an eigenvalue of A with corresponding eigenvector \begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix}.
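Eigenvalues and eigenvectors like these can be checked numerically; NumPy normalizes its eigenvectors to unit length, so they appear as scalar multiples of the ones above. A small sketch:

    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [6.0, 5.0, -2.0]])

    vals, vecs = np.linalg.eig(A)
    print(vals)                              # approximately 2, -1, -3 (in some order)
    for lam, v in zip(vals, vecs.T):
        print(np.allclose(A @ v, lam * v))   # True for each eigenpair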
THEOREM 2. Let A be an n × n constant matrix with n distinct eigenvalues λ1, λ2, . . . , λn and corresponding eigenvectors v1, v2, . . . , vn. Then the solution vectors
x1(t) = e^{λ1 t} v1,  x2(t) = e^{λ2 t} v2,  . . . ,  xn(t) = e^{λn t} vn
are linearly independent and therefore constitute a fundamental set of solutions for x' = Ax.
Proof. We will prove the theorem in the case n = 3 to reduce the notation involved. The
proof of the general case is exactly the same.
The case where A does not have distinct eigenvalues, that is, where A has at least one eigenvalue of multiplicity greater than 1, will be taken up in the next section.
Example 3. Find a fundamental set of solution vectors of
x' = \begin{pmatrix} 1 & 5 \\ 3 & 3 \end{pmatrix} x.
SOLUTION.
det(A − λI) = \begin{vmatrix} 1-λ & 5 \\ 3 & 3-λ \end{vmatrix} = (λ − 6)(λ + 2).
The eigenvalues are λ1 = 6 and λ2 = −2, with corresponding eigenvectors v1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} and v2 = \begin{pmatrix} 5 \\ -3 \end{pmatrix}, so a fundamental set of solution vectors is
x1(t) = e^{6t} \begin{pmatrix} 1 \\ 1 \end{pmatrix},   x2(t) = e^{-2t} \begin{pmatrix} 5 \\ -3 \end{pmatrix}.
If we let the independent variable t denote time, then we can view the solutions
x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
as points (x1(t), x2(t)) moving with time in the x1,x2-plane. Graphs of solutions are shown in the following figure.
Figure 1
Note that the eigenvectors v1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} and v2 = \begin{pmatrix} 5 \\ -3 \end{pmatrix} are clearly visible in the figure.
Example 4. Find a fundamental set of solution vectors of
x' = \begin{pmatrix} 3 & -1 & -1 \\ -12 & 0 & 5 \\ 4 & -2 & -1 \end{pmatrix} x
and find the solution that satisfies the initial condition x(0) = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.
SOLUTION.
det(A − λI) = \begin{vmatrix} 3-λ & -1 & -1 \\ -12 & -λ & 5 \\ 4 & -2 & -1-λ \end{vmatrix} = −λ^3 + 2λ^2 + λ − 2.
Now
−λ^3 + 2λ^2 + λ − 2 = −(λ − 2)(λ − 1)(λ + 1),
so the eigenvalues are λ1 = 2, λ2 = 1, λ3 = −1, with corresponding eigenvectors
v1 = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix},  v2 = \begin{pmatrix} 3 \\ -1 \\ 7 \end{pmatrix},  v3 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}.
By Theorem 2, a fundamental set of solution vectors is
x1(t) = e^{2t} \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix},   x2(t) = e^t \begin{pmatrix} 3 \\ -1 \\ 7 \end{pmatrix},   x3(t) = e^{-t} \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}.
Thus, the general solution of the system is
x(t) = C1 e^{2t} \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} + C2 e^t \begin{pmatrix} 3 \\ -1 \\ 7 \end{pmatrix} + C3 e^{-t} \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}.
The initial condition requires
C1 + 3C2 + C3 = 1
−C1 − C2 + 2C3 = 0
2C1 + 7C2 + 2C3 = 1.
Using the solution method of your choice (row reduction, inverse, Cramer's rule), the solution is C1 = 3, C2 = −1, C3 = 1, and
x(t) = 3e^{2t} \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} − e^t \begin{pmatrix} 3 \\ -1 \\ 7 \end{pmatrix} + e^{-t} \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}
is the solution of the initial-value problem.
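The constants C1, C2, C3 can also be obtained by solving the 3 × 3 linear system numerically. The sketch below (NumPy, illustrative only) sets up the eigenvector matrix from Example 4 and recovers C = (3, −1, 1):

    import numpy as np

    # columns are the eigenvectors v1, v2, v3 found in Example 4
    V = np.array([[1.0, 3.0, 1.0],
                  [-1.0, -1.0, 2.0],
                  [2.0, 7.0, 2.0]])
    x0 = np.array([1.0, 0.0, 1.0])

    C = np.linalg.solve(V, x0)
    print(C)    # approximately [ 3., -1.,  1.]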
Exercises 6.3
Find the general solution of the system x' = Ax where A is the given matrix. If an initial condition is given, also find the solution that satisfies the condition.
1. \begin{pmatrix} -2 & 4 \\ 1 & 1 \end{pmatrix}.
2. \begin{pmatrix} -3 & 2 \\ 1 & -2 \end{pmatrix}.
3. A = \begin{pmatrix} 2 & -1 \\ 0 & 3 \end{pmatrix},  x(0) = \begin{pmatrix} 1 \\ 3 \end{pmatrix}.
4. A = \begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix}.
5. A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}.
6. A = \begin{pmatrix} -1 & 1 \\ 4 & 2 \end{pmatrix},  x(0) = \begin{pmatrix} -1 \\ 1 \end{pmatrix}.
7. A = \begin{pmatrix} 3 & 2 & -2 \\ -3 & -1 & 3 \\ 1 & 2 & 0 \end{pmatrix}. Hint: 1 is an eigenvalue.
8. A = \begin{pmatrix} 15 & 7 & -7 \\ -1 & 1 & 1 \\ 13 & 7 & -5 \end{pmatrix}. Hint: 2 is an eigenvalue.
9. \begin{pmatrix} 3 & 0 & -1 \\ -2 & 2 & 1 \\ 8 & 0 & -3 \end{pmatrix},  x(0) = \begin{pmatrix} -1 \\ 2 \\ -8 \end{pmatrix}. Hint: 2 is an eigenvalue.
10. \begin{pmatrix} -2 & 2 & 1 \\ 0 & -1 & 0 \\ 2 & -2 & -1 \end{pmatrix}. Hint: 0 is an eigenvalue.
11. \begin{pmatrix} 8 & 2 & 1 \\ 1 & 7 & 3 \\ 1 & 1 & 6 \end{pmatrix}. Hint: 5 is an eigenvalue.
12. \begin{pmatrix} 1 & -3 & 1 \\ -1 & 1 & 1 \\ 3 & -3 & -1 \end{pmatrix},  x(0) = \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}. Hint: 2 is an eigenvalue.
13. Given the second-order homogeneous equation with constant coefficients
y'' + ay' + by = 0.
(a) Transform the equation into a system of first-order equations by setting x1 = y, x2 = y'. Then write your system in the vector-matrix form
x' = Ax.
(b) Find the characteristic equation for the coefficient matrix you found in part (a).
(c) Compare your result in (b) with the characteristic equation for y'' + ay' + by = 0.

14. Given the third-order homogeneous equation with constant coefficients
y''' + ay'' + by' + cy = 0.
(a) Transform the equation into a system of first-order equations by setting x1 = y, x2 = y', x3 = y''. Then write your system in the vector-matrix form
x' = Ax.
(b) Find the characteristic equation for the coefficient matrix you found in part (a).
(c) Compare your result in (b) with the characteristic equation for
y''' + ay'' + by' + cy = 0.
6.4 Homogeneous Systems with Constant Coefficients, Part II

Two difficulties can arise in solving a linear homogeneous system with constant coefficients: the coefficient matrix may have complex eigenvalues, and it may have an eigenvalue of multiplicity greater than 1. We treat these in this section.
Now
x1(t) = (1/2)[w1(t) + w2(t)] = e^{at}(cos bt u − sin bt v)
and
x2(t) = (1/2i)[w1(t) − w2(t)] = e^{at}(cos bt v + sin bt u)
are linearly independent solutions of the system, and they are real-valued vector functions. Note that x1 and x2 are simply the real and imaginary parts of w1 (or of w2).
(Review Section 3.3 where you were shown how to convert complex exponential solutions
into real-valued solutions involving sine and cosine.)
Example 1. Find the general solution of
x' = \begin{pmatrix} 2 & -5 \\ 1 & 0 \end{pmatrix} x.
SOLUTION.
det(A − λI) = \begin{vmatrix} 2-λ & -5 \\ 1 & -λ \end{vmatrix} = λ^2 − 2λ + 5.
The eigenvalues are: λ1 = 1 + 2i, λ2 = 1 − 2i. Next, we find the eigenvectors; it is enough to find an eigenvector for λ1 = 1 + 2i. We have
[A − (1 + 2i)I]v = \begin{pmatrix} 1-2i & -5 \\ 1 & -1-2i \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
A solution is v1 = \begin{pmatrix} 1+2i \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} + i \begin{pmatrix} 2 \\ 0 \end{pmatrix}.
Now
e^{(1+2i)t} \left[ \begin{pmatrix} 1 \\ 1 \end{pmatrix} + i \begin{pmatrix} 2 \\ 0 \end{pmatrix} \right] = e^t (\cos 2t + i \sin 2t) \left[ \begin{pmatrix} 1 \\ 1 \end{pmatrix} + i \begin{pmatrix} 2 \\ 0 \end{pmatrix} \right]
= e^t \left[ \cos 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} − \sin 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} \right] + i\, e^t \left[ \cos 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} + \sin 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right]
and, by our discussion above, the real and imaginary parts of this vector form a fundamental set of solution vectors for the system:
x1(t) = e^t \left[ \cos 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} − \sin 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} \right],   x2(t) = e^t \left[ \cos 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} + \sin 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right].
The general solution of the system is
x(t) = C1 e^t \left[ \cos 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} − \sin 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} \right] + C2 e^t \left[ \cos 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} + \sin 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right].
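The same real-valued solutions can be produced from the complex eigenvector that a numerical routine returns: split it into real and imaginary parts u and v and form e^{at}(cos bt u − sin bt v) and e^{at}(sin bt u + cos bt v). A sketch (NumPy will return some scalar multiple of the eigenvector used above, so the solutions differ from the hand-computed ones by a constant factor):

    import numpy as np

    A = np.array([[2.0, -5.0],
                  [1.0, 0.0]])

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.imag)            # pick the eigenvalue with positive imaginary part
    lam, w = vals[k], vecs[:, k]
    a, b = lam.real, lam.imag           # a = 1, b = 2
    u, v = w.real, w.imag

    def x1(t):  # real part of e^{lam t} w, a real-valued solution
        return np.exp(a*t) * (np.cos(b*t) * u - np.sin(b*t) * v)

    t, h = 0.7, 1e-6
    print((x1(t + h) - x1(t - h)) / (2*h) - A @ x1(t))   # numerical check: close to 0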
As before, we can view the solutions as points (x1(t), x2(t)) moving with time in the x1,x2-plane. Graphs of solutions are shown in the following figure.
Figure 1
The sine and cosine terms generate the spiraling motion and the positive exponential factor
et causes the solutions to spiral “out,” that is, move away from the origin as t increases.
Example 2. Find the general solution of
x' = \begin{pmatrix} -3 & -5 \\ 2 & -1 \end{pmatrix} x.
SOLUTION.
det(A − λI) = \begin{vmatrix} -3-λ & -5 \\ 2 & -1-λ \end{vmatrix} = λ^2 + 4λ + 13.
The eigenvalues are: λ1 = −2 + 3i, λ2 = −2 − 3i. As you can check, the corresponding eigenvectors are
v1 = \begin{pmatrix} -1+3i \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \end{pmatrix} + i \begin{pmatrix} 3 \\ 0 \end{pmatrix},   v2 = \begin{pmatrix} -1-3i \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \end{pmatrix} − i \begin{pmatrix} 3 \\ 0 \end{pmatrix}.
Now
e^{(-2+3i)t} \left[ \begin{pmatrix} -1 \\ 2 \end{pmatrix} + i \begin{pmatrix} 3 \\ 0 \end{pmatrix} \right] = e^{-2t} (\cos 3t + i \sin 3t) \left[ \begin{pmatrix} -1 \\ 2 \end{pmatrix} + i \begin{pmatrix} 3 \\ 0 \end{pmatrix} \right]
= e^{-2t} \left[ \cos 3t \begin{pmatrix} -1 \\ 2 \end{pmatrix} − \sin 3t \begin{pmatrix} 3 \\ 0 \end{pmatrix} \right] + i\, e^{-2t} \left[ \cos 3t \begin{pmatrix} 3 \\ 0 \end{pmatrix} + \sin 3t \begin{pmatrix} -1 \\ 2 \end{pmatrix} \right].
The real and imaginary parts give a fundamental set of solution vectors for the system:
x1(t) = e^{-2t} \left[ \cos 3t \begin{pmatrix} -1 \\ 2 \end{pmatrix} − \sin 3t \begin{pmatrix} 3 \\ 0 \end{pmatrix} \right],   x2(t) = e^{-2t} \left[ \cos 3t \begin{pmatrix} 3 \\ 0 \end{pmatrix} + \sin 3t \begin{pmatrix} -1 \\ 2 \end{pmatrix} \right],
and the general solution of the system is
x(t) = C1 e^{-2t} \left[ \cos 3t \begin{pmatrix} -1 \\ 2 \end{pmatrix} − \sin 3t \begin{pmatrix} 3 \\ 0 \end{pmatrix} \right] + C2 e^{-2t} \left[ \cos 3t \begin{pmatrix} 3 \\ 0 \end{pmatrix} + \sin 3t \begin{pmatrix} -1 \\ 2 \end{pmatrix} \right].
Figure 2
As above, the sine and cosine terms generate the spiraling motion. Here, the negative
exponential factor e−2t means that the solutions spiral “in” toward the origin as t increases.
Example 3. Find the general solution of
x' = \begin{pmatrix} 1 & -4 & -1 \\ 3 & 2 & 3 \\ 1 & 1 & 3 \end{pmatrix} x.
SOLUTION.
det(A − λI) = \begin{vmatrix} 1-λ & -4 & -1 \\ 3 & 2-λ & 3 \\ 1 & 1 & 3-λ \end{vmatrix} = −λ^3 + 6λ^2 − 21λ + 26 = −(λ − 2)(λ^2 − 4λ + 13).
Now
e^{(2+3i)t} \left[ \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} + i \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} \right] = e^{2t} (\cos 3t + i \sin 3t) \left[ \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} + i \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} \right]
= e^{2t} \left[ \cos 3t \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} − \sin 3t \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} \right] + i\, e^{2t} \left[ \cos 3t \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} + \sin 3t \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} \right].
2. A has an eigenvalue of multiplicity greater than 1
We will treat the case where A has an eigenvalue of multiplicity 2. At the end of the
section we indicate the possibilities when A has an eigenvalue of multiplicity 3. You will
see that the complexity increases with the multiplicity.
Example 4. Find a fundamental set of solution vectors of
x' = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix} x.
SOLUTION.
det(A − λI) = \begin{vmatrix} 1-λ & -3 & 3 \\ 3 & -5-λ & 3 \\ 6 & -6 & 4-λ \end{vmatrix} = −λ^3 + 12λ + 16 = −(λ − 4)(λ + 2)^2.
The eigenvalues are λ1 = 4 and λ2 = λ3 = −2. We'll carry out the details involved in finding an eigenvector corresponding to the "double" eigenvalue −2:
[A − (−2)I]v = \begin{pmatrix} 3 & -3 & 3 \\ 3 & -3 & 3 \\ 6 & -6 & 6 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
The important thing to note here is that this eigenvalue of multiplicity 2 produced two independent eigenvectors.
It now follows that a fundamental set of solutions for the differential system
x' = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix} x
is
x1(t) = e^{4t} \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix},   x2(t) = e^{-2t} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},   x3(t) = e^{-2t} \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.
Remark. Example 4 illustrates an important fact: linear differential systems are more general than linear differential equations. Every linear differential equation can be rewritten equivalently as a linear differential system, but the converse is not true. If the system in Example 4 were equivalent to a third-order linear differential equation
y''' + ay'' + by' + cy = 0,
then the solutions corresponding to the double root λ = −2 would be y1(t) = e^{−2t} and y2(t) = te^{−2t}. The solution of the system which corresponds to y2 = te^{−2t} is
x(t) = \begin{pmatrix} y \\ y' \\ y'' \end{pmatrix} = \begin{pmatrix} te^{-2t} \\ e^{-2t} - 2te^{-2t} \\ -4e^{-2t} + 4te^{-2t} \end{pmatrix} = e^{-2t} \begin{pmatrix} 0 \\ 1 \\ -4 \end{pmatrix} + te^{-2t} \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}.
But this solution vector is clearly not a linear combination of the solutions x1, x2, x3 found in the example. Therefore, the system in Example 4 is not equivalent to any third-order linear equation with constant coefficients.
Example 5. Find a fundamental set of solution vectors for x' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} x.
SOLUTION.
det(A − λI) = \begin{vmatrix} 1-λ & -1 \\ 1 & 3-λ \end{vmatrix} = λ^2 − 4λ + 4 = (λ − 2)^2.
Eigenvalues: λ1 = λ2 = 2.
Eigenvectors:
(A − 2I)v = \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
The augmented matrix is
\begin{pmatrix} -1 & -1 & 0 \\ 1 & 1 & 0 \end{pmatrix} → \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix},
so there is only one independent eigenvector, v = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, and only one solution of the form e^{λt}v, namely x1(t) = e^{2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
Example 6. Let A = \begin{pmatrix} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{pmatrix}. Find a fundamental set of solutions of
x' = Ax.
SOLUTION.
det(A − λI) = \begin{vmatrix} 3-λ & 1 & -1 \\ 2 & 2-λ & -1 \\ 2 & 2 & -λ \end{vmatrix} = −λ^3 + 5λ^2 − 8λ + 4 = −(λ − 1)(λ − 2)^2.
The solutions of this system are: v3 = 2v2, v1 = −v2 + v3 = v2, v2 arbitrary. There is only one eigenvector corresponding to the eigenvalue 2; setting v2 = 1 gives the eigenvector \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}.
Thus, two independent solutions of the given linear differential system are
x1 = e^t \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix},   x2 = e^{2t} \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}.
We need another solution corresponding to the eigenvalue 2, one which is independent of x2.
In the next example we use a third-order equation with a "double" root to see how to construct a solution which is independent of the eigenvalue/eigenvector solution in the case where the eigenvalue has only one eigenvector. Consider
y''' + y'' − 8y' − 12y = 0.
Its characteristic equation is
r^3 + r^2 − 8r − 12 = (r − 3)(r + 2)^2 = 0.
The equivalent system is x' = Ax with A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 12 & 8 & -1 \end{pmatrix}, and
det(A − λI) = \begin{vmatrix} -λ & 1 & 0 \\ 0 & -λ & 1 \\ 12 & 8 & -1-λ \end{vmatrix} = −λ^3 − λ^2 + 8λ + 12 = −(λ − 3)(λ + 2)^2,
and, as we already know, the eigenvalues are: λ1 = 3, λ2 = λ3 = −2.
The correspondence between the solutions y1 and y2 of the equation and solution vectors of the system is:
y1 = e^{3t} → e^{3t} \begin{pmatrix} 1 \\ 3 \\ 9 \end{pmatrix} = x1,   y2 = e^{-2t} → e^{-2t} \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix} = x2.
It is the third solution y3 = te^{−2t} that we are interested in. This solution of the equation produces the solution vector
x3(t) = \begin{pmatrix} y_3(t) \\ y_3'(t) \\ y_3''(t) \end{pmatrix} = \begin{pmatrix} te^{-2t} \\ e^{-2t} - 2te^{-2t} \\ -4e^{-2t} + 4te^{-2t} \end{pmatrix} = e^{-2t} \begin{pmatrix} 0 \\ 1 \\ -4 \end{pmatrix} + te^{-2t} \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix} = e^{-2t} w + te^{-2t} v2.
You can verify that [A − (−2)I]w = v2; that is, A − (−2)I "maps" w onto the eigenvector v2. The corresponding solution of the system has the form
x3(t) = e^{-2t} w + te^{-2t} v2,  where w satisfies
[A − (−2)I]w = v2.
Given the linear differential system x' = Ax, suppose that A has an eigenvalue λ of multiplicity 2. Then exactly one of the following holds:
1. λ has two linearly independent eigenvectors v1 and v2. Then x1(t) = e^{λt}v1 and x2(t) = e^{λt}v2 are linearly independent solution vectors corresponding to λ.
2. λ has only one eigenvector v. Then a linearly independent pair of solution vectors corresponding to λ are:
x1(t) = e^{λt}v  and  x2(t) = e^{λt}w + te^{λt}v,  where w satisfies (A − λI)w = v.
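In the defective case the vector w is found by solving the (singular but consistent) linear system (A − λI)w = v. Numerically this can be done with a least-squares solve; a sketch for the matrix of Example 5 below (the answer is only determined up to adding a multiple of v, so it may differ from the hand-computed one):

    import numpy as np

    A = np.array([[1.0, -1.0],
                  [1.0, 3.0]])
    lam = 2.0
    v = np.array([1.0, -1.0])            # eigenvector for lambda = 2

    M = A - lam * np.eye(2)
    w, *_ = np.linalg.lstsq(M, v, rcond=None)   # one solution of (A - 2I) w = v
    print(w, np.allclose(M @ w, v))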
We can now finish Examples 5 and 6.
Example 5. (continued) Find a fundamental set of solution vectors for x' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} x and give the general solution.
By our work above, a second solution, independent of x1, is x2(t) = e^{2t}w + te^{2t}v where w satisfies (A − 2I)w = v:
(A − 2I)w = \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
A solution is w = \begin{pmatrix} -1 \\ 0 \end{pmatrix}. Thus
x1(t) = e^{2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}  and  x2(t) = e^{2t} \begin{pmatrix} -1 \\ 0 \end{pmatrix} + te^{2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}
are a fundamental set of solutions for the given system, and the general solution is
x(t) = C1 e^{2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + C2 \left[ e^{2t} \begin{pmatrix} -1 \\ 0 \end{pmatrix} + te^{2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right].
Figure 3
Note that the eigenvector v1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix} is clearly visible in the figure.
Example 6. (continued) Let A = \begin{pmatrix} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{pmatrix}. Find a fundamental set of solutions of
x' = Ax.
By our work above, a third solution has the form x3(t) = e^{2t}w + te^{2t}v2, where w satisfies
[A − 2I]w = \begin{pmatrix} 1 & 1 & -1 \\ 2 & 0 & -1 \\ 2 & 2 & -2 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}.
The solutions of this system are w2 = w1, w3 = 2w1 − 1, w1 arbitrary; taking w1 = 0 gives w = (0, 0, −1)^T. Thus
x1(t) = e^t \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix},   x2(t) = e^{2t} \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix},   x3(t) = e^{2t} \begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix} + te^{2t} \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}
are a fundamental set of solutions of the system.
Eigenvalues of Multiplicity 3. Suppose A has an eigenvalue λ of multiplicity 3. Then exactly one of the following holds:
1. λ has three linearly independent eigenvectors v1, v2, v3. Then e^{λt}v1, e^{λt}v2, e^{λt}v3 are three linearly independent solutions corresponding to λ.
2. λ has two linearly independent eigenvectors. Then two solutions have the form e^{λt}v1 and e^{λt}v2, and a third has the form e^{λt}w + te^{λt}v, where v is a suitably chosen eigenvector for λ and w satisfies (A − λI)w = v.
3. λ has only one (independent) eigenvector v. Then three linearly independent solutions of the system have the form:
x1(t) = e^{λt}v,   x2(t) = e^{λt}w + te^{λt}v,   x3(t) = e^{λt}z + te^{λt}w + (1/2)t^2 e^{λt}v,
where (A − λI)w = v and (A − λI)z = w.
Exercises 6.4
Find the general solution of the system x' = Ax where A is the given matrix. If an initial condition is given, also find the solution that satisfies the condition.
1. \begin{pmatrix} 2 & 4 \\ -2 & -2 \end{pmatrix}.
2. \begin{pmatrix} -1 & 2 \\ -1 & -3 \end{pmatrix},  x(0) = \begin{pmatrix} 1 \\ 3 \end{pmatrix}.
3. \begin{pmatrix} -1 & 1 \\ -4 & 3 \end{pmatrix}.
4. \begin{pmatrix} 5 & 2 \\ -2 & 1 \end{pmatrix}.
5. \begin{pmatrix} 3 & 2 \\ -8 & -5 \end{pmatrix},  x(0) = \begin{pmatrix} 3 \\ -2 \end{pmatrix}.
6. \begin{pmatrix} -1 & 1 \\ -4 & -5 \end{pmatrix}.
7. \begin{pmatrix} 3 & -4 & 4 \\ 4 & -5 & 4 \\ 4 & -4 & 3 \end{pmatrix},  x(0) = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}. Hint: 3 is an eigenvalue.
8. \begin{pmatrix} -3 & 0 & -3 \\ 1 & -2 & 3 \\ 1 & 0 & 1 \end{pmatrix}. Hint: −2 is an eigenvalue.
9. \begin{pmatrix} 0 & 4 & 0 \\ -1 & 0 & 0 \\ 1 & 4 & -1 \end{pmatrix}. Hint: −1 is an eigenvalue.
10. \begin{pmatrix} 5 & -5 & -5 \\ -1 & 4 & 2 \\ 3 & -5 & -3 \end{pmatrix}. Hint: 2 is an eigenvalue.
11. \begin{pmatrix} 1 & 1 & 2 \\ 0 & 1 & 0 \\ 0 & 1 & 3 \end{pmatrix},  x(0) = \begin{pmatrix} -1 \\ 3 \\ 2 \end{pmatrix}. Hint: 3 is an eigenvalue.
12. \begin{pmatrix} -3 & 1 & -1 \\ -7 & 5 & -1 \\ -6 & 6 & -2 \end{pmatrix},  x(0) = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}. Hint: 4 is an eigenvalue.
13. \begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & -1 \\ -2 & 1 & 3 \end{pmatrix}. Hint: 2 is an eigenvalue.
14. \begin{pmatrix} 0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{pmatrix}. Hint: 2 is an eigenvalue.
15. \begin{pmatrix} 2 & -1 & -1 \\ 2 & 1 & -1 \\ 0 & -1 & 1 \end{pmatrix},  x(0) = \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix}. Hint: 2 is an eigenvalue.
16. \begin{pmatrix} -2 & 1 & -1 \\ 3 & -3 & 4 \\ 3 & -1 & 2 \end{pmatrix}. Hint: 1 is an eigenvalue.
17. \begin{pmatrix} 2 & 2 & -6 \\ 2 & -1 & -3 \\ -2 & -1 & 1 \end{pmatrix}. Hint: 6 is an eigenvalue.
18. \begin{pmatrix} 8 & -6 & 1 \\ 10 & -9 & 2 \\ 10 & -7 & 0 \end{pmatrix}. Hint: 3 is an eigenvalue.
6.5 Nonhomogeneous Systems

The treatment in this section parallels exactly the treatment of linear nonhomogeneous equations in Sections 3.4 and 3.7.
Recall from Section 6.1 that a linear nonhomogeneous differential system is a system of the form
x1' = a11(t)x1 + a12(t)x2 + · · · + a1n(t)xn + b1(t)
x2' = a21(t)x1 + a22(t)x2 + · · · + a2n(t)xn + b2(t)
. . .                                                    (N)
xn' = an1(t)x1 + an2(t)x2 + · · · + ann(t)xn + bn(t)
where a11(t), a12(t), . . . , a1n(t), a21(t), . . . , ann(t), b1(t), b2(t), . . . , bn(t) are continuous functions on some interval I and the functions bi(t) are not all identically zero on I; that is, there is at least one point a ∈ I and at least one function bi(t) such that bi(a) ≠ 0.
Let A(t) be the n × n matrix
A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{pmatrix}.
Then (N) can be written in the vector-matrix form x' = A(t)x + b(t), and the corresponding homogeneous (reduced) system is
x' = A(t)x.   (H)
If z1(t) and z2(t) are solutions of (N), then
z1'(t) = A(t)z1(t) + b(t)  and  z2'(t) = A(t)z2(t) + b(t),
and the difference x(t) = z1(t) − z2(t) satisfies
x'(t) = z1'(t) − z2'(t) = [A(t)z1(t) + b(t)] − [A(t)z2(t) + b(t)] = A(t)[z1(t) − z2(t)] = A(t)x(t);
that is, the difference of any two solutions of (N) is a solution of the reduced system (H).
Our next theorem gives the “structure” of the set of solutions of (N).
THEOREM 2. Let x1(t), x2(t), . . . , xn(t) be a fundamental set of solutions of the reduced system (H) and let z = z(t) be a particular solution of (N). If u = u(t) is any solution of (N), then there exist constants c1, c2, . . . , cn such that
u(t) = c1x1(t) + c2x2(t) + · · · + cnxn(t) + z(t).
Proof: Let u = u(t) be any solution of (N). By Theorem 1, u(t) − z(t) is a solution of the reduced system (H). Since x1(t), x2(t), . . . , xn(t) are n linearly independent solutions of (H), there exist constants c1, c2, . . . , cn such that
u(t) − z(t) = c1x1(t) + c2x2(t) + · · · + cnxn(t).
Therefore
u(t) = c1x1(t) + c2x2(t) + · · · + cnxn(t) + z(t).
Thus, if x1(t), x2(t), . . . , xn(t) is a fundamental set of solutions of (H) and z(t) is a particular solution of (N), then
x(t) = C1x1(t) + C2x2(t) + · · · + Cnxn(t) + z(t)   (1)
represents the set of all solutions of (N). That is, (1) is the general solution of (N). Another way to look at (1) is: the general solution of (N) consists of the general solution of the reduced system (H) plus a particular solution of (N),
x = [C1x1(t) + C2x2(t) + · · · + Cnxn(t)] + z(t),
where the bracketed term is the general solution of (H) and z(t) is a particular solution of (N).
Variation of Parameters
Let x1 (t), x2 (t), . . . , xn(t) be a fundamental set of solutions of (H) and let X(t) be the cor-
responding fundamental matrix (X is the n×n matrix whose columns are x1 , x2 , . . . , xn ).
Then, as we saw in Section 6.3, the general solution of (H) can be written
x(t) = X(t)C   where   C = \begin{pmatrix} C_1 \\ C_2 \\ \vdots \\ C_n \end{pmatrix}.
In Exercises 6.3, Problem 19, you were asked to show that X satisfies the matrix differential
system
X 0 = A(t)X.
That is, X'(t) = A(t)X(t).
To find a particular solution z of (N), we replace the constant vector C by a vector function u(t) to be determined; that is, we look for a solution of the form z(t) = X(t)u(t). Then z'(t) = X'(t)u(t) + X(t)u'(t) = A(t)X(t)u(t) + X(t)u'(t), so z is a solution of (N) if and only if
X(t)u'(t) + A(t)X(t)u(t) = A(t)X(t)u(t) + b(t),
that is, X(t)u'(t) = b(t). Since X(t) is nonsingular on I, u'(t) = X^{-1}(t)b(t), and we can take u(t) = ∫ X^{-1}(t)b(t) dt. Finally, we have that
z(t) = X(t) ∫ X^{-1}(t)b(t) dt   (2)
is a solution of (N).
Compare this result with the general solution of a first-order linear differential equation given by equation (2) in Section 2.1.
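Formula (2) is easy to evaluate with a computer algebra system once a fundamental matrix is known. The sketch below (SymPy, purely illustrative) uses the fundamental matrix and forcing term of Example 1 below and reproduces the particular solution found there by hand:

    import sympy as sp

    t = sp.symbols('t')

    # fundamental matrix of the reduced system in Example 1 and its forcing term
    X = sp.Matrix([[sp.exp(-2*t), 2*sp.exp(3*t)],
                   [-2*sp.exp(-2*t), sp.exp(3*t)]])
    b = sp.Matrix([sp.exp(t), 2*sp.exp(t)])

    integrand = X.inv() * b
    u = integrand.applyfunc(lambda f: sp.integrate(f, t))
    z = sp.simplify(X * u)
    print(z)    # expected: Matrix([[-exp(t)], [0]])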
Example 1. Find the general solution of the nonhomogeneous linear differential system
x' = \begin{pmatrix} 2 & 2 \\ 2 & -1 \end{pmatrix} x + \begin{pmatrix} e^t \\ 2e^t \end{pmatrix}.
SOLUTION. We first find a fundamental set of solutions of the reduced system by calculating the eigenvalues and eigenvectors of A = \begin{pmatrix} 2 & 2 \\ 2 & -1 \end{pmatrix}:
det(A − λI) = \begin{vmatrix} 2-λ & 2 \\ 2 & -1-λ \end{vmatrix} = λ^2 − λ − 6 = (λ + 2)(λ − 3).
The eigenvalues are λ1 = −2 and λ2 = 3, with corresponding eigenvectors v1 = \begin{pmatrix} 1 \\ -2 \end{pmatrix} and v2 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}, so
X(t) = \begin{pmatrix} e^{-2t} & 2e^{3t} \\ -2e^{-2t} & e^{3t} \end{pmatrix}
is a fundamental matrix for the reduced system.
We now use equation (2) to calculate a particular solution z of the given nonhomogeneous system:
z = \begin{pmatrix} e^{-2t} & 2e^{3t} \\ -2e^{-2t} & e^{3t} \end{pmatrix} \int \frac{1}{5} \begin{pmatrix} e^{2t} & -2e^{2t} \\ 2e^{-3t} & e^{-3t} \end{pmatrix} \begin{pmatrix} e^t \\ 2e^t \end{pmatrix} dt
= \begin{pmatrix} e^{-2t} & 2e^{3t} \\ -2e^{-2t} & e^{3t} \end{pmatrix} \int \frac{1}{5} \begin{pmatrix} -3e^{3t} \\ 4e^{-2t} \end{pmatrix} dt
= \begin{pmatrix} e^{-2t} & 2e^{3t} \\ -2e^{-2t} & e^{3t} \end{pmatrix} \frac{1}{5} \begin{pmatrix} -e^{3t} \\ -2e^{-2t} \end{pmatrix} = \frac{1}{5} \begin{pmatrix} -5e^t \\ 0 \end{pmatrix} = \begin{pmatrix} -e^t \\ 0 \end{pmatrix}.
Thus, z(t) = \begin{pmatrix} -e^t \\ 0 \end{pmatrix} is a particular solution of the given system, and the general solution of the system is
x(t) = C1 e^{-2t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + C2 e^{3t} \begin{pmatrix} 2 \\ 1 \end{pmatrix} + \begin{pmatrix} -e^t \\ 0 \end{pmatrix}.
Example 2. Find the general solution of the nonhomogeneous linear differential system
x' = \begin{pmatrix} 0 & 1 \\ -2t^{-2} & 2t^{-1} \end{pmatrix} x + \begin{pmatrix} 2 \\ 3t^{-1} \end{pmatrix}.
SOLUTION. The form of the coefficient matrix indicates that the reduced system corresponds to the second-order equation
y'' − (2/t)y' + (2/t^2)y = 0   (see Section 6.1),
which has solutions of the form y = t^r. As you can verify, y1 = t and y2 = t^2 are a fundamental set of solutions of the differential equation. These solutions convert to corresponding solutions of the reduced system:
y1 = t → x1 = \begin{pmatrix} t \\ 1 \end{pmatrix},   y2 = t^2 → x2 = \begin{pmatrix} t^2 \\ 2t \end{pmatrix}.
Initial-Value Problems
Suppose, for example, that we want the solution of the system in Example 1 that satisfies the initial condition x(0) = \begin{pmatrix} 2 \\ 4 \end{pmatrix}. Setting t = 0 in the general solution and equating to x(0) gives
C1 + 2C2 = 3
−2C1 + C2 = 4
and, as you can check, the solution is C1 = −1, C2 = 2. Thus, the solution of the initial-value problem is
x(t) = −e^{-2t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + 2e^{3t} \begin{pmatrix} 2 \\ 1 \end{pmatrix} + \begin{pmatrix} -e^t \\ 0 \end{pmatrix}.
By fixing a point a on the interval I, the general solution of (1) given by (3) can be written as
x(t) = X(t)C + X(t) ∫_a^t X^{-1}(s)b(s) ds,   t ∈ I.
This form is useful in solving system (1) subject to an initial condition x(a) = x0. Substituting t = a in (3) gives x(a) = X(a)C, so C = X^{-1}(a)x0, and the solution of the initial-value problem
x' = A(t)x + b(t),   x(a) = x0
is given by
x(t) = X(t)X^{-1}(a)x0 + X(t) ∫_a^t X^{-1}(s)b(s) ds.   (4)
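As a sanity check on the initial-value problem worked above (x(0) = (2, 4) for the system of Example 1), the closed-form solution can be compared with a direct numerical integration. A sketch; the tolerances are tightened because e^{3t} grows quickly:

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[2.0, 2.0],
                  [2.0, -1.0]])

    def rhs(t, x):
        return A @ x + np.array([np.exp(t), 2*np.exp(t)])

    def exact(t):
        return (-np.exp(-2*t) * np.array([1.0, -2.0])
                + 2*np.exp(3*t) * np.array([2.0, 1.0])
                + np.array([-np.exp(t), 0.0]))

    sol = solve_ivp(rhs, (0.0, 1.0), [2.0, 4.0], rtol=1e-10, atol=1e-12)
    print(sol.y[:, -1] - exact(1.0))    # both components should be near 0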
Exercises 6.5
Find the general solution of the system x' = A(t)x + b(t) where A and b are given.
1. A(t) = \begin{pmatrix} 4 & -1 \\ 5 & -2 \end{pmatrix},  b(t) = \begin{pmatrix} e^{-t} \\ 2e^{-t} \end{pmatrix}
2. A(t) = \begin{pmatrix} 2 & -1 \\ 3 & -2 \end{pmatrix},  b(t) = \begin{pmatrix} 0 \\ 4t \end{pmatrix}
3. A(t) = \begin{pmatrix} 2 & 2 \\ -3 & -3 \end{pmatrix},  b(t) = \begin{pmatrix} 1 \\ 2t \end{pmatrix}
4. A(t) = \begin{pmatrix} 3 & 2 \\ -4 & -3 \end{pmatrix},  b(t) = \begin{pmatrix} 2\cos t \\ 2\sin t \end{pmatrix}
5. A(t) = \begin{pmatrix} -3 & 1 \\ 2 & -4 \end{pmatrix},  b(t) = \begin{pmatrix} 3t \\ e^{-t} \end{pmatrix}
6. A(t) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},  b(t) = \begin{pmatrix} \sec t \\ 0 \end{pmatrix}
7. A(t) = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix},  b(t) = \begin{pmatrix} e^t \cos t \\ e^t \sin t \end{pmatrix}
8. A(t) = \begin{pmatrix} 3t^2 & t \\ 0 & t^{-1} \end{pmatrix},  b(t) = \begin{pmatrix} 4t^2 \\ 1 \end{pmatrix}
9. A(t) = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 3 \end{pmatrix},  b(t) = \begin{pmatrix} e^t \\ e^{2t} \\ te^{3t} \end{pmatrix}
10. A(t) = \begin{pmatrix} 1 & -1 & 1 \\ 0 & 0 & 1 \\ 0 & -1 & 2 \end{pmatrix},  b(t) = \begin{pmatrix} 0 \\ e^t \\ e^t \end{pmatrix}
6.6 Direction Fields and Phase Planes
There are many types of differential equations which do not have solutions that can be easily written in terms of elementary functions such as exponentials, sines, and cosines, or even as integrals of such functions. Fortunately, when these equations are of first or second order one can still gain a good understanding of the behavior of their solutions using geometric methods. In this section we discuss the basics of phase plane analysis, an extension of the method of slope fields discussed in Section 2.8.
Let us first consider a single first-order equation
y' = f(y)
and think about it geometrically. The equality implies that the graph of a solution of this equation in the xy-plane must have slope equal to f(y) at the point (x, y). For instance, for the differential equation
y' = 2y(1 − y)
the slope of the solution equals 0 at all points for which y = 1. Indeed, since the solution satisfying the initial condition y(0) = 1 is the constant solution y(x) ≡ 1, this is what we expect. The following figure shows this solution (in red), along with the solution satisfying y(0) = 0.1.
Now consider a system of two autonomous first-order equations
x1' = f(x1, x2)
x2' = g(x1, x2).
Drawing slope fields for x1 and x2 separately will not work, since each slope field depends on both variables. However, note that any solution of this system, (x1(t), x2(t)), parametrically defines a curve in the x1,x2-plane. Indeed, the vector (x1'(t0), x2'(t0)) is the tangent vector to this parametric curve at the point (x1(t0), x2(t0)). Therefore, we can sketch the solutions of the differential system by selecting a number of points in the plane. At each point in the collection we draw a vector emanating from the point, so that the vector (f(x1, x2), g(x1, x2)) emanates from the point (x1, x2). The collection of these vectors is called a vector field. In practice we may have to scale the lengths of the vectors by a constant factor.
As an example, consider the linear system
x1' = x1 + 5x2
x2' = 3x1 + 3x2
considered in Example 3 of Section 6.3. The figure below shows vectors attached to points spaced 0.1 units apart in the horizontal and vertical directions. For instance, to the point with coordinates x1 = 0.3 and x2 = 0.5 we attach the vector with components x1 + 5x2 = 0.3 + 5 × 0.5 = 2.8 in the horizontal and 3x1 + 3x2 = 3 × 0.3 + 3 × 0.5 = 2.4 in the vertical direction. Similarly, the vector (−3.2, −3.6) emanates from the point (−0.7, −0.5). The lengths of all vectors are scaled by an equal factor so that they all fit in the figure.
Also shown are two solutions of the differential system, one with initial condition
(x1 (0), x2(0)) = (−0.5, 0.25), and the other with (x1 (0), x2(0)) = (0.2, −0.1). The arrows
point in the direction in which the solutions are traversed. You can also see that the solu-
tions diverge from the origin in the direction of the eigenvector (1, 1) corresponding to the
positive eigenvalue.
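Vector fields like the one just described are easy to draw with matplotlib's quiver function. A sketch for the system above; the grid spacing and scaling are chosen arbitrarily.

    import numpy as np
    import matplotlib.pyplot as plt

    x1, x2 = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
    u = x1 + 5*x2          # horizontal components of the field
    v = 3*x1 + 3*x2        # vertical components

    plt.quiver(x1, x2, u, v, angles='xy')
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.title("Vector field for x1' = x1 + 5 x2, x2' = 3 x1 + 3 x2")
    plt.show()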
As another example, consider the second-order equation
y'' + εy' + y = 0.
This equation is known as the linear damped pendulum. If we think of y as the angular displacement from the resting position and y' as the angular velocity of the pendulum, then the solutions of the equation will describe its oscillation around the equilibrium position at y = 0. As we will see shortly, ε measures the amount of damping.
If we let x1 = y and x2 = y', then we obtain the following pair of equations
x1' = x2      (1)
x2' = −εx2 − x1.      (2)
This is a linear system, and we could solve it using the methods of Sections 6.3 and 6.4. Instead, let us look at the phase plane and see what happens as we vary ε.
In the figure below you will see the vector field and solutions with initial condition x1 = x2 = 0.8, in both cases. In the left figure ε = 0.1, while on the right ε = 0.2. As you could guess, a pendulum that is subject to more damping will oscillate fewer times before reaching the equilibrium at the origin.
The linear system (1)-(2) only describes the behavior of the pendulum accurately when the displacement from the rest position is small. For larger displacements it is necessary to use the nonlinear system
x1' = x2      (3)
x2' = −εx2 − sin(x1).      (4)
Although this system may not look much more complicated than the previous one, it is much more difficult to solve. However, a phase plane analysis can be easily performed in this case as well. In the figure below ε = 0.05, and the initial conditions are x1 = 0, x2 = 2π/3.
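The nonlinear pendulum has no elementary closed-form solution, but its phase-plane picture is a few lines of code away. A sketch using SciPy and matplotlib, with the same parameter value and initial condition as in the figure; the integration interval and step size are arbitrary choices.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.integrate import solve_ivp

    eps = 0.05

    def rhs(t, x):
        x1, x2 = x
        return [x2, -eps*x2 - np.sin(x1)]

    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 2*np.pi/3], max_step=0.05)

    plt.plot(sol.y[0], sol.y[1])
    plt.xlabel('x1 (angular displacement)')
    plt.ylabel('x2 (angular velocity)')
    plt.show()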
Exercises 6.6
2. Solve the system for the damped pendulum (1)-(2) when ε = 0. Sketch the vector field and the solutions. What happens to the amplitude of the solutions as the pendulum oscillates in this case?
Solution: In this case the solutions have the form
x1(t) = C1 cos t + C2 sin t,   x2(t) = −C1 sin t + C2 cos t.
3. Solve the equation for the damped linear pendulum (1)-(2). Show that when ε > 0 the solutions oscillate with diminishing amplitude.