FullNotes - ODE
MATH 2310
Contents
Course Notes
1. Differential equations we can solve
1.1. Preliminaries
1.2. First order ordinary differential equations
1.3. Higher order linear ODE’s with constant coefficients
2. Existence-Uniqueness
3. Direction Fields
4. Numerical solutions of differential equations
5. The Laplace Transform
6. The Fourier Transform
6.1. Fourier series
6.2. Periods other than 2π
6.3. Complex Fourier series
6.4. Fourier integrals
6.5. Sine and Cosine Fourier transforms
6.6. The Fourier transform
7. Higher order homogeneous linear Ordinary Differential Equations: the Wronskian
8. Series solutions
9. Other methods for higher-order ODEs
Exercises
1. Exercises — Solvable DE’s
2. Exercises — Existence-Uniqueness
3. Exercises — Direction Fields
4. Exercises — Numerical solutions of differential equations
5. Exercises — The Laplace Transform
6. Exercises — The Fourier Transform
7. Exercises — Wronskians
8. Exercises — Series solutions
Date: Semester 1, 2006.
DAVID PASK, REVISED BY AIDAN SIMS
For instance, the differential equation xy′ = 2y given in Example 1.5 is an example where F(x, y, y′) = xy′ − 2y. There are special classes of first order differential equations which we shall study:
(A) Separable: A first order separable differential equation has the form

(1.1)    \frac{dy}{dx} = Y(y)X(x),

where Y is a function of y only and X a function of x only – these differential equations are not necessarily linear (see Example 1.6 below). First order separable ordinary differential equations occur in a variety of contexts: Newton’s law of heating/cooling, population models such as the Logistic equation, radioactive decay, compound interest, reaction rates, disease infection models, and mass transfer problems such as the ones given in Applications 1.8 below.
We solve a differential equation of the form (1.1) by “separating the variables”, which
involves moving all the y’s across to the left side and the x’s to the right side. Integrating
the resulting equation with respect to x we then have
\int \frac{1}{Y(y)}\frac{dy}{dx}\,dx = \int X(x)\,dx,   which is the same as

\int \frac{1}{Y(y)}\,dy = \int X(x)\,dx   by the integration by substitution rule.
Example 1.6. To solve \frac{dy}{dx} = \frac{\sqrt{y}}{x}, where y ≥ 0 and x ≠ 0, we separate the variables to get

\int \frac{1}{\sqrt{y}}\,dy = \int \frac{1}{x}\,dx.

Recall that \int x^n\,dx = \frac{x^{n+1}}{n+1} + c if n ≠ −1, and if n = −1 then \int \frac{1}{x}\,dx = \ln|x| + c. So if we integrate the above expressions we have

(1.2)    2\sqrt{y} = \ln|x| + c,   and so if we square then

y = \Big(\tfrac{1}{2}\ln|x| + \tfrac{c}{2}\Big)^2   provided that \tfrac{1}{2}\ln|x| + \tfrac{c}{2} ≥ 0,
which gives a “one–parameter family of solutions” since there is a solution for each value
of c – this (infinite) family is sometimes called the general solution of the differential
equation. The graphs of these functions form a family of curves in the plane. The curves
fill out a region in the plane for which the differential equation makes sense (here y ≥ 0 and x ≠ 0).
[Figure: the family of solution curves y = (\tfrac{1}{2}\ln|x| + \tfrac{c}{2})^2, sketched for c = −1, 0, 1, 2 on both sides of the y-axis; the x-axis is marked at −1 and 1.]
The existence and uniqueness theorem in the next section (Theorem 2) essentially
says that under certain hypotheses there is a unique curve through any point in this
region. For instance the differential equation y′ = 2x has general solution y = x² + c, and these parabolas, as c varies, fill out the plane.
To obtain a solution to our differential equation we need to specify the value of the
arbitrary constant c, which is usually done by giving an “initial (or boundary) condition”
which corresponds to choosing a point in the plane which the solution curve must pass
through.
For instance, if we wish to solve the initial value problem

\frac{dy}{dx} = \frac{\sqrt{y}}{x},   where y(1) = 0,

which corresponds to finding a solution curve which passes through the point (1, 0): from (1.2) the general solution to the differential equation is 2\sqrt{y(x)} = \ln|x| + c, and to solve the initial value problem we substitute x = 1, y = 0 and determine the value of c. This gives 2\sqrt{0} = \ln(1) + c, which implies that c = 0, and so the required solution is

2\sqrt{y} = \ln|x| = \ln(x),   or   y = \tfrac{1}{4}(\ln x)^2,   where x ≥ 1,

since \ln x ≥ 0 if x ≥ 1. This is the curve labelled c = 0 on the right hand side of the above diagram. Note that if y(−1) = 0 then we would have a different solution: 2\sqrt{y} = \ln(−x), since x < 0. This is the curve labelled c = 0 on the left hand side of the above diagram.
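As a quick check (an addition to these notes, a minimal sketch assuming SymPy is available), the separable equation of Example 1.6 and the particular solution y = (ln x)²/4 can be verified symbolically:

import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# the separable ODE dy/dx = sqrt(y)/x from Example 1.6
ode = sp.Eq(y(x).diff(x), sp.sqrt(y(x))/x)
print(sp.dsolve(ode, y(x)))   # a form equivalent to 2*sqrt(y) = ln(x) + C

# check the particular solution y = (ln x)^2/4 at a sample point with x >= 1
cand = sp.log(x)**2 / 4
residual = cand.diff(x) - sp.sqrt(cand)/x
print(sp.simplify(residual.subs(x, 2)))   # 0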
(B) Linear: A first order linear differential equation is of the form

(1.3)    a_1(x)\frac{dy}{dx} + a_0(x)y = h(x),

where a_1(x), a_0(x), h(x) are functions of x only. Note that if h(x) = 0 then the differential equation (1.3) is also separable. First order linear ordinary differential equations occur in a variety of contexts: Newton’s law of cooling/heating, radioactive decay, compound interest, and electrical circuit theory such as the one given in Applications 1.8 below.

We solve a differential equation of the form (1.3) by first dividing through by a_1(x) to rewrite this equation in the standard form:

(1.4)    \frac{dy}{dx} + p(x)y = q(x).
If we now multiply equation (1.4) through by I = I(x) we get

(1.5)    I\frac{dy}{dx} + Ipy = Iq.

Suppose that the function I is such that the left–hand side of (1.5) is the derivative of Iy, that is, of the form

(1.6)    I\frac{dy}{dx} + \frac{dI}{dx}y = \frac{d}{dx}(Iy);

then comparing the left–hand sides of the two expressions (1.5) and (1.6) we find that I must satisfy the first order separable differential equation

\frac{dI}{dx} = Ip,   and so   \int \frac{dI}{I} = \int p(x)\,dx,   or

\ln|I| = \int p(x)\,dx + c,   in which case   I(x) = c' e^{\int p(x)\,dx},

where c' is a constant which we may choose to be 1. The function I(x) is called the integrating factor. For such a function I, the differential equation (1.5) or (1.4) then becomes

\frac{d}{dx}(Iy) = Iq.

This differential equation is exact and we can solve it by integrating both sides with respect to x. Hence we can see that the solution to (1.5) or (1.4) is then

Iy = \int Iq\,dx + c   or   y(x) = \frac{1}{I(x)}\int I(x)q(x)\,dx + \frac{c}{I(x)}.
Example 1.7. Find the general solution of the differential equation

\frac{dM}{dt} + \Big(\frac{3}{t + 100}\Big)M = 7,   where M = M(t), t > 0,

which is already in the standard form (1.4). To compute the integrating factor we evaluate

\mu(t) = e^{\int \frac{3}{t+100}\,dt} = e^{3\ln|t+100|}.

Since t > 0 we have t + 100 > 0, so |t + 100| = t + 100 and hence \mu(t) = (t + 100)^3. If however t < −100 then t + 100 < 0, so |t + 100| = −(t + 100) and then \mu(t) = −(t + 100)^3.

Returning to the case where t > 0, multiplying the differential equation through by the integrating factor \mu we get

(t + 100)^3\frac{dM}{dt} + 3(t + 100)^2 M = 7(t + 100)^3,   or

\frac{d}{dt}\big((t + 100)^3 M\big) = 7(t + 100)^3,   which integrates to

(t + 100)^3 M = \frac{7}{4}(t + 100)^4 + A,   and so the general solution is

M = \frac{7}{4}(t + 100) + \frac{A}{(t + 100)^3},

where A is a constant. Observe that the solution has a discontinuity at t = −100, the value of t at which the original differential equation is undefined. If M(0) = 0 then a short calculation shows that A = −\frac{7}{4} × 10^8. A plot of the solution curve is shown by the part of the graph in the picture below to the right of t = −100.
If t < −100, then as mentioned above the integrating factor is \mu(t) = −(t + 100)^3. Multiplying the differential equation through by the integrating factor we get

−(t + 100)^3\frac{dM}{dt} − 3(t + 100)^2 M = −7(t + 100)^3,   or

\frac{d}{dt}\big(−(t + 100)^3 M\big) = −7(t + 100)^3,   which integrates to

−(t + 100)^3 M = −\frac{7}{4}(t + 100)^4 + A,   and so the general solution is

M = \frac{7}{4}(t + 100) − \frac{A}{(t + 100)^3},

where A is a constant. Observe that the solution in this case is essentially the same as for t > −100 and has a discontinuity at t = −100. If M(−200) = 0 then a short calculation shows that A = \frac{7}{4} × 10^8. A plot of the solution curve is shown by the part of the graph in the picture below to the left of t = −100.
[Figure: plot of M(t) against t for the two branches of the solution, to the left and to the right of the discontinuity at t = −100; the t-axis is marked at −200 and 0.]
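A quick symbolic check of the general solution in Example 1.7 and of the constant A when M(0) = 0 (an addition to these notes; a minimal SymPy sketch):

import sympy as sp

t, A = sp.symbols('t A')
M = sp.Rational(7, 4)*(t + 100) + A/(t + 100)**3   # general solution found above

# residual of dM/dt + 3/(t + 100)*M - 7 should vanish identically
residual = M.diff(t) + 3/(t + 100)*M - 7
print(sp.simplify(residual))                       # 0

# the initial condition M(0) = 0 forces A = -(7/4)*10**8
print(sp.solve(sp.Eq(M.subs(t, 0), 0), A))         # [-175000000]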
Applications 1.8.
(i) First order linear separable differential equations frequently occur in models of
some process where the rate of change of some quantity is proportional to the
amount of that quantity.
Suppose that dry ice sublimes at a rate proportional to its (exposed) surface area. Then the equation which models this behaviour is arrived at as follows: the independent variable is t and the dependent variable is V = V(t), the volume of dry ice. In our model we are assuming that

\frac{dV}{dt} = −kS,
where S = S(t) is the surface area of dry ice at the time its volume is V and
k > 0 is the constant of proportionality. Note the minus sign since k is positive
and the volume is decreasing.
If the dry ice is in the shape of a spherical ball, and retains its shape as it sublimes, then V = \frac{4}{3}\pi r^3, where r = r(t) is the radius of the ball at time t, and then S = 4\pi r^2, so that by the chain rule

\frac{dV}{dt} = \frac{dV}{dr}\frac{dr}{dt} = \frac{4}{3}\pi\cdot 3r^2\frac{dr}{dt} = −k\cdot 4\pi r^2,

which simplifies to

\frac{dr}{dt} = −k,

which is a first order separable differential equation. Integrating directly we may see that this differential equation has general solution r = −kt + c. In fact c is equal to the radius r_0 of the sphere at time t = 0, and then substituting we find that

S = 4\pi(r_0 − kt)^2   and   V = \frac{4}{3}\pi(r_0 − kt)^3.
A typical plot of V (t) against t has the form:
[Figure: V(t) against t: a decreasing curve, steep at first and flattening as the ball shrinks.]
One may equally apply the same technique to models for radioactive decay or
Newton’s law of cooling/warming. For more examples see the Exercises at the
end of this section (or Kreysig section 1.4).
(ii) A common source of examples of first order linear (or integrating factor) differential equations is “mixing” problems (see Kreysig section 1.6): A large
tank initially contains 2000 litres of pure water. Brine containing 0.25 kg. of salt
per litre flows into the tank at a rate of 20 litres per minute. The well–mixed
solution flows out of the tank at the rate of 40 litres per minute. Find the num-
ber of kg. of salt M (t) in the tank at any time t, and the concentration of the
last fluid to flow out of the tank.
The net rate at which M(t) changes at time t is given by

\frac{dM}{dt} = (\text{rate at which salt mass enters}) − (\text{rate at which salt mass leaves}) = (20 × 0.25) − 40\frac{M(t)}{V(t)}   kg per minute,

where V(t) is the volume of water in the tank at time t. Alternatively, we may consider the change ∆M of salt mass in the tank over a small time interval t to t + ∆t:

∆M = (20 × 0.25)∆t − 40\frac{M(t)}{V(t)}∆t;

letting ∆t → 0 we arrive at the same differential equation. Now

\frac{dV}{dt} = 20 − 40 = −20 litres per minute,   and so   V = −20t + A litres.

If the brine is added at time t = 0, then V(0) = A = 2000 and so V(t) = 2000 − 20t litres. Observe that the tank is empty when V(t) = 0, i.e. when t = 100 minutes. Hence we have the differential equation

\frac{dM}{dt} = 5 − \frac{40}{20(100 − t)}M,   or

(1.7)    \frac{dM}{dt} + \frac{2}{100 − t}M = 5.

The integrating factor is then

\mu(t) = e^{\int \frac{2}{100 − t}\,dt} = e^{−2\ln|100 − t|} = (100 − t)^{−2},   as t < 100.

Hence we may now rewrite (1.7) as:

\frac{d}{dt}\big((100 − t)^{−2}M\big) = \frac{5}{(100 − t)^2},   so, integrating, we get

(100 − t)^{−2}M = \frac{5}{100 − t} + B,   and so the general solution is

M = 5(100 − t) + B(100 − t)^2,

where B is a constant. When t = 0, M = 0 and so 0 = M(0) = 500 + B × 10^4, which gives B = −5 × 10^{−2}; hence

M(t) = 5(100 − t) − 5 × 10^{−2}(100 − t)^2.

A plot of M(t) against t has the form:

[Figure: M(t) against t, rising from 0 at t = 0 to a maximum of 125 kg at t = 50 minutes, and returning to 0 at t = 100 minutes when the tank is empty.]
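The salt mass found above is easy to check numerically; the following NumPy sketch (an addition to these notes, assuming NumPy is available) confirms the maximum of 125 kg at t = 50 minutes:

import numpy as np

# M(t) = 5*(100 - t) - 0.05*(100 - t)**2 from the mixing problem above
t = np.linspace(0.0, 100.0, 1001)
M = 5*(100 - t) - 0.05*(100 - t)**2

i = np.argmax(M)
print(t[i], M[i])     # maximum of about 125 kg of salt at t = 50 minutes
print(M[0], M[-1])    # 0 kg at t = 0 and again at t = 100 (tank empty)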
Finally, the concentration C at time t is given by C(t) = M(t)/V(t). To obtain the differential equation satisfied by the concentration we substitute M = CV into (1.7) and use the product rule for differentiation to get

\frac{d}{dt}(CV) + \frac{2}{100 − t}CV = 5

V\frac{dC}{dt} + C\frac{dV}{dt} + \frac{2}{100 − t}\,20(100 − t)C = 5   since V = 20(100 − t),

20(100 − t)\frac{dC}{dt} − 20C + 40C = 5   since \frac{dV}{dt} = −20,   and so

\frac{dC}{dt} + \frac{1}{100 − t}C = \frac{5}{20(100 − t)}.

This first order linear differential equation has integrating factor (100 − t)^{−1} and, since C(0) = 0, has solution

C(t) = \frac{1}{4} − \frac{1}{400}(100 − t),

so that C(100) = 0.25, and so the last fluid to leave the tank has concentration 0.25 kg per litre. Note that substituting the solutions for M(t) and V(t) found earlier into C(t) = M(t)/V(t) gives the same answer. Physical considerations might also lead us to the same conclusion without calculation.
(iii) Electrical circuits. The voltage drop across an inductor of inductance L Henry is L\frac{dI}{dt} volts, where I is the current through the inductor (in Amps). The voltage drop across a capacitor with charge Q Coulombs and capacitance C Farads is \frac{Q}{C} volts. The voltage drop across a resistor of resistance R Ohms is IR, where I is the current through the resistor (in Amps). Given that the current I is the rate of flow of charge, we have I = \frac{dQ}{dt}.

Kirchhoff’s second law says that in any loop the sum of the voltage drops is equal to the supplied voltage (e.g. from a battery). Hence for an RC–circuit, which consists of a capacitor, a resistor and an e.m.f. source E(t), we have

(1.8)    IR + \frac{1}{C}Q = E(t)   or   R\frac{dQ}{dt} + \frac{1}{C}Q = E(t).

For an RL–circuit, which consists of a resistor, an inductor and an e.m.f. source E(t), we have

(1.9)    IR + L\frac{dI}{dt} = E(t).

Both equations (1.8) and (1.9) give rise to first order linear differential equations.
For instance, if I = I(t) is the current in an RL–circuit at time t with R = 5 Ohms, L = 4 Henrys and constant supplied voltage of E = 8 volts, then (1.9) becomes

(1.10)    4\frac{dI}{dt} + 5I = 8   or   \frac{dI}{dt} + \frac{5}{4}I = 2.

The integrating factor is then

\mu(t) = e^{\int \frac{5}{4}\,dt} = e^{\frac{5}{4}t}.

Hence the solution to our differential equation is given by

\frac{d}{dt}\big(e^{\frac{5}{4}t} I\big) = 2e^{\frac{5}{4}t},   and so   e^{\frac{5}{4}t} I(t) = \frac{8}{5}e^{\frac{5}{4}t} + A;

the general solution is then I(t) = \frac{8}{5} + Ae^{−\frac{5}{4}t},

where A is a constant. If the current is zero at time t = 0, i.e. I(0) = 0, then 0 = \frac{8}{5} + A and so A = −\frac{8}{5}. A plot of I(t) against t has the form:
[Figure: I(t) rising from 0 and approaching the horizontal asymptote I = 8/5 as t increases.]
Evidently I → \frac{8}{5} Amps as t → ∞. Such a steady state can also be obtained from the original differential equation (1.10) by putting \frac{dI}{dt} = 0.
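As a check (an addition to these notes; a minimal SymPy sketch), solving (1.10) with I(0) = 0 reproduces both the solution and the steady state:

import sympy as sp

t = sp.symbols('t', nonnegative=True)
I = sp.Function('I')

# the RL-circuit equation (1.10): 4*dI/dt + 5*I = 8 with I(0) = 0
sol = sp.dsolve(sp.Eq(4*I(t).diff(t) + 5*I(t), 8), I(t), ics={I(0): 0})
print(sol)                           # I(t) = 8/5 - 8*exp(-5*t/4)/5
print(sp.limit(sol.rhs, t, sp.oo))   # 8/5, the steady-state current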
1.3. Higher order linear ODE’s with constant coefficients. As we have seen,
there are only a few special cases of first order differential equations that we can solve
explicitly. For higher-order ODE’s, the situation is even worse.
There is really only one class of higher-order ODE’s that we can truly solve system-
atically. These are the higher-order linear ODE’s with constant coefficients.
A linear ODE with constant coefficients is a differential equation of the form
(1.11)    c_n\frac{d^n y}{dx^n} + c_{n−1}\frac{d^{n−1} y}{dx^{n−1}} + \dots + c_0 y = f(x),

where c_n, c_{n−1}, \dots, c_0 are complex numbers.

We handle this in two steps, which we will revisit later when we consider more general linear ODE’s. The idea is that we first find the complete solution to the homogeneous differential equation

(1.12)    c_n\frac{d^n y}{dx^n} + c_{n−1}\frac{d^{n−1} y}{dx^{n−1}} + \dots + c_0 y = 0,

and call this the general solution to the homogeneous ODE, and denote it y_g. Then we “guess” a solution to the non-homogeneous DE (1.11), which we call y_p. If we add y_p to y_g, we obtain the complete solution to the original ODE.
The homogeneous DE. Inspired by the solution in the case n = 1 we look for a solution of the form y(x) = e^{rx}, where r is a complex number. Substituting into the equation (1.12) we get

c_n r^n e^{rx} + c_{n−1} r^{n−1} e^{rx} + \dots + c_0 e^{rx} = 0

e^{rx}\big(c_n r^n + c_{n−1} r^{n−1} + \dots + c_0\big) = 0

for all x. Since e^{rx} is not zero for any x it follows that we must have

(1.13)    c_n r^n + c_{n−1} r^{n−1} + \dots + c_0 = 0.

Equation (1.13) is called the characteristic or auxiliary equation. Suppose now that the characteristic equation has roots r_1, \dots, r_n. To the roots of (1.13) we attach solutions to the homogeneous equation (1.12) as follows:

(1) If r_j is distinct from all the other roots then the function y_j(x) = e^{r_j x} is a corresponding solution of (1.12).

(2) If r_j is of multiplicity k > 1 then we obtain a collection of k different solutions of (1.12): {e^{r_j x}, xe^{r_j x}, \dots, x^{k−1}e^{r_j x}}. For r_j = 1 and k = 2 it is straightforward to use the Wronskian to check that e^x and xe^x are linearly independent.

Since every complex polynomial of degree n factors into n terms of the form (r − z_i), where each z_i is a complex number, the above procedure always gives us n solutions f_1(x), \dots, f_n(x), where each f_i(x) is of the form f_i(x) = x^{m_i}e^{z_i x}.
To obtain the general solution to the homogeneous higher-order linear ODE with
constant coefficients (1.12), we take all possible combinations of this fundamental set of
n solutions. That is,
y_g(x) := A_1 f_1(x) + \dots + A_n f_n(x) = A_1 x^{m_1}e^{z_1 x} + \dots + A_n x^{m_n}e^{z_n x}.
We will see later in the section on general higher-order linear ODE’s how to check that
these solutions are all truly distinct and that they combine to give the general solution.
For now we just take it on faith.
Remark 1.9. If all the c_i are real, so the DE is a “real” DE, then it is not particularly satisfying to obtain complex solutions. For example, for the DE y'' + y = 0, the characteristic equation is r^2 + 1 = 0, so the solutions to the DE are f_1(x) = e^{ix} and f_2(x) = e^{−ix}.

Fortunately, there is a way around this. If all the c_i are real, then the characteristic polynomial is a real polynomial, so any complex roots occur in complex conjugate pairs α + iβ and α − iβ. In this case, the corresponding solutions are e^{(α+iβ)x} and e^{(α−iβ)x}, which we can rewrite as

e^{αx}e^{iβx}   and   e^{αx}e^{−iβx}.

If we use the formula e^{iθ} = \cos(θ) + i\sin(θ) that we learned in first year, we can deduce that

{Ae^{αx}e^{iβx} + Be^{αx}e^{−iβx} : A, B ∈ R} = {Ce^{αx}\cos(βx) + De^{αx}\sin(βx) : C, D ∈ R}.

So when all the c_i are real, and α + iβ and α − iβ are roots of the characteristic polynomial, we can replace e^{(α+iβ)x} and e^{(α−iβ)x} with e^{αx}\cos(βx) and e^{αx}\sin(βx) throughout the resulting fundamental set of solutions.
Examples 1.10. The general solution of

(1) y'' + 2y' − 3y = 0 is y(x) = Ae^x + Be^{−3x}, since the characteristic equation is (r + 3)(r − 1) = 0.

(2) y'' − 4y' + 4y = 0 is y(x) = Ae^{2x} + Bxe^{2x}, since the characteristic equation is (r − 2)^2 = 0.

(3) y'' − 2y' + 17y = 0 is y(x) = Ae^{(1+4i)x} + Be^{(1−4i)x}, or Ae^x\cos 4x + Be^x\sin 4x, since the characteristic equation is (r − 1 + 4i)(r − 1 − 4i) = (r − (1 − 4i))(r − (1 + 4i)) = 0.

(4) y'' + 9y = 0 is Ae^{3ix} + Be^{−3ix}, or y(x) = A\cos 3x + B\sin 3x, since the characteristic equation is (r − 3i)(r + 3i) = 0.

(5) y''' − y' = 0 is y(x) = Ae^{0\cdot x} + Be^x + Ce^{−x} = A + Be^x + Ce^{−x}, since the characteristic equation is r(r − 1)(r + 1) = 0.

(6) y'' = 0 is y(x) = Ae^{0\cdot x} + Bxe^{0\cdot x} = A + Bx, since the characteristic equation is r^2 = 0.

(7) y^{(5)} − y^{(4)} − y' + y = 0 is y(x) = Ae^x + Bxe^x + Ce^{−x} + De^{ix} + Ee^{−ix}, or Ae^x + Bxe^x + Ce^{−x} + D\cos(x) + E\sin(x), since the characteristic equation is (r − 1)^2(r + 1)(r − i)(r + i) = 0.

(8) y'' − 2iy' − y = 0 is Ae^{ix} + Bxe^{ix}, since the characteristic equation is r^2 − 2ir − 1 = (r − i)^2 = 0.
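The characteristic roots can also be found numerically; the following NumPy sketch (an addition to these notes, assuming NumPy is available) recovers the roots used in example (7):

import numpy as np

# characteristic polynomial of y^(5) - y^(4) - y' + y = 0:
# r^5 - r^4 - r + 1, coefficients from highest to lowest degree
coeffs = [1, -1, 0, 0, -1, 1]
roots = np.roots(coeffs)
print(np.round(roots, 6))
# expect 1 (twice), -1, i and -i, giving the fundamental solutions
# e^x, x*e^x, e^{-x}, cos(x), sin(x)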
The particular solution. Now we need a single solution to the non-homogeneous
DE (1.11). The idea is that we guess. To do this, we look at the right-hand side of the
DE, and try an arbitrary function of that form. We substitute our guess into the DE
and then try to solve to find the coefficients.
The following table indicates what guess to make based on what is on the right-hand
side of the DE. If there is a sum of terms on the right-hand side, we add together the
guesses for the individual terms.
type of f(x)            Guess for y_p
x^n                     A_n x^n + · · · + A_1 x + A_0
e^{αx}                  Ae^{αx}
sin(αx) or cos(αx)      A sin(αx) + B cos(αx)
sinh(αx) or cosh(αx)    A sinh(αx) + B cosh(αx)
x^n f(x)                (A_n x^n + · · · + A_1 x + A_0) × (guess for f(x))

There is only one problem. If the guess is already one of the fundamental set of solutions to the homogeneous DE, we can’t use it, so we multiply it by a large enough power of x to eliminate the problem, and try that instead.
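Before the worked examples below, here is a small symbolic check (an addition to these notes, assuming SymPy is available) of the guess-and-substitute recipe for the first of them, y'' + 2y' − 3y = e^{2x}:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# non-homogeneous DE y'' + 2y' - 3y = exp(2x) (Example 1.11(1) below)
ode = sp.Eq(y(x).diff(x, 2) + 2*y(x).diff(x) - 3*y(x), sp.exp(2*x))
print(sp.dsolve(ode, y(x)))
# expect y(x) = C1*exp(-3*x) + C2*exp(x) + exp(2*x)/5

# the undetermined-coefficients step by hand: try y_p = k*exp(2x)
k = sp.symbols('k')
yp = k*sp.exp(2*x)
lhs = yp.diff(x, 2) + 2*yp.diff(x) - 3*yp
print(sp.solve(sp.Eq(lhs, sp.exp(2*x)), k))   # [1/5]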
Examples 1.11. Solve

(1) y'' + 2y' − 3y = e^{2x}. y_g(x) = Ae^x + Be^{−3x} as above. For y_p(x), guess y_p(x) = ke^{2x}. Then

y_p'' + 2y_p' − 3y_p = 4ke^{2x} + 2 × 2ke^{2x} − 3ke^{2x} = 5ke^{2x}.

Since the RHS of the DE is e^{2x}, take k = \tfrac{1}{5}: y_p = \tfrac{1}{5}e^{2x}. The general solution is Ae^x + Be^{−3x} + \tfrac{1}{5}e^{2x}.

(2) y'' − 4y' + 4y = 8x. y_g(x) = Ae^{2x} + Bxe^{2x}. Try y_p = k_1 x + k_0. Then

y_p'' − 4y_p' + 4y_p = −4k_1 + 4(k_1 x + k_0) = 4k_1 x + (4k_0 − 4k_1).

Since the RHS of the DE is 8x, take k_1 = 2 and k_0 = 2: y_p = 2x + 2. The general solution is Ae^{2x} + Bxe^{2x} + 2x + 2.
For an initial value problem involving a linear differential equation we may use Theorem
2 to find a “largest” interval of assured continuity for the unique solution. Suppose that
(2.1) is of the linear form (1.4), then

f(x, y) = −p(x)y + g(x)   so that   \frac{\partial f}{\partial y} = −p(x).

Theorem 2 guarantees the existence of a unique solution to the initial value problem provided that p and g are continuous at x_0. Indeed this solution is continuous at least
on α < x < β where α is the discontinuity of {p, g} nearest to and less than x0 and β
is the discontinuity of {p, g} nearest to and greater than x0 . If there is no discontinuity
less than x0 we put α = −∞ and similarly if there is no discontinuity greater than x0
we put β = +∞.
Examples 2.6. (i) State the largest interval in which you can be sure that

y' − \frac{1}{x + 1}y = 3,   with y(0) = 3,

has a unique solution. Now p(x) = −\frac{1}{x + 1} is discontinuous at x = −1 and g(x) = 3 is continuous for all x. Since x_0 = 0, the required interval is (−1, ∞).

(ii) Recall from Example 1.7 the linear differential equation

y' + \frac{3}{x + 100}y = 7

has general solution y = \frac{7}{4}(x + 100) + \frac{A}{(x + 100)^3}, where A ∈ R. Observe that this function has a discontinuity at x = −100 for all A ≠ 0. For the initial condition y(−200) = 0, the continuous solution in question is the one on the interval (−∞, −100).
3. Direction Fields
References: Zill and Cullen, §15.3. Stewart, 4th. edition §10.2, 3rd. edition §15.1 or
Kreysig §1.2.
If y = y(x) is a solution of y′ = f(x, y) then the derivative of y(x) must exist at each x. Graphically y′ is the slope of the curve y = y(x) at x; furthermore, since it is a solution of the differential equation we also have y′(x) = f(x, y). Thus, if the differential equation
has a solution at the point (x, y) in the plane, the slope of the solution curve there is
equal to f (x, y), and can be represented by a short line segment (lineal element) centred
on (x, y) with the correct slope. Many of the plots in this section have arrow heads
on each segment, indicating the direction of the “flow” as x increases; hence the arrow
heads tend to point to the right which is the direction of increasing x.
Example 3.1. Consider the separable differential equation

y' = \frac{y + 1}{x − 2}.

If a solution curve goes through the point (0, 3) its slope there is equal to −2. Similarly, through (5, 2) the slope there will be 1.

[Figure: direction field for y′ = (y + 1)/(x − 2) plotted for −2 ≤ x ≤ 6 and −4 ≤ y ≤ 4.]
The totality of these short line segments is called a direction field for the differential
equation. The direction field suggests the “flow pattern” for the family of solution curves
of the differential equation. In particular, if we want the unique solution (subject to the
hypotheses of Theorem 2) passing through (0, 3) we construct the curve through (0, 3) such that at each point its slope is that given by the direction field.
Observe that the solution curve for the initial value problem

y' = \frac{y + 1}{x − 2},   with y(0) = 3,

is y = −2x + 3. This curve “passes through” (2, −1), where f(x, y) = \frac{0}{0} – it is in fact two half-lines. For the given initial condition, the half-line y = −2x + 3, x ≤ 2, represents the solution curve. For the initial condition y(3) = −3, the solution curve is again y = −2x + 3, but this time it is the part of the line where x ≥ 2.

Exercise 3.2. Repeat the above for the point (5, 2), which corresponds to the initial condition y(5) = 2. Verify that the solution curve is y = x − 3, which also “passes through” (2, −1) in the same sense as above. Show that all solution curves indeed “pass through” the point (2, −1) in this sense.
As discussed earlier in Example 2.3, the differential equation y' = \frac{1}{4}y^2 − 1 has two constant solutions y ≡ 2 and y ≡ −2. The direction field is shown below.

[Figure: direction field for y′ = \frac{1}{4}y^2 − 1 plotted for −2 ≤ x ≤ 10 and −4 ≤ y ≤ 4, with horizontal lineal elements along y = 2 and y = −2.]
Question: Did we really need to compute the direction field to discover this?
Answer: No.
Observe that y' = \frac{1}{4}(y − 2)(y + 2). By considering the behaviour of the concave up quadratic function g(t) = \frac{1}{4}(t − 2)(t + 2), shown below, we can read off the behaviour of the solutions.
[Figure: the concave up parabola g(t) = \frac{1}{4}(t − 2)(t + 2), zero at t = −2 and t = 2, negative between them and positive outside.]
Hence if y < −2 then the solution curve y = y(x) is increasing and will continue to
increase until it reaches (as a limit, since the rate of increase becomes close to zero) −2.
If y = ±2 then y 0 = 0 and so we have a constant solution. If −2 < y < 2 then the
solution curve y = y(x) is decreasing and will continue to decrease until it reaches (as
a limit, since the rate of decrease becomes close to zero) −2. If y > 2 then the solution
curve y = y(x) will increase and its rate of increase will increase and so the curve goes
off to +∞.
Consider now the initial value problem

y' = x^2 + y^2,   with y(0) = 1.

By Theorem 2 there is a unique solution (which is also continuous), but we cannot find a formula for it. Using direction fields,

[Figure: direction field for y′ = x^2 + y^2 plotted for −1 ≤ x ≤ 3 and −2 ≤ y ≤ 4.]

we can make a guess as to what the solution curves look like and what their long-term behaviour might be. However, as we shall soon see in the next section, we can do a little better than this, and give a “ball–park” figure for the value of, say, y(1) (if it exists) using approximations coming from the direction field.
Note 3.4. In practice we use computers to help us plot direction fields, since there are so
many computations involved. Many software packages have some facility for this. For
the computer package MAPLE the commands which produced the first direction field
in these notes are:
with(DEtools):
dfieldplot(diff(y(x),x)=(y(x)+1)/(x-2), y(x), x=-2..6, y=-4..4);
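For readers without MAPLE, an equivalent direction-field plot can be produced with NumPy and Matplotlib; the following is my own sketch (not part of the original notes, assuming those packages are installed):

import numpy as np
import matplotlib.pyplot as plt

# direction field for y' = (y + 1)/(x - 2) on the same window as the Maple plot
x, y = np.meshgrid(np.linspace(-2, 6, 26), np.linspace(-4, 4, 26))
slope = (y + 1) / (x - 2)

# draw each lineal element as a unit vector with the given slope
norm = np.sqrt(1 + slope**2)
plt.quiver(x, y, 1/norm, slope/norm, angles='xy', pivot='middle', headwidth=2)
plt.xlabel('x'); plt.ylabel('y(x)')
plt.show()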
4. Numerical solutions of differential equations
References: Zill and Cullen, §15.4. Stewart, 4th. edition §10.2, 3rd. edition §15.1 or
Kreysig §19.1. From your Tutorial Notes, the relevant exercises are Exercises 8–13.
As we saw at the end of the last section, many differential equations cannot be solved
in terms of simple functions. Instead we may calculate approximate values for points
on the solution curve by a variety of numerical techniques. A class of methods for the
numerical solution of the initial value problem
y' = f(x, y),   with y(x_0) = y_0,
tries to approximate the solution curve over an interval by a straight line segment of
appropriate slope. We can then iterate this process to produce a piecewise linear curve
which approximates the “real solution”. In this course we shall look at three basic
methods:
1. Euler’s method
We start our algorithm at the initial point (x_0, y_0) given to us by the initial conditions and repeat the Euler approximation method in steps of equal size. Suppose that after n steps our solution curve has reached the point (x_n, y_n) and we want to approximate the value of y at x_n + h, where h is the step size in the x–direction. From the differential equation we know the value of y' at (x_n, y_n) is f(x_n, y_n). In Euler’s method we approximate the solution curve y = y(x) over the interval (x_n, x_n + h) by the straight line segment through (x_n, y_n) with slope f(x_n, y_n), that is

\frac{y_{n+1} − y_n}{(x_n + h) − x_n} = f(x_n, y_n),

which simplifies to

(4.1)    y_{n+1} = y_n + hf(x_n, y_n).

In which case y_{n+1} is the Euler method value (approximation) for the real value of y(x_{n+1}), where x_{n+1} = x_n + h.
A rough picture to help describe the approximation process used in Euler’s method is
shown below:
[Figure: the Euler step: the straight line segment through (x_n, y_n) with slope f(x_n, y_n), ending at (x_n + h, y_n + hf(x_n, y_n)), compared with the true solution curve; x_0, x_n and x_{n+1} are marked on the X-axis and y_0, y_n and y_{n+1} on the Y-axis.]
Example 4.1. Consider the initial value problem

y' = x^2 + y^2,   with y(0) = 1.

Using Euler’s method, and a stepsize of h = 0.1, find a numerical approximation for y(1). Our data is then f(x, y) = x^2 + y^2, x_0 = 0, y_0 = y(0) = 1, and so we compute

y_1 = y_0 + hf(x_0, y_0) = 1 + 0.1(0^2 + 1^2) = 1.1 =: y(x_1),   and x_1 = x_0 + h = 0.1,

y_2 = y_1 + hf(x_1, y_1) = 1.1 + 0.1((0.1)^2 + (1.1)^2) = 1.1 + 0.122 = 1.222,   and x_2 = x_1 + h = 0.2,

y_3 = y_2 + hf(x_2, y_2) = 1.222 + 0.1((0.2)^2 + (1.222)^2) = \dots = 1.3753284,   and x_3 = x_2 + h = 0.3.

Tabulating our results we find that

x_n    y_n            x_n    y_n
0.1    1.1            0.6    2.199546514
0.2    1.222          0.7    2.719347001
0.3    1.3753284      0.8    3.507831812
0.4    1.573481221    0.9    4.802320214
0.5    1.837065536    1.0    7.189548158
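The recurrence (4.1) is easy to implement; the following Python sketch (an addition to these notes) reproduces the table above:

# Euler's method for y' = f(x, y), y(x0) = y0; a sketch of the scheme (4.1)
def euler(f, x0, y0, h, steps):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0); ys.append(y0)
    return xs, ys

f = lambda x, y: x**2 + y**2
xs, ys = euler(f, 0.0, 1.0, 0.1, 10)
for x, y in zip(xs[1:], ys[1:]):
    print(f"{x:.1f}  {y:.9f}")   # matches the table, e.g. y(1.0) ≈ 7.189548158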
2. The improved Euler method

[Figure: one improved-Euler step from (x_n, y_n), showing the Euler point (x_n + h, y_n + hf(x_n, y_n)) and the corrected point (x_n + h, y_n + \frac{h}{2}(f(x_n, y_n) + f(x_n + h, y_n + hf(x_n, y_n)))).]
This leads to the improved Euler method (or Heun’s formula, which is a Runge-Kutta method of order 2):

\frac{y_{n+1} − y_n}{h} = \frac{1}{2}\Big\{ f(x_n, y_n) + f(x_n + h, y_n + hf(x_n, y_n)) \Big\},

which simplifies to

y_{n+1} = y_n + \frac{h}{2}\Big\{ f(x_n, y_n) + f(x_n + h, y_n + hf(x_n, y_n)) \Big\}.
Example 4.5. Consider the initial value problem

y' = x^2 + y^2,   with y(0) = 1.

If we use the improved Euler method with stepsize h = 0.1, then we compute

y_1 = y_0 + \frac{h}{2}\big\{ f(x_0, y_0) + f(x_0 + h, y_0 + hf(x_0, y_0)) \big\}
    = 1 + \frac{0.1}{2}\big\{ (0^2 + 1^2) + f(0.1, 1 + 0.1(0^2 + 1^2)) \big\}
    = 1 + 0.05\big(1 + f(0.1, 1.1)\big) = 1 + 0.05(1 + (0.1)^2 + (1.1)^2) = 1.111 ≈ y(0.1),

whereas in Example 4.1 earlier we saw that the Euler method approximation to y(0.1) was 1.1. If we continue we find that

y_2 = y_1 + \frac{h}{2}\big\{ f(x_1, y_1) + f(x_1 + h, y_1 + hf(x_1, y_1)) \big\} = 1.251530674\dots,

whereas in Example 4.1 earlier we saw that the Euler method approximation to y(0.2) was 1.222. Tabulating our results we find that

x_n    y_n            x_n    y_n
0.1    1.111000000    0.6    2.600025118
0.2    1.251530674    0.7    3.529011494
0.3    1.436057424    0.8    5.371468766
0.4    1.688007333    0.9    10.34833534
0.5    2.048770724    1.0    38.13428520
so that y_{10} = 38.13428520 ≈ y(1).
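A Python sketch of Heun’s recurrence (an addition to these notes), self-contained and reproducing the first entries of the table above:

# improved Euler (Heun) step for y' = f(x, y); a sketch of the formula above
def heun(f, x0, y0, h, steps):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        k1 = f(x0, y0)
        k2 = f(x0 + h, y0 + h * k1)
        y0 = y0 + (h / 2) * (k1 + k2)
        x0 = x0 + h
        xs.append(x0); ys.append(y0)
    return xs, ys

f = lambda x, y: x**2 + y**2
xs, ys = heun(f, 0.0, 1.0, 0.1, 10)
print(ys[1], ys[2])   # about 1.111 and 1.25153..., as computed above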
3. Fourth order Runge-Kutta

The most popular Runge-Kutta methods use several “sample” slopes; one of the most widely known uses 4 (and is known as a 4th order method):

y_{n+1} = y_n + \frac{h}{6}\big( \ell_1 + 2\ell_2 + 2\ell_3 + \ell_4 \big),   where

\ell_1 = f(x_n, y_n),
\ell_2 = f\big(x_n + \tfrac{h}{2},\; y_n + \tfrac{1}{2}h\ell_1\big),
\ell_3 = f\big(x_n + \tfrac{h}{2},\; y_n + \tfrac{1}{2}h\ell_2\big),
\ell_4 = f(x_n + h,\; y_n + h\ell_3).
Derivatives are sampled at four different places marked with • shown below:
[Figure: one fourth order Runge-Kutta step from x_n to x_{n+1} = x_n + h, showing the four points at which the slope is sampled and the corresponding increments h\ell_1, h\ell_2, h\ell_3, h\ell_4.]
Examples 4.6. (i) Reconsider the initial value problem

y' = x^2 + y^2,   with y(0) = 1.

Use the Runge-Kutta method, and a step size of h = 0.1, to find a numerical approximation for y(1). We have f(x, y) = x^2 + y^2, x_0 = 0, y_0 = 1. Now

y_1 = y_0 + \frac{h}{6}\big( \ell_1 + 2\ell_2 + 2\ell_3 + \ell_4 \big),   where

\ell_1 = f(x_0, y_0) = 0^2 + 1^2 = 1,
\ell_2 = f\big(x_0 + \tfrac{h}{2}, y_0 + \tfrac{1}{2}h\ell_1\big) = f(0.05, 1 + 0.05 × 1) = (0.05)^2 + (1.05)^2 = 1.105,
\ell_3 = f\big(x_0 + \tfrac{h}{2}, y_0 + \tfrac{1}{2}h\ell_2\big) = f(0.05, 1 + 0.05 × 1.105) = (0.05)^2 + (1.05525)^2 = 1.11605\dots,
\ell_4 = f(x_0 + h, y_0 + h\ell_3) = f(0.1, 1 + 0.1 × 1.11605\dots) = (0.1)^2 + (1.1116\dots)^2 = 1.245666\dots,

and so

y_1 = 1 + \frac{0.1}{6}\big( 1 + 2 × 1.105 + 2 × 1.11605\dots + 1.245666\dots \big) = 1.1114628 ≈ y(x_1),

and x_1 = x_0 + h = 0.1, etc. Tabulating our results we get:
x_n    y_n            x_n    y_n
0.1    1.111462856    0.6    2.643860195
0.2    1.253015174    0.7    3.652200355
0.3    1.439665974    0.8    5.842013327
0.4    1.696097903    0.9    14.02182003
0.5    2.066961014    1.0    735.0991247

What now happens if we decrease h to get more accuracy? Compare the results for the fourth order Runge-Kutta with the results from Euler’s method.
(ii) Use both the Euler and the fourth order Runge-Kutta method with a single step to calculate an approximate value for y(0.1) for the initial value problem

y' = 2y(1 − y),   with y(0) = 2.

Here f(x, y) = 2y(1 − y), x_0 = 0, y_0 = 2. Euler’s method gives us

y_1 = y_0 + hf(x_0, y_0) = 2 + 0.1 × f(0, 2) = 2 + 0.1 × 2 × 2 × (1 − 2) = 1.6.

The fourth order Runge-Kutta method gives

y_1 = y_0 + \frac{h}{6}\big( \ell_1 + 2\ell_2 + 2\ell_3 + \ell_4 \big),   where

\ell_1 = f(x_0, y_0) = f(0, 2) = 2 × 2 × (1 − 2) = −4,
\ell_2 = f\big(x_0 + \tfrac{h}{2}, y_0 + \tfrac{h}{2}\ell_1\big) = f(0.05, 2 + 0.05 × (−4)) = f(0.05, 1.8) = 2 × 1.8 × (1 − 1.8) = −2.88,
\ell_3 = f\big(x_0 + \tfrac{h}{2}, y_0 + \tfrac{h}{2}\ell_2\big) = f(0.05, 2 + 0.05 × (−2.88)) = f(0.05, 1.856) = 2 × 1.856 × (1 − 1.856) = −3.177472,
\ell_4 = f(0.1, 2 + 0.1 × (−3.177472)) = f(0.1, 1.6822528) = −2.29544\dots,

so

y_1 = 2 + \frac{0.1}{6}\big( −4 + 2 × (−2.88) + 2 × (−3.177\dots) + (−2.295\dots) \big) = 1.693160211\dots.

For h = 0.05 (in which case y_2 ≈ y(0.1)) we have y_2 = 1.69309\dots.
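A Python sketch of one classical fourth order Runge-Kutta step (an addition to these notes), reproducing the single-step values in both parts of Examples 4.6:

# classical 4th order Runge-Kutta for y' = f(x, y); a sketch of the scheme above
def rk4_step(f, x, y, h):
    l1 = f(x, y)
    l2 = f(x + h/2, y + h*l1/2)
    l3 = f(x + h/2, y + h*l2/2)
    l4 = f(x + h, y + h*l3)
    return y + (h/6)*(l1 + 2*l2 + 2*l3 + l4)

# Example 4.6(i): one step of size 0.1 for y' = x^2 + y^2, y(0) = 1
print(rk4_step(lambda x, y: x**2 + y**2, 0.0, 1.0, 0.1))    # about 1.111462856

# Example 4.6(ii): one step of size 0.1 for y' = 2y(1 - y), y(0) = 2
print(rk4_step(lambda x, y: 2*y*(1 - y), 0.0, 2.0, 0.1))    # about 1.693160211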
Exercise 4.7. For the initial value problem y' = x^2 + y^2 with y(0) = 1, show that y(1) does not exist. Hint: show that y_1(1) does not exist, where y_1 ≤ y is the solution to a certain separable differential equation.
Note 4.8. For the computer package MAPLE the commands you need are:
with(DEtools):
dsolve({diff(y(x),x)=x^2+y^2,y(0)=1},y(x),type=numeric, \
method=classical[foreuler],stepsize=0.1, \
value=array([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]));
use classical[heunform] and classical[rk4] instead for improved Euler and fourth
order Runge-Kutta respectively.
5. The Laplace Transform
References: Zill and Cullen, Chapter 4, Kreysig, Chapter 6. This section is not covered
in Stewart.
Definition 5.1. Let f : [0, ∞) → R be a real-valued function of one variable t with domain 0 ≤ t < ∞. Then the Laplace transform of f(t) is given by

L{f(t)}(s) = \int_0^{∞} e^{−st}f(t)\,dt,

provided that the improper integral is convergent.
Recall that the improper integral of a function g : [0, ∞) → R is defined to be

\int_0^{∞} g(t)\,dt = \lim_{M\to\infty} \int_0^{M} g(t)\,dt,

provided that the limit on the right-hand side exists and is finite.
Examples 5.2. (i) L{0}(s) = \int_0^{∞} e^{−st}\cdot 0\,dt = \lim_{M\to\infty} \int_0^{M} 0\,dt = \lim_{M\to\infty} 0 = 0.

(ii) L{1}(s) = \int_0^{∞} e^{−st}\cdot 1\,dt = \lim_{M\to\infty} \int_0^{M} e^{−st}\,dt = \lim_{M\to\infty}\big( −\tfrac{1}{s}e^{−Ms} + \tfrac{1}{s}e^{0} \big) = \frac{1}{s}, provided that s > 0.

(iii) L{t}(s) = \int_0^{∞} e^{−st}\,t\,dt = \lim_{M\to\infty} \frac{1}{s^2}\big( 1 − (1 + Ms)e^{−Ms} \big) = \frac{1}{s^2}, provided that s > 0.

(iv) L{e^{at}}(s) = \int_0^{∞} e^{−st}e^{at}\,dt = \lim_{M\to\infty} \int_0^{M} e^{(a−s)t}\,dt = \lim_{M\to\infty} \frac{1}{a − s}\big( e^{(a−s)M} − 1 \big) = \frac{1}{s − a}, provided that s > a.

Following on from Examples 5.2(iii), by induction on n we may show that for integers n ≥ 0 we have L{t^n}(s) = \frac{n!}{s^{n+1}}. Note also that the value of L{f(t)}(s), if it exists, is a function of s, so we frequently write F(s) or f(s) instead of L{f(t)}(s) to emphasise this.
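These transforms can be confirmed with SymPy’s laplace_transform (an addition to these notes; a minimal sketch assuming SymPy is installed):

import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# L{t^3}(s) = 3!/s^4 and L{e^{at}}(s) = 1/(s - a), as derived above
print(sp.laplace_transform(t**3, t, s, noconds=True))          # 6/s**4
print(sp.laplace_transform(sp.exp(a*t), t, s, noconds=True))   # equivalent to 1/(s - a)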
Theorem 5.3. [Existence]
If f (t) is at least piecewise continuous on [0, ∞), and is of exponential order as t → ∞
then L{f (t)}(s) is defined for s > d where d is defined in Remarks 5.4(i) below.
Remarks 5.4. (i) The function f is of exponential order as t → ∞ if there are T > 0, B > 0 and d ∈ R such that

|f(t)| ≤ Be^{dt}   for all t ≥ T.

For instance any bounded function is of exponential order, as are f(t) = t^n for any n and e^{at} for any a, but not e^{t^2}.

(ii) The function f is piecewise continuous on [0, ∞) if its only allowable discontinuities on [0, ∞) are “finite” jumps. So the characteristic function of the interval [a, b], χ_{[a,b]}, is one such, but a function with a vertical asymptote, such as f(t) = \frac{1}{t − a} where a > 0, is not. Neither are functions such as f(t) = \sin\big(\frac{1}{t}\big), which behaves badly at the point t = 0.

(iii) The theorem gives sufficient conditions for the existence of the Laplace transform L{f(t)}(s) of f(t), but not necessary ones. For instance f(t) = t^{−1/2} is not piecewise continuous on [0, ∞) but L{t^{−1/2}}(s) = \sqrt{\frac{π}{s}}, for s > 0.
Example 5.5. The function f(t) = \cos(bt) is piecewise continuous on [0, ∞) and is of exponential order as t → ∞ (since it is bounded, we can choose d = 0). Hence by Theorem 5.3 its Laplace transform L{\cos(bt)}(s) exists. Indeed, integrating by parts,

\int_0^{M} e^{−st}\cos(bt)\,dt = \Big[ \frac{e^{−st}}{−s}\cos(bt) \Big]_0^{M} + \frac{1}{s}\int_0^{M} e^{−st}(−b)\sin(bt)\,dt

= \frac{1}{s} − \frac{e^{−sM}\cos(bM)}{s} − \frac{b}{s}\Big\{ \Big[ \frac{e^{−st}\sin(bt)}{−s} \Big]_0^{M} + \frac{b}{s}\int_0^{M} e^{−st}\cos(bt)\,dt \Big\}

= \frac{1}{s} − \frac{e^{−sM}\cos(bM)}{s} + \frac{b\,e^{−sM}\sin(bM)}{s^2} − \frac{b^2}{s^2}\int_0^{M} e^{−st}\cos(bt)\,dt,

so

\Big( 1 + \frac{b^2}{s^2} \Big)\int_0^{M} e^{−st}\cos(bt)\,dt = \frac{1}{s} − \frac{e^{−sM}\cos(bM)}{s} + \frac{b\,e^{−sM}\sin(bM)}{s^2}.

Now let M → ∞; then provided s > 0 the right hand side converges to \frac{1}{s}. Hence we have shown that for s > 0 we have

L{\cos(bt)}(s) = \frac{1/s}{1 + b^2/s^2} = \frac{s}{s^2 + b^2}.

Similarly we may show that

L{\sin(bt)}(s) = \frac{b}{s^2 + b^2}

for s > 0 (see also Example 5.18).
There are five important properties of the Laplace transform which we shall make use
of in this course:
1. The shift (or first translation) formula
If F(s) = L{f(t)}(s) for s > d then

(5.1)    L{e^{at}f(t)}(s) = \int_0^{∞} e^{−st}e^{at}f(t)\,dt = \int_0^{∞} e^{−(s−a)t}f(t)\,dt = L{f(t)}(s − a) = F(s − a),

for s − a > d.

Examples 5.6. (i) Applying the shift formula (5.1) we have

L{e^{at}t^n}(s) = L{t^n}(s − a) = \frac{n!}{(s − a)^{n+1}}   for s > a.

(ii) Applying the shift formula (5.1) we have

L{e^{at}\cos(bt)}(s) = L{\cos(bt)}(s − a) = \frac{s − a}{(s − a)^2 + b^2}   for s > a.

(iii) Applying the shift formula (5.1) we have

L{e^{at}\sin(bt)}(s) = L{\sin(bt)}(s − a) = \frac{b}{(s − a)^2 + b^2}   for s > a.
The second important property shows how to handle functions which are multiplied by
t.
2. Multiplication by t
The Laplace transform of tf (t) is equal to the negative of the derivative of the Laplace
transform of f :
Theorem 5.7. If L{f(t)}(s) exists, then so does L{tf(t)}(s), and it satisfies

L{tf(t)}(s) = −\frac{d}{ds}L{f(t)}(s).
Proof. By the Fundamental Theorem of Calculus, we need only show that −L{f(t)}(s) is an antiderivative for L{tf(t)}(s). To see this, first note that, again by the Fundamental Theorem of Calculus, for any real number a the function

F_a(s) := \int_{u=a}^{s} L{tf(t)}(u)\,du

is an antiderivative for L{tf(t)}(s). Reversing the direction of integration, we have F_a(s) = −\int_{u=s}^{a} L{tf(t)}(u)\,du. We now calculate:

−\int_{u=s}^{a} L{tf(t)}(u)\,du = −\int_{u=s}^{a}\int_{t=0}^{∞} e^{−ut}tf(t)\,dt\,du

= −\int_{u=s}^{a} \lim_{T\to\infty}\int_{t=0}^{T} e^{−ut}tf(t)\,dt\,du

= \lim_{T\to\infty} −\int_{u=s}^{a}\int_{t=0}^{T} e^{−ut}tf(t)\,dt\,du   (as integration is continuous)

= \lim_{T\to\infty} −\int_{t=0}^{T}\int_{u=s}^{a} e^{−ut}tf(t)\,du\,dt   (as we can reverse the order of finite integrals)

= \lim_{T\to\infty} −\int_{t=0}^{T} \big[−e^{−ut}f(t)\big]_{u=s}^{a}\,dt

= −\lim_{T\to\infty}\int_{t=0}^{T} (e^{−st} − e^{−at})f(t)\,dt.

Since f has a Laplace transform, we know that as a → ∞, e^{−at}f(t) → 0, so for each value of s we have

\lim_{a\to\infty} F_a(s) = −\lim_{T\to\infty}\lim_{a\to\infty}\int_{t=0}^{T} (e^{−st} − e^{−at})f(t)\,dt = −\lim_{T\to\infty}\int_{t=0}^{T} e^{−st}f(t)\,dt,

and this last is equal to −L{f(t)}(s) by definition.

As each F_a is an antiderivative for L{tf(t)}(s), and as −L{f(t)} is differentiable and is equal to the pointwise limit \lim_{a\to\infty} F_a, it must also be an antiderivative for L{tf(t)}(s), as required. ∎
Corollary 5.8. Under the appropriate hypotheses on f, the Laplace transform of t^n f(t) is equal to (−1)^n times the nth derivative of the Laplace transform of f(t).

Example 5.9. We have already computed that L{1}(s) = \frac{1}{s} for s > 0. Hence we can compute L{t}(s) as follows:

L{t}(s) = L{t\cdot 1}(s) = −\frac{d}{ds}L{1}(s) = −\Big(\frac{1}{s}\Big)' = \frac{1}{s^2}.

More generally:

Corollary 5.10. For all n ≥ 0, we have L{t^n}(s) = \frac{n!}{s^{n+1}}.
Proof. We proceed by induction. For n = 0, we have t^n = 1, and we computed earlier that L{1}(s) = \frac{1}{s} as required.

Now suppose as an inductive hypothesis that L{t^n}(s) = \frac{n!}{s^{n+1}}. Then

L{t^{n+1}}(s) = L{t\cdot t^n}(s) = −\frac{d}{ds}L{t^n}(s)

= −\Big(\frac{n!}{s^{n+1}}\Big)'   by the inductive hypothesis

= −n!\big(s^{−(n+1)}\big)' = −n!\,(−(n+1))\,s^{−(n+1)−1} = \frac{(n+1)!}{s^{n+2}},

as required. ∎
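Theorem 5.7 is also easy to check symbolically (an addition to these notes; a minimal SymPy sketch), here with f(t) = cos(bt):

import sympy as sp

t, s, b = sp.symbols('t s b', positive=True)

# Theorem 5.7: L{t*f(t)}(s) = -d/ds L{f(t)}(s), illustrated with f(t) = cos(b*t)
F = sp.laplace_transform(sp.cos(b*t), t, s, noconds=True)     # s/(s**2 + b**2)
lhs = sp.laplace_transform(t*sp.cos(b*t), t, s, noconds=True)
print(sp.simplify(lhs + sp.diff(F, s)))                       # 0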
The third important property follows from its definition as an integral.
3. Linearity
The Laplace transform of a sum of functions is the sum of the Laplace transforms, and
the Laplace transform of a function times a scalar is a scalar times the Laplace transform
of the function. These properties add up to what Mathematicians call linearity.
Theorem 5.11. If L{f_1(t)}(s) and L{f_2(t)}(s) both exist then so does L{a_1 f_1(t) + a_2 f_2(t)}(s) for all a_1, a_2 ∈ R, and

L{a_1 f_1(t) + a_2 f_2(t)}(s) = a_1 L{f_1(t)}(s) + a_2 L{f_2(t)}(s).
Proof. The left hand side of the above expression is

\lim_{M\to\infty} \int_0^{M} e^{−st}[a_1 f_1(t) + a_2 f_2(t)]\,dt = \lim_{M\to\infty}\Big( \int_0^{M} e^{−st}a_1 f_1(t)\,dt + \int_0^{M} e^{−st}a_2 f_2(t)\,dt \Big)

= a_1 \lim_{M\to\infty}\int_0^{M} e^{−st}f_1(t)\,dt + a_2 \lim_{M\to\infty}\int_0^{M} e^{−st}f_2(t)\,dt,

which is equal to the right hand side since both limits exist. ∎

Example 5.12. We may use Theorem 5.11 to compute

L{4 + 3e^{6t}}(s) = 4L{1}(s) + 3L{e^{6t}}(s) = \frac{4}{s} + \frac{3}{s − 6}   for s > 6.
The fourth important property of the Laplace transform is that its inverse is well–
defined.
4. Uniqueness
The Laplace transform is essentially one–to–one.
Theorem 5.13. There is at most one continuous function f (t) whose Laplace transform
is F (s).
Remark 5.14. The unique “inverse” transform of F(s) is often denoted L^{−1}{F(s)}(t). There is a general formula for L^{−1}:

L^{−1}{F(s)}(t) = \frac{1}{2\pi i}\int_{γ−i∞}^{γ+i∞} e^{st}F(s)\,ds.

However, it is much easier to use the Laplace transform table to calculate inverses.
Examples 5.15. (i) If F(s) = \frac{6}{s + 3} = 6L{e^{−3t}}(s), then L^{−1}\big\{\frac{6}{s + 3}\big\}(t) = 6e^{−3t}.

(ii) If F(s) = \frac{2s}{s^2 − 8s + 12} then by partial fractions we find that

F(s) = \frac{2s}{s^2 − 8s + 12} = \frac{2s}{(s − 2)(s − 6)} = \frac{−1}{s − 2} + \frac{3}{s − 6} = −L{e^{2t}}(s) + 3L{e^{6t}}(s),

so we may deduce that L^{−1}{F(s)}(t) = −e^{2t} + 3e^{6t}.

(iii) If F(s) = \frac{4s + 16}{s^2 + 25} then

F(s) = \frac{4s}{s^2 + 5^2} + \frac{16}{5}\cdot\frac{5}{s^2 + 5^2} = 4L{\cos(5t)}(s) + \frac{16}{5}L{\sin(5t)}(s),

and so

L^{−1}{F(s)}(t) = 4\cos(5t) + \frac{16}{5}\sin(5t).

(iv) If F(s) = \frac{7s + 11}{s^2 + 8s + 25} = \frac{7(s + 4) − 17}{(s + 4)^2 + 3^2} then

F(s) = \frac{7(s + 4)}{(s + 4)^2 + 3^2} − \frac{17}{3}\cdot\frac{3}{(s + 4)^2 + 3^2} = 7L{e^{−4t}\cos(3t)}(s) − \frac{17}{3}L{e^{−4t}\sin(3t)}(s),

and so

L^{−1}{F(s)}(t) = 7e^{−4t}\cos(3t) − \frac{17}{3}e^{−4t}\sin(3t).
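The partial-fraction step and the inverse transform can both be checked with SymPy (an addition to these notes; a small sketch mirroring Example 5.15(ii)):

import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Example 5.15(ii): invert F(s) = 2s/(s^2 - 8s + 12) via partial fractions
F = 2*s/(s**2 - 8*s + 12)
print(sp.apart(F, s))                          # 3/(s - 6) - 1/(s - 2)
print(sp.inverse_laplace_transform(F, s, t))   # 3*exp(6*t) - exp(2*t) (times a Heaviside factor)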
How is this any use for solving Differential equations? This comes from the fifth impor-
tant property of the Laplace transform.
5. Derivative
If a function and its derivative have a Laplace transform then the Laplace transform
of the derivative may be expressed in terms of the Laplace transform of the original
function:
Theorem 5.16. Let f(t) be differentiable on [0, ∞) and of exponential order as t → ∞, and let f'(t) be piecewise continuous on [0, ∞). Then L{f'(t)}(s) exists for s > d and

L{f'(t)}(s) = sL{f(t)}(s) − f(0).
Proof. The left hand side is the limit as M → ∞ of

\int_0^{M} e^{−st}\frac{df}{dt}\,dt = \big[ f(t)e^{−st} \big]_0^{M} + s\int_0^{M} e^{−st}f(t)\,dt = e^{−sM}f(M) − f(0) + s\int_0^{M} e^{−st}f(t)\,dt.

Taking the limit as M → ∞: provided s > d we have |f(t)| < Be^{dt} for t ≥ T, and so the first term decays to zero – the result then follows. ∎
Corollary 5.17. Under the appropriate hypotheses for f''(t) we have

L{f''(t)}(s) = s^2 L{f(t)}(s) − sf(0) − f'(0).

Proof. The left hand side is equal to

sL{f'(t)}(s) − f'(0) = s\big[ sL{f(t)}(s) − f(0) \big] − f'(0). ∎
Example 5.18. Since \frac{d}{dt}\cos(bt) = −b\sin(bt), we have

sL{\cos(bt)}(s) − \cos(b\cdot 0) = s\cdot\frac{s}{s^2 + b^2} − 1 = \frac{−b^2}{s^2 + b^2} = L{−b\sin(bt)}(s) = −bL{\sin(bt)}(s),

so L{\sin(bt)}(s) = \frac{b}{s^2 + b^2}, as claimed in Example 5.5.

Using Theorem 5.16 we may apply Laplace transform methods to solve initial value problems. General solutions to differential equations are not normally computed using the Laplace transform method.
Examples 5.19. (i) Use the Laplace transform method to solve the initial value problem

y'' − y' − 2y = 4e^{3t},   with y(0) = 2 and y'(0) = −1.

Taking Laplace transforms of both sides we have

L{y'' − y' − 2y}(s) = L{y''}(s) − L{y'}(s) − 2L{y}(s) = 4L{e^{3t}}(s).

That is,

\big[ s^2 L{y}(s) − sy(0) − y'(0) \big] − \big[ sL{y}(s) − y(0) \big] − 2L{y}(s) = \frac{4}{s − 3}.

Collecting terms in L{y}(s) we get

(s^2 − s − 2)L{y}(s) = \frac{4}{s − 3} + 2s + (−1) − 2 = \frac{4 + (2s − 3)(s − 3)}{s − 3},

which simplifies to

L{y}(s) = \frac{2s^2 − 9s + 13}{(s − 3)(s − 2)(s + 1)} = \frac{A}{s − 2} + \frac{B}{s + 1} + \frac{C}{s − 3},

where (s + 1)(s − 3)A + (s − 2)(s − 3)B + (s − 2)(s + 1)C = 2s^2 − 9s + 13.

Putting s = 3 we get 0\cdot A + 0\cdot B + 4C = 18 − 27 + 13 = 4, and so C = 1.
Putting s = −1 we get 0\cdot A + 12B + 0\cdot C = 2 + 9 + 13 = 24, and so B = 2.
Putting s = 2 we get −3A + 0\cdot B + 0\cdot C = 8 − 18 + 13 = 3, and so A = −1.

Hence our transformed differential equation is

L{y}(s) = \frac{−1}{s − 2} + \frac{2}{s + 1} + \frac{1}{s − 3} = −L{e^{2t}}(s) + 2L{e^{−t}}(s) + L{e^{3t}}(s),

so, taking the inverse transform, we find that

y(t) = −e^{2t} + 2e^{−t} + e^{3t},

which is the solution to the initial value problem.
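This answer can be confirmed with SymPy’s ODE solver (an addition to these notes; a minimal sketch assuming SymPy is available):

import sympy as sp

t = sp.symbols('t', nonnegative=True)
y = sp.Function('y')

# Example 5.19(i): y'' - y' - 2y = 4*exp(3t), y(0) = 2, y'(0) = -1
ode = sp.Eq(y(t).diff(t, 2) - y(t).diff(t) - 2*y(t), 4*sp.exp(3*t))
sol = sp.dsolve(ode, y(t), ics={y(0): 2, y(t).diff(t).subs(t, 0): -1})
print(sp.expand(sol.rhs))   # -exp(2*t) + 2*exp(-t) + exp(3*t)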
(ii) Use the Laplace transform method to solve the initial value problem

\frac{dy}{dt} + 2y = e^{−2t}\sin t,   with y(0) = 3.

Taking Laplace transforms we get

L{y'}(s) + 2L{y}(s) = L{e^{−2t}\sin t}(s) = \frac{1}{(s + 2)^2 + 1^2}

sL{y}(s) − 3 + 2L{y}(s) = \frac{1}{s^2 + 4s + 5}

(s + 2)L{y}(s) = \frac{1}{s^2 + 4s + 5} + 3 = \frac{3s^2 + 12s + 16}{s^2 + 4s + 5}

L{y}(s) = \frac{3s^2 + 12s + 16}{(s + 2)(s^2 + 4s + 5)}.

Now

\frac{3s^2 + 12s + 16}{(s + 2)(s^2 + 4s + 5)} = \frac{A}{s + 2} + \frac{Bs + C}{s^2 + 4s + 5},

where A(s^2 + 4s + 5) + (Bs + C)(s + 2) = 3s^2 + 12s + 16, and then

L{y}(s) = \frac{3}{s + 2} + \frac{1}{s + 2} − \frac{s + 2}{s^2 + 4s + 5} = \frac{4}{s + 2} − \frac{s + 2}{(s + 2)^2 + 1} = 4L{e^{−2t}}(s) − L{e^{−2t}\cos t}(s),   so

y(t) = 4e^{−2t} − e^{−2t}\cos t.

Since the original differential equation was linear we may solve it using the integrating factor technique. It’s worth comparing the two methods: the integrating factor is e^{2t} and so

(5.2)    \frac{d}{dt}\big(e^{2t}y\big) = e^{2t}e^{−2t}\sin t = \sin t

y(t) = −e^{−2t}\cos t + Ce^{−2t},   where C is a constant,

y(0) = −1 + C = 3,   and so C = 4.

Hence the solution to the initial value problem is y(t) = −e^{−2t}\cos t + 4e^{−2t}. It turns out that this equation was easier to solve directly, because by luck the integrating factor removed the exponential term from the right-hand side (5.2) and allowed us to integrate it more easily. If this were not the case (say the right-hand side were e^{3t}\sin t), then the problem would be much harder to do directly and the Laplace transform method would work out easier.
(iii) Use the Laplace transform method to solve the initial value problem

y'' + 2y' + y = 6,   where y(0) = 5, y'(0) = 10.

Taking Laplace transforms we get

L{y''}(s) + 2L{y'}(s) + L{y}(s) = L{6}(s)

s^2 L{y}(s) − 5s − 10 + 2\big(sL{y}(s) − 5\big) + L{y}(s) = \frac{6}{s}

(s^2 + 2s + 1)L{y}(s) = \frac{6}{s} + 5s + 20 = \frac{6 + 5s^2 + 20s}{s}

L{y}(s) = \frac{5s^2 + 20s + 6}{s(s + 1)^2}.

Now

\frac{5s^2 + 20s + 6}{s(s + 1)^2} = \frac{A}{s} + \frac{B}{s + 1} + \frac{C}{(s + 1)^2},

so that 5s^2 + 20s + 6 = A(s + 1)^2 + Bs(s + 1) + Cs. Putting s = 0 we get 6 = A, putting s = −1 we get C = 9, and then comparing the coefficient of s^2 on each side we find that 5 = A + B, so that B = −1. Therefore

L{y}(s) = \frac{6}{s} − \frac{1}{s + 1} + \frac{9}{(s + 1)^2} = 6L{1}(s) − L{e^{−t}}(s) + 9L{te^{−t}}(s),

and so

y(t) = 6 − e^{−t} + 9te^{−t}

is the solution to the initial value problem.
Remarks 5.20. (i) A familiarity with the techniques of partial fractions, where the degree of the numerator is less than the degree of the denominator, is very helpful for using the Laplace transform method for solving differential equations.

(1) For a fraction of the form \frac{f(s)}{(s − a)^k} the partial fraction expansion is

\frac{A_1}{s − a} + \frac{A_2}{(s − a)^2} + \dots + \frac{A_k}{(s − a)^k}.

(2) For a fraction of the form \frac{f(s)}{(s^2 + as + b)^k}, where s^2 + as + b is irreducible (i.e. a^2 − 4b < 0), the partial fraction expansion is

\frac{A_1 s + B_1}{s^2 + as + b} + \frac{A_2 s + B_2}{(s^2 + as + b)^2} + \dots + \frac{A_k s + B_k}{(s^2 + as + b)^k}.

(ii) It is also important to be able to compute the inverse Laplace transform of a rational function F(s) with a quadratic denominator. If the zeroes of the quadratic are

(1) Real and distinct, then treat this as we did for

F(s) = \frac{2s}{(s − 2)(s − 6)} = \frac{−1}{s − 2} + \frac{3}{s − 6},   then   L^{−1}{F(s)}(t) = −e^{2t} + 3e^{6t}.

(2) Real and equal, then treat this as for

F(s) = \frac{3s + 1}{(s − 1)^2} = \frac{3(s − 1) + 4}{(s − 1)^2} = \frac{3}{s − 1} + \frac{4}{(s − 1)^2},

and then

L^{−1}\Big\{\frac{3s + 1}{s^2 − 2s + 1}\Big\}(t) = 3e^t + 4te^t.

(3) Complex (so that the quadratic is irreducible), then first complete the square as for

F(s) = \frac{7s + 11}{s^2 + 8s + 25} = \frac{7(s + 4) − 17}{(s + 4)^2 + 3^2},

and then write F(s) as two fractions which are translated trigonometric functions:

L^{−1}{F(s)}(t) = 7e^{−4t}\cos 3t − \frac{17}{3}e^{−4t}\sin 3t.
Laplace Transforms of piecewise continuous functions

Consider the function

g(t) = \begin{cases} t^2 & \text{for } 0 ≤ t < 2 \\ 3 & \text{for } 2 ≤ t < 8 \\ t − 4 & \text{for } 8 ≤ t \end{cases}
which is piecewise continuous on [0, ∞) and of exponential order as t → ∞.
[Figure: graph of g(t): the parabola t^2 on [0, 2), the constant value 3 on [2, 8) (with jumps at t = 2 and t = 8), and the line t − 4 for t ≥ 8.]
By definition we find that
L{g(t)}(s) = ∫_0^2 t^2 e^{−st} dt + ∫_2^8 3e^{−st} dt + ∫_8^∞ (t − 4)e^{−st} dt
           = 2/s^3 − e^{−2s}(2/s^3 + 4/s^2 + 1/s) + e^{−8s}(1/s^2 + 1/s).
The Laplace transform L{g(t)}(s) is more easily computed using the unit step (or
Heaviside) functions. For each number c the unit step function uc (t) is defined by
u_c(t) = { 0 if t < c;   1 if t ≥ c }
which has a graph of the form
[Figure: graph of the unit step function u_c(t), equal to 0 for t < c and jumping to 1 at t = c.]
The same function may be denoted in other texts by u(t − c) or H(t − c) where H is
the Heaviside function H(t) := u0 (t). The graph of uπ (t) cos t is then
[Figure: graph of u_π(t) cos t, identically zero for t < π and following cos t (starting from the value −1 at t = π) for t ≥ π.]
We may use the unit step function to represent any piecewise continuous function. If
W (t) = uc1 (t) − uc2 (t) where c1 < c2 then
W(t) = { 0 if t < c_1;   1 if c_1 ≤ t < c_2;   0 if c_2 ≤ t }
which has a graph of the form
[Figure: graph of W(t) = u_{c_1}(t) − u_{c_2}(t), equal to 1 on [c_1, c_2) and 0 elsewhere.]
so that if f (t) is any function W (t)f (t) takes the value f (t) in the interval [c1 , c2 ) and is
zero elsewhere. It is called the characteristic function of the interval [c1 , c2 ). Therefore
the piecewise continuous function
g(t) = { t^2 for 0 ≤ t < 2;   3 for 2 ≤ t < 8;   t − 4 for 8 ≤ t }
may be written as
g(t) = t^2[u_0(t) − u_2(t)] + 3[u_2(t) − u_8(t)] + (t − 4)u_8(t), or
g(t) = u_0(t)t^2 + u_2(t)(3 − t^2) + u_8(t)(t − 4 − 3), or perhaps
g(t) = t^2 + (3 − t^2)u_2(t) + (t − 7)u_8(t),
since t^2 u_0(t) = t^2 for all t ≥ 0 (recall that in Laplace transform calculations we are
dealing with functions whose domain consists of the positive real numbers). It is
commonplace to drop the u_0(t) term in Laplace transform calculations as it does not have
any effect.
Now for any function f (t) we have
L{u_c(t)f(t)}(s) = ∫_0^∞ e^{−st} u_c(t) f(t) dt
                 = ∫_c^∞ e^{−st} f(t) dt        (now substitute τ = t − c)
                 = ∫_0^∞ e^{−s(τ+c)} f(τ + c) dτ = e^{−cs} ∫_0^∞ e^{−sτ} f(τ + c) dτ,  and so
L{u_c(t)f(t)}(s) = e^{−cs} L{f(t + c)}(s).
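For a concrete sanity check of this shift rule, one can compare both sides for a particular choice of f and c. The sketch below (Python with sympy assumed) does this for f(t) = t and c = 3.

# Check L{u_c(t) f(t)}(s) = e^{-cs} L{f(t+c)}(s) for f(t) = t and c = 3.
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)

# Left-hand side straight from the definition: integrate e^{-st} t from t = 3 to infinity.
lhs = sp.integrate(t*sp.exp(-s*t), (t, 3, sp.oo))

# Right-hand side: e^{-3s} times the Laplace transform of f(t + 3) = t + 3.
rhs = sp.exp(-3*s)*sp.laplace_transform(t + 3, t, s, noconds=True)

print(sp.simplify(lhs - rhs))   # 0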
Examples 5.22. (i) To find L^{−1}{F(s)}(t) where F(s) = e^{−3s}/s^2 − e^{−4s}/s. First we write
F(s) = e^{−3s} L{t}(s) − e^{−4s} L{1}(s)
     = L{u_3(t)(t − 3) − u_4(t).1}(s)   so
L^{−1}{F(s)}(t) = u_3(t)(t − 3) − u_4(t).1 = { 0 if 0 ≤ t < 3;   t − 3 if 3 ≤ t < 4;   t − 4 if 4 ≤ t }.
(ii) To find L^{−1}{F(s)}(t) where F(s) = e^{−πs} · 1/(s^2 + 4). First we write
F(s) = (1/2) e^{−πs} · 2/(s^2 + 2^2)
F(s) = e^{−πs} L{(1/2) sin 2t}(s) = L{(1/2) u_π(t) sin 2(t − π)}(s)   so
L^{−1}{F(s)}(t) = (1/2) u_π(t) sin(2t − 2π) = (1/2) u_π(t) sin 2t.
(iii) To find L^{−1}{F(s)}(t) where F(s) = e^{−πs} ( 1/(s^2(s^2 + 1)) + π/(s(s^2 + 1)) ). First we write
1/(s^2(s^2 + 1)) + π/(s(s^2 + 1)) = (1 + sπ)/(s^2(s^2 + 1)) = π/s + 1/s^2 + (−πs − 1)/(s^2 + 1).
Then we have
F(s) = e^{−πs} ( 1/s^2 − 1/(s^2 + 1) + π( 1/s − s/(s^2 + 1) ) ),
     = e^{−πs} L{t − sin t + π − π cos t}(s),
     = L{u_π(t)[(t − π) − sin(t − π) + π − π cos(t − π)]}(s),
and so L^{−1}{F(s)}(t) = u_π(t)[(t − π) − sin(t − π) + π − π cos(t − π)].
Applications
One of the most important applications of the Laplace transforms of piecewise continu-
ous functions lies in initial value problems with piecewise continuous non-homogeneous
term. To solve these directly we would have to solve the differential equation in each
interval in which the piecewise continuous function was defined, using the solution in
the previous interval to generate the initial condition for the problem in the current
interval. This would make the calculation potentially messy. The Laplace transform
method takes care of all this. We illustrate this below (though the first example could
probably be more easily done directly):
Examples 5.23. (i) Solve the initial value problem y 0 + 2y = g(t) where y(0) = 0
and
g(t) = { 3 if 0 ≤ t < 1;   0 if 1 ≤ t }.
Since g(t) = 3[u0 (t) − u1 (t)] + 0.u1 (t) = 3u0 (t) − 3u1 (t) we may take the Laplace
transform of the differential equation
(ii) Solve the initial value problem y 00 + y = g(t) where y(0) = 0, y 0 (0) = 1 and
g(t) = { t if 0 ≤ t < 2;   2e^{−(t−2)} if 2 ≤ t },
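The worked solution of part (i) is not reproduced here, but a purely numerical integration gives a feel for what the Laplace-transform answer should look like. The sketch below uses Python with scipy, which is an assumption of convenience rather than part of the course materials.

# A numerical cross-check of Example 5.23(i): y' + 2y = g(t), y(0) = 0,
# with g(t) = 3 on [0, 1) and 0 afterwards.  (scipy assumed)
import numpy as np
from scipy.integrate import solve_ivp

def g(t):
    return 3.0 if t < 1.0 else 0.0

def rhs(t, y):
    return [g(t) - 2.0*y[0]]

sol = solve_ivp(rhs, (0.0, 4.0), [0.0], max_step=0.01,
                t_eval=np.linspace(0.0, 4.0, 9))
for ti, yi in zip(sol.t, sol.y[0]):
    print(f"t = {ti:4.1f}   y = {yi:.4f}")
# y rises towards 3/2 while the forcing is on, then decays exponentially after t = 1.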
if the series converges to f (x) everywhere except at points where f is not continuous.
In this case, points of discontinuity notwithstanding, we write
f(x) = a_0 + Σ_{n=1}^∞ ( a_n cos(nx) + b_n sin(nx) ),
and call the series the Fourier series for f . We call the numbers ai and bi the Fourier
coefficients of f .
This definition begs a question: how do we know that the Fourier series is unique if it
exists, and how do we know whether a function can be represented by a Fourier series
in the first place? The next theorem answers the first of these questions.
Theorem 6.4. Suppose that f : R → R is periodic with period 2π and that f is represented
by the Fourier series (6.1). Then the Fourier coefficients are given by the Euler formulae:
(a) a_0 := (1/2π) ∫_{−π}^{π} f(x) dx.
(b) a_n := (1/π) ∫_{−π}^{π} f(x) cos(nx) dx for n ≥ 1.
(c) b_n := (1/π) ∫_{−π}^{π} f(x) sin(nx) dx for n ≥ 1.
The idea of the proof is to apply the Euler formulae to the Fourier series (6.1),
and show that we recover the Fourier coefficients. Of course, this makes the implicit
assumption that if we integrate term-by-term, we obtain the same answer as if we had
integrated the limit function. This is true, but not obvious. We will not present the
proof here, but we include the following theorem, which is the key step in the proof,
because it is of independent interest in any case.
First we need to define what we mean by orthonormal functions. Recall that we can
think of functions as vectors with one coordinate for each element of R. In this mode
of thinking, we regard the value f (x) of f at x as the x-coordinate of the vector f .
Recall also that for “ordinary” vectors (x1 , . . . , xn ) and (y1 , . . . , yn ) in Rn , we can take
their dot-product x · y := x1 y1 + · · · + xn yn , and that x and y are orthogonal (that is,
at right-angles) precisely if x · y = 0. Recall also that a vector x is normal, meaning it
has length 1, if x · x = 1. Putting these two concepts together, we say that a collection
S of vectors in Rn is orthonormal if x · x = 1 for all x ∈ S, and x · y = 0 for x 6= y ∈ S.
A function which is periodic with period 2π is determined by its values on [−π, π],
so we think of such functions as vectors with infinitely many coordinates (one for each
element of [−π, π]). It makes little sense to try to define the dot product of two functions
by adding up the products of the coordinates because there are uncountably many of
them. Instead, we replace the sum with an integral, and we define the inner-product of
two periodic functions f, g with period 2π by
(f | g) := ∫_{−π}^{π} f(x) g(x) dx
whenever the integral converges. We can now say that a collection S of functions is
orthonormal if (f |f ) = 1 for all f ∈ S and (f |g) = 0 for all f 6= g ∈ S.
Theorem 6.5. The trigonometric system {1/(2π), (1/π) sin(nx), (1/π) cos(nx) : n ≥ 1} is a system
of orthogonal functions.
Again, we will not prove this theorem. The idea is to notice that we can use the
multiple-angle formulae to rewrite
cos(mx) cos(nx) = (1/2)( cos((m − n)x) + cos((m + n)x) ),
sin(mx) sin(nx) = (1/2)( cos((m − n)x) − cos((m + n)x) ),  and
sin(mx) cos(nx) = (1/2)( sin((m + n)x) + sin((m − n)x) ),
and then that the periodicity of sin and cos force
∫_{−π}^{π} cos(kx) dx = ∫_{−π}^{π} sin(kx) dx = 0
for each nonzero integer k.
We now know that if f can be represented by a Fourier series, then the Fourier
coefficients are given by the Euler formulae. In particular, the Fourier series for a given
f is unique. The next theorem tells us when f can in fact be represented by a Fourier
series in the first place. Now that we have the Euler formulae in hand, it is not hard
to prove — the Fourier series can only be the one given by the Euler formulae, so one
proves the theorem simply by checking that under the hypotheses of the theorem the
Euler formulae make sense.
Theorem 6.6. Suppose that f : R → R is periodic with period 2π, is piecewise continuous
in the interval [−π, π], and suppose that f also has a piecewise continuous derivative
on (−π, π]. Then f can be represented by a unique Fourier series which converges to
f(x) wherever f is continuous. Moreover, for −π ≤ x_0 ≤ π such that f is not continuous
at x_0, the Fourier series for f converges to the average (1/2)( lim_{x→x_0^-} f(x) + lim_{x→x_0^+} f(x) ).
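For a concrete illustration of Theorems 6.4 and 6.6, the Euler formulae can be evaluated symbolically for a specific function. The sketch below (Python with sympy assumed) does this for the 2π-periodic extension of f(x) = x, a standard sawtooth example chosen here for illustration.

# The Euler formulae of Theorem 6.4 applied to f(x) = x on (-π, π].  (sympy assumed)
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
f = x

a0 = sp.integrate(f, (x, -sp.pi, sp.pi))/(2*sp.pi)
an = sp.integrate(f*sp.cos(n*x), (x, -sp.pi, sp.pi))/sp.pi
bn = sp.integrate(f*sp.sin(n*x), (x, -sp.pi, sp.pi))/sp.pi

print(a0)                 # 0
print(sp.simplify(an))    # 0
print(sp.simplify(bn))    # equivalent to 2(-1)^(n+1)/n, so f(x) = sum 2(-1)^(n+1) sin(nx)/n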
6.2. Periods other than 2π. Of course, there are many periodic functions whose
period is not 2π. In general, we say that a function f has period 2L if f (x + 2L) = f (x)
for all x. Now we cannot expect to approximate with a Fourier series in cos(nx) and
sin(nx). Instead, we seek to represent f by a Fourier series
(6.2)    f(x) = a_0 + Σ_{n=1}^∞ ( a_n cos(nπx/L) + b_n sin(nπx/L) ).
The existence theorem and Euler’s formula are very similar: if (6.2) holds, then
(A) a_0 := (1/2L) ∫_{−L}^{L} f(x) dx.
(B) a_n := (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx for n ≥ 1.
(C) b_n := (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx for n ≥ 1.
One proves this by showing that the system {1/(2L), (1/L) sin(nπx/L), (1/L) cos(nπx/L) : n ≥ 1} is an
orthonormal system of functions (where the inner-product is now defined by integration
over [−L, L] rather than [−π, π]). The formulae (A)–(C) converge provided that f
is periodic of period 2L, is piecewise continuous on [−L, L] and also has a piecewise-
continuous derivative on the same interval. The series determined by (A)–(C) converges
to f wherever f is continuous, and where f is not continuous, it converges to the average
of the left- and right-hand limits of f .
None of this is particularly surprising — it amounts to adjusting the formulae from
the previous subsection by a scaling constant.
6.3. Complex Fourier series. Recall that Euler’s formula states that eiθ = cos(θ) +
i sin(θ) for all real θ. Since sin is an odd function and cos is an even function, it follows
that e−iθ = cos(−θ) + i sin(−θ) = cos(θ) − i sin(θ). Adding and subtracting these
formulae one from the other, we obtain
eiθ + e−iθ = 2 cos(θ) and eiθ − e−iθ = 2i sin(θ).
Rearranging, we have cos(θ) = (1/2)(e^{iθ} + e^{−iθ}) and sin(θ) = (1/(2i))(e^{iθ} − e^{−iθ}).
Putting this into (6.2) and into (A)–(C), we obtain
f(x) = a_0 + (1/2) Σ_{n=1}^∞ ( a_n (e^{inπx/L} + e^{−inπx/L}) − i b_n (e^{inπx/L} − e^{−inπx/L}) )
     = a_0 + (1/2) Σ_{n=1}^∞ ( (a_n − ib_n) e^{inπx/L} + (a_n + ib_n) e^{−inπx/L} )
where
a_n − ib_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx − i (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx
          = (1/L) ∫_{−L}^{L} f(x) ( cos(nπx/L) − i sin(nπx/L) ) dx
          = (1/L) ∫_{−L}^{L} f(x) e^{−inπx/L} dx
and
a_n + ib_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx + i (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx
          = (1/L) ∫_{−L}^{L} f(x) ( cos(nπx/L) + i sin(nπx/L) ) dx
          = (1/L) ∫_{−L}^{L} f(x) e^{inπx/L} dx
This gives us the complex Fourier series for f :
Theorem 6.7. The exponential system {(1/2L) e^{−inπx/L} : n ∈ Z} is orthonormal with inner-
product defined by integration on [−L, L]. Suppose that f : R → R is periodic with period
2L. Then f is represented by a Fourier series (6.2) if and only if it is also represented
by the complex Fourier series
(6.3)    f(x) = Σ_{n=−∞}^{∞} c_n e^{inπx/L}
where
c_n = (1/2L) ∫_{−L}^{L} f(x) e^{−inπx/L} dx   for all n ∈ Z.
Moreover, these coefficients c_n are the only ones for which (6.3) is valid.
6.4. Fourier integrals. The idea is now to study non-periodic functions by letting the
period 2L in the previous section approach infinity. The upshot is that the infinite series
we have seen so far become an improper integral called the Fourier integral.
Definition 6.8. We say that f : R → R is absolutely integrable if
∫_{−∞}^{∞} |f(x)| dx := lim_{a→−∞} ∫_a^0 |f(x)| dx + lim_{b→∞} ∫_0^b |f(x)| dx
converges to some finite value.
Fix an absolutely integrable function f . As we did when producing functions of period
2π before, for any given L and for x ∈ R, define [x]_{[−L,L)} ∈ [−L, L) to be the unique
element of [−L, L) such that x = [x]_{[−L,L)} + 2nL for some integer n. Then the function
f_L(x) := f([x]_{[−L,L)}) is periodic with period 2L and agrees with f on the interval [−L, L).
We will suppose from here on in that each f_L can be represented by a Fourier series
on [−L, L) as in subsection 6.2. For each n ≥ 1, let w_n := nπ/L, so that the trigonometric
functions in the Fourier expansion of f_L, namely cos(nπx/L) and sin(nπx/L), become
cos(w_n x) and sin(w_n x). Then the Euler formulae tell us that
f_L(x) = (1/2L) ∫_{−L}^{L} f_L(v) dv + (1/L) Σ_{n=1}^∞ [ ( ∫_{−L}^{L} f_L(v) cos(w_n v) dv ) cos(w_n x)
         + ( ∫_{−L}^{L} f_L(v) sin(w_n v) dv ) sin(w_n x) ].
Let ∆w := w_{n+1} − w_n = (n + 1)π/L − nπ/L = π/L. Then (1/π)∆w = 1/L. The Fourier series above
then becomes
f_L(x) = (1/2L) ∫_{−L}^{L} f_L(v) dv + (1/π) Σ_{n=1}^∞ [ ( ∫_{−L}^{L} f_L(v) cos(w_n v) dv ) cos(w_n x) ∆w
         + ( ∫_{−L}^{L} f_L(v) sin(w_n v) dv ) sin(w_n x) ∆w ].
Note that this formula is valid for arbitrarily large finite values of L. In particular, we
may take the limit of this expression as L → ∞, or equivalently as ∆w → 0. Since
f is absolutely integrable, the term (1/2L) ∫_{−L}^{L} f_L(v) dv on the right-hand side goes to 0.
Moreover, since each value of x lies in [−L, L) for large enough L, we have
f(x) = (1/π) lim_{∆w→0} Σ_{n=1}^∞ ( ∫_{−π/∆w}^{π/∆w} f(v) cos(w_n v) dv ) cos(w_n x) ∆w
     + (1/π) lim_{∆w→0} Σ_{n=1}^∞ ( ∫_{−π/∆w}^{π/∆w} f(v) sin(w_n v) dv ) sin(w_n x) ∆w
for all x. We do not even pretend to prove the following statement, but it is believable
at least that the limits of these sums become improper integrals:
(6.4)    f(x) = (1/π) ∫_0^∞ ( ∫_{−∞}^{∞} f(v) cos(wv) dv ) cos(wx) dw
              + (1/π) ∫_0^∞ ( ∫_{−∞}^{∞} f(v) sin(wv) dv ) sin(wx) dw.
If we define
(6.5)    A_f(w) := (1/π) ∫_{−∞}^{∞} f(v) cos(wv) dv   and   B_f(w) := (1/π) ∫_{−∞}^{∞} f(v) sin(wv) dv,
then the hoped-for representation (6.4) can be rewritten as
(6.6)    f(x) = ∫_0^∞ ( A_f(w) cos(wx) + B_f(w) sin(wx) ) dw.
This is called a representation of f by a Fourier integral provided that it agrees with f
at every point at which f is continuous.
The following theorem justifies the hand-waving we did above, at least under appro-
priate hypotheses.
Theorem 6.9. Suppose that f : R → R is absolutely integrable, piecewise-continuous
on every finite interval, and has a piecewise continuous derivative on every interval as
well. Then f can be represented by the Fourier integral (6.6) where the functions Af
and Bf are given by (6.5). At any point x0 at which f is not continuous, the value of
the Fourier integral is equal to the average of the left- and right-hand limits of f (x) as
x approaches x0
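As a numerical illustration of Theorem 6.9, one can truncate the Fourier integral at a large frequency W and watch it reproduce a simple function. The sketch below (Python with scipy assumed) uses the rectangular pulse equal to 1 on [−1, 1] and 0 elsewhere, for which A_f(w) = 2 sin(w)/(πw) and B_f(w) = 0.

# Truncated Fourier integral of the rectangular pulse f = 1 on [-1, 1], 0 elsewhere.
import numpy as np
from scipy.integrate import quad

def A(w):
    return 2.0*np.sin(w)/(np.pi*w) if w != 0.0 else 2.0/np.pi

def fourier_integral(x, W=200.0):
    value, _ = quad(lambda w: A(w)*np.cos(w*x), 0.0, W, limit=2000)
    return value

for x in (0.0, 0.5, 1.0, 1.5):
    print(x, round(fourier_integral(x), 3))
# roughly 1, 1, 0.5, 0 -- the value 0.5 at the jump point x = 1 is the average of the
# left- and right-hand limits, exactly as Theorem 6.9 predicts.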
6.5. Sine and Cosine Fourier transforms. The next step in our progression towards
the Fourier transform is to concentrate on the functions Af (w) and Bf (w) appearing in
the Fourier integral in the previous section.
Recall that we say a function g : R → R is even if g(x) = g(−x) for all x, and we
say that h : R → R is odd if h(x) = −h(−x) for all x. The observations which make
this section work are
(1) if g is any even function, then ∫_{−∞}^{∞} g(x) dx = 2 ∫_0^∞ g(x) dx whenever the integrals
converge; and
(2) if h is any odd function, then ∫_{−∞}^{∞} h(x) dx = 0.
Why is this important? We obtained the function Af by integrating f (v) cos(wv) from
−∞ to ∞. We then obtained (part of) f (x) from the Fourier integral by integrating
Af (w) cos(wx) dw from 0 to ∞. The point is that if we knew that f (v) cos(wv) was an
even function of v, we could replace the integral from −∞ to ∞ in the first step with an
integral from 0 to ∞, and the two steps would then become two iterations of the same
step; this single step is what we will call the Cosine Fourier transform.
Suppose that f is an even function. Since cos is an even function, so is v 7→ cos(wv)
for any fixed w. Since a product of two even functions is also even, it follows that
if f is even, then v 7→ f (v) cos(wv) is also even for any fixed w. Furthermore, since
v 7→ sin(wv) is odd, and since the product of an even function and an odd function is
itself odd, we have
B_f(w) = (1/π) ∫_{−∞}^{∞} f(v) sin(wv) dv = 0.
Consequently, if f is even and is represented by a Fourier integral, then
A_f(w) = (2/π) ∫_0^∞ f(v) cos(wv) dv
and
f(x) = ∫_0^∞ ( A_f(w) cos(wx) + B_f(w) sin(wx) ) dw = ∫_0^∞ A_f(w) cos(wx) dw.
A very similar analysis to the one above, using that the product of two odd functions
is an even function, shows that if f is an odd function which is represented by a Fourier
integral, then
B_f(w) = (2/π) ∫_0^∞ f(v) sin(wv) dv,
and A_f(w) = 0 for all w, so that
f(x) = ∫_0^∞ B_f(w) sin(wx) dw.
For example, for a < 0 (so that the integral converges),
F_c{e^{ax}}(w) = −√(2/π) · (1/a) · 1/(1 + w^2/a^2) = √(2/π) · (−a)/(a^2 + w^2).
6.6. The Fourier transform. In this section we apply the ideas of the last two sections
to the complex Fourier series Σ c_n e^{inπx/L}. This will bring us to the Fourier transform.
Let f be an arbitrary function. As in Section 6.4, for each L ∈ R, define f_L to be
the periodic function f_L(x) := f([x]_{[−L,L)}) of period 2L. Supposing that each f_L can be
represented by a Fourier series, Theorem 6.7 ensures that for any L > |x|,
f(x) = Σ_{n=−∞}^{∞} ( (1/2L) ∫_{−L}^{L} f(v) e^{−inπv/L} dv ) e^{inπx/L},
where we have used a different variable, v, in the integral defining c_n to avoid confusion.
If we again let w_n := nπ/L for each n, and ∆w := w_n − w_{n−1} = π/L, this becomes
f(x) = (1/2π) Σ_{n=−∞}^{∞} ( ∫_{−L}^{L} f(v) e^{−iw_n v} dv ) e^{iw_n x} ∆w
for large enough L. If we take the limit as L → ∞ (so that ∆w → 0), and again act on
the plausible assumption that the infinite sum then becomes an infinite integral, we
obtain
f(x) = (1/2π) ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} f(v) e^{−iwv} dv ) e^{iwx} dw.
Rearranging this, we arrive at the complex Fourier integral:
f(x) = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(v) e^{iw(x−v)} dv dw.
It is more important, however, to note the symmetry between the expression for c_n
and the expression for f(x) obtained from c_n. The former is the integral of f(v) against
e^{−iwv} as v ranges over R, and the latter is effectively the same except that one integrates
against e^{iwx} as w ranges over R. Apart from a change of variable names, the difference
is only in the sign of the exponent.
This leads us to the definition of the Fourier transform, which is where we have been
headed throughout this section.
Definition 6.15. Given a function f : R → R, we define the Fourier transform of f,
denoted either F(f) or f̂, by
(6.7)    F(f)(x) = f̂(x) = (1/√(2π)) ∫_{−∞}^{∞} f(w) e^{−iwx} dw
at all values of x for which the integral converges. Where f̂ exists, we call f the inverse
Fourier transform of f̂, and denote it F^{−1}(f̂).
In terms of the Fourier transform, the Fourier integral for f becomes
f(x) = (1/√(2π)) ∫_{−∞}^{∞} F(f)(w) e^{iwx} dw = F(F(f))(−x).
That is to say, F −1 (g)(x) = F(g)(−x) whenever g has an inverse Fourier transform. As
always, we want to know when the Fourier transform exists, and what properties it has.
Theorem 6.16. The Fourier transform has the following properties:
(1) Suppose that f : R → R is absolutely integrable and piecewise continuous on
    every finite interval. Then the Fourier transform f̂ of f given by (6.7) exists.
(2) Suppose that f : R → R is continuous and f(x) → 0 as x → ∞. Suppose
    moreover that f'(x) is absolutely integrable. Then the transform of f' exists, and satisfies
    (f')^(w) = iw f̂(w) for all w.
(3) Suppose that f and g both have Fourier transforms, and let k ∈ R. Then kf + g
    has a Fourier transform, and it satisfies
    (kf + g)^(w) = k f̂(w) + ĝ(w) for all w.
(4) Suppose that f and g are piecewise continuous, bounded and absolutely integrable.
    Then the convolution f ∗ g of f and g has a Fourier transform and it satisfies
    (f ∗ g)^(w) = √(2π) f̂(w) ĝ(w) for all w.
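A quick numerical check of Definition 6.15: with this normalisation the Gaussian e^{−t²/2} is its own Fourier transform, so evaluating (6.7) by quadrature should reproduce e^{−w²/2}. The sketch below assumes Python with scipy; since the Gaussian is even, only the cosine part of e^{−iwt} contributes.

# Numerical evaluation of (6.7) for f(t) = e^{-t^2/2}.  (scipy assumed)
import numpy as np
from scipy.integrate import quad

def fhat(w):
    integrand = lambda t: np.exp(-t*t/2.0)*np.cos(w*t)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value/np.sqrt(2.0*np.pi)

for w in (0.0, 1.0, 2.0):
    print(w, fhat(w), np.exp(-w*w/2.0))   # the two columns agree to quadrature accuracy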
Our strategy for solving (7.1) will be to first find the general solution yc (x) to the
associated homogeneous equation (7.3), then find a particular solution yp (x) and then
put them together to give the general solution yg (x). Hence we shall first focus on
homogeneous linear differential equations.
What are the general properties of a solution of a homogeneous linear ordinary differ-
ential equation (7.3)?
(1) If y_1(x), y_2(x), . . . , y_k(x) are solutions of a homogeneous linear ordinary differen-
tial equation (7.3) on the interval (a, b) then so is the linear combination
c_1 y_1(x) + c_2 y_2(x) + · · · + c_k y_k(x)
for all numbers c_1, . . . , c_k. This fact is also called the superposition principle. We
saw this in practice in Example 7.2 above: since the function M(t) = 1/(t + 100)^2
is a solution of the homogeneous equation (7.4), then so is M(t) = A/(t + 100)^2
for all values A. It is also the reason why the arbitrary constant which crops up
in the integrating factor for a first order linear ordinary differential equation is
ignored.
For more examples, observe that since ex , e−x , cosh x and sinh x are all solu-
tions to y 00 − y = 0 then so is Aex + Be−x + C cosh x + D sinh x for all values of
A, B, C, D. Though ex + x2 is a solution of y 00 − y = 2 − x2 it is not true that
A(ex + x2 ) is a solution for all values of A since the differential equation was not
homogeneous. Though y = 1/x is a solution of y'' − 2y^3 = 0, the function y = A/x
is only a solution if A = 0, ±1 since the differential equation is not linear.
(2) The general solution of (7.3) is the linear combination of n linearly indepen-
dent solutions, called a fundamental set of solutions. Recall that the functions
{y1 (x), y2 (x), . . . , yk (x)} are linearly independent if
which converge for all values of x. Hence the general solution of (8.1) may be written
y(x) = A Σ_{n=0}^∞ (−1)^n x^{2n}/(2n)! + B Σ_{n=0}^∞ (−1)^n x^{2n+1}/(2n + 1)!.
Can we obtain the power series solutions of (8.1) directly? The method of attack is
similar to the technique of undetermined coefficients. We seek a solution of (8.1) of the
form
y(x) = Σ_{n=0}^∞ a_n x^n = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + . . . ;
our task is to determine coefficients an for which this series converges to a function
satisfying (8.1). If we assume that term-by-term differentiation of the infinite series is
possible, then
d/dx ( Σ_{n=0}^∞ a_n x^n ) = Σ_{n=0}^∞ n a_n x^{n−1},   and   d^2/dx^2 ( Σ_{n=0}^∞ a_n x^n ) = Σ_{n=0}^∞ n(n − 1) a_n x^{n−2}.
Note that the infinite sums for the first and second derivatives now start at n = 1 and
n = 2 respectively. Consequently we have
(8.2)    y'' + y = d^2/dx^2 ( Σ_{n=0}^∞ a_n x^n ) + Σ_{n=0}^∞ a_n x^n = Σ_{n=0}^∞ n(n − 1) a_n x^{n−2} + Σ_{n=0}^∞ a_n x^n.
To add two series together under one summation sign, both summation indices must
start at the same value (cf. adding definite integrals) and the numerical values of the
powers of x need to be the same for the same value of the summation index. This is not
the case in the formula above; in the first sum n = 2 gives the term x^0 whereas in the
second sum n = 2 gives the term x^2. To get around this problem we must adjust the
summation indices of each sum in the manner shown below: Observe that
d/dx ( Σ_{n=0}^∞ a_n x^n ) = Σ_{n=0}^∞ n a_n x^{n−1} = Σ_{n=1}^∞ n a_n x^{n−1} = Σ_{k=0}^∞ (k + 1) a_{k+1} x^k,
where we replace n − 1 by k in the last sum. In general we make the substitution for k
to be the power of x occurring in the sum. Returning to (8.2) we have
0 = Σ_{k=0}^∞ (k + 2)(k + 1) a_{k+2} x^k + Σ_{k=0}^∞ a_k x^k = Σ_{k=0}^∞ { (k + 2)(k + 1) a_{k+2} + a_k } x^k
which must be so on some interval about x = 0. Now a power series is identically
zero on some interval of the independent variable (i.e. it is zero for all values of x in
that interval) if and only if the coefficient of each power is 0. Thus for k = 0, 1, 2, . . . we have
(k + 2)(k + 1) a_{k+2} + a_k = 0,  or  a_{k+2} = − a_k / ((k + 2)(k + 1)),
which is a (second order) recurrence relation. Substituting recursively into the recurrence
relation we find that
if k = 0 then a_2 = − a_0/(2·1),                      if k = 1 then a_3 = − a_1/(3·2),
if k = 2 then a_4 = − a_2/(4·3) = (−1)^2 a_0/4!,      if k = 3 then a_5 = − a_3/(5·4) = (−1)^2 a_1/5!,
if k = 4 then a_6 = − a_4/(6·5) = (−1)^3 a_0/6!,      if k = 5 then a_7 = − a_5/(7·6) = (−1)^3 a_1/7!.
In general for k = 0, 1, . . . we have
a_{2k} = (−1)^k a_0/(2k)!,  and  a_{2k+1} = (−1)^k a_1/(2k + 1)!,
and the solution of (8.1) is
Σ_{n=0}^∞ a_n x^n = a_0 x^0 + a_1 x − (1/2!) a_0 x^2 − (1/3!) a_1 x^3 + (1/4!) a_0 x^4 + (1/5!) a_1 x^5 − . . .
               = a_0 ( 1 − x^2/2! + x^4/4! − . . . ) + a_1 ( x − x^3/3! + x^5/5! − . . . )
               = a_0 y_1(x) + a_1 y_2(x),
where y_1(x) = 1 − x^2/2! + x^4/4! − . . . and y_2(x) = x − x^3/3! + x^5/5! − . . . . The infinite series y_1(x)
and y_2(x) are solutions to (8.1) only on their intervals of convergence, which are usually
found by applying the ratio test.
For instance, since successive terms in the infinite series y_1(x) and y_2(x) are a_n x^n and
a_{n+2} x^{n+2}, to apply the ratio test we compute
lim_{n→∞} | a_{n+2} x^{n+2} / (a_n x^n) | = |x|^2 lim_{n→∞} | a_{n+2}/a_n |;   substituting from the recurrence relation we get
= |x|^2 lim_{n→∞} | −1/((n + 2)(n + 1)) | = |x|^2 · 0 = 0,   which is less than 1 for all x.
Hence y_1(x) and y_2(x) are solutions of (8.1) for all x (and a_0 y_1(x) + a_1 y_2(x) is the general
solution since their Wronskian satisfies W(y_1, y_2)(0) = 1 ≠ 0, and so W ≢ 0).
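The recurrence relation above is easy to explore numerically. The short sketch below (plain Python, included only as an illustration) generates the coefficients for the two choices (a_0, a_1) = (1, 0) and (0, 1) and compares the partial sums with cos and sin.

# Partial sums built from a_{k+2} = -a_k/((k+2)(k+1)) reproduce cos x and sin x.
import math

def series_solution(a0, a1, x, terms=20):
    coeffs = [a0, a1]
    for k in range(terms - 2):
        coeffs.append(-coeffs[k]/((k + 2)*(k + 1)))
    return sum(c*x**n for n, c in enumerate(coeffs))

x = 1.0
print(series_solution(1.0, 0.0, x), math.cos(x))   # both approximately 0.5403
print(series_solution(0.0, 1.0, x), math.sin(x))   # both approximately 0.8415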
Example 8.1. To find the general solution of the second order linear differential equation
(1 − x2 )y 00 − 2xy 0 + 6y = 0
about x = 0 we set y(x) = Σ_{n=0}^∞ a_n x^n; then
y'(x) = Σ_{n=0}^∞ n a_n x^{n−1}   and   y''(x) = Σ_{n=0}^∞ n(n − 1) a_n x^{n−2}.
Hence
(1 − x^2)y'' − 2xy' + 6y = (1 − x^2) Σ_{n=0}^∞ n(n − 1) a_n x^{n−2} − 2x Σ_{n=0}^∞ n a_n x^{n−1} + 6 Σ_{n=0}^∞ a_n x^n
    = Σ_{n=0}^∞ n(n − 1) a_n (1 − x^2) x^{n−2} + Σ_{n=0}^∞ (−2n) a_n x^n + Σ_{n=0}^∞ 6 a_n x^n
    = Σ_{n=0}^∞ n(n − 1) a_n x^{n−2} + Σ_{n=0}^∞ (−n(n − 1)) a_n x^n + Σ_{n=0}^∞ (−2n + 6) a_n x^n
    = Σ_{n=0}^∞ n(n − 1) a_n x^{n−2} + Σ_{n=0}^∞ { −n(n − 1) − 2n + 6 } a_n x^n,
now put k = n − 2 in the first sum and k = n in the second sum to get
Σ_{k=0}^∞ (k + 2)(k + 1) a_{k+2} x^k + Σ_{k=0}^∞ ( −(k^2 + k − 6) ) a_k x^k
(8.3)    = Σ_{k=0}^∞ { (k + 2)(k + 1) a_{k+2} − (k^2 + k − 6) a_k } x^k.
The right hand side of (8.3) is the function which is identically zero if for k = 0, 1, 2, . . .
we have
(k + 2)(k + 1) a_{k+2} − (k^2 + k − 6) a_k = 0,  or  a_{k+2} = ( (k + 3)(k − 2) / ((k + 2)(k + 1)) ) a_k,
which is a (second order) recurrence relation. Substituting values of k into the recurrence
relation we find that
for k = 0, we get a_2 = (−6/(2·1)) a_0 = −3a_0,       for k = 1, we get a_3 = (−4/(3·2)) a_1 = −(2/3) a_1,
for k = 2, we get a_4 = (5·0/(4·3)) a_2 = 0,          for k = 3, we get a_5 = (6·1/(5·4)) a_3 = −(1/5) a_1,
for k = 4, we get a_6 = (7·2/(6·5)) a_4 = 0,          for k = 5, we get a_7 = (8·3/(7·6)) a_5 = −(4/35) a_1,
so a_{2k} = 0 for k = 2, 3, 4, . . . and a_{2k+1} = f(k) a_1 for k = 0, 1, 2, . . . .
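Again the recurrence is easy to explore numerically; the sketch below (plain Python with exact rational arithmetic) generates the first few coefficients and shows the even-index coefficients terminating, so that y_1(x) = a_0(1 − 3x^2) is a polynomial solution.

# Coefficients from a_{k+2} = (k+3)(k-2) a_k / ((k+2)(k+1)) with a_0 = a_1 = 1.
from fractions import Fraction

a = [Fraction(1), Fraction(1)]          # a_0 and a_1
for k in range(10):
    a.append(Fraction((k + 3)*(k - 2), (k + 2)*(k + 1)) * a[k])

print(a[:8])   # a_0, ..., a_7 are 1, 1, -3, -2/3, 0, -1/5, 0, -4/35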
= |x|^2 · (1/2), which is less than 1 for all |x| < √2.
(ii) Recall that if f, g are two polynomials then
lim_{n→∞} | f(n)/g(n) | = 0 if degree(f) < degree(g);  ∞ if degree(f) > degree(g);  |a/b| if degree(f) = degree(g) = k,
where f(n) = an^k + . . . and g(n) = bn^k + . . . .
This may be shown by application of various forms of L'Hôpital's rule.
(iii) Recall that an infinite power series Σ_{n=0}^∞ a_n (x − x_0)^n is (absolutely) convergent for
|x − x_0| < R, i.e. x_0 − R < x < x_0 + R, where R (≥ 0) is called the radius of
convergence. If R = 0 the series is only convergent at x_0 and if R = ∞ then
the series converges for all x. It can be shown that infinite power series can
be differentiated term by term in their intervals of convergence and these new
infinite series are equal to the derivative of the original infinite series. To apply
the ratio test we compute
lim_{n→∞} | a_{n+1} (x − x_0)^{n+1} / (a_n (x − x_0)^n) | = |x − x_0| lim_{n→∞} | a_{n+1}/a_n |;
the series is (absolutely) convergent for all those x with
|x − x_0| lim_{n→∞} | a_{n+1}/a_n | < 1.
In particular, the radius of convergence is given by
R = lim_{n→∞} | a_n/a_{n+1} |.
Reduction of order
How does one go about finding a fundamental set of solutions of a homogeneous linear
differential equation such as (7.3)? If we know one non–trivial solution y = y1 (x) of
(7.3), then one can attempt to complete a fundamental set of solutions by the method
of reduction of order. The basic idea is to substitute y(x) = y1 (x)v(x) into (7.3), and
then solve the resulting differential equation for v.
Examples 9.1. (i) Given that y(x) = cosh x is a solution of y 00 − y = 0 we may
use reduction of order to find a second linearly independent solution as follows.
Substitute y(x) = v(x) cosh x into the differential equation:
y'' − y = v'' cosh x + 2v' sinh x + v(cosh x − cosh x) = 0,  so
v'' cosh x + 2v' sinh x = 0,  and then
(v')' + 2 (sinh x / cosh x) v' = 0,
which is a first order linear differential equation in v'. The integrating factor for
this equation is
e^{2 ∫ (sinh x / cosh x) dx} = e^{2 ln(cosh x)} = (cosh x)^2,
therefore
( (cosh x)^2 v' )' = 0 × cosh^2 x,  so that  v' = A/cosh^2 x = A sech^2 x,
hence v(x) = A tanh x + B where A, B are constants. Then
y(x) = v(x) cosh(x) = A sinh x + B cosh x
is the general solution of the differential equation. Hence the second linearly
independent solution is y(x) = sinh x.
(ii) It is easy to see that y(x) = x is a solution of x^2 y'' − xy' + y = 0 [if y = x then
y' = 1, y'' = 0 and then clearly x^2(0) − x(1) + x = 0]. We may now use reduction
of order to find a second, linearly independent solution, and also the general
solution for x > 0. Substitute y(x) = xv(x) into the differential equation:
y − xy' + x^2 y'' = xv − x[v + xv'] + x^2[2v' + xv''] = 0,  so
v(x − x) + v'(−x^2 + 2x^2) + x^3 v'' = 0,  and then
(v')' + (1/x) v' = 0,  since x > 0,
which is a first order linear differential equation in v'. The integrating factor is
e^{∫ (1/x) dx} = e^{ln |x|} = x  provided x > 0,
therefore
(x v')' = 0 × x = 0,  so
v' = A/x,  and then
v = A ln x + B,
where A, B are constants. Thus the general solution of the differential equation
is
y(x) = xv(x) = Ax ln x + Bx.
The second solution of the differential equation which is linearly independent
from y(x) = x is then y(x) = x ln x. We check
W(x, x ln x) = | x   x ln x ;  1   ln x + 1 | = x ≢ 0 on (0, ∞) or (−∞, 0),
which we would expect from the standard form of the differential equation
d^2y/dx^2 + (−1/x) dy/dx + (1/x^2) y = 0,
which shows that at least one coefficient is discontinuous at x = 0.
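The claims in this example are quick to confirm with a computer algebra system; the sketch below (Python with sympy assumed) checks that x ln x satisfies the equation and that the Wronskian determinant of {x, x ln x} is x.

# Checks for Example 9.1(ii).  (sympy assumed)
import sympy as sp

x = sp.symbols('x', positive=True)
y = x*sp.log(x)

print(sp.simplify(x**2*sp.diff(y, x, 2) - x*sp.diff(y, x) + y))   # 0

# Wronskian of {x, x ln x} computed as a 2x2 determinant.
W = sp.Matrix([[x, x*sp.log(x)],
               [sp.diff(x, x), sp.diff(x*sp.log(x), x)]]).det()
print(sp.simplify(W))   # x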
Here we are considering the case of equation (7.1) with G(x) 6≡ 0. As mentioned
earlier, the general solution to such equations is of the form yg (x) = yc (x) + yp (x) where
yc is called the complementary function and is the general solution of the associated
homogeneous equation and yp (x) is a particular solution to the equation itself. If there
are any initial conditions given with the differential equation, they are applied to yg (x)
and not yc (x). There are three main methods for finding a particular solution
(1) The method of undetermined coefficients can sometimes be used; usually
with constant coefficient differential equations. For example to find the general
solution of y'' − 2y' + y = 16e^{5x} we first find the general solution to y'' − 2y' + y = 0,
which has characteristic equation r^2 − 2r + 1 = (r − 1)^2 = 0. Hence the
complementary function is y_c(x) = Ae^x + Bxe^x for some constants A, B. Next
we find the particular solution by guessing that yp = Ce5x ; upon substituting
into the differential equation we get
yp00 − 2yp0 + yp = (C − 10C + 25C)e5x = 16Ce5x = 16e5x
in which case C = 1 and then yp (x) = e5x . Hence the general solution to the
differential equation is yg (x) = Aex + Bxex + e5x . If we were also given initial
conditions with the differential equation, we would now apply them to yg (x).
If the right-hand side of the differential equation is already one of the funda-
mental set of solutions to the associated homogeneous equation then a constant
multiple of it cannot be a particular solution. We now guess a particular so-
lution which is x times it and so on until we find a function which is not the
solution of the associated homogeneous equation. For instance, for the equation
y 00 − 2y 0 + y = 16ex we choose yp (x) = Cx2 ex since ex and xex are in the fun-
damental set of solutions to the associated homogeneous equation and cannot
therefore be particular solutions to the nonhomogeneous equation.
(2) The method of variation of parameters was introduced by Lagrange in
1774, and can be used once a fundamental set of solutions of the associated
homogeneous differential equation is known. For example, we saw earlier in
Examples 9.1 (ii) that for x > 0 the set {x, x ln(x)} is a fundamental set of
solutions of x2 y 00 − xy 0 + y = 0. To solve x2 y 00 − xy 0 + y = 2x3 with y(1) = 1 and
y 0 (1) = 2 we try a particular solution of the form yp = xu1 (x) + (x ln(x))u2 (x),
in which case
yp0 = u1 (x) + xu01 (x) + (ln(x) + 1)u2 (x) + (x ln(x))u02 (x)
and then we set
(9.1) xu01 (x) + (x ln(x))u02 (x) = 0
then y_p' = u_1 + (ln x + 1)u_2 and so
y_p'' = u_1' + (1/x) u_2 + (ln(x) + 1) u_2'.
Substituting into the differential equation we get
(9.2)    2x^3 = x^2 y_p'' − x y_p' + y_p
    = x^2 u_1' + x u_2 + (x^2 ln(x) + x^2) u_2' − [x u_1 + (x ln(x) + x) u_2] + [x u_1 + x ln(x) u_2]
    = u_1(−x + x) + u_2(x − x ln(x) − x + x ln(x)) + x^2 u_1' + (x^2 ln(x) + x^2) u_2'
(9.3)    = x^2 u_1' + (x^2 ln(x) + x^2) u_2'.
Now we solve equations (9.1) and (9.3) simultaneously for u01 and u02 . From (9.1),
dividing by x > 0 and rearranging we get u01 = − ln(x)u02 . Then substituting
for u01 into (9.3) gives x2 u02 = 2x3 that is u02 = 2x (dividing by x2 > 0). Since
u01 = − ln(x)u02 we then have u01 = −2x ln(x). Integrating these equations we get
u_2(x) = x^2,   u_1(x) = −[ x^2 ln(x) − x^2/2 ].
Note that we do not introduce constants of integration here as they will add
unnecessary solutions of the homogeneous equation to our particular solution.
Hence we compute
y_p(x) = x u_1(x) + (x ln(x)) u_2(x) = −x( x^2 ln(x) − x^2/2 ) + (x ln(x)) x^2 = x^3/2,
which means that the general solution to the differential equation is
y_g(x) = y_c(x) + y_p(x) = x^3/2 + Ax + Bx ln(x).
Now we apply the initial conditions to get
y_g(1) = 1/2 + A + 0 = A + 1/2 = 1  and so  A = 1/2;  then
y_g'(x) = 3x^2/2 + 1/2 + B(ln(x) + 1),  so
y_g'(1) = 2 + B = 2  and so  B = 0.
Therefore the solution to the initial value problem is
y(x) = x^3/2 + x/2.
(3) A modified reduction of order can be attempted once one non-trivial solution
of the associated homogeneous differential equation is known. For example, given
that y(x) = x is a solution of x2 y 00 − xy 0 + y = 0 for x > 0 we may find the
general solution of x^2 y'' − xy' + y = 2x^3 as follows. Try y(x) = xv(x) in the non-
homogeneous differential equation. Then
y − xy' + x^2 y'' = xv − x[v + xv'] + x^2[2v' + xv''] = x^3 v'' + x^2 v' = 2x^3,
which leaves us with the first order linear equation in v'
(v')' + (1/x) v' = 2,
which has integrating factor e^{∫ (1/x) dx} = e^{ln |x|} = x since x > 0, and so
(x v')' = 2x,  hence
x v' = x^2 + A,  that is
v' = x + A/x,  which has solution
v = x^2/2 + A ln x + B  for x > 0 and constants A, B.
Therefore the general solution to the differential equation is y(x) = x^3/2 + [Ax ln x +
Bx] = y_p(x) + y_c(x). Hence, in one calculation we not only recover the second
linearly independent solution of the associated homogeneous equation, but also
the particular solution.
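The variation-of-parameters computation of method (2), and the resulting particular solution x^3/2, can be re-run symbolically. The sketch below (Python with sympy assumed) sets up the linear system corresponding to (9.1) and (9.3), solves for u_1' and u_2', and integrates.

# Variation of parameters for x^2 y'' - x y' + y = 2x^3 with fundamental set {x, x ln x}.
import sympy as sp

x = sp.symbols('x', positive=True)
du1, du2 = sp.symbols('du1 du2')        # stand-ins for u_1'(x) and u_2'(x)

y1, y2 = x, x*sp.log(x)
eqs = [sp.Eq(y1*du1 + y2*du2, 0),                                        # (9.1)
       sp.Eq(x**2*(sp.diff(y1, x)*du1 + sp.diff(y2, x)*du2), 2*x**3)]    # same as (9.3)
sol = sp.solve(eqs, (du1, du2))

u1 = sp.integrate(sol[du1], x)     # -(x^2 ln x - x^2/2)
u2 = sp.integrate(sol[du2], x)     # x^2
yp = sp.simplify(y1*u1 + y2*u2)
print(sol[du1], sol[du2], yp)      # -2x ln x,  2x,  x^3/2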
Consider the situation of a mass M Kg. attached to a spring of natural length `. When
the spring is extended, or compressed, by a small amount x it exerts a force trying to
return it to its natural length, of magnitude kx (Hooke’s law), where k > 0 is called the
spring constant. In the equilibrium position the spring extends a distance s so that the
downward force of gravity on the mass is balanced by the restoring force of the spring,
i.e.
(9.4) M g = ks,
where g is the acceleration due to gravity.
Now suppose that we set the mass moving in the vertical direction. It will continue
to move in this direction. Suppose, at time t that the mass is at position y.
[Figure: a mass M hanging from a spring of natural length ℓ attached to a fixed support; the spring is stretched a further distance s at equilibrium, and x (equivalently y) measures the displacement of the mass from that equilibrium position.]
Hence with c 6= 0 all “natural” motions decay in time and are called transient motions.
The general solution of the non-homogeneous equation M x00 + cx0 + kx = a cos(ωt) is
x(t) = Ae^{r_1 t} + Be^{r_2 t} + x_p(t),
where the first two terms represent the transient motion just described, and xp (t) the
particular solution is called the forced response or steady state solution. Again we use
the method of undetermined coefficients to determine the particular solution. If we try
xp (t) = A cos(ωt) + B sin(ωt) then we find that
−MAω^2 cos(ωt) − MBω^2 sin(ωt) − Acω sin(ωt)
+ Bcω cos(ωt) + Ak cos(ωt) + Bk sin(ωt) = a cos(ωt),
which reduces to
[A(k − Mω^2) + Bcω − a] cos(ωt) + [B(k − Mω^2) − Acω] sin(ωt) = 0.
Since this equation holds for all t we must have
AM(ω_N^2 − ω^2) + Bcω − a = 0   and
BM(ω_N^2 − ω^2) − Acω = 0.
From the second equation we have A = BM(ω_N^2 − ω^2)/(cω), and substituting into the first
equation we get
BM^2(ω_N^2 − ω^2)^2 + Bc^2ω^2 − acω = 0.
Hence
B = acω / ( M^2(ω_N^2 − ω^2)^2 + c^2ω^2 )   and   A = aM(ω_N^2 − ω^2) / ( M^2(ω_N^2 − ω^2)^2 + c^2ω^2 ),
which gives
x_p(t) = ( a / ( M^2(ω_N^2 − ω^2)^2 + c^2ω^2 ) ) { M(ω_N^2 − ω^2) cos(ωt) + cω sin(ωt) },
which has the same period as the external forcing (i.e. 2π/ω). However, the amplitude is
always finite, and is maximum (= a/(cω_N)) when ω = ω_N.
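The amplitude of the forced response is a/√(M^2(ω_N^2 − ω^2)^2 + c^2ω^2); the sketch below (plain Python with numpy; the parameter values are made up for illustration, with ω_N = √(k/M) as used implicitly above) tabulates it over a range of driving frequencies and shows the peak sitting near ω = ω_N.

# Steady-state amplitude versus driving frequency for illustrative parameters.
import numpy as np

M, c, k, a = 1.0, 0.2, 4.0, 1.0
wN = np.sqrt(k/M)                      # natural frequency, here 2.0

w = np.linspace(0.5, 3.5, 7)
amplitude = a/np.sqrt(M**2*(wN**2 - w**2)**2 + c**2*w**2)
for wi, Ai in zip(w, amplitude):
    print(f"w = {wi:4.2f}   amplitude = {Ai:6.3f}")
print("amplitude at w = wN:", a/(c*wN))   # 2.5, the value quoted above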
Electric circuits:
In an L-R-C circuit, if the current at time t is I(t), and the charge is Q, then
L dI/dt + RI + (1/C) Q = E(t),
where E(t) is the supplied e.m.f. Since I = dQ/dt we have
either   L d^2Q/dt^2 + R dQ/dt + (1/C) Q = E(t),
or       L d^2I/dt^2 + R dI/dt + (1/C) I = dE/dt.
The correspondence with the case of the damped motion of a mass on a spring is L ↔ M,
R ↔ c and 1/C ↔ k. The natural frequency of the L-C circuit is ω_N = 1/√(LC). The aim
of many circuits is to “amplify” the input, here the E(t). So we wish to damp out the
natural oscillations as rapidly as possible.
With R ≠ 0, all the natural motions are damped out as t increases from their moment
of excitation (i.e. e^{−Rt/(2L)} → 0 as t → ∞). The circuit is said to be
overdamped if R^2 − 4L/C > 0,
critically damped if R^2 − 4L/C = 0,
underdamped if R^2 − 4L/C < 0.
MATH2310, Semester I, 2006, ODE’s strand — Exercises
(ii) y' + 2xy = e^{−x^2}.
(iii) x dy/dx − 3y = x^4.
(iv) (x log x) dy/dx + y = 2 log x.
(v) dy/dx − y = 4e^x, y(0) = 4.
(vi) dy/dx + 3x^2 y = e^{−x}, y(0) = 2.
(vii) dy/dx + y/x = (cos x)/x, y(π/2) = 4/π, x > 0.
(viii) dy/dx + (e^x/(e^x + 1)) y = x/(e^x + 1), y(0) = 1.
(ix) dx/dt = x + t + 1, x(0) = 2.
4. A tank of capacity 1000 litres contains 500 litres of pure water. Salt solution con-
taining 2mg/litre salt enters the tank at 20 litres/s and the (stirred) mixture flows
out at 10 litres/s. When will the tank start to overflow? What is the concentration
of salt in the tank when it starts to overflow? Write down an initial value problem
for the mass of salt in the tank after overflow has started.
5. Suppose that a drug is added to the body at a rate r(t), and let y(t) represent the
concentration of the drug in the bloodstream at time t hours. In addition, suppose
that the drug is used by the body at the rate ky, where k is a positive constant.
Then, the net rate in y(t) is y 0 = r(t) − ky. If at t = 0, there is no drug in the body,
we determine y(t) by solving the initial value problem y 0 = r(t) − ky, y(0) = 0.
(a) Suppose that r(t) = r, where r > 0 is a constant. Here the drug is added at a
constant rate. Solve the initial value problem and hence determine limt→∞ y(t).
(b) Suppose that r(t) = 1 + sin t and k = 1. Here the drug is added at a periodic
rate. Solve the initial value problem, determine limt→∞ y(t) if it exists and
hence describe what happens to the drug concentration over time.
(c) Suppose that r(t) = e−t and k = 1. Here the rate at which the drug is added
decreases over time. Solve the initial value problem, determine limt→∞ y(t) if it
exists and hence describe what happens to the drug concentration over time.
6. In a room at 21◦ centigrade, moth repellant sublimes from the solid to the gaseous
state at the rate of k cubic centimetres of solid per second per square centimetre of
exposed surface. If a spherical mothball of radius 1/2 cm in this room disappears in
6 months, find k. Is it more effective to use one mothball of radius 1 cm, or two
mothballs of radius 1/2 cm (using the second one when the first one has disappeared)?
Which would be better value for money?
7. Carbon dating
(a) The remains of a timber artifact are found to contain 50% as much Carbon 14
(relative to Carbon 12) as a living tree (and presumably had when the object
was made). Assume that the half life of Carbon 14 is 5730 years. How old is
the artifact?
(b) Suppose the artifact contains 70% of the “normal” proportion of Carbon 14.
How old is it then?
[Hint: dQ/dt = kQ (k < 0) where Q is the mass of Carbon 14 at time t years
after the tree stopped growing. So Q = Ae^{kt} = A(0.5)^{t/5730} where A is the mass
of Carbon 14 at time t = 0.]
(c) If the half life of Carbon 14 is not 5730 but 5568 years, how does this change
the answer to (b)?
(d) If the “Shroud of Turin” really was the burial cloth of Jesus Christ (died 33AD)
and the half life of Carbon 14 is between 5568 and 5730 years what proportion
of the original amount of Carbon 14 would you expect it to contain in 1989?
State your answer as an interval.
8. Newton’s law of cooling states that the surface temperature of an object changes at
a rate proportional to the difference between the surface temperature of the object
and that of the surroundings.
Suppose the surface temperature of a cup of coffee is 95◦ when freshly poured, and
that one minute later the temperature is 90◦ in a room at 25◦ . Assuming Newton’s
law of cooling to apply, how long a period must elapse before the coffee reaches a
surface temperature of 65◦ ?
9. A certain small country has $ 10 billion in paper currency in circulation and each day,
$ 50 million comes into the country’s banks. The government decides to introduce
new currency by having the banks replace old bills with new ones whenever old
currency comes into the banks. Let x = x(t) denote the amount of new currency in
circulation at time t, with x(0) = 0.
(a) Determine a mathematical model in the form of an initial value problem that
represents the “flow” of the new currency into circulation
(b) Solve the initial value problem found in part (a).
(c) Find how long it will take for the new bills to account for 90% of the currency
in circulation.
10. In each of the following, find the general solution and, where requested, use it to
solve the initial value problem stated:
(a) y 00 + y 0 − 2y = 0, y(0) = 1, y 0 (0) = 1.
(g) y (iv) − 5y 00 + 4y = 0.
(i) y 000 − 3y 00 + 3y 0 − y = 0.
(k) y 000 + y = 0.
(l) y (iv) +2y 00 −8y 0 +5y = 0, given that r4 +2r2 −8r+5 = (r−1)2 (r+1+2i)(r+1−2i).
(m) y (iv) + 8y 000 + 26y 00 + 48y 0 + 45y = 0, given that r4 + 8r3 + 26r2 + 48r + 45 =
(r + 3)2 (r + 1 + 2i)(r + 1 − 2i).
11. Use the method of undetermined coefficients to solve the following differential equa-
tions or initial value problems
(a) y'' + 4y' + 13y = 26x − 44,
(b) y'' + 10y' + 25y = 50x,
(c) y'' + 3y' + 2y = x^2,
(d) y'' − 2y' = sin(4x),
(e) y'' − 4y' + 5y = e^{−x},
(f) y'' + y = e^x + x^3, where y(0) = 2 and y'(0) = 0,
(g) y'' − y = xe^{3x}, where y(0) = 0 and y'(0) = 1,
(h) y'' + 4y = x,
(i) y'' − 2y' + y = e^{2x}.
2. Exercises — Existence-Uniqueness
1. For each of the following initial value problems
(i) state the largest interval in which there will be a unique solution,
(ii) solve it, if you can
(a) y' + (tan x)y = sin 2x, where y(0) = 1.
(b) (cos x)y' + (sin x)y = 2 sin x cos^2 x, where y(0) = 1.
(c) xy' + 2y = x^2 − x + 1, where y(1) = 1/2.
(d) y' − y/(x − 1) = 4, where y(0) = 3.
(e) (x − 1)y' − y = 4(x − 1), where y(0) = 3.
(f) y' − y/(x + 1) = x, where y(0) = 3.
(g) (cos x)y' + e^{x^2} y = sin x, where y(0) = 1.
(h) xy' + (sin x)y = sin x, where y(0) = 1.
(i) (x log x)y' + y = x^2 log x, where y(e) = e^2/4.
2. For each of the following initial value problems determine whether or not the Fun-
damental Existence and Uniqueness Theorem (for first order ordinary differential
equations) guarantees that the problem has a unique solution in a neighbourhood
of the point indicated.
(i) y' = x^{1/5} y, where y(1) = 0.
(ii) y' = xy^{1/5}, where y(1) = 0.
(iii) y' = y^2/x, where y(0) = 1.
(iv) y' = x/y^2, where y(0) = 1.
3. Does the initial value problem y' = √y sin x, where y(π/2) = 0, have a unique
solution in any neighbourhood of x = π/2? If so
(a) find the unique solution, and
(b) show how the existence follows from the Fundamental Existence and Unique-
ness Theorem.
If not
(a) write down at least two solutions, and
(b) show why the Fundamental Existence and Uniqueness Theorem fails to ap-
ply to this initial value problem.
4. Show how the Existence and Uniqueness Theorem guarantees the existence of a
unique solution of y 0 = x(x + y), where y(2) = 5 near x = 2.
5. (a) State the largest open interval in which you can be sure that the initial
value problem
y' + ( 1/(2(x − 1)) ) y = 3, where y(0) = 1
will have a unique solution and,
(b) solve the initial value problem.
(c) What is the largest x-interval in which you have found a continuous solution.
6. (a) Find all constant solutions and the general solution of
dy/dx = (y + 2)^2/(x − 1);
(b) Solve dy/dx = (y + 2)^2/(x − 1), where y(0) = 0.
7. Verify that the solution of the initial value problem y 0 = (x + 1)(x − ln(x + 1) + 3)
is continuous on (−1, ∞).
3. Exercises — Direction Fields
1. Which of the following pictures best represents the direction field of
y' = (y − 2)/(x − 1)^2
for the range −4 < x < 4, −4 < y < 4?
[Figure: four candidate direction-field plots labelled (A), (B), (C) and (D), each drawn on the window −4 < x < 4, −4 < y < 4.]
(a) y 0 = y 2 (2 − y)
(b) y 0 = y(2 + y)
(c) y 0 = y(2 − y)
(d) y 0 = y 2 (2 + y)
(e) y 0 = y(y 2 − 1)
(f) y 0 = y(1 − y 2 )
4. Exercises — Numerical solutions of differential equations
1. Use a hand calculator and each of the Euler and Runge-Kutta methods with a
single step to find each of the following. Where appropriate, compare with the
analytic solution obtained by separating the variables.
(a) Find y(1.1) if y 0 = xy 2 and y(1) = −2.
(b) Find y(0.1) if y' = x^2/(y(1 + x^3)) and y(0) = 1.
(c) Find y(0.1) and y(−0.1) if y 0 = 4(1 − y) and y(0) = 0.
(d) Find y(0.1) and y(−0.1) if y' = 2y(1 − y) and y(0) = 2.
(e) Find y(0.1) and y(−0.1) if y' = √(sin(xy) + 2) and y(0) = 1.
2. With a hand calculator or Maple (or spreadsheet, computer program etc.), use
the Euler and Runge-Kutta methods with a stepsize of h = 0.1 to find the
following
(a) y(2) if y 0 = xy 2 and y(1) = −2. Compare with the exact answer.
(b) y(1) and y(2) if y' = x^2/(y(1 + x^3)) and y(0) = 1. Compare with the exact
answer.
(c) y(1) and y(3) if y 0 = 4(1 − y) and y(0) = 0. Compare with the exact answer.
(d) y(1) and y(−0.3) if y' = 2y(1 − y) and y(0) = 2. Compare with the exact
answer.
(e) Find y(3) and y(−1) if y' = √(sin(xy) + 2) and y(0) = 1.
3. For the initial value problem y 0 = x(x + y 2 ), where y(0) = 2 use a stepsize of
0.1 to find an approximation to y(1) by Euler’s method and the Runge-Kutta
method. Now repeat these calculations for a stepsize of 0.01.
4. For each of the following initial value problems
(a) Use Euler’s formula with h = 0.1 to find a 4 decimal place approximation
to the required value, then
(b) find a 4 decimal-place approximation using the Runge-Kutta method
(i) y 0 = 2x − 3y + 1, y(1) = 5, find y(1.5).
(ii) y 0 = 1 + y 2 , y(0) = 0, find y(0.5).
(iii) y 0 = e−y , y(0) = 0, find y(0.5).
(iv) y' = xy^2 − y/x, y(1) = 1, find y(1.5).
(c) Use a computer program or spreadsheet to determine the effects of decreas-
ing h to 0.01 in each of the problems (a) and (b).
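For these exercises a few lines of code are enough to implement both methods. The sketch below (plain Python, written here as an illustration rather than as part of the course materials) implements a single-step Euler and classical fourth-order Runge–Kutta update and applies them to exercise 1(a).

# Euler and classical RK4 steppers for y' = f(x, y).
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h*f(x, y)
        x += h
    return y

def rk4(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h*(k1 + 2*k2 + 2*k3 + k4)/6
        x += h
    return y

# Exercise 1(a): y' = x y^2, y(1) = -2; estimate y(1.1) with a single step of h = 0.1.
f = lambda x, y: x*y*y
print(euler(f, 1.0, -2.0, 0.1, 1))   # -1.6
print(rk4(f, 1.0, -2.0, 0.1, 1))     # about -1.6529
# Separating variables gives the exact solution y = -2/x^2, so y(1.1) = -1.6529 (4 d.p.).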
5. Exercises — The Laplace Transform
1. Use the definition of the Laplace transform to calculate the Laplace transform of
(a) e^{4t},
(b) 21t,
(c) 7e^{−t},
(d) −8 cos(3t),
(e) 2 sin(2t),
(f) 2e^t,
(g) f(t) = { 1 if 0 ≤ t ≤ 2;   0 if t > 2 }
(h) f(t) = { 0 if 0 ≤ t ≤ 1;   1 if t > 1 }
(i) f(t) = { 3t + 2 if 0 < t < 5;   0 if t ≥ 5 }
(j) f(t) = { 1 − t if 0 < t < 3;   0 if t ≥ 3 }
(k) f(t) = { cos(t) if 0 ≤ t ≤ π/2;   0 if t > π/2 }
(l) f(t) = { sin(t) if 0 ≤ t ≤ 2π;   0 if t > 2π }
(m) f(t) = { 1 if 0 ≤ t ≤ 10;   −1 if t > 10 }
(n) f(t) = { t if 0 ≤ t < 2;   3 if t ≥ 2 }
2. Use the definitions of cosh(bt) and sinh(bt) and the Laplace transform table to
find the Laplace transform of
(a) cosh(bt), (c) eat cosh(bt),
(b) sinh(bt), (d) eat sinh(bt).
3. Use the Laplace transform table to find the Laplace transform of the following
functions
(a) 3 + 4t4 + e−2t , (l) 8 cosh(4t),
(b) e5t cos(2t), (m) t cos(3t),
(c) −6e−2t sin(3t) + cos(−t), (n) t sin(7t),
(d) t^2 e^{3t} + sin(4t), (o) 3t sin(t),
(e) (1/2) sin(2t) − t cos(2t), (p) (1/6) t sin(3t),
(f ) 4t7 e−2t − 8t sin(3t), (q) t5 e−4t ,
(g) et sin(2t), (r) et sinh(t),
(h) et cos(3t), (s) e7t cosh(t),
(i) e−2t cos(4t), (t) e−2t sin(7t),
(j) e−t sin(5t), (u) e5t cos(7t),
(k) te3t , (v) sin(4t) + 4t cos(4t),
(l) t^2 e^{t/2}, (w) 1 + sin(5t).
4. Use the Laplace transform table to find the inverse Laplace transforms of
(i) 2/(s + 5),
(ii) 1/s^2,
(iii) 1/s^8,
(iv) 1/(s − 2)^2,
(v) 1/(s + 6)^3,
(vi) 1/(s − 5)^6,
(vii) 4/(s − 3)^2,
(viii) (s + 13)/(s^2 + s − 6),
(ix) (2s + 7)/(s^2 + 9),
(x) (3s − 5)/(s^2 + 4),
(xi) 1/(s^2 + 15s + 56),
(xii) s/(s^2 + 16),
(xiii) 5/(s − 3)^4,
(xiv) (2s + 5)/(s^2 − 6s + 13),
(xv) s/(s^2 − 5s − 14),
(xvi) (s + 3)/(s^2 + 6s + 5),
(xvii) 1/(s^2 − 12s + 35),
(xviii) 1/(s^2 + 12s + 61),
(xix) 1/(s^2 + 9),
(xx) (s − 2)/(s^2 − 4s),
(xxi) s^2/(s + 1)^3,
(xxii) (4s + 2)/(4s^2 + 4s + 65),
(xxiii) (s − 7)/(s^2 − 14s + 48),
(xxiv) 1/(s^2 − 4s − 12),
(xxv) 1/(s^2 − 2s + 2),
(xxvi) (s − 1)/(s^2 − 2s + 50),
(xxvii) 1/(s^2 + 2s − 24),
(xxviii) 1/(s^2 + 12s + 37),
(xxix) (s − 7)/(s^2 − 14s + 50),
(xxx) (s + 1)/(s^2 + 2s + 37),
(xxxi) (s + 2)/(s^2 + 4s + 8),
(xxxii) (s^3 + 3s)/(s^2 + 9)^2,
(xxxiii) s/(s^2 + 16)^2,
(xxxiv) 4/(s^2 + 16)^2 = (1/8)( 1/(s^2 + 16) − (s^2 − 16)/(s^2 + 16)^2 ),
(xxxv) (9s + 15)/((s + 1)(s + 2)(s + 3)),
(xxxvi) (7s^2 + s − 3)/((s + 1)^2(s + 2)).
5. Let y be a function of t and Y (s) = L{y(t)}(s) its Laplace Transform.
(a) Let y satisfy the Differential Equation y 00 − 3y 0 + 2y = 6t2 , where y(0) = 2,
y 0 (0) = −1. Use the Laplace Transform table to find Y in terms of s.
(b) Repeat (a) for y 00 − y 0 + y = t2 , where y(0) = 1, y 0 (0) = −2.
(c) Repeat (a) for y'' + y' + 2y = te^{3t}, where y(0) = −1, y'(0) = 2.
6. Solve the following initial value problems by the Laplace Transform method:
(a) y 00 − 2y 0 + y = 4e3t , where y(0) = −1, y 0 (0) = 2.
(b) y 00 − 3y 0 + 2y = 3e−t , where y(0) = 3, y 0 (0) = 4.
(c) y 00 − 2y 0 + 5y = 20e−3t , where y(0) = −2, y 0 (0) = 4.
(d) y 00 + 3y 0 − 4y = 0, where y(0) = 1, y 0 (0) = −9.
(e) y 00 − 3y 0 + 2y = cos(t), where y(0) = 0.1, y 0 (0) = 0.7.
(f ) y 00 + 11y 0 + 24y = 0, where y(0) = −1, y 0 (0) = 0.
(g) y 00 + 3y 0 − 10y = 0, where y(0) = −1, y 0 (0) = 1.
(h) y 00 + 4y = 0, where y(0) = 2, y 0 (0) = 0,
(i) y 00 − 12y 0 + 40y = sin(2t), where y(0) = 1, y 0 (0) = 0.
(j) y 00 − y 0 − 12y = e2t − sin(t), where y(0) = −1, y 0 (0) = 1.
7. Write the following piecewise continuous functions as a combination of unit step
functions
(a) f(t) = { 0 if 0 ≤ t < π;   cos(t) if π ≤ t }
(b) f(t) = { 0 if 0 ≤ t < 2;   1 if 2 ≤ t < 3;   0 if 3 ≤ t }
(c) f(t) = { 0 if 0 ≤ t < 1;   1 if 1 ≤ t < 3;   −3 if 3 ≤ t < 4;   4 if 4 ≤ t }
(d) f(t) = { t^2 if 0 ≤ t < 3;   6t − 9 if 3 ≤ t }
(e) f(t) = { 3 if 0 ≤ t < 2;   t if 2 ≤ t }
(f) f(t) = { 4 if 0 ≤ t < 7;   t + 1 if 7 ≤ t }
(g) f(t) = { 3 if 0 ≤ t < 2;   t^2 if 2 ≤ t < 5;   e^{−t} if 5 ≤ t }
8. Use the Laplace transform method to solve the following initial value problems
(a) y' + y = { 0 for 0 ≤ t < 1;   5 for 1 ≤ t },   where y(0) = 0.
(b) y'' − 3y' + 2y = { 0 for 0 ≤ t < π/2;   10 sin(t − π/2) for π/2 ≤ t },   where y(0) = 1, y'(0) = 2.
2. Decide wether or not the following sets S are fundamental sets of solutions to
the given differential equation.
(a) S = {e−t , e3t , te3t }, y 000 − 5y 00 + 3y 0 + 9y = 0,
(d) S = {e3t , e−2t , e5t , te5t }, y (iv) − 11y 000 + 29y 00 + 35y 0 − 150y = 0,
4. Can {e−t , et , cos(t), sin(t), sin(2t)} be the fundamental set of solutions to a con-
stant coefficient fifth order linear ODE?
5. Can {e2t , et/4 , e−t cos(t), e−t sin(t)} be the fundamental set of solutions to a con-
stant coefficient fifth order linear ODE?
6. Can {e2t , et/4 , e−t , te−t , t2 e−t } be the fundamental set of solutions to a constant
coefficient fourth order linear ODE?
7. Can {e2t , et/4 , e−t cos(t), e−t sin(t)} be the fundamental set of solutions to a con-
stant coefficient fourth order linear ODE?
8. Can {e2t , et/4 , e−t cos(t), et sin(t)} be the fundamental set of solutions to a con-
stant coefficient fourth order linear ODE?
9. In the following problems verify that the functions y1 (x) and y2 (x) are solutions
of the given differential equation. Determine the values of x for which y1 and y2
(and y3 ) are linearly independent.
(a) y 00 − 3y 0 + 2y = 0, y1 (x) = ex , y2 (x) = ex − e2x .
(b) y 00 − 4y 0 + 4y = 0, y1 (x) = e2x , y2 (x) = (2x − 3)e2x .
2. For each of the following series, find the largest interval (a, b) on which it converges:
(a) Σ_{n=0}^∞ x^n/(4^n n^2),
(b) Σ_{n=0}^∞ (−2)^n x^n/n^n,
(c) Σ_{n=0}^∞ x^n/(2^n n!),
(d) Σ_{n=0}^∞ n! x^n/n^n,
(e) Σ_{n=0}^∞ n! x^n,
(f) Σ_{n=0}^∞ (x + 3)^n/(n 2^n).
3. For each of the following differential equations, let Σ_{n=0}^∞ a_n x^n be the general
solution. Find a recurrence relation for an . Hence express a2 , a3 , a4 , a5 , a2k
and a2k+1 in terms of a0 and a1 . Hence express the general solution as a linear
combination of two power series. Determine for which values of x these power
series converge.
(a) (1 + x2 )y 00 − 2y = 0, (d) y 00 − xy 0 + y = 0,
(b) y 00 + xy 0 + y = 0, (e) (2 + x2 )y 00 + 5xy 0 + 4y = 0.
(c) y 00 + 2xy 0 + 4y = 0,
4. Find two linearly independent solutions about x = 0 for the differential equation
(1 − x2 )y 00 + 2y = 0. Hence solve the initial value problem (1 − x2 )y 00 + 2y = 0
where y(0) = 0 and y'(0) = 3. Hence calculate (to 2 decimal places) y(0.5), if
this is possible, and calculate (to 2 decimal places) y(2) if this is possible.
5. Solve the following differential equations about x = 0 by the series method.
Either write the series solution, or give the first five nonzero terms of each series.
(a) y'' - 11y' + 30y = 0,
(b) (2 + 3x)y'' - 3y' - 2y = 0,
(g) y'' + x^2 y' + xy = 0, where y(0) = 0, y'(0) = 1,

(b) x^2 y'' - 2xy' + 2y = 0, for x > 0, given the solution y(x) = x to the homogeneous
equation,
(c) (1 - x)y'' + xy' - y = 2(x - 1)^2 e^{-x}, for 0 < x < 1, given the solution
y(x) = e^x,
(e) All of the questions in the previous question, with given solutions e^{-2x},
e^x, e^x, e^x, 1, e^{2x}, omit (g), x^5, x^2, x^{-1/2} cos(x), e^{-x}, sin(x), e^x, e^x. Which
problems are more efficiently done by this method compared to variation
of parameters?
MATH2310, Semester I, 2006, ODE’s strand — Answers
1. Answers — Solvable DE’s
2. The solutions to the separable equations are
(i) -1/y = y^2/2 + C, y = 0.
(ii) (2/3) y^{3/2} = log|1 + x^3| + C.
(iii) -log|1 - y| = 4x + C, y = 1.
(iv) log|y/(1 - y)| = 2x + C, y = 0, y = 1.
(v) log|y - 1| = tan^{-1} x + C, y = 1.
(vi) log|y + 2| = sin^{-1} x + C, y = -2.
(vii) log|y + 1| = log|x - 1| + C, y = -1.
(viii) e^x + e^{-y} + ye^{-y} = C.
(ix) y = -3 + √(5t^2 + 5).
(x) y = -3.
If no constant solution is given, then none exists.
3.
(i) y = e + Ce.
(ii) y e^{x^2} = x + C.
(iii) y = x^4 + Cx^3.
(iv) y = log x + C/log x.
(v) y = 4e^x(x + 1).
(vii) y = x^{-1}(1 + sin x).
(viii) y = x^2/(2(e + 1)) + x^2/(e + 1).
(ix) x = -2 + 4e^t - t.
4. The initial value problem for the volume V(t) is
dV/dt = 10, where V(0) = 500,
which has solution V = 10t + 500. Hence the tank starts to overflow at t = 50
seconds. The mass M(t) then satisfies the initial value problem
dM/dt + M/(t + 50) = 40, where M(0) = 0 and t ≤ 50.
This has solution M = 20(t + 50) - 50000/(t + 50) until t = 50. At t = 50, M = 1500 and
the concentration of salt is 1.5 mg/litre. After overflow the initial value problem is
dM/dt + M/50 = 40, where M(50) = 1500 and t ≥ 50.
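As a sanity check, the pre-overflow problem can be solved symbolically; the sketch below assumes sympy is available and uses our own variable names.

    # Sketch (assuming sympy is available): verify the pre-overflow solution
    # M(t) = 20(t + 50) - 50000/(t + 50) of dM/dt + M/(t + 50) = 40, M(0) = 0.
    import sympy as sp

    t = sp.symbols('t', nonnegative=True)
    M = sp.Function('M')

    ode = sp.Eq(M(t).diff(t) + M(t) / (t + 50), 40)
    sol = sp.dsolve(ode, M(t), ics={M(0): 0})

    expected = 20 * (t + 50) - 50000 / (t + 50)
    assert sp.simplify(sol.rhs - expected) == 0
    print(sol.rhs.subs(t, 50))   # 1500, the mass of salt when the tank overflows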
5. For the drug concentration problem the solutions are
(a) y(t) = (r/k)(1 - e^{-kt}), so lim_{t→∞} y(t) = r/k.
(b) y(t) = 1 - (1/2)(e^{-t} + cos t - sin t), so lim_{t→∞} y(t) does not exist; the concentration
approaches a periodic state.
(c) y(t) = te^{-t}, so lim_{t→∞} y(t) = 0.
6. The differential equation is dV/dt = -kS, where S(t) is the surface area (in square
centimetres) of the solid at time t (in seconds) when the solid's volume is V (in cubic
centimetres). If the mothball, initially spherical, retains its spherical symmetry during
sublimation, then V(t) = (4/3)πr^3(t), with S(t) = 4πr^2(t). Therefore
4πr^2 dr/dt = -k·4πr^2, and so dr/dt = -k.
Thus r(t) = -kt + c. If r(0) = 1/2 = c and r(T) = 0 = -kT + 1/2, where T is the
number of seconds in 6 months, then k = 1/(2T). Now if r(0) = 1, then r(t) = 1 - kt,
which is zero when t = 1/k = 2T seconds.
That is, a 1 centimetre radius mothball takes 12 months to disappear. Thus the
room has 12 months coverage using either method, but two mothballs of radius 1/2
would cost only 1/4 as much as one ball of 1 centimetre radius!
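The key reduction dV/dt = -kS implying dr/dt = -k can also be checked symbolically; the following is a sketch assuming sympy is available.

    # Sketch (assuming sympy is available): for a sphere, dV/dt = -k*S forces
    # dr/dt = -k, which is the step the mothball argument relies on.
    import sympy as sp

    t, k = sp.symbols('t k', positive=True)
    r = sp.Function('r')

    V = sp.Rational(4, 3) * sp.pi * r(t)**3    # volume of a sphere of radius r(t)
    S = 4 * sp.pi * r(t)**2                    # surface area of the same sphere

    drdt = sp.solve(sp.Eq(V.diff(t), -k * S), r(t).diff(t))
    print(drdt)   # [-k]: the radius decreases at the constant rate k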
7. The answers are
(a) 5730 years.
(b) About 2949 years.
(c) It would change it to 2865 years.
(d) Between 78.39 and 78.93 percent.
8. If T(t) is the surface temperature (in degrees Celsius) at time t (in minutes) then
dT/dt = -k(T - 25), and so T(t) = 25 + De^{-kt},
where D is a constant. Since T(0) = 95 = 25 + D, we have D = 70. Since T(1) = 90 =
25 + 70e^{-k}, we have k = log(70/65) ≈ 0.074. Hence T(t) = 65 when t = (1/k) log(70/40) ≈ 7.6
minutes.
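The two numerical values can be reproduced directly (plain Python, standard library only):

    # Reproduce the constants in the cooling answer.
    import math

    k = math.log(70 / 65)            # from T(1) = 90: 70*exp(-k) = 65
    t_65 = math.log(70 / 40) / k     # time at which 25 + 70*exp(-k*t) = 65

    print(round(k, 3), round(t_65, 1))   # approximately 0.074 and 7.6 minutes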
9. Using 1 billion dollars as the unit for x and 1 day as the unit of t, we obtain the
initial value problem
dx/dt = 0.005(10 - x), where x(0) = 0.
The reasoning that leads to the differential equation is as follows. Initially, there
is $10 billion of old currency in circulation, so all of the $50 million returned to
the banks is old. At time t, the amount of new currency is x(t) billion dollars, so
10 - x(t) billion dollars of currency is old. The fraction of circulating money that is
old is (10 - x(t))/10, and the amount of old currency being returned to the banks each
day is ((10 - x(t))/10) × 0.05 billion dollars. Note: $50 million = $0.05 billion. The same
amount of new currency is introduced into circulation each day, so
dx/dt = ((10 - x)/10) × 0.05 = 0.005(10 - x) billion dollars per day.
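Solving this initial value problem gives x(t) = 10(1 - e^{-0.005t}) billion dollars of new currency after t days; the sketch below (assuming sympy is available) reproduces that.

    # Sketch (assuming sympy is available): solve dx/dt = 0.005*(10 - x), x(0) = 0.
    import sympy as sp

    t = sp.symbols('t', nonnegative=True)
    x = sp.Function('x')

    sol = sp.dsolve(sp.Eq(x(t).diff(t), sp.Rational(1, 200) * (10 - x(t))),
                    x(t), ics={x(0): 0})
    print(sp.simplify(sol.rhs))   # 10 - 10*exp(-t/200), i.e. x(t) = 10*(1 - e^(-0.005*t))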
10. The general solutions and the solutions to the IVP’s are
(a) y = Ae^{-2x} + Be^x, y = e^x.
(b) y = Ae^{3x} + Bxe^{3x}, y = 4e^{3x} - 2xe^{3x}.
(c) y = Ae^{-x} cos(2x) + Be^{-x} sin(2x), y = 2e^{-x} cos(2x) + e^{-x} sin(2x).
(d) y = A + Be^{-3x}, y = -1.
(e) y = Ae^{2x} + Be^{-2x}, y = sinh(1) cosh 2(x - 1) [recall cosh 2(x - 1) = (1/2)(e^{2x-2} + e^{-(2x-2)})].
(f) y = A sin(5x) + B cos(5x), y = -(4/5) sin(5x).
(g) y = Ae^x + Be^{-x} + Ce^{2x} + De^{-2x}.
(h) y = Ae^{-x} + Be^x + Cxe^x, y = e^{-x} + e^x + xe^x.
(i) y = Ae^x + Bxe^x + Cx^2 e^x.
(j) y = A + B sin(x) + C cos(x), y = 2 + sin(x) - 2 cos(x).
(k) y = Ae^{-x} + Be^{x/2} cos((√3/2)x) + Ce^{x/2} sin((√3/2)x).
[Graphs omitted: solution curves y(x) plotted against x on -1 ≤ x ≤ 6 for parts (c), (d), (e) and (f), together with a further plot of y(x) on a vertical scale from 0 to 100.]
4. (a) (a) 1.8207, (b) 2.0533.
   (b) (a) 0.5315, (b) 0.5463.
   (c) (a) 0.4198, (b) 0.4055.
   (d) (a) 1.2194, (b) 1.333.
5. Answers — Laplace Transform
1. The answers are
(a) 1/(s - 4),
(b) 21/s^2,
(c) 7/(s + 1),
(d) -8s/(s^2 + 9),
(e) 4/(s^2 + 4),
(f) 2/(s - 1),
(g) (1/s)(1 - e^{-s}),
(h) e^{-s}/s,
(i) 3/s^2 + 2/s - e^{-5s}(3/s^2 + 17/s),
(j) (1/s^2)(e^{-3s}(2s + 1) + s - 1),
(k) (1/(s^2 + 1))(e^{-πs/2} + s),
(l) (1 - e^{-2πs})/(s^2 + 1),
(m) (1/s)(1 - 2e^{-10s}),
(n) 1/s^2 + e^{-2s}(1/s - 1/s^2).

(k) 1/(s - 3)^2,
(l) 2/(s + 1/2)^3,
(m) 8s/(s^2 - 16),
(n) (s^2 - 9)/(s^2 + 9)^2,
(o) 14s/(s^2 + 49)^2,
(p) 6s/(s^2 + 1)^2,
(q) s/(s^2 + 9)^2,
(r) 120/(s + 4)^6,
(s) 1/(s^2 - 2s),
(t) (s - 7)/(s^2 - 14s + 48),
(u) 7/(s^2 + 4s + 53),
(v) (s - 5)/(s^2 - 10s + 74),
(w) 4/(s^2 + 16) + (4s^2 - 64)/(s^2 + 16)^2 = 8s^2/(s^2 + 16)^2,
(x) 1/s + 5/(s^2 + 25).
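Individual table entries can be spot-checked symbolically; for instance, 1/(s - 3)^2 in (k) is the standard transform of te^{3t}. The sketch below assumes sympy is available.

    # Sketch (assuming sympy is available): check the pair L{t*exp(3t)} = 1/(s - 3)^2,
    # which matches entry (k) in the second list above.
    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    F = sp.laplace_transform(t * sp.exp(3 * t), t, s, noconds=True)
    print(sp.simplify(F - 1 / (s - 3)**2))   # 0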
4. The answers are
(i) 2e^{-5t},
(ii) t,
(iii) (1/7!) t^7,
(iv) te^{2t},
(v) (1/2) t^2 e^{-6t},
(vi) (1/5!) t^5 e^{5t},
(vii) 4te^{3t},
(viii) -2e^{-3t} + 3e^{2t},
(ix) 2 cos(3t) + (7/3) sin(3t),
(x) 3 cos(2t) - (5/2) sin(2t),
(xi) e^{-7t} - e^{-8t},
(xii) cos(4t),
(xiii) (5/6) t^3 e^{3t},
(xiv) 2e^{3t} cos(2t) + (11/2) e^{3t} sin(2t),
(xv) (2/9) e^{-2t} + (7/9) e^{7t},
(xvi) (1/2) e^{-t} - (1/2) e^{-5t},
(xvii) (1/2) e^{7t} - (1/2) e^{5t},
(xviii) (1/5) e^{-6t} sin(5t),
(xix) (1/3) sin(3t),
(xx) 1/2 + (1/2) e^{4t},
(xxi) e^{-t}(1 - 2t + (1/2)t^2),
(xxii) e^{-t/2} cos(4t),
(xxiii) (1/2) e^{6t} + (1/2) e^{8t},
(xxiv) -(1/8) e^{-2t} + (1/8) e^{6t},
(xxv) e^t sin(t),
(xxvi) e^t cos(7t),
(xxvii) (1/10) e^{4t} - (1/10) e^{-6t},
(xxviii) e^{-6t} sin(t),
(xxix) e^{7t} cos(t),
(xxx) e^{-t} cos(6t),
(xxxi) e^{-2t} cos(2t),
(xxxii) cos(3t) - t sin(3t),
(xxxiii) (1/8) t sin(4t),
(xxxiv) (1/32)(sin(4t) - 4t cos(4t)),
(xxxv) 3e^{-t} + 3e^{-2t} - 6e^{-3t},
(xxxvi) -16e^{-t} + 3te^{-t} + 23e^{-2t}.
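The functions whose transforms are being inverted are not reproduced here, but entries can still be sanity-checked by transforming forward; for example, (xxxiii) is consistent with the standard pair L{t sin(4t)} = 8s/(s^2 + 16)^2. The sketch below assumes sympy is available.

    # Sketch (assuming sympy is available): (1/8)*t*sin(4t), entry (xxxiii) above,
    # transforms forward to s/(s^2 + 16)^2.
    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    F = sp.laplace_transform(t * sp.sin(4 * t) / 8, t, s, noconds=True)
    print(sp.simplify(F - s / (s**2 + 16)**2))   # 0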
(l) y(t) = (1/6)t^3 u_1(t) - (1/3)t^3 u_2(t) + (1/6)t^3 u_3(t).
(m) y(t) = (1/π^2)t - (1/π^3) sin(πt) - ((1/π^2)t + (1/π^3) sin(πt)) u_1(t).
6. Answers — the Fourier Transform
1. The answers are
[Graphs omitted: the answers to parts i)-v) are sketches plotted against x on -8 ≤ x ≤ 8.]
(c) not a fundamental set of solutions because there are not enough functions.
(d) a fundamental set of solutions.
4. No, the sin(2t) solution comes from the root 2i of the characteristic equation.
Since the coefficients of the characteristic equation are real, -2i must also be a
root, and then the characteristic equation would have 6 roots, which is impossible
for a degree 5 equation.
5. No, there are not enough functions for a fundamental set of solutions.
6. No, there are too many functions. The set indicates that the characteristic
equation has 5 roots, which is impossible for a degree 4 equation.
8. No, the functions e^{-t} cos(t) and e^t sin(t) correspond to two non-conjugate pairs
of complex roots of the characteristic equation. Hence the characteristic equation
would have to have 6 roots, which is impossible for a fourth order equation.
9. Missing answer
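For question 2(d), the conclusion can be checked directly: the characteristic polynomial r^4 - 11r^3 + 29r^2 + 35r - 150 has roots 3, -2 and a double root 5, and the Wronskian of the proposed set is nonzero. The sketch below assumes sympy is available.

    # Sketch (assuming sympy is available): check answer 2(d) via the roots of
    # the characteristic polynomial and the Wronskian of the proposed set.
    import sympy as sp

    t, r = sp.symbols('t r')

    char_poly = r**4 - 11 * r**3 + 29 * r**2 + 35 * r - 150
    print(sp.roots(char_poly))   # {3: 1, -2: 1, 5: 2}

    S = [sp.exp(3 * t), sp.exp(-2 * t), sp.exp(5 * t), t * sp.exp(5 * t)]
    print(sp.simplify(sp.wronskian(S, t)))   # nonzero, so S is linearly independent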
8. Answers — Series
1. The values of k are: (a) 5, (b) 3.
2. The intervals are: (a) (−4, 4), (b) (−∞, ∞), (c) (−∞, ∞), (d) (−e, e), (e)
there is no such interval, (f ) (−5, −1).
3. The solutions are:
(a) a_{n+2} = -((n - 2)/(n + 2)) a_n, a_2 = a_0, a_3 = a_1/3, a_4 = 0, a_5 = -a_1/15. In general
a_{2k} = 0 for k > 1, a_{2k+1} = (-1)^{k+1} a_1/((2k - 1)(2k + 1)). y(x) = a_0 y_1(x) + a_1 y_2(x)
where y_1(x) = 1 + x^2 converges for all x and y_2(x) converges for -1 ≤ x ≤ 1.
(b) a_{n+2} = -a_n/(n + 2), a_2 = -a_0/2, a_3 = -a_1/3, a_4 = a_0/8, a_5 = a_1/15. In general
a_{2k} = ((-1)^k/(2^k k!)) a_0, a_{2k+1} = ((-1)^k 2^k k!/(2k + 1)!) a_1. y(x) = a_0 y_1(x) + a_1 y_2(x)
where y_1, y_2 converge for all x.
(c) a_{n+2} = -(2/(n + 1)) a_n, a_2 = -2a_0, a_3 = -a_1, a_4 = (4/3)a_0, a_5 = a_1/2. In general
a_{2k} = ((-1)^k 2^{2k} k!/(2k)!) a_0, a_{2k+1} = ((-1)^k/k!) a_1. y(x) = a_0 y_1(x) + a_1 y_2(x),
where y_1, y_2 converge for all x.
(d) a_{n+2} = ((n - 1)/((n + 1)(n + 2))) a_n, a_2 = -a_0/2, a_3 = 0, a_4 = -a_0/24, a_5 = 0. In
general a_{2k} = -a_0/((2k - 1) 2^k k!), a_{2k+1} = 0 if k > 0. y(x) = a_0 y_1(x) + a_1 y_2(x)
where y_2(x) = x and y_1(x) converges for all x.
(e) a_{n+2} = -((n + 2)/(2(n + 1))) a_n, a_2 = -a_0, a_3 = -(3/4)a_1, a_4 = (2/3)a_0, a_5 = (15/32)a_1.
In general a_{2k} = ((-1)^k k!/(1·3·5···(2k - 1))) a_0 and a_{2k+1} = ((-1)^k 1·3·5···(2k + 1)/(2^{2k} k!)) a_1.
y(x) = a_0 y_1(x) + a_1 y_2(x), where y_1, y_2 converge for |x| < √2.
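The recurrences can be checked mechanically; for example, the sketch below (assuming sympy is available) builds a partial sum from the recurrence in (b), a_{n+2} = -a_n/(n + 2), and confirms that the residual of y'' + xy' + y consists only of truncation terms.

    # Sketch (assuming sympy is available): check the recurrence in answer 3(b).
    import sympy as sp

    x, a0, a1 = sp.symbols('x a0 a1')
    N = 10

    a = [a0, a1]
    for n in range(N - 1):
        a.append(-a[n] / (n + 2))     # a_{n+2} = -a_n/(n + 2)

    y = sum(a[n] * x**n for n in range(N + 1))
    residual = sp.expand(y.diff(x, 2) + x * y.diff(x) + y)

    # Every coefficient up to degree N - 2 cancels; only truncation terms remain.
    print(sp.series(residual, x, 0, N - 1))   # O(x**9)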
4. y(x) = a_0 y_1(x) + a_1 y_2(x) where y_1(x) = 1 - x^2 and y_2(x) = x - \sum_{n=1}^\infty x^{2n+1}/((2n + 1)(2n - 1)),
which converges for |x| < 1. The solution to the initial value problem is y(x) =
3y_2(x), so y(0.5) ≈ 1.37; we cannot determine y(2) because y_2(x) diverges there.
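The quoted value of y(0.5) can be confirmed with a quick partial sum (plain Python):

    # Evaluate y(0.5) = 3*(x - sum_{n>=1} x^(2n+1)/((2n+1)(2n-1))) at x = 0.5.
    x = 0.5
    tail = sum(x**(2 * n + 1) / ((2 * n + 1) * (2 * n - 1)) for n in range(1, 60))
    print(round(3 * (x - tail), 2))   # 1.37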
5. The solutions are:
(a) y(x) = a_0 + a_1 x + (-15a_0 + (11/2)a_1)x^2 + (-55a_0 + (91/6)a_1)x^3 + (-(455/4)a_0 + (671/24)a_1)x^4 + ...
(b) y(x) = a_0 + a_1 x - (1/4)a_1 x^3 + (3/16)a_1 x^4 - (9/80)a_1 x^5 + ...
(c) y(x) = a_0 + a_0 x + (1/2)a_0 x^2 + (1/3!)a_0 x^3 + (1/4!)a_0 x^4 + ...
(d) y(x) = a_0 + (1/3)a_0 x^3 + (1/18)a_0 x^6 + (1/(3^3 3!))a_0 x^9 + (1/(3^4 4!))a_0 x^{12} + ...
(e) y(x) = a_0 + a_1 x - (1/2)a_0 x^2 - (1/3)a_1 x^3 + ((-1)^2/(2^2 2!))a_0 x^4 + ((-2)^2 2!/5!)a_1 x^5 + ...
(f) y(x) = a_0 + a_1 x + (1/2)a_0 x^2 - (1/8)a_0 x^4 + (2(-1)^2 (6-3)!/(2^{6-2} 3! (3-2)!))a_0 x^6 + ...
(a) y_p(x) = ue^{-2x} + ve^{-3x} where u'e^{-2x} + v'e^{-3x} = 0 and -2u'e^{-2x} - 3v'e^{-3x} = 6,
so y_p(x) = 1.
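The arithmetic behind (a) can be reproduced symbolically; the sketch below assumes sympy is available, with up and vp standing in for u'(x) and v'(x).

    # Sketch (assuming sympy is available): solve the variation-of-parameters
    # system in (a) for u'(x) and v'(x), integrate, and recover y_p = 1.
    import sympy as sp

    x = sp.symbols('x')
    up, vp = sp.symbols('up vp')      # stand-ins for u'(x) and v'(x)

    eqs = [sp.Eq(up * sp.exp(-2 * x) + vp * sp.exp(-3 * x), 0),
           sp.Eq(-2 * up * sp.exp(-2 * x) - 3 * vp * sp.exp(-3 * x), 6)]
    sol = sp.solve(eqs, [up, vp])     # u' = 6*exp(2x), v' = -6*exp(3x)

    u = sp.integrate(sol[up], x)      # u(x) = 3*exp(2x)
    v = sp.integrate(sol[vp], x)      # v(x) = -2*exp(3x)
    print(sp.simplify(u * sp.exp(-2 * x) + v * sp.exp(-3 * x)))   # 1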