Diff Eq Formulas
Abstract. These notes give a brief summary of the major techniques of the class, and an
example for each. The descriptions are kept short so that this can be a useful, quick reference.
1. Executive Summary
We record below the types of equations we can solve. In the next sections we give more
details, including conditions on the functions and, if possible, explicit solutions. This section
is meant to be a quick list.
• Linear, constant coefficient difference equations: a_{n+1} = c_1 a_n + c_2 a_{n-1} + c_3 a_{n-2} + · · · + c_k a_{n-k+1}; for example, a_{n+1} = 3a_n + 4a_{n-1} − 2a_{n-2}. See §2.
• Exact equations: M (x, y) + N (x, y)dy/dx = 0 with ∂M/∂y = ∂N/∂x. See §3.3.
• Linear systems: \vec{x}\,'(t) = A\vec{x}(t) + \vec{g}(t) with \vec{x}(0) = \vec{x}_0. See §6.
2. Difference Equations
2.1. Linear, constant coefficient difference equations.
Statement: Let k be a fixed integer and c_1, . . . , c_k given real numbers. Then the general solution of the difference equation
a_{n+1} = c_1 a_n + c_2 a_{n-1} + c_3 a_{n-2} + · · · + c_k a_{n-k+1}
is
a_n = γ_1 r_1^n + · · · + γ_k r_k^n
if the characteristic polynomial
r^k − c_1 r^{k-1} − c_2 r^{k-2} − · · · − c_k = 0
has k distinct roots r_1, . . . , r_k. Here the γ_1, . . . , γ_k are any k real numbers; if initial conditions are given, these conditions determine the γ_i's.
Example: Consider the equation a_{n+1} = 5a_n − 6a_{n-1}. In this case k = 2 and we find the characteristic polynomial is r^2 − 5r + 6 = (r − 2)(r − 3), which clearly has roots r_1 = 2 and r_2 = 3. Thus the general solution is a_n = γ_1 2^n + γ_2 3^n. If we are given a_0 = 1 and a_1 = 2, this leads to the system of equations 1 = γ_1 + γ_2 and 2 = γ_1 · 2 + γ_2 · 3, which has the solution γ_1 = 1 and γ_2 = 0.
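A quick way to sanity check this (a minimal Python sketch, added here and not part of the notes' derivation) is to iterate the recurrence directly and compare with the closed form a_n = 1 · 2^n + 0 · 3^n = 2^n:

a = [1, 2]                                # a_0 = 1, a_1 = 2
for n in range(1, 10):
    a.append(5 * a[n] - 6 * a[n - 1])     # a_{n+1} = 5 a_n - 6 a_{n-1}
for n, an in enumerate(a):
    assert an == 2 ** n                   # closed form a_n = 2^n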
Statement: For a differential equation of the form y'(t) + p(t)y(t) = g(t), the general solution is
y(t) = \frac{1}{\mu(t)} \left[ \int \mu(t) g(t)\, dt + C \right],
where
\mu(t) = \exp\left( \int p(t)\, dt \right)
and C is a free constant (if an initial condition is given, then C can be determined uniquely).
Example: Consider the equation y'(t) − 2t y(t) = \exp(t^2 + t). Then
\mu(t) = \exp\left( \int -2t\, dt \right) = \exp(-t^2),
and
y(t) = \frac{1}{\exp(-t^2)} \left[ \int \exp(-t^2)\, \exp(t^2 + t)\, dt + C \right] = \frac{1}{\exp(-t^2)} \left( \exp(t) + C \right).
If we have the initial condition y(0) = 2 then we find 2 = 1 + C, or C = 1.
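The example can also be checked symbolically; here is a minimal sympy sketch (an independent check, not part of the derivation above), using the solution with C = 1:

import sympy as sp

t = sp.symbols('t')
y = sp.exp(t**2) * (sp.exp(t) + 1)                           # solution found above, C = 1
print(sp.simplify(y.diff(t) - 2*t*y - sp.exp(t**2 + t)))     # 0, so the ODE is satisfied
print(y.subs(t, 0))                                          # 2, matching y(0) = 2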
Applications: To be added.
Statement: For a differential equation of the form M(x) + N(y)\, dy/dx = 0 the general solution is
\int_{x_0}^{x} M(s)\, ds + \int_{y_0}^{y} N(s)\, ds = 0,
where we are using the shorthand notation y_0 = y(x_0) and y = y(x). We could also write the solution as
\int M(x)\, dx + \int N(y)\, dy = C,
and then determine C from the initial conditions. NOTE: if we can write the differential equation as y' = F(v) for v = y/x, then we can convert this to a separable equation: y = vx so v + xv' = F(v), or -\frac{1}{x} + \frac{v'}{F(v) - v} = 0.
Example: Consider the equation 3x^2 + \cos(y)\, y' = 0. Then M(x) = 3x^2, N(y) = \cos(y), so the solution is
\int 3x^2\, dx + \int \cos(y)\, dy = C,
or
x^3 + \sin(y(x)) = C.
If we are told y(1) = \pi, then C = 1^3 + \sin(\pi) = 1.
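To verify the implicit solution, one can differentiate it and recover the original equation; here is a minimal sympy sketch (an added check, not from the notes):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
psi = x**3 + sp.sin(y(x))                  # left side of the implicit solution
print(sp.diff(psi, x))                     # 3*x**2 + cos(y(x))*y'(x), the original equation
print(psi.subs(y(x), sp.pi).subs(x, 1))    # 1, so C = 1 for y(1) = pi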
Statement: Consider M (x, y) + N (x, y)dy/dx = 0 with ∂M/∂y = ∂N/∂x. Then there is
a function ψ(x, y) such that the solution to the differential equation is given by ψ(x, y) = C,
with C determined by the initial conditions. One way to find ψ is as follows. Our problem
implies that ∂ψ/∂x = M and ∂ψ/∂y = N . Thus
\psi(x, y) = \int M(x, y)\, dx + g(y)
\psi(x, y) = \int N(x, y)\, dy + h(x),
and then determine g(y) and h(x) so that the two expressions are equal. NOTE: sometimes
it is possible to multiply a differential equation by an integrating factor and convert it to an
exact equation; unfortunately in practice it is hard to find an integrating factor which works.
Example: Consider
(3x^2 − 2xy + 2) + (6y^2 − x^2 + 3)\, dy/dx = 0.
Thus M(x, y) = 3x^2 − 2xy + 2, N(x, y) = 6y^2 − x^2 + 3 and ∂M/∂y = ∂N/∂x = −2x. Thus the differential equation is exact, and we have
\psi(x, y) = \int (3x^2 − 2xy + 2)\, dx + g(y) = x^3 − x^2 y + 2x + g(y)
\psi(x, y) = \int (6y^2 − x^2 + 3)\, dy + h(x) = 2y^3 − x^2 y + 3y + h(x).
Therefore we need
x^3 − x^2 y + 2x + g(y) = 2y^3 − x^2 y + 3y + h(x),
which is possible if we let h(x) = x^3 + 2x and g(y) = 2y^3 + 3y. In other words,
\psi(x, y) = x^3 − x^2 y + 2x + 2y^3 + 3y
solves the original equation.
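One can confirm that this ψ has the required partial derivatives; a minimal sympy sketch (an added check, not part of the notes):

import sympy as sp

x, y = sp.symbols('x y')
M = 3*x**2 - 2*x*y + 2
N = 6*y**2 - x**2 + 3
psi = x**3 - x**2*y + 2*x + 2*y**3 + 3*y
print(sp.simplify(psi.diff(x) - M))    # 0, so psi_x = M
print(sp.simplify(psi.diff(y) - N))    # 0, so psi_y = N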
Applications:
Statement: Consider d^2y/dt^2 + a\, dy/dt + by = 0. Guessing solutions of the form e^{rt} leads to studying the characteristic polynomial r^2 + ar + b; let r_1 and r_2 be the two roots. If the roots are distinct, all solutions are of the form y(t) = c_1 e^{r_1 t} + c_2 e^{r_2 t}, where c_1 and c_2 are determined by two initial conditions (often y(0) and y'(0), though we could have y at two times). If r_1 = r_2 = r, the general solution is instead of the form y(t) = c_1 e^{rt} + c_2 t e^{rt}.
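For a concrete illustration (the coefficients below are an illustrative choice, not an example from the notes), sympy reproduces both the characteristic roots and the stated form of the general solution:

import sympy as sp

t, r = sp.symbols('t r')
y = sp.Function('y')
print(sp.roots(r**2 - 5*r + 6, r))        # {2: 1, 3: 1}: distinct roots r_1, r_2
print(sp.dsolve(y(t).diff(t, 2) - 5*y(t).diff(t) + 6*y(t), y(t)))
# y(t) = C1*exp(2*t) + C2*exp(3*t), up to how sympy groups the constants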
Applications: Physics and engineering problems include motion with friction proportional
to velocity, masses on springs, circuit theory.
For a forcing term of the form sin(βt) or cos(βt), the guess is A sin(βt) + B cos(βt), with A and B free parameters.
Application: See the passages in the book about spring motion with external driving
forces.
Example: Consider y''(t) + 4y'(t) + 4y(t) = \cosh t, where the hyperbolic cosine is given by \cosh t = (e^t + e^{-t})/2 (and \sinh t = (e^t − e^{-t})/2). The two solutions to the homogeneous differential equation are y_1(t) = e^{-2t} and y_2(t) = t e^{-2t} (as there is a repeated root in the characteristic polynomial). The Wronskian is W(y_1, y_2)(t) = e^{-4t} \neq 0, so the solutions are linearly independent (and we can divide by the Wronskian for all t); the particular solution is now found by performing the specified integrals.
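Carrying out those integrals gives the particular solution e^t/18 + e^{-t}/2 (this value is computed here as a check and is not stated in the notes); a minimal sympy verification:

import sympy as sp

t = sp.symbols('t')
L = lambda f: f.diff(t, 2) + 4*f.diff(t) + 4*f        # the operator y'' + 4y' + 4y
print(sp.simplify(L(sp.exp(-2*t))), sp.simplify(L(t*sp.exp(-2*t))))   # 0 0: homogeneous solutions
Y = sp.exp(t)/18 + sp.exp(-t)/2                       # candidate particular solution
print(sp.simplify(L(Y) - sp.cosh(t)))                 # 0, so Y solves the equation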
Application:
5. Series Solution
Statement: Consider p(x)y''(x) + q(x)y'(x) + r(x)y(x) = 0 with p(x_0) \neq 0. One can guess y(x) = \sum_{n=0}^{\infty} a_n (x − x_0)^n and attempt to determine a series expansion for the solution. Doing so involves finding recurrence relations for the a_n's (we'll have sums of infinite series expansions equalling zero, and this can only happen if the coefficient of x^m vanishes for all m). Typically there will be two free parameters, and one checks whether the Wronskian is non-zero to see if the solutions are linearly independent (i.e., if they generate a fundamental set of solutions). To do so does not require us to know all the coefficients a_n of each solution, but only the constant and linear terms. One must investigate the convergence properties of the expansion, which, not surprisingly, is related to properties of p(x), q(x) and r(x). For a review of Taylor series, you can see my notes at
http://www.williams.edu/go/math/sjmiller/public html/103/MVT TaylorSeries.pdf.
Example: Consider the equation (1 − x)y''(x) + y(x) = 0 about x_0 = 0, and guess y(x) = \sum_{n=0}^{\infty} a_n x^n. Substituting the series and differentiating term by term leads to
\sum_{n=2}^{\infty} n(n−1)a_n x^{n−2} − \sum_{n=2}^{\infty} n(n−1)a_n x^{n−1} + \sum_{n=0}^{\infty} a_n x^n = 0.
We want all sums to be in powers x^m, so we must shift the indices of summation. The last term is the easiest – we simply let m = n. For the other two, it is easiest to do the shift of summation slowly. For the first term, let m = n − 2 (we choose this as x^{n−2} will become x^m). Thus n = m + 2 and as n ran from 2 to \infty, m runs from 0 to \infty. Thus this sum becomes \sum_{m=0}^{\infty} (m+2)(m+1)a_{m+2} x^m. A similar analysis shows the second term becomes \sum_{m=1}^{\infty} (m+1)m\, a_{m+1} x^m. We thus find
\sum_{m=0}^{\infty} (m+2)(m+1)a_{m+2} x^m − \sum_{m=1}^{\infty} (m+1)m\, a_{m+1} x^m + \sum_{m=0}^{\infty} a_m x^m = 0.
We combine the terms. Note that two sums start at m = 0 while one starts at m = 1. We
thus group the two m = 0 terms together and then combine the three sums from m = 1 to
∞ and find
(2 \cdot 1\, a_2 + a_0) + \sum_{m=1}^{\infty} \left[ (m+2)(m+1)a_{m+2} − (m+1)m\, a_{m+1} + a_m \right] x^m = 0.
Setting each coefficient equal to zero gives a_2 = −a_0/2 and, for m \geq 1, the recurrence a_{m+2} = \left[ (m+1)m\, a_{m+1} − a_m \right] / \left[ (m+2)(m+1) \right]; taking (a_0, a_1) = (1, 0) and (0, 1) gives two solutions y_1(x) and y_2(x), whose Wronskian at x = 0 is 1. Note we are able to show y_1(x) and y_2(x) are linearly independent without having computed all terms in the expansion! Finally, it is worth noting that we can show the two series expansions converge for all |x| < 1. To see this, note
|a_{m+2}| \leq \frac{(m+1)m\, |a_{m+1}| + |a_m|}{(m+2)(m+1)} \leq \frac{m^2 + m + 1}{m^2 + 2m + 1} \max(|a_{m+1}|, |a_m|) < \max(|a_{m+1}|, |a_m|).
Thus for m \geq 3, |a_m| \leq \max(|a_0|, |a_1|), and by the comparison test \sum_{m=0}^{\infty} a_m x^m will converge
for |x| < 1. It is not surprising that our analysis for convergence breaks down at x = 1 (to be fair, the above analysis just doesn't provide any information about what happens at x = 1; to have the series converge for |x| ≥ 1 we would need the |a_m| to decay rapidly, and a more involved analysis shows that this is not the case). Note that the coefficient of y''(x) is 1 − x, and thus when x = 1 this coefficient is zero. This means that at x = 1 we do not have an ordinary point, and there is a marked change in the nature of the differential equation.
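As a numerical illustration (a sketch added here, not part of the notes), one can generate the coefficients of y_1 from the recurrence and compare the partial sum at a point inside |x| < 1 with a direct numerical integration of (1 − x)y'' + y = 0:

import numpy as np
from scipy.integrate import solve_ivp

a = [1.0, 0.0]                                   # a_0 = 1, a_1 = 0 picks out y_1(x)
for m in range(40):                              # a_{m+2} = ((m+1)m a_{m+1} - a_m) / ((m+2)(m+1))
    a.append(((m + 1) * m * a[m + 1] - a[m]) / ((m + 2) * (m + 1)))

x0 = 0.5
series_val = sum(c * x0**n for n, c in enumerate(a))

# integrate y'' = -y/(1 - x) as a first order system, starting from y(0) = 1, y'(0) = 0
sol = solve_ivp(lambda x, u: [u[1], -u[0] / (1 - x)], (0.0, x0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12)
print(series_val, sol.y[0, -1])                  # the two values agree closely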
Application:
The situation is only slightly more involved if \vec{g}(t) is not zero (i.e., the nonhomogeneous case), provided that A is diagonalizable. If A = S\Lambda S^{-1} (with S as above), then we can uncouple this system and reduce it to n first order differential equations which can be solved by integrating factors.
In particular, let \vec{x}(t) = S\vec{y}(t), or \vec{y}(t) = S^{-1}\vec{x}(t) (we can make this change of variables as S is invertible). As \vec{x}\,'(t) = S\vec{y}\,'(t), we find
\vec{x}\,'(t) = A\vec{x}(t) + \vec{g}(t)
S\vec{y}\,'(t) = AS\vec{y}(t) + \vec{g}(t)
\vec{y}\,'(t) = S^{-1}AS\vec{y}(t) + S^{-1}\vec{g}(t)
\vec{y}\,'(t) = \Lambda\vec{y}(t) + \vec{h}(t),
where \vec{h}(t) = S^{-1}\vec{g}(t). This leads to n uncoupled first order linear differential equations
y_i'(t) = \lambda_i y_i(t) + h_i(t),
which can be solved by integrating factors: if \mu_i(t) = \exp(-\lambda_i t) then
y_i(t) = \frac{1}{\mu_i(t)} \left[ \int \mu_i(t) h_i(t)\, dt + C \right] = e^{\lambda_i t} \left[ \int e^{-\lambda_i t} h_i(t)\, dt + C \right].
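Here is a minimal sympy sketch of this uncoupling procedure (the matrix is the one from the example that follows, while the forcing term \vec{g}(t) = (e^t, 0)^T is an illustrative choice, not taken from the notes):

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [2, 1]])
g = sp.Matrix([sp.exp(t), 0])                    # illustrative forcing term
S, Lam = A.diagonalize()                         # A = S * Lam * S**(-1)
h = S.inv() * g                                  # h = S^{-1} g
y = []
for i in range(2):
    lam, hi = Lam[i, i], h[i]
    y.append(sp.exp(lam*t) * sp.integrate(sp.exp(-lam*t) * hi, t))  # integrating factor, C = 0
x = S * sp.Matrix(y)                             # back to the original variables
print(sp.simplify(x.diff(t) - A*x - g))          # Matrix([[0], [0]]): x' = A x + g holds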
Example: Let
A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}
and consider the system of differential equations \vec{x}\,'(t) = A\vec{x}(t) with \vec{x}(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. The solution is \vec{x}(t) = \exp(At) \begin{pmatrix} 1 \\ 0 \end{pmatrix}. To compute \exp(At), note the eigenvalues are found by solving \det(A − \lambda I) = 0 or (1 − \lambda)^2 − 4 = 0, which after some algebra yields \lambda = 3 or −1. Solving the linear algebra equations gives the corresponding eigenvectors: \begin{pmatrix} 1 \\ 1 \end{pmatrix} for \lambda_1 = 3 and \begin{pmatrix} 1 \\ −1 \end{pmatrix} for \lambda_2 = −1. This yields
S = \begin{pmatrix} 1 & 1 \\ 1 & −1 \end{pmatrix}, \qquad S^{-1} = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & −1/2 \end{pmatrix},
so
\exp(At) = S \exp(\Lambda t) S^{-1} = S \begin{pmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{pmatrix} S^{-1} = \begin{pmatrix} \frac{e^{3t}}{2} + \frac{e^{-t}}{2} & \frac{e^{3t}}{2} − \frac{e^{-t}}{2} \\ \frac{e^{3t}}{2} − \frac{e^{-t}}{2} & \frac{e^{3t}}{2} + \frac{e^{-t}}{2} \end{pmatrix}.
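As a final numerical check (an added sketch; scipy's expm is used only for comparison), the eigendecomposition formula for exp(At) agrees with a direct computation:

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [2.0, 1.0]])
S = np.array([[1.0, 1.0], [1.0, -1.0]])
t = 0.7                                          # any fixed time works
eAt = S @ np.diag([np.exp(3*t), np.exp(-t)]) @ np.linalg.inv(S)
print(np.allclose(eAt, expm(A*t)))               # True
print(eAt @ np.array([1.0, 0.0]))                # x(t) for the initial condition above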
Application: