
MATH 2F05

(APPLIED ADVANCED CALCULUS)


LECTURE NOTES

JAN VRBIK

Contents

1 Prerequisites
   A few high-school formulas
   Polynomials
   Exponentiation and logarithm
   Geometric series
   Trigonometric formulas
   Solving equations
   Differentiation
   Basic limits
   Integration
   Geometry
   Matrices
   Complex numbers

I ORDINARY DIFFERENTIAL EQUATIONS
   General review of differential equations

2 First-Order Differential Equations
   'Trivial' equation
   Separable equation
   Scale-independent equation
   Linear equation
   Bernoulli equation
   Exact equation
   More 'exotic' equations
   Applications

3 Second-Order Differential Equations
   Reducible to first order
   Linear equation
   With constant coefficients
   Cauchy equation

4 Third and Higher-Order Linear ODEs
   Polynomial roots
   Constant-coefficient equations

5 Sets of Linear, First-Order, Constant-Coefficient ODEs
   Matrix Algebra
   Set (system) of differential equations
   Non-homogeneous case

6 Power-Series Solution
   The main idea
   Sturm-Liouville eigenvalue problem
   Method of Frobenius
   A few Special functions of Mathematical Physics

II VECTOR ANALYSIS

7 Functions in Three Dimensions — Differentiation
   3-D Geometry (overview)
   Fields
   Optional: Curvilinear coordinates

8 Functions in 3-D — Integration
   Line Integrals
   Double integrals
   Surfaces in 3-D
   Surface integrals
   'Volume' integrals
   Review exercises

III COMPLEX ANALYSIS

9 Complex Functions — Differentiation
   Preliminaries
   Introducing complex functions
   Chapter summary

10 Complex Functions — Integration
   Integrating analytic functions
   Contour integration
   Applications

Chapter 1 PREREQUISITES
A few high-school formulas

(a + b)^2 = a^2 + 2ab + b^2
(a + b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3
...

The coefficients follow from Pascal's triangle:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
.......

and the expansion is called binomial.
Also:

a^2 − b^2 = (a − b)(a + b)
a^3 − b^3 = (a − b)(a^2 + ab + b^2)
...

(do you know how to continue?)


Understand the basic rules of algebra: addition and multiplication, individually, are commutative and associative; when combined, they follow the distributive law

(a + b + c)(d + e + f + h) = ad + ae + af + ah + bd + be + bf + bh + cd + ce + cf + ch

(each term from the left set of parentheses multiplied by each term on the right).

Polynomials
their degree, the notion of individual coefficients, basic operations including
synthetic division, e.g.

(x^3 − 3x^2 + 2x − 4) ÷ (x − 2) = x^2 − x   [quotient]
x^3 − 2x^2                                  (subtract)
     −x^2 + 2x − 4
     −x^2 + 2x                              (subtract)
               −4                           [remainder]

which implies that (x^3 − 3x^2 + 2x − 4) = (x^2 − x)(x − 2) − 4. The quotient's degree equals the degree of the dividend (the original polynomial) minus the degree of the divisor. The remainder's degree is always less than the degree of the divisor. When the remainder is zero, we have found two factors of the dividend.
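Synthetic division is mechanical enough to automate. Here is a minimal Python sketch (the function name and list-based representation are our own choices, not part of the notes), dividing by a monic linear factor x − r:

    def synthetic_division(coeffs, r):
        """Divide a polynomial by (x - r).

        coeffs lists the coefficients from the highest power down,
        e.g. x^3 - 3x^2 + 2x - 4  ->  [1, -3, 2, -4].
        Returns (quotient coefficients, remainder)."""
        quotient = [coeffs[0]]
        for c in coeffs[1:]:
            quotient.append(c + r * quotient[-1])
        return quotient[:-1], quotient[-1]

    # The worked example above: (x^3 - 3x^2 + 2x - 4) / (x - 2)
    print(synthetic_division([1, -3, 2, -4], 2))   # ([1, -1, 0], -4)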

Exponentiation and logarithm


Rules of exponentiation:

a^A · a^B = a^{A+B}
(a^A)^B = a^{AB}

Also note that

(a^A)^B ≠ a^{(A^B)}

The solution to

a^x = A

is

x = log_a(A)

(the inverse function to exponentiation). When a = e (= 2.7183...), this is written as

x = ln(A)

and called the natural logarithm. Its basic rules are:

ln(A · B) = ln(A) + ln(B)
ln(A^B) = B · ln(A)
log_a(A) = ln(A) / ln(a)

Geometric series
First infinite:

1 + a + a^2 + a^3 + a^4 + ... = 1/(1 − a)

when |a| < 1 (understand the issue of series convergence), and then finite (truncated):

1 + a + a^2 + a^3 + ... + a^N = (1 − a^{N+1})/(1 − a)

valid for all a ≠ 1 (we don't need a = 1; why?).
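A quick numerical sanity check of the truncated formula (plain Python; the values of a and N are arbitrary choices):

    a, N = 0.7, 10
    direct = sum(a**k for k in range(N + 1))     # 1 + a + ... + a^N
    closed = (1 - a**(N + 1)) / (1 - a)          # the closed form above
    print(direct, closed)                        # both are about 3.26742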

Trigonometric formulas
such as, for example,

(sin a)^2 + (cos a)^2 ≡ 1

and

sin(α + β) = sin α cos β + sin β cos α
cos(α + β) = cos α cos β − sin α sin β

Our angles are always in radians.


Inverse trigonometric functions: arcsin(x) — don't use the sin^{−1}(x) notation!

Solving equations

▶ Single Unknown ◀

Linear: 2x = 7 (quite trivial).

Quadratic: x^2 − 5x + 6 = 0 ⇒ x_{1,2} = 5/2 ± √((5/2)^2 − 6) = 3 and 2.

This further implies that x^2 − 5x + 6 = (x − 3)(x − 2).

Cubic and beyond: will be discussed when needed.

Can we solve other (non-polynomial) equations? Only when the left hand side involves a composition of functions which we know how to invert individually; for example

ln[sin(1/(1 + x^2))] = −3

should pose no difficulty.
On the other hand, the simpler-looking

sin(x) = x/2

can be solved only numerically, usually by Newton's technique, which works as follows: to solve an equation of the f(x) = 0 type, we start with an initial value x0 which should be reasonably close to a root of the equation (found graphically), and then follow the tangent straight line from [x0, f(x0)] till we cross the x-axis at x1. This is repeated until the consecutive x-values no longer change. In summary:

x_{n+1} = x_n − f(x_n)/f'(x_n)

To solve sin(x) − x/2 = 0 (our previous example) we first look at its graph

[plot of y = sin(x) − x/2 for 1 ≤ x ≤ 3, crossing the x-axis near x = 1.9]

which indicates that there is a root close to x = 1.9. Choosing this as our x0 we get:

x1 = 1.9 − (sin(1.9) − 0.95)/(cos(1.9) − 0.5) = 1.895506
x2 = x1 − (sin(x1) − x1/2)/(cos(x1) − 1/2) = 1.895494

after which the values no longer change. Thus x = 1.895494 is a solution (in this case, not the only one) to the original equation. ■
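The iteration is easy to code. A minimal Python sketch (the tolerance and iteration cap are our own choices, not prescribed by the notes):

    import math

    def newton(f, fp, x, tol=1e-12, max_iter=50):
        """Repeat x <- x - f(x)/f'(x) until consecutive values agree."""
        for _ in range(max_iter):
            x_new = x - f(x) / fp(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    root = newton(lambda x: math.sin(x) - x / 2,
                  lambda x: math.cos(x) - 0.5, 1.9)
    print(root)   # 1.895494..., as above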

▶ More Than One Unknown ◀

We all know how to solve linear sets (systems) of equations: 2 × 2 (for sure), e.g.

2x − 3y = 4
3x + y = −5

⇔ [add 3 × Eq. 2 to Eq. 1] ⇒ 11x = −11 ⇒ x = −1 and y = −2;

3 × 3 (still rather routinely, I hope), 4 × 4 (gets tedious).
How about any other (nonlinear) case? We can try eliminating one unknown
from one equation and substituting into the other equation, but this will work (in
the 2 × 2 case) only if we are very lucky. We have to admit that we don’t know
how to solve most of these equations (and, in this course, we will have to live with
that).

Differentiation
Interpretation: Slope of a tangent straight line.
Using three basic rules (product, quotient and chain) one can differentiate just about any function, repeatedly if necessary (i.e. finding the second, third, ... derivative), for example:

d/dx [x sin(x^2)] = sin(x^2) + 2x^2 cos(x^2)

[note that sin x^2 ≡ sin(x^2) and not (sin x)^2 — I will always be careful with my notation, using parentheses whenever an ambiguity may arise].
The main formulas are

d/dx (x^α) = α x^{α−1}

and

d/dx (e^{βx}) = β e^{βx}

The product rule can be extended to the second derivative:

(f · g)'' = f'' · g + 2f' · g' + f · g''

the third:

(f · g)''' = f''' · g + 3f'' · g' + 3f' · g'' + f · g'''

etc. (Pascal's triangle again).

▶ Partial Derivatives ◀

Even when the function is bivariate (of two variables), or multivariate (of several variables), we always differentiate with respect to only one of the variables at a time (keeping the others constant). Thus, no new rules are needed; the only new thing is a more elaborate notation, e.g.

∂^3 f(x, y) / (∂x^2 ∂y)

(A function of a single variable is called univariate.)



▶ Taylor (Maclaurin) Expansion ◀

f(x) = f(0) + x f'(0) + (x^2/2) f''(0) + (x^3/3!) f'''(0) + (x^4/4!) f^{iv}(0) + ...

(an alternate notation for f^{iv}(0) is f^{(4)}(0)).

Remember the expansions of at least these functions:

e^x = 1 + x + x^2/2 + x^3/3! + x^4/4! + ...
sin(x) = x − x^3/3! + x^5/5! − x^7/7! + ...
ln(1 + x) = x − x^2/2 + x^3/3 − x^4/4 + ...   (no factorials)

and of course

1/(1 − x) = 1 + x + x^2 + x^3 + x^4 + ...

The bivariate extension of Taylor's expansion (in a rather symbolic form):

f(x, y) = f(0, 0) + x ∂f(0,0)/∂x + y ∂f(0,0)/∂y + (1/2) {x ∂/∂x + y ∂/∂y}^2 f(0, 0) + (1/3!) {x ∂/∂x + y ∂/∂y}^3 f(0, 0) + ...

where the partial derivatives are to be applied to f(x, y) only. Thus

{x ∂/∂x + y ∂/∂y}^3 f ≡ x^3 ∂^3 f/∂x^3 + 3x^2 y ∂^3 f/(∂x^2 ∂y) + 3x y^2 ∂^3 f/(∂x ∂y^2) + y^3 ∂^3 f/∂y^3

etc.
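These expansions are easy to confirm symbolically, assuming the sympy library is available:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.series(sp.sin(x), x, 0, 8))      # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)
    print(sp.series(sp.log(1 + x), x, 0, 5))  # x - x**2/2 + x**3/3 - x**4/4 + O(x**5)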

Basic limits

Rational expressions, e.g.

lim_{n→∞} (2n^2 + 3)/(n^2 − 4n + 1) = 2

Special limit:

lim_{n→∞} (1 + 1/n)^n = e

and also

lim_{n→∞} (1 + a/n)^n = e^a ≡ exp(a)

(introducing an alternate notation for e^a).

L'Hôpital's rule (to deal with the 0/0 case):

lim_{x→0} sin x / x = lim_{x→0} (sin x)'/(x)' = cos(0)/1 = 1

Integration
Interpretation: Area between the x-axis and f(x) (up is positive, down is negative).

Basic formulas:

∫ x^α dx = x^{α+1}/(α + 1)        (α ≠ −1)
∫ e^{βx} dx = e^{βx}/β
∫ dx/x = ln |x|

etc. (use tables).

Useful techniques and tricks:

1. Substitution (change of variable), for example

∫ dx/√(5x − 2) = (2/5) ∫ (z dz)/z = (2/5) ∫ dz = (2/5) z = (2/5) √(5x − 2)

where z = √(5x − 2), thus x = (z^2 + 2)/5 and dx/dz = (2/5) z ⇒ dx = (2/5) z dz.
2. By parts:

∫ f g' dx = f g − ∫ f' g dx

for example ∫ x e^{−x} dx = ∫ x (−e^{−x})' dx = −x e^{−x} + ∫ e^{−x} dx = −(x + 1) e^{−x}.
3. Partial fractions: When integrating a rational expression such as, for example, 1/((1 + x)^2 (1 + x^2)), first rewrite it as

a/(1 + x) + b/(1 + x)^2 + (c + dx)/(1 + x^2)

then solve for a, b, c, d based on

1 = a(1 + x)(1 + x^2) + b(1 + x^2) + (c + dx)(1 + x)^2        (#1)

This can be done most efficiently by substituting x = −1 ⇒ 1 = 2b ⇒ b = 1/2. When this (b = 1/2) is substituted back into (#1), (1 + x) can be factored out, resulting in

(1 − x)/2 = a(1 + x^2) + (c + dx)(1 + x)        (#2)

Then substitute x = −1 again, get a = 1/2, and further simplify (#2) by factoring out yet another (1 + x):

−x/2 = c + dx

yielding the values c = 0 and d = −1/2. In the new form, the function can be integrated easily. ■
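The same decomposition can be obtained mechanically, assuming sympy is available:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.apart(1 / ((1 + x)**2 * (1 + x**2))))
    # -x/(2*(x**2 + 1)) + 1/(2*(x + 1)) + 1/(2*(x + 1)**2),
    # i.e. a = b = 1/2, c = 0, d = -1/2, as found above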

Basic properties:

∫ [c f(x) + d g(x)] dx = c ∫ f(x) dx + d ∫ g(x) dx        (linear)

and

∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx        (try to picture it)

These properties are shared with (actually inherited from) summation; we all know that, for example,

Σ_{i=1}^∞ (3a_i − 1/i^2) = 3 Σ_{i=1}^∞ a_i − Σ_{i=1}^∞ 1/i^2

Have some basic ideas about double (and, eventually, triple) integration, e.g.

∬_R (x^2 y − 3x y^3) dA

where R is a region of integration (it may be a square, a triangle, a circle and such), and dA a symbol for an infinitesimal area (usually visualized as a small rectangle within this region).

The simplest case is the so-called separable integral, where the function (of x and y) to be integrated is a product of a function of x (only) times a function of y (only), and the region of integration is a generalized rectangle (meaning that the limits of integration don't depend on each other, but any of them may be infinite).

For example:

∫_{y=0}^∞ ∫_{x=1}^3 sin(x) e^{−y} dA = ∫_{x=1}^3 sin(x) dx × ∫_{y=0}^∞ e^{−y} dy

which is a product of two ordinary integrals.
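A numerical cross-check of this separable example (assuming scipy and numpy are available; note that scipy's dblquad expects the integrand as func(y, x), inner variable first):

    import numpy as np
    from scipy.integrate import dblquad

    val, err = dblquad(lambda y, x: np.sin(x) * np.exp(-y),
                       1, 3,          # x-limits (outer)
                       0, np.inf)     # y-limits (inner)
    print(val, np.cos(1) - np.cos(3))   # both are about 1.53029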

In general, a double integral must be converted to two consecutive univariate


integrations, the first in x and the second in y (or vice versa — the results must
be identical). Notationally, the x and y limits, and the dx and dy symbols should
follow proper ’nesting’ (similar to two sets of parentheses), since that is effectively
the logic of performing the integration in general.

More complicated cases may require a change of variables. This may lead to
polar coordinates and requires understanding of the Jacobian (details to be
discussed when needed).

Geometry

▶ In Two Dimensions ◀

Know that y = 1 − x is an equation of a straight line; be able to identify its slope and intercept.

The collection of points which lie below this straight line and are in the first quadrant defines the following two-dimensional region:

[plot of the triangle bounded by the two axes and the line y = 1 − x]

There are two other ways of describing it:

1. 0 ≤ y ≤ 1 − x (conditional y-range), where 0 ≤ x ≤ 1 (marginal x-range) — visualize this as the triangle being filled with vertical lines, the marginal range describing the x-scale 'shadow' of the region.

2. 0 ≤ x ≤ 1 − y (conditional x-range), where 0 ≤ y ≤ 1 (marginal y-range) — horizontal lines.

The two descriptions will usually not be this symmetric; try doing the same thing with y = 1 − x^2 (a branch of a parabola). For the 'horizontal-line' description, you will get 0 ≤ x ≤ √(1 − y), where 0 ≤ y ≤ 1.

Regions of this kind are frequently encountered in two-dimensional integrals. The vertical (or horizontal)-line description facilitates constructing the inner and outer limits of the corresponding (consecutive) integration. Note that in this context we don't have to worry about the boundary points being included (e.g. 0 ≤ x ≤ 1) or excluded (0 < x < 1); this makes no difference to the integral's value.

Recognize the equation of a circle (e.g. x^2 + y^2 − 3x + 4y = 9) and be able to identify its center and radius; also that of a parabola and a hyperbola.

▶ In Three Dimensions ◀

Understand the standard (rectangular) coordinate system and how to use it to display points, lines and planes.

Both (fixed) points and (free) vectors have three components, written as [2, −1, 4] (sometimes organized in a column). Understand the difference between the two: a point describes a single location; a vector defines a length (or strength, in physics), a direction and an orientation.

Would you be able to describe a three-dimensional region using x, y and z


(consecutive) ranges, similar to (but more complicated than) the 2-D case?
Later on we discuss vector functions (physicists call them ’fields’), which assign
to each point in space a vector. We also learn how to deal with curves and
surfaces.

Matrices

such as, for example,

[  4  −6   0 ]
[ −3   2   1 ]
[  9  −5  −3 ]

which is a 3 × 3 (n × m in general) array of numbers (these are called the matrix elements).

Understand the following definitions: square matrix (n = m), unit matrix

[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ]

and zero matrix; and the operations: matrix transpose

[  4  −3   9 ]
[ −6   2  −5 ]
[  0   1  −3 ]

matrix addition (for same-size matrices), multiplication

[ 3  −2  0 ]   [  2  −1 ]   [ 6  −9 ]
[ 5  −3  1 ] · [  0   3 ] = [ 6  −9 ]
               [ −4   5 ]

([n × m][m × k] = [n × k] in terms of dimensions), inverse

[  4  −6   0 ]^{−1}   [ 1/4   9/2   3/2 ]
[ −3   2   1 ]      = [  0     3     1  ]
[  9  −5  −3 ]        [ 3/4  17/2   5/2 ]

(later, we review its construction), and determinant

| a  b |
| c  d | = ad − bc

For two square matrices, their product is not necessarily commutative, e.g.

[  2  0 ] [ 3   5 ]   [  6   10 ]       [ 3   5 ] [  2  0 ]   [ −9  20 ]
[ −3  4 ] [ 1  −2 ] = [ −5  −23 ] , but [ 1  −2 ] [ −3  4 ] = [  8  −8 ]

Notation: I stands for the unit matrix, O for the zero matrix, A^T for the transpose, A^{−1} for the inverse, |A| for the determinant, AB for multiplication (careful with the order); A_{23} is the second-row, third-column element of A.

A few basic rules: AI = IA = A, AO = OA = O, AA^{−1} = A^{−1}A = I, (AB)^T = B^T A^T, and (AB)^{−1} = B^{−1} A^{−1}, whenever the dimensions allow the operation.

Let us formally prove the second-last equality:

{(AB)^T}_{ij} = Σ_k A_{jk} B_{ki} = Σ_k B_{ki} A_{jk} = Σ_k {B^T}_{ik} {A^T}_{kj} = {B^T A^T}_{ij}

for each i and j (why can we interchange the two factors?). □
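All of the above is easy to experiment with numerically, assuming numpy is available:

    import numpy as np

    A = np.array([[4, -6, 0], [-3, 2, 1], [9, -5, -3]])
    B = np.array([[2, 0], [-3, 4]])
    C = np.array([[3, 5], [1, -2]])

    print(np.linalg.inv(A))                    # reproduces the inverse shown above
    print(B @ C)                               # [[ 6 10], [-5 -23]]
    print(C @ B)                               # [[-9 20], [ 8 -8]] -- order matters
    print(np.allclose((B @ C).T, C.T @ B.T))   # True: (AB)^T = B^T A^T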

Complex numbers

Understand the basic algebra of adding and subtracting (trivial), multiplying

(3 − 2i)(4 + i) = 12 + 2 − 8i + 3i = 14 − 5i

(the distributive law plus i^2 = −1), and dividing:

(3 − 2i)/(4 + i) = (3 − 2i)(4 − i)/((4 + i)(4 − i)) = (10 − 11i)/17 = 10/17 − (11/17) i

Notation: the complex conjugate of a number z = x + yi is z̄ = x − yi (change the sign of i); the magnitude (absolute value) is |z| = √(x^2 + y^2); the real and imaginary parts are Re(z) = x and Im(z) = y, respectively.

Polar representation of z = x + yi: z = r e^{iθ} = r cos(θ) + i r sin(θ), where r = |z| and θ = arctan2(y, x), which is called the argument of z and is chosen so that θ ∈ [0, 2π). Here, we have used

e^{iθ} = cos(θ) + i sin(θ)

which follows from the Maclaurin expansion (try the proof).

When multiplying two complex numbers in polar representation, their magnitudes multiply as well, but their 'arguments' only add. When raising a complex number to an integer power n, the resulting argument (angle) is simply nθ. This is quite useful when taking a large integer power of a complex number, for example:

(3 + 2i)^{31} = (9 + 4)^{31/2} [cos(31 arctan(2/3)) + i sin(31 arctan(2/3))] = 1.5005×10^{17} − 1.0745×10^{17} i

(give at least four significant digits in all your answers).
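Python's built-in complex type can confirm the large-power example:

    z = (3 + 2j) ** 31
    print(z)             # (1.5005e+17 - 1.0745e+17j) to four significant digits
    print(abs(3 + 2j))   # 3.6055... = sqrt(13), the magnitude r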
Part I

ORDINARY DIFFERENTIAL EQUATIONS

General review of differential equations

▶ (Single) Ordinary Differential Equation ◀

There is only one independent variable, usually called x, and one dependent variable (a function of x), usually called y(x). The equation involves x, y(x), y'(x), and possibly higher derivatives of y(x), for example:

y''(x) = (1 + y(x) · y'(x))/(1 + x^2)

We usually leave out the argument of y to simplify the notation, thus:

y'' = (1 + y y')/(1 + x^2)
The highest derivative is the order of the equation (this one is of the second order). Our task is to find the (unknown) function y(x) which satisfies the equation. The solution is normally a family of functions, with as many extra parameters as the order of the equation (these are usually called C1, C2, C3, ...).

In the next chapter, we study first-order ODEs. Then we move on to higher-order ODEs but, for these, we restrict our attention almost entirely to the linear [in y and its derivatives, e.g. y'' + sin x · y' − (1 + x^2) · y = e^{−x}] case with constant coefficients [example: y'' − 2y' + 3y = e^{2x}]. When the right hand side of such an equation is zero, the equation is called homogeneous.

▶ A Set (or System) of ODEs ◀

Here, we have several dependent (unknown) functions y1, y2, y3, ... of still a single independent variable x (sometimes called t). We will study only a special case of these systems, when the equations (and we need as many equations as there are unknown functions) are of the first order, linear in y1, y2, y3, ... and their derivatives, and having constant coefficients.

Example:

ẏ1 = 2y1 − 3y2 + t^2
ẏ2 = y1 + 5y2 + sin t

(when the independent variable is t, ẏ1, ẏ2, ... is a common notation for the first derivatives).

▶ Partial Differential Equations (PDE) ◀

They usually have a single dependent variable and several independent variables. The derivatives are then automatically of the partial type. It is unlikely that we will have time to discuss even a brief introduction to these. But you will study them in Physics, where you also encounter systems of PDEs (several dependent and independent variables), to be solved by all sorts of ingenious techniques. They are normally tackled on an individual basis only.

Chapter 2 FIRST-ORDER DIFFERENTIAL EQUATIONS

The most general form of such an equation (assuming we can solve it for y', which is usually the case) is

y' = f(x, y)

where f(x, y) is any expression involving both x and y. We know how to solve this equation (analytically) in only a few special cases (to be discussed shortly).

Graphically, the situation is easier: using an x-y set of coordinates, we can draw (at as many points as possible) the slope ≡ f(x, y) of a solution passing through the corresponding (x, y) point and then, by attempting to connect these, visualize the family of all solutions. But this is neither very accurate nor practical.

Usually, there is a whole family of solutions, which covers the whole x-y plane by curves which don't intersect. This means that exactly one solution passes through each point. Or, equivalently: given an extra condition imposed on the solution, namely y(x0) = y0, where x0 and y0 are two specific numbers (the so-called initial condition), this singles out a unique solution. But sometimes it happens that no solution can be found for some initial conditions, while more than one (even infinitely many) solutions exist for others.

We will look at some of these issues in more detail later on; let us now go over the special cases of the first-order ODE which we know how to solve:

'Trivial' equation

This case is so simple to solve that it is usually not even mentioned as such, but we want to be systematic. The equation has the form

y' = f(x)

i.e. y' is expressed as a function of x only. It is quite obvious that the general solution is

y(x) = ∫ f(x) dx + C

(graphically, it is the same curve slid vertically up and down). Note that even in this simplest case we cannot always find an analytical solution (we don't know how to integrate all functions).

EXAMPLE: y' = sin(x). Solution: y(x) = −cos(x) + C.

Separable equation

Its general form is

y' = h(x) · g(y)

(a product of a function of x times a function of y).

Solution: Writing y' as dy/dx [= h(x) · g(y)] we can achieve the actual separation (of x from y), thus:

dy/g(y) = h(x) dx

where the left and the right hand sides can be individually integrated (in terms of y and x, respectively), a constant C added (to the right hand side only; why is this sufficient?), and the resulting equation solved for y whenever possible (to get the so-called explicit solution). If the equation cannot be solved for y, we leave it in the so-called implicit form.
EXAMPLES:

1. y' = x · y ⇒ dy/y = x dx ⇒ ln |y| = x^2/2 + C̃ ⇒ y = ±e^{C̃} · e^{x^2/2} ≡ C e^{x^2/2} (by definition of a new C, which may be positive or negative). Let us plot these with C = −3, −2, −1, 0, 1, 2, 3 to visualize the whole family of solutions:

[plot of y = C e^{x^2/2} for −2 ≤ x ≤ 2 and the seven values of C]

(essentially the same curve, expanded or compressed along the y-direction).

2. 9y y' + 4x = 0 ⇒ 9y dy = −4x dx ⇒ 9 y^2/2 = −4 x^2/2 + C̃ ⇒ y^2 + (4/9) x^2 = C (by intelligently redefining the constant). We will leave the answer in its implicit form, which clearly shows that the family of solutions are ellipses centered on the origin, with the vertical versus horizontal diameter in the 2 : 3 ratio.

3. y' = −2xy ⇒ dy/y = −2x dx ⇒ ln |y| = −x^2 + C̃ ⇒ y = C e^{−x^2} (analogous to Example 1).

4. (1 + x^2) y' + 1 + y^2 = 0, with y(0) = 1 (an initial value problem).
Solution: dy/(1 + y^2) = −dx/(1 + x^2) ⇒ arctan(y) = −arctan(x) + C̃ ≡ arctan(C) − arctan(x) ⇒ y = tan(arctan(C) − arctan(x)) = (C − x)/(1 + Cx) [recall the tan(α − β) formula]. To find C we solve 1 = (C − 0)/(1 + C × 0) ⇒ C = 1.
Answer: y(x) = (1 − x)/(1 + x).
Check: (1 + x^2) d/dx [(1 − x)/(1 + x)] + 1 + ((1 − x)/(1 + x))^2 = 0 ✓
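Such checks can also be delegated to a computer-algebra system; a sketch for Example 4, assuming sympy is available:

    import sympy as sp

    x = sp.symbols('x')
    y = (1 - x) / (1 + x)                               # the answer found above
    residual = (1 + x**2) * sp.diff(y, x) + 1 + y**2    # left hand side of the ODE
    print(sp.simplify(residual))                        # 0, so the solution checks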

Scale-independent equation

looks as follows:

y' = g(y/x)

(note that the right hand side does not change when replacing x → ax and, simultaneously, y → ay, since a will cancel out).

Solve by introducing a new dependent variable u(x) = y(x)/x or, equivalently, y(x) = x · u(x). This implies y' = u + x u'; substituted into the original equation, it yields

x u' = g(u) − u

which is separable in x and u:

du/(g(u) − u) = dx/x

Solve the separable equation for u(x), and convert to y(x) = x u(x).

EXAMPLES:

1. 2xy y' − y^2 + x^2 = 0 ⇒ y' = (y/x)/2 − 1/(2y/x) ⇒ x u' = −(u^2 + 1)/(2u) ⇒ 2u du/(u^2 + 1) = −dx/x ⇒ ln(1 + u^2) = −ln |x| + C̃ ⇒ u^2 + 1 = 2C/x (the factor of 2 is introduced for future convenience) ⇒ y^2 + x^2 − 2Cx = 0 ⇒ y^2 + (x − C)^2 = C^2 (let us leave it in the implicit form). This is a family of circles having a center at any point of the x-axis, and being tangent to the y-axis.

2. x^2 y' = y^2 + xy + x^2 ⇒ y' = (y/x)^2 + y/x + 1 ⇒ x u' = u^2 + 1 ⇒ du/(1 + u^2) = dx/x ⇒ arctan(u) = ln |x| + C ⇒ u = tan(ln |x| + C) ⇒ y = x · tan(ln |x| + C) ■

▶ Modified Scale-Independent ◀

y' = y/x + g(y/x) · h(x)

The same substitution gives

x u' = g(u) · h(x)

which is also separable. The main point is to be able to recognize that the equation is of this type.

EXAMPLE:

y' = y/x + 2x^3 cos(x^2)/y ⇒ x u' = 2x^2 cos(x^2)/u ⇒ u du = 2x cos(x^2) dx ⇒ u^2/2 = sin(x^2) + C̃ ⇒ u = ±√(2 sin(x^2) + C) ⇒ y = ±x √(2 sin(x^2) + C) ■

▶ Any Other Smart Substitution ◀

(usually suggested), which makes the equation separable.

EXAMPLES:

1. (2x − 4y + 5) y' + x − 2y + 3 = 0 [suggestion: introduce v(x) = x − 2y(x), i.e. y = (x − v)/2 and y' = (1 − v')/2] ⇒ (2v + 5)(1 − v')/2 + v + 3 = 0 ⇒ −(v + 5/2) v' + 2v + 11/2 = 0 ⇒ (v + 5/2)/(v + 11/4) dv = 2 dx ⇒ [1 − (1/4)/(v + 11/4)] dv = 2 dx ⇒ v − (1/4) ln |v + 11/4| = 2x + C ⇒ x − 2y − (1/4) ln |x − 2y + 11/4| = 2x + C. We have to leave the solution in the implicit form because we cannot solve for y, except numerically — it would be a painstaking procedure to draw even a simple graph now.

2. y' cos(y) + x sin(y) = 2x seems to suggest sin(y) ≡ v(x) as the new dependent variable, since v' = y' cos(y) [by the chain rule]. The new equation is thus simply v' + x v = 2x, which is linear (see the next section), and can be solved as such: dv/v = −x dx ⇒ ln |v| = −x^2/2 + c̃ ⇒ v = c e^{−x^2/2}; substitute: c' e^{−x^2/2} − xc e^{−x^2/2} + xc e^{−x^2/2} = 2x ⇒ c' = 2x e^{x^2/2} ⇒ c(x) = 2e^{x^2/2} + C ⇒ v(x) = 2 + C e^{−x^2/2} ⇒ y(x) = arcsin(2 + C e^{−x^2/2}) ■

Linear equation

has the form of

y' + g(x) · y = r(x)

[both g(x) and r(x) are arbitrary — but specific — functions of x].

The solution is constructed in two stages, by the so-called

▶ Variation-of-Parameters Technique ◀

which works as follows:

1. Solve the homogeneous equation y' = −g(x) · y, which is separable, thus:

y_h(x) = c · e^{−∫ g(x) dx}

2. Assume that c itself is a function of x, substitute c(x) · e^{−∫ g(x) dx} back into the full equation, and solve the resulting [trivial] differential equation for c(x).

EXAMPLES:

1. y' + y/x = sin x / x
Solve y' + y/x = 0 ⇒ dy/y = −dx/x ⇒ ln |y| = −ln |x| + c̃ ⇒ y = c/x.
Now substitute this into the original equation: c'/x − c/x^2 + c/x^2 = sin x / x ⇒ c' = sin x ⇒ c(x) = −cos x + C (the big C being a true constant) ⇒ y(x) = −cos x / x + C/x. The solution always has the form y_p(x) + C y_h(x), where y_p(x) is a particular solution to the full equation, and y_h(x) solves the homogeneous equation only.
Let us verify the former: d/dx (−cos x / x) − cos x / x^2 = sin x / x ✓

2. y' − y = e^{2x}
First y' − y = 0 ⇒ dy/y = dx ⇒ y = c e^x.
Substitute: c' e^x + c e^x − c e^x = e^{2x} ⇒ c' = e^x ⇒ c(x) = e^x + C ⇒ y(x) = e^{2x} + C e^x.

3. x y' + y + 4 = 0
Homogeneous: dy/y = −dx/x ⇒ ln |y| = −ln |x| + c̃ ⇒ y = c/x.
Substitute: c' − c/x + c/x = −4 ⇒ c(x) = −4x + C ⇒ y(x) = −4 + C/x.

4. y' + y · tan(x) = sin(2x), y(0) = 1.
Homogeneous: dy/y = −(sin x/cos x) dx ⇒ ln |y| = ln |cos x| + c̃ ⇒ y = c · cos x.
Substitute: c' cos x − c sin x + c sin x = 2 sin x cos x ⇒ c' = 2 sin x ⇒ c(x) = −2 cos x + C ⇒ y(x) = −2 cos^2 x + C cos x [cos^2 x is the usual 'shorthand' for (cos x)^2].
To find the value of C, solve 1 = −2 + C ⇒ C = 3.
The final answer is thus y(x) = −2 cos^2 x + 3 cos x.
To verify: d/dx [−2 cos^2 x + 3 cos x] + [−2 cos^2 x + 3 cos x] · (sin x/cos x) = 2 cos x sin x ✓

5. x^2 y' + 2xy − x + 1 = 0, y(1) = 0
Homogeneous [realize that here −x + 1 is the non-homogeneous part]: dy/y = −2 dx/x ⇒ ln |y| = −2 ln |x| + C̃ ⇒ y = c/x^2.
Substitute: c' − 2c/x + 2c/x − x + 1 = 0 ⇒ c' = x − 1 ⇒ c = x^2/2 − x + C ⇒ y = 1/2 − 1/x + C/x^2.
To meet the initial-value condition: 0 = 1/2 − 1 + C ⇒ C = 1/2.
Final answer: y = (1 − x)^2/(2x^2).
Verify: x^2 d/dx [(1 − x)^2/(2x^2)] + 2x (1 − x)^2/(2x^2) − x + 1 ≡ 0 ✓

6. y' − 2y/x = x^2 cos(3x)
First: dy/y = 2 dx/x ⇒ ln |y| = 2 ln |x| + c̃ ⇒ y = c x^2.
Substitute: c' x^2 + 2cx − 2cx = x^2 cos(3x) ⇒ c' = cos(3x) ⇒ c = sin(3x)/3 + C ⇒ y = (x^2/3) sin(3x) + C x^2.
To verify the particular solution: d/dx [(x^2/3) sin(3x)] − (2x/3) sin(3x) = x^2 cos(3x) ✓
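Results like these can be cross-checked with sympy's dsolve; a sketch for Example 4, assuming sympy is available:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    ode = sp.Eq(y(x).diff(x) + y(x) * sp.tan(x), sp.sin(2 * x))
    print(sp.dsolve(ode, y(x), ics={y(0): 1}))
    # Eq(y(x), ...) -- algebraically equivalent to -2*cos(x)**2 + 3*cos(x)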

Bernoulli equation

y' + f(x) · y = r(x) · y^a

where a is a specific (constant) exponent.

Introducing a new dependent variable u = y^{1−a}, i.e. y = u^{1/(1−a)}, one gets:

(1/(1 − a)) u^{1/(1−a) − 1} u' [chain rule] + f(x) · u^{1/(1−a)} = r(x) · u^{a/(1−a)}

Multiplying by (1 − a) u^{−a/(1−a)} results in:

u' + (1 − a) f(x) · u = (1 − a) r(x)

which is linear in u' and u (i.e., of the previous type), and solved as such. The answer is then easily converted back to y = u^{1/(1−a)}.

EXAMPLES:

1. y' + xy = x/y (Bernoulli, a = −1, f(x) ≡ x, r(x) ≡ x) ⇒ u' + 2xu = 2x, where y = u^{1/2}.
Solving as linear: du/u = −2x dx ⇒ ln |u| = −x^2 + c̃ ⇒ u = c · e^{−x^2}.
Substitute: c' e^{−x^2} − 2xc e^{−x^2} + 2xc e^{−x^2} = 2x ⇒ c' = 2x e^{x^2} ⇒ c(x) = e^{x^2} + C ⇒ u(x) = 1 + C e^{−x^2} ⇒ y(x) = ±√(1 + C e^{−x^2}) (one can easily check that this is a solution with either the + or the − sign).

2. 2x y' = 10x^3 y^5 + y (terms reshuffled a bit). Bernoulli with a = 5, f(x) = −1/(2x), and r(x) = 5x^2.
This implies u' + (2/x) u = −20x^2, with y = u^{−1/4}.
Solving as linear: du/u = −2 dx/x ⇒ ln |u| = −2 ln |x| + c̃ ⇒ u = c/x^2.
Substituted back into the full equation: c'/x^2 − 2c/x^3 + 2c/x^3 = −20x^2 ⇒ c' = −20x^4 ⇒ c(x) = −4x^5 + C ⇒ u(x) = −4x^3 + C/x^2 ⇒ y(x) = ±(−4x^3 + C/x^2)^{−1/4}.

3. 2xy y' + (x − 1) y^2 = x^2 e^x, Bernoulli with a = −1, f(x) = (x − 1)/(2x), and r(x) = (x/2) e^x.
This translates to u' + ((x − 1)/x) u = x e^x, with y = u^{1/2}.
Solving the homogeneous part: du/u = (1/x − 1) dx ⇒ ln |u| = ln |x| − x + c̃ ⇒ u = c x e^{−x}.
Substituted: c' x e^{−x} + c e^{−x} − cx e^{−x} + (x − 1) c e^{−x} = x e^x ⇒ c' = e^{2x} ⇒ c(x) = (1/2) e^{2x} + C ⇒ u(x) = (x/2) e^x + C x e^{−x} ⇒ y(x) = ±√((x/2) e^x + C x e^{−x}) ■

Exact equation

First we have to explain the general idea behind this type of equation.

Suppose we have a function of x and y, say f(x, y). Then

(∂f/∂x) dx + (∂f/∂y) dy

is the so-called total differential of this function, corresponding to the function's increase when its arguments change from (x, y) to (x + dx, y + dy). By setting this quantity equal to zero, we are effectively demanding that the function not change its value, i.e. f(x, y) = C (constant). The last equation is then an [implicit] solution to

(∂f/∂x) dx + (∂f/∂y) dy = 0

(the corresponding exact equation).

EXAMPLE: Suppose f(x, y) = x^2 y − 2x. This means that (2xy − 2) dx + x^2 dy = 0 has the simple solution x^2 y − 2x = C ⇒ y = 2/x + C/x^2. Note that the differential equation can also be re-written as y' = 2(1 − xy)/x^2, or x^2 y' + 2xy = 2, etc. [coincidentally, this equation is also linear; we can thus double-check the answer].

We must now try to reverse the process, since in the actual situation we will be given the differential equation and asked to find the corresponding f(x, y). There are then two issues to be settled:

1. How do we verify that the equation is exact?

2. Knowing it is, how do we solve it?

To answer the first question, we recall that ∂^2 f/(∂x ∂y) ≡ ∂^2 f/(∂y ∂x). Thus, g(x, y) dx + h(x, y) dy = 0 is exact if and only if

∂g/∂y ≡ ∂h/∂x

As to solving the equation, we proceed in three stages:

1. Find G(x, y) = ∫ g(x, y) dx (considering y a constant).

2. Construct H(y) = h(x, y) − ∂G/∂y [this must be a function of y only, as ∂H/∂x = ∂h/∂x − ∂^2 G/(∂x ∂y) = ∂g/∂y − ∂g/∂y ≡ 0].

3. f(x, y) = G(x, y) + ∫ H(y) dy
[Proof: ∂f/∂x = ∂G/∂x = g and ∂f/∂y = ∂G/∂y + H = h □]

Even though this looks complicated, one must realize that the individual steps are rather trivial, and exact equations are therefore easy to solve.

EXAMPLE: 2x sin(3y) dx + (3x^2 cos(3y) + 2y) dy = 0.

Let us first verify that the equation is exact: ∂/∂y [2x sin(3y)] = 6x cos(3y) and ∂/∂x [3x^2 cos(3y) + 2y] = 6x cos(3y) ✓
Solving it: G = x^2 sin(3y), H = 3x^2 cos(3y) + 2y − 3x^2 cos(3y) = 2y, f(x, y) = x^2 sin(3y) + y^2.
Answer: y^2 + x^2 sin(3y) = C (implicit form) ■
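The exactness test is one line of computer algebra, assuming sympy is available:

    import sympy as sp

    x, y = sp.symbols('x y')
    g = 2 * x * sp.sin(3 * y)              # coefficient of dx
    h = 3 * x**2 * sp.cos(3 * y) + 2 * y   # coefficient of dy
    print(sp.simplify(sp.diff(g, y) - sp.diff(h, x)))   # 0, hence exact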

▶ Integrating Factors ◀

Any first-order ODE (e.g. y' = y/x) can be expanded into a form which makes it look like an exact equation, thus: dy/dx = y/x ⇒ y dx − x dy = 0. But since ∂(y)/∂y = 1 ≠ ∂(−x)/∂x = −1, this equation is not exact.

The good news is that there is always a function of x and y, say F(x, y), which can multiply any such equation (a legitimate modification) to make it exact. This function is called an integrating factor.

The bad news is that there is no general procedure for finding F(x, y) [if there were, we would know how to solve all first-order differential equations — too good to be true].

Yet, there are two special cases when it is possible. Let us write the differential equation in its 'look-like-exact' form of

P(x, y) dx + Q(x, y) dy = 0

where ∂P/∂y ≠ ∂Q/∂x (thus the equation is not exact yet). One can find an integrating factor from

1.

d ln F/dx = (∂P/∂y − ∂Q/∂x)/Q

iff the right hand side of this equation is a function of x only.

Proof: F P dx + F Q dy = 0 is exact when ∂(F P)/∂y = ∂(F Q)/∂x ⇒ F ∂P/∂y = (dF/dx) Q + F ∂Q/∂x, assuming that F is a function of x only. Solving for (dF/dx)/F results in (∂P/∂y − ∂Q/∂x)/Q. When the last expression contains no y, we simply integrate it (with respect to x) to find ln F. Otherwise (when y does not cancel out of the expression), the formula is meaningless. □

2. or from

d ln F/dy = (∂Q/∂x − ∂P/∂y)/P

iff the right hand side is a function of y only.

EXAMPLES:

1. Let us try solving our y dx − x dy = 0. Since (∂P/∂y − ∂Q/∂x)/Q = −2/x, we have ln F = −∫ 2 dx/x = −2 ln x (no need to bother with a constant) ⇒ F = 1/x^2. Thus (y/x^2) dx − (1/x) dy = 0 must be exact (check it). Solving it gives −y/x = C̃, or y = Cx.
The original equation is, coincidentally, also separable (sometimes it happens that an equation can be solved in more than one way), so we can easily verify that the answer is correct.
But this is not the end of this example yet! We can also get (∂Q/∂x − ∂P/∂y)/P = −2/y, which implies that ln F = −∫ 2 dy/y = −2 ln y ⇒ F = 1/y^2. Is this also an integrating factor?
The answer is yes; there are infinitely many of these. One can multiply an integrating factor (such as 1/x^2) by any function of what we know must be a constant (−y/x in our case, i.e. the left hand side of our solution). Since 1/y^2 = (1/x^2) · 1/(−y/x)^2, this is also an integrating factor of our equation. One can verify that, using this second integrating factor, one still obtains the same simple y = Cx solution.
To formalize our observation: when g dx + h dy = 0 is exact (i.e. g = ∂f/∂x and h = ∂f/∂y), so is R(f) g dx + R(f) h dy = 0, where R is any function of f.
Proof: ∂(Rg)/∂y = (dR/df) · (∂f/∂y) · g + R · ∂g/∂y = (dR/df) · h · g + R · ∂g/∂y. Similarly, ∂(Rh)/∂x = (dR/df) · (∂f/∂x) · h + R · ∂h/∂x = (dR/df) · g · h + R · ∂h/∂x. Since ∂g/∂y ≡ ∂h/∂x, the two expressions are identical. □

2. (2 cos y + 4x^2) dx = x sin y dy [i.e. Q = −x sin y].
Since (∂P/∂y − ∂Q/∂x)/Q = (−2 sin y + sin y)/(−x sin y) = 1/x, we get ln F = ∫ dx/x = ln x ⇒ F = x. Thus

(2x cos y + 4x^3) dx − x^2 sin y dy = 0

is exact, and can be solved as such: x^2 cos y + x^4 = C ⇒ y = arccos(C/x^2 − x^2).

3. (3x e^y + 2y) dx + (x^2 e^y + x) dy = 0.
Trying again, (∂P/∂y − ∂Q/∂x)/P... no — (∂P/∂y − ∂Q/∂x)/Q = (x e^y + 1)/(x^2 e^y + x) = 1/x, which means that ln F = ∫ dx/x = ln x ⇒ F = x. Thus

(3x^2 e^y + 2xy) dx + (x^3 e^y + x^2) dy = 0

is exact. Solving it gives x^3 e^y + x^2 y = C [implicit form]. ■

More 'exotic' equations

and methods of solving them: there are many other types of first-order ODEs which can be solved by all sorts of ingenious techniques (stressing that our list of 'solvable' equations has been far from complete). We will mention only one, for illustration:

Clairaut equation:

y = x y' + g(y')

where g is an arbitrary function. The idea is to introduce p(x) ≡ y'(x) as an unknown function and differentiate the original equation with respect to x, obtaining p = p + x p' + p' g'(p) ⇒ p' · (x + g'(p)) = 0. This implies that either p ≡ y' = C ⇒

y = xC + g(C)

which represents a family of regular solutions (all straight lines), or

x = −g'(p)

which, when solved for p and substituted back into y = xp + g(p), provides the so-called singular solution (an envelope of the regular family).

EXAMPLE: (y')^2 − x y' + y = 0 (terms reshuffled a bit) is solved by either y = Cx − C^2, or x = 2p ⇒ p = x/2 ⇒ y = xp − p^2 = x^2/4 (singular solution). Let us display them graphically:

[plot of the straight lines y = Cx − C^2 enveloping the parabola y = x^2/4]

Note that for an initial condition below or at the parabola two possible solutions exist; above the parabola there is none.

This concludes our discussion of the Clairaut equation.
And finally, a useful trick worth mentioning: when the (general) equation appears more complicated in terms of y rather than x [e.g. (2x + y^4) y' = y], one can try reversing the roles of x and y (i.e. considering x as the dependent variable and y as the independent one). All it takes is to replace y' ≡ dy/dx by 1/(dx/dy); for example (using the previous equation):

dx/dy = 2x/y + y^3

(after some simplification). The last equation is linear (in x and dx/dy) and can be solved as such:

dx/x = 2 dy/y ⇒ ln |x| = 2 ln |y| + c̃ ⇒ x(y) = y^2 · c(y).

Substituted into the full equation: 2yc + y^2 dc/dy = 2y^2 c/y + y^3 ⇒ dc/dy = y ⇒ c(y) = y^2/2 − C [the minus sign is more convenient here] ⇒ x = y^4/2 − C y^2.

This can now be solved for y in terms of x, to get a solution to the original equation: y = ±√(C ± √(C^2 + 2x)).

Applications

▶ Of Geometric Kind ◀

1. Find a curve such that (from each of its points) the distance to the origin is the same as the distance to the intersection of its normal (i.e. perpendicular straight line) with the x-axis.
Solution: Suppose y(x) is the equation of the curve (yet unknown). The equation of the normal is

Y − y = −(1/y') · (X − x)

where (x, y) [fixed] are the points of the curve, and (X, Y) [variable] are the points of the normal [which is a straight line passing through (x, y), with its slope equal to minus the reciprocal of the curve's slope y']. This normal intersects the x-axis at Y = 0 and X = y y' + x. The distance between this point and the original (x, y) is √((y y')^2 + y^2); the distance from (x, y) to (0, 0) is √(x^2 + y^2). These two distances are equal when y^2 (y')^2 = x^2, or y' = ±x/y. This is a separable differential equation, easy to solve: y^2 ± x^2 = C. The curves are either circles centered on (0, 0) [yes, that checks, right?], or hyperbolas [with y = ±x as special cases].

2. Find a curve whose normals (all) pass through the origin.
Solution (we can guess the answer, but let us do it properly): into the same equation of the curve's normal (see above), we substitute 0 for both X and Y, since the straight line must pass through (0, 0). This gives −y = x/y', which is simple to solve: −y dy = x dx ⇒ x^2 + y^2 = C (circles centered on the origin — we knew that!).

3. A family of curves covering the whole x-y plane enables one to draw lines perpendicular to these curves. The collection of all such lines is yet another family of curves, orthogonal (i.e. perpendicular) to the original family. If we can find the differential equation y' = f(x, y) having the original family of curves as its solution, we can find the corresponding orthogonal family by solving y' = −1/f(x, y). The next set of examples relates to this.

(a) The original family is described by x^2 + (y − C)^2 = C^2 with C arbitrary (i.e. the collection of circles tangent to the x-axis at the origin). To find the corresponding differential equation, we differentiate the original equation with respect to x: 2x + 2(y − C) y' = 0, solve for y' = x/(C − y), and then eliminate C by solving the original equation for C, thus: x^2 + y^2 − 2Cy = 0 ⇒ C = (x^2 + y^2)/(2y), further implying y' = x/((x^2 + y^2)/(2y) − y) = 2xy/(x^2 − y^2). To find the orthogonal family, we solve y' = (y^2 − x^2)/(2xy) [a scale-independent equation solved earlier]. The answer is (x − C)^2 + y^2 = C^2, i.e. the collection of circles tangent to the y-axis at the origin.

(b) Let the original family be circles centered on the origin (it should be clear what the orthogonal family is, but again, let's solve it anyhow): x^2 + y^2 = C^2 describes the original family, 2x + 2y y' = 0 is the corresponding differential equation (equivalent to y' = −x/y; this time there is no C to eliminate). The orthogonal family is the solution to y' = y/x ⇒ dy/y = dx/x ⇒ y = Cx (all straight lines passing through the origin).

(c) Let the original family be described by y^2 = x + C (the y^2 = x parabola slid horizontally). The corresponding differential equation is 2y y' = 1, and the 'orthogonal' equation: y' = −2y.
Answer: ln |y| = −2x + C̃, or y = C e^{−2x} (try to visualize the curves).

(d) Finally, let us start with y = Cx^2 (all parabolas tangent to the x-axis at the origin). Differentiating: y' = 2Cx ⇒ [since C = y/x^2] y' = 2y/x. The 'orthogonal' equation is y' = −x/(2y) ⇒ y^2 + x^2/2 = C [a collection of ellipses centered on the origin, with the x-diameter √2 times bigger than the y-diameter].

4. The position of four ships on the ocean is such that the ships form the vertices of a square of side L. At the same instant each ship fires a missile that directs its motion towards the missile on its right. Assuming that the four missiles fly horizontally and with the same constant speed, find the path of each.
Solution: Let us place the origin at the center of the original square. It should be obvious that when we find one of the four paths, the other three can be obtained just by rotating it by 90, 180 and 270 degrees. This is actually true for the missiles' positions at any instant of time. Thus, if a missile is at (x, y), the one to its right is at (y, −x) [(x, y) rotated by 90°]. If y(x) is the resulting path of the first missile, Y − y = y' · (X − x) is the straight line of its immediate direction. This straight line must pass through (y, −x) [that's where the other missile is, at the moment]. This means that, when we substitute y and −x for X and Y, respectively, the equation must hold: −x − y = y' · (y − x). And this is the differential equation to solve (as scale-independent): x u' + u (= y' = (x + y)/(x − y)) = (1 + u)/(1 − u) ⇒ x u' = (1 + u^2)/(1 − u) ⇒ (1 − u)/(1 + u^2) du = dx/x ⇒ arctan(u) − (1/2) ln(1 + u^2) = ln |x| + C̃ ⇒ e^{arctan(u)}/√(1 + u^2) = Cx. This solution becomes a lot easier to understand in polar coordinates [θ = arctan(y/x) and r = √(x^2 + y^2)], where it looks like this: r = e^θ/C (a spiral).

▶ To Physics ◀

If a hole is made at the bottom of a container, water will flow out at the rate of a√h, where a is established based on the size (and to some extent, the shape) of the opening, but to us it is simply a constant, and h is the height of the (remaining) water, which varies in time. Time t is the independent variable. Find h(t) as a function of t for:

1. A cylindrical container of radius r.
Solution: First we have to establish the volume V of the remaining water as a function of height. In this case we get simply V(h(t)) = π r^2 h(t). Differentiating with respect to t we get dV/dt = π r^2 dh/dt. This in turn must be equal to −a√(h(t)), since the rate at which the water is flowing out must be equal to the rate at which its volume is decreasing. Thus π r^2 ḣ = −a√h, where ḣ ≡ dh/dt. This is a simple (separable) differential equation for h(t), which we solve by

dh/√h = −(a/(π r^2)) dt ⇒ √h = −(a t)/(2π r^2) + √h0

[h0 is the initial height at time t = 0], or equivalently t = (2π r^2/a)(√h0 − √h).

Subsidiary: What percentage of time is spent emptying the last 20% of the container? Solution: t1 = (2π r^2/a)√h0 is the time to fully empty the container (this follows from our previous solution with h = 0); t0.8 = (2π r^2/a)(√h0 − √(h0/5)) is the time it takes to empty the first 80% of the container. The answer: (t1 − t0.8)/t1 = 1/√5 = 44.72%.
2. A conical container with the top radius (at h0) equal to r.
Solution: V(h(t)) = (1/3) π h (r h/h0)^2 [follows from ∫_0^h π (r x/h0)^2 dx]. Note that one fifth of the full volume corresponds to h = (1/5)^{1/3} h0 (i.e. 58.48% of the full height!), obtained by solving (1/3) π (r h/h0)^2 h = (1/3) π r^2 h0/5 for h. Thus

(π r^2/h0^2) h^2 ḣ = −a√h

is the (separable) equation to solve, as follows: h^{3/2} dh = −(a h0^2/(π r^2)) dt ⇒ h^{5/2} = −(5a h0^2/(2π r^2)) t + h0^{5/2} ⇔ t = (2π r^2/(5a h0^2))(h0^{5/2} − h^{5/2}). This implies t1 = 2π r^2 √h0/(5a) and t0.8 = (2π r^2 √h0/(5a)) · [1 − (1/5)^{5/6}] ⇒ (t1 − t0.8)/t1 = (1/5)^{5/6} = 26.15%.

3. A hemisphere of radius R (this is the radius of its top rim, also equal to the water's full height).
Solution: V(h(t)) = (1/3) π h^2 (3R − h) [follows from ∫_0^h π [R^2 − (R − x)^2] dx]. Making the right hand side equal to (2/3) π R^3/5 and solving for h gives the height of the 20% (remaining) volume. This amounts to solving z^3 − 3z^2 + 2/5 = 0 (a cubic equation) for z ≡ h/R. We will discuss formulas for solving cubic and quartic equations in the next chapter; for the time being we extract the desired root by Newton's technique:

z_{n+1} = z_n − (z_n^3 − 3z_n^2 + 2/5)/(3z_n^2 − 6z_n)

starting with, say, z0 = 0.3 ⇒ z1 = 0.402614 ⇒ z2 = 0.391713 ⇒ z3 = z4 = ... = 0.391600 ⇒ h = 0.391600 R. Finish as your assignment.
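The same Newton iteration from Chapter 1, coded for this cubic (plain Python; six iterations are more than enough here):

    z = 0.3                    # starting guess, as above
    for _ in range(6):
        z -= (z**3 - 3 * z**2 + 0.4) / (3 * z**2 - 6 * z)
    print(z)                   # 0.391600..., so h = 0.391600 R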

Chapter 3 SECOND-ORDER DIFFERENTIAL EQUATIONS

These are substantially more difficult to solve than first-order ODEs; we will thus concentrate mainly on the simplest case of linear equations with constant coefficients. Only in the following introductory section do we look at two special non-linear cases:

Reducible to first order

If, in a second-order equation,

▶ y is missing ◀

(does not appear explicitly; only x, y' and y'' do), then y' ≡ z(x) can be considered the unknown function of the equation. In terms of z(x), the equation is of the first order only, and can be solved as such. Once we have the explicit expression for z(x), we need to integrate it with respect to x to get y(x).

The final solution will thus have two arbitrary constants, say C1 and C2 — this is the case for all second-order equations in general. With two arbitrary constants we need two conditions to single out a unique solution. These are usually of two distinct types:

1. initial conditions: y(x0) = a and y'(x0) = b [x0 is quite often 0], specifying a value and a slope of the function at a single point, or

2. boundary conditions: y(x1) = a and y(x2) = b, specifying a value each at two distinct points.

EXAMPLES:

1. y'' = y' ⇒ z' = z [separable] ⇒ dz/z = dx ⇒ ln |z| = x + C̃1 ⇒ z = C1 e^x ⇒ y = C1 e^x + C2. Let's impose the following initial conditions: y(0) = 0 and y'(0) = 1. By substituting into the general solution we get C1 + C2 = 0 and C1 = 1 ⇒ C2 = −1 ⇒ y = e^x − 1 as the final answer.

2. x y'' + y' = 0 ⇒ x z' + z = 0 [separable] ⇒ dz/z = −dx/x ⇒ ln |z| = −ln |x| + C̃1 ⇒ z = C1/x ⇒ y = C1 ln |x| + C2. Let us make this into a boundary-value problem: y(1) = 1 and y(3) = 0 ⇒ C2 = 1 and C1 ln 3 + C2 = 0 ⇒ C1 = −1/ln 3 ⇒ y = 1 − ln |x|/ln 3.

3. x y'' + 2y' = 0 ⇒ x z' + 2z = 0 [still separable] ⇒ dz/z = −2 dx/x ⇒ z = C̃1/x^2 ⇒ y = C1/x + C2. Sometimes the two extra conditions can be of a more bizarre type: y(2) = 1/2, together with the requirement that the solution intersect the y = x straight line at the right angle. Translated into our notation: y'(x0) = −1, where x0 is a solution to y(x) = x, i.e. −C1/x0^2 = −1 with C1/x0 + C2 = x0. Adding the original C1/2 + C2 = 1/2, we can solve for C2 = 0, C1 = 1 and x0 = 1 (that is where our solution intersects y = x). The final answer: y(x) = 1/x. ■
The second type (of a second-order equation reducible to first order) has

▶ x missing ◀

(not appearing explicitly) [such as y · y'' + (y')^2 = 0]. We again introduce z ≡ y' as a new dependent variable, but this time we see it as a function of y, which becomes the independent variable of the new equation! Furthermore, since y'' = dz/dx = (dz/dy) · (dy/dx) [chain rule], we replace y'' in the original equation by (dz/dy) · z. We solve the resulting first-order equation for z (as a function of y), replace z by y' [thus creating another first-order equation, this time for y(x)], and solve again.

EXAMPLES:

1. y · y'' + (y')^2 = 0 ⇒ y (dz/dy) z + z^2 = 0 [separable] ⇒ dz/z = −dy/y ⇒ ln |z| = −ln |y| + C̃1 ⇒ z = Ĉ1/y ⇒ y' = Ĉ1/y [separable again] ⇒ y dy = Ĉ1 dx ⇒ y^2 = C1 x + C2.

2. y'' + e^{2y} (y')^3 = 0 ⇒ (dz/dy) z + e^{2y} z^3 = 0 ⇒ dz/z^2 = −e^{2y} dy ⇒ −1/z = −(1/2) e^{2y} − C1 ⇒ z = 1/(C1 + (1/2) e^{2y}) ⇒ (C1 + (1/2) e^{2y}) dy = dx ⇒ C1 y + (1/4) e^{2y} = x + C2.

3. y'' + (1 + 1/y)(y')^2 = 0 ⇒ (dz/dy) z + (1 + 1/y) z^2 = 0 ⇒ dz/z = −(1 + 1/y) dy ⇒ ln |z| = −ln |y| − y + C̃1 ⇒ z = C1 e^{−y}/y ⇒ y e^y dy = C1 dx ⇒ (y − 1) e^y = C1 x + C2. ■

Linear equation

The most general form is

y'' + f(x) y' + g(x) y = r(x)        (*)

where f, g and r are specific functions of x. When r ≡ 0, the equation is called homogeneous.

There is no general technique for solving this equation, but some results relating to it are worth quoting:

1. The general solution must look like this: y = C1 y1 + C2 y2 + y_p, where y1 and y2 are linearly independent 'basic' solutions (if we only knew how to find them!) of the corresponding homogeneous equation, and y_p is any particular solution to the full equation. None of these are unique (e.g. y1 + y2 and y1 − y2 is yet another basic set, etc.).

2. When one basic solution (say y1) of the homogeneous version of the equation is known, the other can be found by a technique called variation of parameters (V of P): assume that the solution has the form c(x) y1(x), substitute this trial solution into the equation, and get a first-order differential equation for c' ≡ z.

EXAMPLES:

1. y'' − 4x y' + (4x^2 − 2) y = 0, given that y1 = exp(x^2) is a solution (verify!). Substituting y_T(x) = c(x) · exp(x^2) back into the equation [remember: y_T' = c' y1 + {c y1'} and y_T'' = c'' y1 + 2c' y1' + {c y1''}; also remember that the c-proportional terms, i.e. {...} and (4x^2 − 2) c y1, must cancel out] yields c'' exp(x^2) + 4x c' exp(x^2) − 4x c' exp(x^2) = 0 ⇒ c'' = 0. With z ≡ c', the resulting equation is always of the first order: z' = 0 ⇒ z = C1 ⇒ c(x) = C1 x + C2. Substituting back into y_T results in the general solution of the original homogeneous equation: y = C1 x exp(x^2) + C2 exp(x^2). We can thus identify x exp(x^2) as our y2.

2. y'' + (2/x) y' + y = 0, given that y1 = sin x / x is a solution (verify!). y_T = c(x) sin x / x substituted: c'' (sin x/x) + 2c' (cos x/x − sin x/x^2) + (2/x) c' (sin x/x) = 0 ⇒ c'' sin x + 2c' cos x = 0 ⇒ z' sin x + 2z cos x = 0 ⇒ dz/z = −2 cos x dx/sin x ⇒ ln |z| = −2 ln |sin x| + C̃1 ⇒ z = −C1/sin^2 x ⇒ c(x) = C1 cos x/sin x + C2 ⇒ y = C1 cos x/x + C2 sin x/x [y2 equals cos x/x]. ■

When both basic solutions of the homogeneous version are known, a particular solution to the full non-homogeneous equation can be found by using an extended V-of-P idea. This time we have two unknown 'parameters' called u(x) and v(x) [in the previous case there was only one, called c(x)]. Note that, in this context, the 'parameters' are actually functions of x.

We now derive general formulas for u(x) and v(x):

The objective is to solve Eq. (*) assuming that y1 and y2 (basic solutions of the homogeneous version) are known. We then need to find y_p only, which we take to have the trial form u(x) · y1 + v(x) · y2, with u(x) and v(x) yet to be found (the variable parameters). Substituting this into the full equation [note that terms proportional to u(x), and those proportional to v(x), cancel out], we get:

u'' y1 + 2u' y1' + v'' y2 + 2v' y2' + f(x)(u' y1 + v' y2) = r(x)        (V of P)

This is a single differential equation for two unknown functions (u and v), which means we are free to impose yet another arbitrary constraint on u and v. This is chosen to simplify the previous equation, thus:

u' y1 + v' y2 = 0

which further implies (after one differentiation) that u'' y1 + u' y1' + v'' y2 + v' y2' = 0. The original (V of P) equation therefore simplifies to

u' y1' + v' y2' = r(x)



The last two (centered) equations can be solved (algebraically, using Cramer's rule) for

u' = | 0  y2 ; r  y2' | / | y1  y2 ; y1'  y2' |

and

v' = | y1  0 ; y1'  r | / | y1  y2 ; y1'  y2' |

(each |...| denoting a 2 × 2 determinant, with rows separated by semicolons), where the denominator is called the Wronskian of the two basic solutions (these are linearly independent iff their Wronskian is nonzero; one can use this as a check — useful later when dealing with more than two basic solutions). From the last two expressions one can easily find u and v by an extra integration (the right hand sides are known functions of x).
EXAMPLE: y'' − 4x y' + (4x^2 − 2) y = 4x^4 − 3. We have already solved the homogeneous version, getting y1 = exp(x^2) and y2 = x exp(x^2); their Wronskian equals exp(2x^2). Using the previous two formulas we get

u' = | 0  x exp(x^2) ; 4x^4 − 3  (1 + 2x^2) exp(x^2) | / exp(2x^2) = (3 − 4x^4) x exp(−x^2) ⇒ u(x) = (5/2 + 4x^2 + 2x^4) exp(−x^2) + C1

and

v' = | exp(x^2)  0 ; 2x exp(x^2)  4x^4 − 3 | / exp(2x^2) = (4x^4 − 3) exp(−x^2) ⇒ v(x) = −(3 + 2x^2) x exp(−x^2) + C2

[the last integration is a bit more tricky, but the result checks]. Simplifying u y1 + v y2 yields (5/2 + x^2) + C1 exp(x^2) + C2 x exp(x^2), which identifies 5/2 + x^2 as a particular solution of the full equation (this can be verified easily). ■
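The two quotients can be generated symbolically; a sketch for this example, assuming sympy is available (the Wronskian is computed directly from its definition):

    import sympy as sp

    x = sp.symbols('x')
    y1, y2 = sp.exp(x**2), x * sp.exp(x**2)
    r = 4 * x**4 - 3
    W = sp.simplify(y1 * y2.diff(x) - y1.diff(x) * y2)   # Wronskian
    u_prime = sp.simplify(-y2 * r / W)                   # Cramer's rule for u'
    v_prime = sp.simplify(y1 * r / W)                    # and for v'
    print(W, u_prime, v_prime)
    # exp(2*x**2), x*(3 - 4*x**4)*exp(-x**2), (4*x**4 - 3)*exp(-x**2)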
Given three specific functions y1, y2 and y_p, it is possible to construct a differential equation of type (*) which has C1 y1 + C2 y2 + y_p as its general solution (that's how I set up exam questions).

EXAMPLE: Knowing that y = C1 x^2 + C2 ln x + 1/x, we first substitute x^2 and ln x for y in y'' + f(x) y' + g(x) y = 0 [the homogeneous version] to get:

2 + 2x · f + x^2 · g = 0
−1/x^2 + (1/x) · f + ln x · g = 0

and solve, algebraically, for

[f; g] = | 2x  x^2 ; 1/x  ln x |^{−1} [−2; 1/x^2] ⇒ f = −(2 ln x + 1)/(x(2 ln x − 1)) and g = 4/(x^2 (2 ln x − 1))

The left hand side of the equation is therefore y'' − (2 ln x + 1)/(x(2 ln x − 1)) y' + 4/(x^2 (2 ln x − 1)) y [one could multiply the whole equation by x^2 (2 ln x − 1) to simplify the answer]. To ensure that 1/x is a particular solution, we substitute it into the left hand side of the last equation (for y), yielding r(x) [= 3(2 ln x + 1)/(x^3 (2 ln x − 1)) in our case]. The final answer is thus:

x^2 (2 ln x − 1) y'' − x(2 ln x + 1) y' + 4y = (3/x)(2 ln x + 1)

With constant coefficients

From now on we will assume that the two 'coefficients' f(x) and g(x) are x-independent constants, and call them a and b (to differentiate between the two cases). The equation we want to solve is then

y'' + a y' + b y = r(x)

with a and b being two specific numbers. We will start with the

▶ Homogeneous Case ◀ [r(x) ≡ 0]

All we have to do is find two linearly independent basic solutions y1 and y2, and then combine them in the C1 y1 + C2 y2 manner (as we already know from the general case). To achieve this, we try a solution of the following form:

y_T = e^{λx}

where λ is a number whose value is yet to be determined. Substituting this into y'' + a y' + b y = 0 and dividing by e^{λx} results in

λ^2 + aλ + b = 0

which is the so-called characteristic polynomial for λ.

When this (quadratic) equation has two real roots, the problem is solved (we have gotten our two basic solutions). What do we do when the two roots are complex, or when only a single root exists? Let us look at these possibilities, one by one.

1. Two (distinct) real roots.


EXAMPLE: y'' + y' − 2y = 0 ⇒ λ^2 + λ − 2 = 0 [characteristic polynomial]
⇒ λ1,2 = −1/2 ± √(1/4 + 2) = −1/2 ± 3/2 = 1 and −2. This implies y1 = e^x and
y2 = e^(−2x), which means that the general solution is y = C1 e^x + C2 e^(−2x).

2. Two complex conjugate roots λ1,2 = p ± i q.


This implies that ỹ1 = e^(px)[cos(qx) + i sin(qx)] and ỹ2 = e^(px)[cos(qx) − i sin(qx)]
[remember that e^(iA) = cos A + i sin A]. But at this point we are interested
in real solutions only, and these are complex. But we can take the following
linear combination of the above functions: y1 ≡ (ỹ1 + ỹ2)/2 = e^(px) cos(qx) and
y2 ≡ (ỹ1 − ỹ2)/(2i) = e^(px) sin(qx), and have a new, equivalent, basis set. The new
functions are both real, thus the general solution can be written as

y = e^(px) [C1 cos(qx) + C2 sin(qx)]

One can easily verify that both y1 and y2 do (individually) meet the original
equation.

EXAMPLE: y'' − 2y' + 10y = 0 ⇒ λ1,2 = 1 ± √(1 − 10) = 1 ± 3i. Thus y =
e^x [C1 cos(3x) + C2 sin(3x)] is the general solution.

3. One (double) real root.


This can happen only when the original equation has the form of: y'' + ay' +
(a^2/4) y = 0 (i.e. b = a^2/4). Solving for λ, one gets: λ1,2 = −a/2 ± 0 (double root).
This gives us only one basic solution, namely y1 = e^(−ax/2); we can find the
other by the V-of-P technique. Let us substitute the following trial solution
c(x)·e^(−ax/2) into the equation getting (after we divide by e^(−ax/2)): c'' − ac' +
ac' = 0 ⇒ c'' = 0 ⇒ c(x) = C1 x + C2. The trial solution thus becomes:
C1 x e^(−ax/2) + C2 e^(−ax/2), which clearly identifies

y2 = x e^(−ax/2)

as the second basic solution.


Remember: For duplicate roots, the second solution can be obtained by mul-
tiplying the first basic solution by x.
EXAMPLE: y 00 + 8y 0 + 16y = 0 ⇒ λ1,2 = −4 (both). The general solution is
thus y = e−4x (C1 + C2 x). Let’s try finishing this as an initial-value problem
[lest we forget]: y(0) = 1, y 0 (0) = −3. This implies C1 = 1 and −4C1 + C2 =
−3 ⇒ C2 = 1. The final answer: y = (1 + x)e−4x . ¥

For a second-order equation, these three possibilities cover the whole story.
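Taken together, the three cases are mechanical enough to automate. Here is an
illustrative Python sketch (ours, not part of the original notes) that classifies
the roots of λ^2 + aλ + b = 0 and prints the corresponding general solution:

    # Sketch: the three root cases of y'' + a y' + b y = 0.
    import cmath

    def general_solution(a, b):
        disc = a*a - 4*b
        l1 = (-a + cmath.sqrt(disc)) / 2
        l2 = (-a - cmath.sqrt(disc)) / 2
        if disc > 0:                       # 1. two distinct real roots
            return f"C1*exp({l1.real}x) + C2*exp({l2.real}x)"
        if disc == 0:                      # 3. one double real root
            return f"(C1 + C2*x)*exp({l1.real}x)"
        p, q = l1.real, l1.imag            # 2. complex pair p +- q*i
        return f"exp({p}x)*(C1*cos({q}x) + C2*sin({q}x))"

    print(general_solution(1, -2))   # roots 1 and -2
    print(general_solution(8, 16))   # double root -4
    print(general_solution(-2, 10))  # roots 1 +- 3i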

INon-homogeneous CaseJ

When any such equation has a nonzero right hand side r(x), there are two
possible ways of building a particular solution yp :

B Using the variation-of-parameters formulas derived earlier for the general


case.

EXAMPLES:
1. y'' + y = tan x ⇒ λ^2 + 1 = 0 ⇒ λ1,2 = ±i ⇒ sin x and cos x being the
two basic solutions of the homogeneous version. The old formulas give:

u' = |0, cos x; tan x, −sin x| / |sin x, cos x; cos x, −sin x| = sin x ⇒ u(x) = −cos x + C1

and

v' = |sin x, 0; cos x, tan x| / |sin x, cos x; cos x, −sin x| = −sin^2 x/cos x
   = cos x − 1/cos x ⇒ v(x) = sin x − ln((1 + sin x)/cos x) + C2.

The final solution is thus y = {−cos x sin x + sin x cos x} − cos x · ln((1 + sin x)/cos x) +
C1 sin x + C2 cos x [the terms inside the curly brackets cancelling out, which happens
frequently in these cases].
2. y'' − 4y' + 4y = e^(2x)/x. Since λ1,2 = 2 ± 0 [double root], the basic solutions
are e^(2x) and x e^(2x).

u' = |0, x e^(2x); e^(2x)/x, (1 + 2x) e^(2x)| / |e^(2x), x e^(2x); 2e^(2x), (1 + 2x) e^(2x)| = −1
⇒ u(x) = −x + C1 and

v' = |e^(2x), 0; 2e^(2x), e^(2x)/x| / |e^(2x), x e^(2x); 2e^(2x), (1 + 2x) e^(2x)| = 1/x
⇒ v(x) = ln x + C2.

Answer: y = C1 e^(2x) + C̃2 x e^(2x) − x e^(2x) + ln x · x e^(2x) = e^(2x)(C1 + C2 x + x ln x). ¥

B Special cases of r(x)

• r(x) is a polynomial in x:

Use a polynomial of the same degree but with undetermined coefficients


as a trial solution for yp .

EXAMPLE: y'' + 2y' − 3y = x, λ1,2 = 1 and −3 ⇒ y1 = e^x and y2 = e^(−3x).
yp = Ax + B, where A and B are found by substituting this yp into the full
equation and getting: 2A − 3Ax − 3B = x ⇒ A = −1/3 and B = −2/9.
Answer: y = C1 e^x + C2 e^(−3x) − x/3 − 2/9.

Exceptional case: When λ = 0, this will not work unless the trial solution yp is
further multiplied by x (when λ = 0 is a multiple root, x has to be raised to the
multiplicity of λ).

EXAMPLE: y'' − 2y' = x^2 + 1. λ1,2 = 0 and 2 ⇒ y1 = 1 and y2 = e^(2x).
yp = Ax^3 + Bx^2 + Cx where A, B and C are found by substituting: 6Ax +
2B − 2(3Ax^2 + 2Bx + C) = x^2 + 1 ⇒ A = −1/6, B = −1/4 and C = −3/4.
Answer: y = C1 + C2 e^(2x) − x^3/6 − x^2/4 − 3x/4. ¥

• r(x) ≡ k e^(αx) (an exponential term):

The trial solution is yp = A e^(αx) [with only A to be found; this is the undeter-
mined coefficient of this case].

EXAMPLE: y'' + 2y' + 3y = 3e^(−2x) ⇒ λ1,2 = −1 ± √2 i, yp = Ae^(−2x) substituted
gives: A(4 − 4 + 3)e^(−2x) = 3e^(−2x) ⇒ A = 1.
Answer: y = e^(−x)[C1 sin(√2 x) + C2 cos(√2 x)] + e^(−2x).

Exceptional case: When α = λ [any of the roots], the trial solution must be first
multiplied by x (to the power of the multiplicity of this λ).

EXAMPLE: y'' + y' − 2y = 3e^(−2x) ⇒ λ1,2 = 1 and −2 [same as α]! yp = Axe^(−2x),
substituted: A(4x − 4) + A(1 − 2x) − 2Ax = 3 ⇒ A = −1 [this follows from
the absolute term, the x-proportional terms cancel out, as they must].
Answer: y = C1 e^x + (C2 − x)e^(−2x).

• r(x) = ks e^(px) sin(qx) [or kc e^(px) cos(qx), or a combination (sum) of both]:

The trial solution is [A sin(qx) + B cos(qx)] e^(px).

EXAMPLE: y'' + y' − 2y = 2e^(−x) sin(4x) ⇒ λ1,2 = 1 and −2 [as before]. yp =
[A sin(4x) + B cos(4x)] e^(−x), substituted into the equation: −18[A sin(4x) +
B cos(4x)] − 4[A cos(4x) − B sin(4x)] = 2 sin(4x) ⇒ −18A + 4B = 2 and
−4A − 18B = 0 ⇒ [A; B] = [−18, 4; −4, −18]^(−1) [2; 0] = [−9/85; 2/85] ⇒ y =
C1 e^x + C2 e^(−2x) + ((2/85) cos(4x) − (9/85) sin(4x)) e^(−x).

Exceptional case: When λ = p + i q [both the real and purely imaginary parts
must agree], the trial solution acquires the standard factor of x.

’Special-case’ summary:

We would like to mention that all these special cases can be covered by one and
the same rule: When r(x) = Pn(x) e^(βx), where Pn(x) is an n-degree polynomial in
x, the trial solution is Qn(x) e^(βx), where Qn(x) is also an n-degree polynomial, but
with 'undetermined' (i.e. yet to be found) coefficients.
And the same exception: When β coincides with a root of the characteristic
polynomial (of multiplicity ℓ), the trial solution must be further multiplied by x^ℓ.
If we allowed complex solutions, these rules would have covered it all. Since we
don’t, we have to spell it out differently for β = p + i q:
When r(x) = [Ps (x) sin(qx) + Pc (x) cos(qx)]epx where Ps,c are two polynomials
of degree not higher than n [i.e. n is the higher of the two; also: one P may be
identically equal to zero], the trial solution is: [Qs (x) sin(qx) + Qc (x) cos(qx)]epx
with both Qs,c being polynomials of degree n [no compromise here — they both
have to be there, with the full degree, even if one P is missing].
Exception: If p + i q coincides with one of the λs, the trial solution must be
further multiplied by x raised to the λ’s multiplicity [note that the conjugate root
p − i q will have the same multiplicity; use the multiplicity of one of these — don’t
double it]. ¤
Finally, if the right hand side is a linear combination (sum) of such terms, we
use the superposition principle to construct the overall yp . This means we find
yp individually for each of the distinct terms of r(x), then add them together to
build the final solution.

EXAMPLE: y'' + 2y' − 3y = x + e^(−x) ⇒ λ1,2 = 1, −3. We break the right hand
side into r1 ≡ x and r2 ≡ e^(−x), construct y1p = Ax + B ⇒ [when substituted
into the equation with only x on the right hand side] 2A − 3Ax − 3B = x ⇒
A = −1/3 and B = −2/9, and then y2p = Ce^(−x) [substituted into the equation
with r2 only]: C − 2C − 3C = 1 ⇒ C = −1/4.
Answer: y = C1 e^x + C2 e^(−3x) − x/3 − 2/9 − e^(−x)/4.

Cauchy equation
looks like this:
(x − x0)^2 y'' + a(x − x0) y' + by = r(x)

where a, b and x0 are specific constants (x0 is usually equal to 0, e.g. x^2 y'' + 2xy' −
3y = x^5).
There are two ways of solving it:

IConvertingJ

it to the previous case of a constant-coefficient equation (convenient when r(x) is
a polynomial in either x or ln x — try to figure out why). This conversion is achieved
by introducing a new independent variable t = ln(x − x0). We have already derived
the following set of formulas for performing such a conversion: y' = (dy/dt)·(dt/dx) =
ẏ/(x − x0) and y'' = (d^2y/dt^2)·(dt/dx)^2 + (dy/dt)·(d^2t/dx^2) = ÿ/(x − x0)^2 − ẏ/(x − x0)^2.
The original Cauchy equation thus becomes

ÿ + (a − 1)ẏ + by = r(x0 + e^t)

which we solve by the old technique.
EXAMPLE: x^2 y'' − 4xy' + 6y = 42/x^4 ⇒ ÿ − 5ẏ + 6y = 42e^(−4t) ⇒ λ1,2 = 2, 3 and
yp = Ae^(−4t) substituted: 16A + 20A + 6A = 42 ⇒ A = 1.
Answer: y = C1 e^(3t) + C2 e^(2t) + e^(−4t) = C1 x^3 + C2 x^2 + 1/x^4 [since x = e^t].

Problems for extra practice

(solve by undetermined coefficients — not V of P):

1. y'' − y'/x − 3y/x^2 = ln x + 1 ⇒ [must be multiplied by x^2 first] ÿ − 2ẏ − 3y =
te^(2t) + e^(2t) ⇒ ...
Answer: y = C1 x^3 + C2/x − (5/9 + (ln x)/3) x^2.

2. x^2 y'' − 2xy' + 2y = 4x + sin(ln x) ⇒ ÿ − 3ẏ + 2y = 4e^t + sin t ⇒ ...
Answer: y = C1 x + C2 x^2 − 4x ln x + (1/10) sin(ln x) + (3/10) cos(ln x).

Warning: To use the undetermined-coefficients technique (via the t transfor-
mation) the equation must have (or must be brought to) the form of: x^2 y'' + axy' +
... ; to use the V-of-P technique, the equation must have the y'' + (a/x) y' + ... form.

IDirectlyJ

(This is more convenient when r(x) is either equal to zero, or does not have
the special form mentioned above). We substitute a trial solution (x − x0 )m , with
m yet to be determined, into the homogeneous Cauchy equation, and divide by
(x − x0 )m . This results in:

m2 + (a − 1)m + b = 0

a characteristic polynomial for m. With two distinct real roots, we get our
two basic solutions right away; with a duplicate root, we need an extra factor of
ln(x − x0 ) to construct the second basic solution; with two complex roots, we must
go back to the ’conversion’ technique.

EXAMPLES:
1. x^2 y'' + xy' − y = 0 ⇒ m^2 − 1 = 0 ⇒ m1,2 = ±1
Answer: y = C1 x + C2/x.

2. x^2 y'' + 3xy' + y = 0 ⇒ m^2 + 2m + 1 = 0 ⇒ m1,2 = −1 (duplicate) ⇒
y = C1/x + (C2/x) ln x.

3. 3(2x − 5)^2 y'' − (2x − 5)y' + 2y = 0. First we have to rewrite it in the standard
form of: (x − 5/2)^2 y'' − (1/6)(x − 5/2)y' + (1/6)y = 0 ⇒ m^2 − (7/6)m + 1/6 = 0 ⇒
m1,2 = 1/6, 1 ⇒ y = C̃1 (x − 5/2) + C̃2 (x − 5/2)^(1/6) = C1 (2x − 5) + C2 (2x − 5)^(1/6).

4. x2 y 00 − 4xy 0 + 4y = 0 with y(1) = 4 and y 0 (1) = 13. First m2 − 5m + 4 = 0 ⇒


m1,2 = 1, 4 ⇒ y = C1 x + C2 x4 . Then C1 + C2 = 4 and C1 + 4C2 = 13 give
C2 = 3 and C1 = 1.
Answer: y = x + 3x4 .

5. x^2 y'' − 4xy' + 6y = x^4 sin x. To solve the homogeneous version: m^2 − 5m + 6 =
0 ⇒ m1,2 = 2, 3 ⇒ y1 = x^2 and y2 = x^3. To use the V-of-P formulas the
equation must be first rewritten in the 'standard' form of y'' − (4/x) y' + (6/x^2) y =
x^2 sin x ⇒

u' = |0, x^3; x^2 sin x, 3x^2| / |x^2, x^3; 2x, 3x^2| = −x sin x ⇒ u(x) = x cos x − sin x + C1

and

v' = |x^2, 0; 2x, x^2 sin x| / |x^2, x^3; 2x, 3x^2| = sin x ⇒ v(x) = −cos x + C2.

Solution: y = (C1 − sin x) x^2 + C2 x^3 [the rest cancelled out — common occur-
rence when using this technique].
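The 'direct' recipe above is mechanical enough to code. The following Python
sketch (illustrative only; the function name is ours) solves a homogeneous Cauchy
equation x^2 y'' + a x y' + b y = 0 by classifying the roots of m^2 + (a − 1)m + b = 0:

    # Sketch: basic solutions of x^2 y'' + a x y' + b y = 0.
    import cmath

    def cauchy_basis(a, b):
        disc = (a - 1)**2 - 4*b
        m1 = ((1 - a) + cmath.sqrt(disc)) / 2
        m2 = ((1 - a) - cmath.sqrt(disc)) / 2
        if disc > 0:                      # two distinct real roots
            return f"x^{m1.real}", f"x^{m2.real}"
        if disc == 0:                     # duplicate root: extra factor of ln(x)
            return f"x^{m1.real}", f"x^{m1.real} * ln(x)"
        p, q = m1.real, m1.imag           # complex pair: back to t = ln(x)
        return f"x^{p} * cos({q} ln x)", f"x^{p} * sin({q} ln x)"

    print(cauchy_basis(1, -1))   # Example 1 above: x and 1/x
    print(cauchy_basis(3, 1))    # Example 2 above: 1/x and ln(x)/x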

Chapter 4 THIRD AND


HIGHER-ORDER LINEAR ODES
First we extend the general linear-equation results to higher orders. Explicitly,
we mention the third order only, but the extension to higher orders is quite obvious.
A third order linear equation

y''' + f(x)y'' + g(x)y' + h(x)y = r(x)

has the following general solution: y = C1 y1 + C2 y2 + C3 y3 + yp , where y1 , y2 and


y3 are three basic (linearly independent) solutions of the homogeneous version.
There is no general analytical technique for finding them. Should these be known
(obtained by whatever other means), we can construct a particular solution (to
the full equation) yp by the variation of parameters (this time we skip the details),
getting:

u' = |0, y2, y3; 0, y2', y3'; r, y2'', y3''| / |y1, y2, y3; y1', y2', y3'; y1'', y2'', y3''|

with a similar formula for v' and for w' (we need three of them, one for each basic
solution). The pattern of these formulas should be obvious: there is the Wronskian
in the denominator, and the same matrix with one of its columns (the first for u',
the second for v', and the last for w') replaced by [0; 0; r] in the numerator.
The corresponding constant-coefficient equation can be solved easily by con-
structing its characteristic polynomial and finding its roots, in a manner which is
a trivial extension of the second-degree case. The main difficulty here is finding
roots of higher-degree polynomials. We will take up this issue first.

Polynomial roots
B We start with a general cubic polynomial (we will call its variable x rather
than λ)
x3 + a2 x2 + a1 x + a0 = 0
such as, for example x3 −2x2 −x+2 = 0. Finding its three roots takes the following
steps:

1. Compute Q = (3a1 − a2^2)/9 [= −7/9] and R = (9a1a2 − 27a0 − 2a2^3)/54 [= −10/27].
2. Now consider two cases:

(a) Q^3 + R^2 ≥ 0.
Compute s = (R + √(Q^3 + R^2))^(1/3) and t = (R − √(Q^3 + R^2))^(1/3) [each being
a real but possibly negative number]. The original equation has one
real root given by s + t − a2/3, and two complex roots given by −(s + t)/2 −
a2/3 ± (√3/2)(s − t) i.

(b) Q^3 + R^2 < 0 [our case: = −1/3].
Compute θ = arccos(R/√(−Q^3)) [Q must be negative]. The equation has
three real roots given by 2√(−Q) cos((θ + 2πk)/3) − a2/3, where k = 0,
1 and 2. [In our case θ = arccos(−(10/27)√(729/343)) ⇒ 2√(7/9) cos(θ/3) + 2/3 = 2,
2√(7/9) cos((θ + 2π)/3) + 2/3 = −1 and 2√(7/9) cos((θ + 4π)/3) + 2/3 = 1 are the three roots].

Proof (for the s-t case only): Let x1, x2 and x3 be the three roots of the formula.
Expand (x − x1)(x − x2)(x − x3) = [x + a2/3 − (s + t)][(x + (s + t)/2 + a2/3)^2 +
(3/4)(s − t)^2] = x^3 + a2 x^2 + (a2^2/3 − 3st) x + a2^3/27 − st·a2 − s^3 − t^3. Since
−st = Q and −s^3 − t^3 = −2R, we can see quite easily that a2^2/3 − 3st = a1 and
a2^3/27 − st·a2 − s^3 − t^3 = a0. ¤
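The two-branch recipe translates directly into code. Here is a hedged Python sketch
(ours; tested mentally only against the example x^3 − 2x^2 − x + 2 = 0) mirroring
steps 1 and 2 above:

    # Sketch: roots of x^3 + a2 x^2 + a1 x + a0 = 0 via the Q-R recipe above.
    import math

    def cubic_roots(a2, a1, a0):
        Q = (3*a1 - a2**2) / 9
        R = (9*a1*a2 - 27*a0 - 2*a2**3) / 54
        D = Q**3 + R**2
        if D >= 0:                               # case (a): one real, two complex
            sd = math.sqrt(D)
            s = math.copysign(abs(R + sd)**(1/3), R + sd)   # real cube roots
            t = math.copysign(abs(R - sd)**(1/3), R - sd)
            x1 = s + t - a2/3
            return [x1] + [-(s + t)/2 - a2/3 + sg*(math.sqrt(3)/2)*(s - t)*1j
                           for sg in (1, -1)]
        theta = math.acos(R / math.sqrt(-Q**3))  # case (b): three real roots
        return [2*math.sqrt(-Q)*math.cos((theta + 2*math.pi*k)/3) - a2/3
                for k in range(3)]

    print(cubic_roots(-2, -1, 2))   # approx. [2.0, -1.0, 1.0]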

B And now we tackle the fourth-degree (quartic) polynomial

x^4 + a3 x^3 + a2 x^2 + a1 x + a0 = 0

such as, for example x^4 + 4x^3 + 6x^2 + 4x + 1 = 0.

1. First solve the following cubic

y^3 − a2 y^2 + (a1 a3 − 4a0) y + (4a0 a2 − a1^2 − a0 a3^2) = 0

[ y^3 − 6y^2 + 12y − 8 = 0 in the case of our example — the three roots are all
equal to 2 ].

2. Then solve the following two quadratic equations

z^2 + (1/2)(a3 ± √(a3^2 − 4a2 + 4y1)) z + (1/2)(y1 ∓ √(y1^2 − 4a0)) = 0

(when 2a1 − a3 y1 ≥ 0), or

z^2 + (1/2)(a3 ± √(a3^2 − 4a2 + 4y1)) z + (1/2)(y1 ± √(y1^2 − 4a0)) = 0

(when 2a1 − a3 y1 ≤ 0), where y1 is the largest real root of the previous cubic
[z^2 + 2z + 1 = 0, twice, in our example]. The resulting four roots are those
of the original quartic [−1, −1, −1 and −1].

Proof: We assume that 2a1 − a3 y1 > 0 (the other case would be a carbon copy). By
multiplying the left hand sides of the corresponding two quadratic equations
one gets:

z^4 + a3 z^3 + a2 z^2 + [(a3 y1 + √(a3^2 − 4a2 + 4y1)·√(y1^2 − 4a0))/2] z + a0

It remains to be shown that the linear coefficient is equal to a1. This amounts
to:

2a1 − a3 y1 = √(a3^2 − 4a2 + 4y1) · √(y1^2 − 4a0)

Since each of the two expressions under square root must be non-negative
(see the Extra Proof below), the two sides of the equation have the same
sign. It is thus legitimate to square them, obtaining a cubic equation for y1,
which is identical to the one we solved in (1). ¤
Extra Proof: Since y1 is the (largest real) solution to (a3^2 − 4a2 + 4y1)·(y1^2 −
4a0) = (2a1 − a3 y1)^2, it is clear that both factors on the LHS must have the
same sign. We thus have to prove that either of them is positive. Visualizing
the graph of the cubic polynomial (a3^2 − 4a2 + 4y1)·(y1^2 − 4a0) − (2a1 − a3 y1)^2,
it is obvious that by adding (2a1 − a3 y1)^2 to it, the largest real root can only
decrease. This means that y1 must be bigger than each of the real roots of
(a3^2 − 4a2 + 4y1)·(y1^2 − 4a0) = 0, implying that a2 − a3^2/4 < y1 (which further
implies that 4a0 < y1^2). ¤

B There is no general formula for solving fifth-degree polynomials and beyond


(investigating this led Galois to discover groups); we can still find the roots
numerically to any desired precision (the algorithms are fairly tricky though, and
we will not go into this).

I There are special cases

of higher-degree polynomials which we know how to solve (or at least how to


reduce them to a lower-degree polynomial). Let us mention a handful of these:
1. x^n = a, by finding all (n distinct) complex values of a^(1/n), i.e.
a^(1/n) (cos(2πk/n) + i sin(2πk/n)) when a > 0 and
(−a)^(1/n) (cos((2πk + π)/n) + i sin((2πk + π)/n)) when a < 0, both with k =
0, 1, 2, ...., n − 1.
Examples: (i) 16^(1/4) = 2, −2, 2i and −2i (ii) (−8)^(1/3) = −2 and 1 ± √3 i.

2. When 0 is one of the roots, it’s trivial to find it, with its multiplicity.
Example: x4 + 2x3 − 4x2 = 0 has obviously 0 as a double root. Dividing the
equation by x2 makes it into a quadratic equation which can be easily solved.

3. When coefficients of an equation add up to 0, 1 must be one of the roots.


The left hand side is divisible by (x − 1) [synthetic division], which reduces
its order.
Example: x3 − 2x2 + 3x − 2 = 0 thus leads to (x3 − 2x2 + 3x − 2) ÷ (x − 1) =
x2 − x + 2 [quadratic polynomial].

4. When coefficients of the odd powers of x and coefficients of the even powers
of x add up to the same two answers, then −1 is one of the roots and (x + 1)
can be factored out.
Example: x3 +2x2 +3x+2 = 0 leads to (x3 +2x2 +3x+2)÷(x+1) = x2 +x+2
and a quadratic equation.

5. Any lucky guess of a root always leads to the corresponding reduction in


order (one would usually try 2, −2, 3, −3, etc. as a potential root).

6. One can cut the degree of an equation in half when the equation has even
powers of x only by introducing z = x2 .
Example: x4 − 3x2 − 4 = 0 thus reduces to z 2 − 3z − 4 = 0 which has two
roots z1,2 = −1, 4. The roots of the original equation thus are: x1,2,3,4 =
i, −i, 2, −2.

7. Similarly with odd powers only.


Example: x5 − 3x3 − 4x = 0. Factoring out x [there is an extra root of 0 ],
this becomes the previous case.

8. All powers divisible by 3 (4, 5, etc.). Use the same trick.


Example: x^6 − 3x^3 − 4 = 0. Introduce z = x^3, solve for z1,2 = −1, 4 [same as be-
fore]. Thus x1,2,3 = (−1)^(1/3) = −1, 1/2 ± (√3/2) i and x4,5,6 = 4^(1/3), 4^(1/3)(−1/2 ± (√3/2) i) =
1.5874, −0.7937 + 1.3747 i, −0.7937 − 1.3747 i.

9. When a multiple root is suspected (the question may indicate: ’there is


a triple root’), the following will help: each differentiation of a polynomial
reduces the multiplicity of its every root by one. This means, for example,
that a triple root becomes a single root of the polynomial’s second derivative.
Example: x^4 − 5x^3 + 6x^2 + 4x − 8 = 0, given there is a triple root. Differentiating
twice: 12x^2 − 30x + 12 = 0 ⇒ x1,2 = 1/2, 2. These must be substituted back into
the original equation [only one of these is its triple root, the other is not a root
at all]. Since 2 meets the original equation (1/2 does not), it must be its triple
root, which can be factored out, thus: (x^4 − 5x^3 + 6x^2 + 4x − 8) ÷ (x − 2)^3 = x + 1
[ending up with a linear polynomial]. The last root is thus, trivially, equal
to −1.

10. Optional: One can take a slightly more sophisticated approach when it comes
to multiple roots. As was already mentioned: each differentiation of the poly-
nomial reduces the multiplicity of every root by one, but may (and usually
does) introduce a lot of extra ’phoney’ roots. These can be eliminated by tak-
ing the greatest common divisor (GCD) of the polynomial and its derivative,
by using Euclid’s algorithm, which works as follows:
To find the GCD of two polynomials p and q, we divide one into the other
to find the remainder (residue) of this operation (we are allowed to multiply
the result by a constant to make it a monic polynomial): r1 =Res(p ÷ q),
then r2 =Res(q ÷ r1 ), r3 =Res(r1 ÷ r2 ), ... until the remainder becomes zero.
The GCD is the previous (last nonzero) r.

Example: p(x) = x^8 − 28x^7 + 337x^6 − 2274x^5 + 9396x^4 − 24312x^3 + 38432x^2 −
33920x + 12800. s(x) = GCD(p, p') = x^5 − 17x^4 + 112x^3 − 356x^2 + 544x − 320
[the r-sequence was: x^6 − (85/4)x^5 + (737/4)x^4 − 832x^3 + 2057x^2 − 2632x + 1360, s,
0]. u(x) = GCD(s, s') = x^2 − 6x + 8 [the r-sequence: x^3 − (31/3)x^2 + 34x − 104/3,
u, 0]. The last (quadratic) polynomial can be solved (and thus factorized)
easily: u = (x − 2)(x − 4). Thus 2 and 4 (each) must be a double root of s
and a triple root of p. Taking s ÷ ((x − 2)^2 (x − 4)^2) = x − 5 reveals that 5 is
a single root of s and therefore a double root of p. Thus, we have found all
eight roots of p: 2, 2, 2, 4, 4, 4, 5 and 5. ¥
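Euclid's algorithm for polynomials is short enough to sketch in code. The fragment
below (illustrative only; it works on numpy coefficient arrays and uses a crude
floating-point tolerance, so a computer-algebra GCD would be the robust choice)
reproduces the computation just shown:

    # Sketch: Euclid's algorithm on polynomial coefficient lists (highest power first).
    import numpy as np

    def poly_gcd(p, q, tol=1e-6):
        p, q = np.asarray(p, float), np.asarray(q, float)
        while True:
            _, r = np.polydiv(p, q)
            nz = np.nonzero(np.abs(r) > tol)[0]   # drop numerically-zero leaders
            if nz.size == 0:
                return q / q[0]                   # last nonzero remainder, made monic
            r = r[nz[0]:]
            p, q = q, r / r[0]

    p = np.array([1, -28, 337, -2274, 9396, -24312, 38432, -33920, 12800], float)
    s = poly_gcd(p, np.polyder(p))
    print(np.round(s, 4))   # x^5 - 17x^4 + 112x^3 - 356x^2 + 544x - 320
    u = poly_gcd(s, np.polyder(s))
    print(np.round(u, 4))   # x^2 - 6x + 8 = (x-2)(x-4)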

Constant-coefficient equations
Similarly to solving second-order equations of this kind, we
• find the roots of the characteristic polynomial,
• based on these, construct the basic solutions of the homogeneous equation,
• find yp by either V-P or (more commonly) undetermined-coefficient technique
(which requires only a trivial and obvious extension). ¥

Since the Cauchy equation is effectively a linear equation in disguise, we know


how to solve it (beyond the second order) as well.
EXAMPLES:
1. y^(iv) + 4y'' + 4y = 0 ⇒ λ^4 + 4λ^2 + 4 = 0 ⇒ z(= λ^2)1,2 = −2 [double] ⇒
λ1,2,3,4 = ±√2 i [both are double roots] ⇒ y = C1 sin(√2 x) + C2 cos(√2 x) +
C3 x sin(√2 x) + C4 x cos(√2 x).
2. y 000 + y 00 + y 0 + y = 0 ⇒ λ1 = −1, and λ2 + 1 = 0 ⇒ λ2,3 = ± i ⇒ y =
C1 e−x + C2 sin x + C3 cos(x).
3. y v −3y iv +3y 000 −y 00 = 0 ⇒ λ1,2 = 0, λ3 = 1 and λ2 −2λ+1 = 0 ⇒ λ4,5 = 1 ⇒
y = C1 + C2 x + C3 ex + C4 xex + C5 x2 ex .
4. y iv − 5y 00 + 4y = 0 ⇒ z(= λ2 )1,2 = 1, 4 ⇒ λ1,2,3,4 = ±2, ±1 ⇒ y =
C1 ex + C2 e−x + C3 e2x + C4 e−2x .
5. y''' − y' = 10 cos(2x) ⇒ λ1,2,3 = 0, ±1. yp = A sin(2x) + B cos(2x) ⇒
−10A cos(2x) + 10B sin(2x) = 10 cos(2x) ⇒ A = −1, B = 0 ⇒ y = C1 + C2 e^x +
C3 e^(−x) − sin(2x).

6. y''' − 2y'' = x^2 − 1 ⇒ λ1,2 = 0, λ3 = 2. yp = Ax^4 + Bx^3 + Cx^2 ⇒ −24Ax^2 +
24Ax − 12Bx + 6B − 4C = x^2 − 1 ⇒ A = −1/24, B = −1/12, C = 1/8 ⇒
y = C1 + C2 x + C3 e^(2x) − x^4/24 − x^3/12 + x^2/8.
7. x3 y 000 + x2 y 00 − 2xy 0 + 2y = 0 (homogeneous Cauchy). m(m − 1)(m − 2) +
m(m−1)−2m+2 = 0 is the characteristic polynomial. One can readily notice
(m − 1) being a common factor, which implies m1 = 1 and m2 − m − 2 = 0 ⇒
C2
m2,3 = −1, 2 ⇒ y = C1 x + + C3 x2 .
x

Chapter 5 SETS OF LINEAR,


FIRST-ORDER,
CONSTANT-COEFFICIENT ODES
First we need to complete our review of

Matrix Algebra
IMatrix inverse & determinantJ

There are two ways of finding these

B The ’classroom’ algorithm

which requires ∝ n! number of operations and becomes not just impractical,


˙
but virtually impossible to use (even for supercomputers) when n is large (>20).
Yet, for small matrices (n ≤ 4) it’s fine, and we actually prefer it. It works like
this:

• In a 2×2 case, it's trivial:

[a, b; c, d]^(−1) = [d, −b; −c, a] / (ad − bc)

where the denominator is the determinant.

Example: [2, 4; −3, 5]^(−1) = [5/22, −4/22; 3/22, 2/22], 22 being the determinant.

• The 3×3 case is done in four steps.


Example: [2, 4, −1; 0, 3, 2; 2, 1, 4]^(−1)

1. Construct a 3×3 matrix of all 2 × 2 subdeterminants (striking out one row


and one column — organize the answers accordingly):

10 -4 -6
17 10 -6
11 4 6

2. Transpose the answer:


10 17 11
-4 10 4
-6 -6 6

3. Change the sign of every other element (using the checkerboard scheme
[+ − +; − + −; + − +]), thus:

10 -17  11
 4  10  -4
-6   6   6

4. Divide by the determinant, which can be obtained easily by multiplying
('scalar' product) the first row of the original matrix by the first column
of the last matrix (or vice versa — one can also use the second or third
row/column — column/row):

 10/42  -17/42  11/42
  4/42   10/42  -4/42
 -6/42    6/42   6/42     ¤
If we need the determinant only, there is an easier scheme: copy the first two
rows underneath the matrix,

| 2  4 -1 |
| 0  3  2 |
| 2  1  4 |
  2  4 -1
  0  3  2

resulting in 2×3×4 + 0×1×(−1) + 2×4×2 − (−1)×3×2 − 2×1×2 − 4×4×0 = 42.
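The four-step 'classroom' algorithm is easy to express in code. An illustrative
Python version (ours, using numpy only for the 2×2 subdeterminants) for the 3×3
example above:

    # Sketch: subdeterminants -> transpose -> checkerboard signs -> divide by det.
    import numpy as np

    A = np.array([[2, 4, -1],
                  [0, 3,  2],
                  [2, 1,  4]], float)

    minors = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)  # strike row i, col j
            minors[i, j] = np.linalg.det(sub)

    signs = np.array([[1, -1, 1], [-1, 1, -1], [1, -1, 1]])
    adj = minors.T * signs           # transpose, then checkerboard signs
    det = A[0] @ adj[:, 0]           # first row times first column = 42
    print(np.round(adj / det, 4))    # the inverse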

• Essentially the same algorithm can be used for 4 × 4 matrices and beyond,
but it becomes increasingly impractical and soon enough virtually impossible
to carry out.

B The ’practical’ algorithm

requires ∝ n3 operations and can be easily converted into a computer code:

1. The original (n × n) matrix is extended to an n × 2n matrix by appending


it with the n × n unit matrix.

2. By using one of the following three ’elementary’ operations we make


the original matrix into the unit matrix, while the appended part results in
the desired inverse:

(a) A (full) row can be divided by any nonzero number [this is used to make
the main-diagonal elements equal to 1, one by one].
(b) A multiple of a row can be added to (or subtracted from) any other row
[this is used to make the non-diagonal elements of each column equal to
0 ].
(c) Two rows can be interchanged whenever necessary [when a main-diagonal
element is zero, interchange the row with any subsequent row which has
a nonzero element in that position - if none exists the matrix is singu-
lar]. ¤

The product of the numbers we found on the main diagonal (and had to
divide by), further multiplied by −1 if there has been an odd number of
interchanges, is the matrix’ determinant.

• A 4 × 4 EXAMPLE:

 3  0  1  4 | 1 0 0 0   ÷3
 1 -1  2  1 | 0 1 0 0   −(1/3)r1
 3  1 -1  1 | 0 0 1 0   −r1
-2  0 -1  1 | 0 0 0 1   +(2/3)r1

     1  0  1/3   4/3 |  1/3  0 0 0
     0 -1  5/3  -1/3 | -1/3  1 0 0   ÷(−1)
⇒    0  1  -2    -3  |  -1   0 1 0   +r2
     0  0 -1/3  11/3 |  2/3  0 0 1

     1  0  1/3   4/3 |  1/3  0 0 0   +r3
     0  1 -5/3   1/3 |  1/3 -1 0 0   −5r3
⇒    0  0 -1/3 -10/3 | -4/3  1 1 0   ÷(−1/3)
     0  0 -1/3  11/3 |  2/3  0 0 1   −r3

     1  0  0  -2 | -1  1  1 0   +(2/7)r4
     0  1  0  17 |  7 -6 -5 0   −(17/7)r4
⇒    0  0  1  10 |  4 -3 -3 0   −(10/7)r4
     0  0  0   7 |  2 -1 -1 1   ÷7

     1 0 0 0 | -3/7   5/7   5/7   2/7
     0 1 0 0 | 15/7 -25/7 -18/7 -17/7
⇒    0 0 1 0 |  8/7 -11/7 -11/7 -10/7
     0 0 0 1 |  2/7  -1/7  -1/7   1/7

The last matrix is the inverse of the original matrix, as can be easily verified [no
interchanges were needed]. The determinant is 3 × (−1) × (−1/3) × 7 = 7. ¥
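A hedged Python sketch of this 'practical' algorithm (ours; it adds partial
pivoting for numerical safety, so interchanges may occur even when the by-hand
version would not need them):

    # Sketch: Gauss-Jordan on [A | I], returning the inverse and the determinant
    # (product of pivots, sign flipped once per row interchange).
    import numpy as np

    def gauss_jordan_inverse(A):
        n = len(A)
        M = np.hstack([np.asarray(A, float), np.eye(n)])
        det = 1.0
        for k in range(n):
            pivot = k + np.argmax(np.abs(M[k:, k]))
            if np.isclose(M[pivot, k], 0):
                raise ValueError("matrix is singular")
            if pivot != k:
                M[[k, pivot]] = M[[pivot, k]]     # (c) interchange rows
                det = -det
            det *= M[k, k]
            M[k] /= M[k, k]                       # (a) scale the pivot to 1
            for i in range(n):
                if i != k:
                    M[i] -= M[i, k] * M[k]        # (b) clear the rest of the column
        return M[:, n:], det

    A = [[3, 0, 1, 4], [1, -1, 2, 1], [3, 1, -1, 1], [-2, 0, -1, 1]]
    inv, det = gauss_jordan_inverse(A)
    print(np.round(inv * 7))   # 7 * inverse has integer entries
    print(det)                 # 7.0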

ISolving n equations for m unknownsJ

For an n×n non-singular problem with n 'small' we can use the matrix inverse:
Ax = b ⇒ x = A^(−1) b, but this is not very practical beyond 2 × 2.

B The fully general technique

which is applicable to singular as well as n by m problems works like this:

1. Extend A by an extra column b.

2. Using ’elementary operations’ attempt to make the original A-part of the


matrix into the unit matrix (no need to keep track of interchanges). If you
succeed, the b-part of the matrix is the (unique) solution. This of course
cannot work when the number of equations and the number of unknowns
don’t match. Furthermore, we may run into difficulty for the following two
reasons:

(a) We may come to a column which has 0 on the main diagonal and all
elements below it (in the same column). This column will be then
skipped (as if it never existed, i.e. we will try to get 1 in the same
position of the next column).
(b) Discarding the columns we skipped, we may end up with fewer columns
than rows [resulting in some extra rows with only zeros in their A-part],
or the other way round [resulting in some (nonzero) extra columns,
which we treat in the same manner as those columns which were skipped].
The final number of 1’s [on the main diagonal] is the rank of A.

We will call the result of this part the matrix echelon form of the equa-
tions.

3. To interpret the answer we do this:



(a) If there are any ’extra’ (zero A-part) rows, we check the corresponding
b elements. If they are all equal to zero, we delete the extra (redundant)
rows and go to the next step; if we find even a single non-zero element
among them, the original system of equations is inconsistent, and there
is no solution.
(b) Each of the ’skipped’ columns represents an unknown whose value can
be chosen arbitrarily. Each row then provides an expression for one of
the remaining unknowns (in terms of the ’freely chosen’ ones). Note
that when there are no ’skipped’ columns, the solution is just a point
in m (number of unknowns) dimensions, one ’skipped’ column results
in a straight line, two ’skipped’ columns in a plane, etc.

Since the first two steps of this procedure are quite straightforward, we give
EXAMPLES of the interpretation part only:
1.
1 3 0 2 0 2
0 0 1 3 0 1
0 0 0 0 1 4
0 0 0 0 0 0
means that x2 and x4 are the 'free' parameters (often, they would be renamed
c1 and c2, or A and B). The solution can thus be written as

x1 = 2 − 3x2 − 2x4
x3 = 1 − 3x4
x5 = 4

or, in a vector-like manner:

[x1; x2; x3; x4; x5] = [2; 0; 1; 0; 4] + [−3; 1; 0; 0; 0] c1 + [−2; 0; −3; 1; 0] c2
Note that this represents a (unique) plane in a five-dimensional space; the
’point’ itself and the two directions (coefficients of c1 and c2 ) can be specified
in infinitely many different (but equivalent) ways.
2.
1 3 0 0 0 -2 | 5
0 0 1 0 0  3 | 2
0 0 0 1 0  1 | 0
0 0 0 0 1  4 | 3

⇒ [x1; x2; x3; x4; x5; x6] = [5; 0; 2; 0; 3; 0] + [−3; 1; 0; 0; 0; 0] c1 + [2; 0; −3; −1; −4; 1] c2 ¥
IEigenvalues & EigenvectorsJ

of a square matrix.
If, for a square (n × n) matrix A, we can find a non-zero [column] vector x and
a (scalar) number λ such that
Ax =λx

then λ is the matrix’ eigenvalue and x is its right eigenvector (similarly yT A =


λyT would define its left eigenvector yT , this time a row vector). This means
that we seek a non-zero solutions of

(A − λI) x ≡ 0 (*)

which further implies that A − λI must be singular: det(A − λI) = 0.


The left hand side of the last equation is an nth -degree polynomial in λ which
has (counting multiplicity) n [possibly complex] roots. These roots are the eigen-
values of A; one can easily see that for each distinct root one can find at least one
right (and at least one left) eigenvector, by solving (*) for x (λ being known now).
It is easy to verify that

det(λI − A) = λn − λn−1 · Tr(A) + λn−2 · {sum of all 2 × 2 major subdeterminants}


− λn−3 · {sum of all 3 × 3 major subdeterminants} + .... ± det(A)

where Tr(A) is the sum of all main-diagonal elements. This is called the charac-
teristic polynomial of A, and its roots are the only eigenvalues of A.

EXAMPLES:
1. [2, 3; 1, −2] has λ^2 − 0·λ − 7 as its characteristic polynomial, which means that
the eigenvalues are λ1,2 = ±√7.

2. [3, −1, 2; 0, 4, 2; 2, −1, 3] ⇒ λ^3 − 10λ^2 + (12 + 14 + 5)λ − 22 [we know how to find the
determinant]. The coefficients add up to 0. This implies that λ1 = 1 and
[based on λ^2 − 9λ + 22 = 0] λ2,3 = 9/2 ± (√7/2) i.

3. [2, 4, −2, 3; 3, 6, 1, 4; −2, 4, 0, 2; 8, 1, −2, 4] ⇒ λ^4 − 12λ^3 + (0 − 4 + 4 − 4 + 20 − 16)λ^2 −
(−64 − 15 − 28 − 22)λ + (−106) = 0 [note there are C(4,2) = 6 and C(4,3) = 4 major
subdeterminants of the 2×2 and 3×3 size, respectively] ⇒ λ1 = −3.2545, λ2 = 0.88056,
λ3 = 3.3576 and λ4 = 11.0163 [these were obtained from our general formula
for fourth-degree polynomials — let's hope we don't have to use it very often].
¥
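In practice one rarely grinds through such polynomials by hand; a numerical check
of the last example (illustrative, assuming numpy) takes a couple of lines:

    # Check of example 3 above.
    import numpy as np

    A = np.array([[ 2, 4, -2, 3],
                  [ 3, 6,  1, 4],
                  [-2, 4,  0, 2],
                  [ 8, 1, -2, 4]], float)
    print(np.sort(np.linalg.eigvals(A).real))
    # approx. [-3.2545, 0.88056, 3.3576, 11.0163], matching the values quoted above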

The corresponding (right) eigenvectors can be now found by solving (*), a


homogenous set of equations with a singular matrix of coefficients [therefore, there
must be at least one nonzero solution — which, furthermore, can be multiplied
by an arbitrary constant]. The number of linearly independent (LI) solutions
cannot be bigger than the multiplicity of the corresponding eigenvalue; establishing
their correct number is an important part of the answer.

EXAMPLES:

1. Using A = [2, 3; 1, −2] [one of our previous examples] (A − λ1 I)x = 0 amounts
to [2 − √7, 3; 1, −2 − √7 | 0; 0], with the second equation being a multiple of the
first [check it!]. We thus have to solve only x1 − (2 + √7)x2 = 0, which has
the following general solution: [x1; x2] = [2 + √7; 1] c, where c is arbitrary [geo-
metrically, the solution represents a straight line in the x1-x2 plane, passing
through the origin]. Any such vector, when pre-multiplied by A, increases in
length by a factor of √7, without changing direction (check it too). Similarly,
replacing λ1 by λ2 = −√7, we would be getting [2 − √7; 1] c as the correspond-
ing eigenvector. There are many equivalent ways of expressing it, [−3; 2 + √7] c̃
is one of them.
2. A double eigenvalue may possess either one or two linearly independent
eigenvectors:

(a) The unit 2×2 matrix has λ = 1 as its duplicate eigenvalue, and [1; 0] and
[0; 1] are two LI eigenvectors [the general solution to [0, 0; 0, 0]x = 0]. This
implies that any vector is an eigenvector of the unit matrix.

(b) The matrix [1, 2; 0, 1] has the same duplicate eigenvalue of +1 [in gen-
eral, the main diagonal elements of an upper-triangular matrix are
its eigenvalues], but solving [0, 2; 0, 0]x = 0, i.e. 2x2 = 0, has only one LI
solution, namely [1; 0] c ¥

Finding eigenvectors and eigenvalues of a matrix represents the main step in


solving sets of ODEs; we will present our further examples in that context. So
let us now return to these:

Set (system) of differential equations


of first order, linear, and with constant coefficients typically looks like this:
y1' = 3y1 + 4y2
y2' = 3y1 − y2

[the example is of the homogeneous type, as each term is either yi or yi' propor-
tional]. The same set can be conveniently expressed in matrix notation as

y' = Ay

where A = [3, 4; 3, −1] and y = [y1; y2] [both y1 and y2 are functions of x].

IThe Main TechniqueJ

for constructing a solution to any such set of n DEs is very similar to what we have
seen in the case of one (linear, constant-coefficient, homogeneous) DE, namely:
We first try to find n linearly independent basic solutions (all having the form of
a column vector [y1; y2; ...; yn]), then build the general solution as a linear
combination (with arbitrary coefficients) of these.
It happens that the basic solutions can be constructed with the help of matrix
algebra. To find them, we use the following trial solution:

y_T = q · e^(λx)

where q is a constant (n-dimensional) vector. Substituting into y' = Ay and
cancelling the (scalar) e^(λx) gives: λq = Aq, which means λ can be any one of the
eigenvalues of A and q be the corresponding eigenvector. If we find n of these
(which is the case with simple eigenvalues) the job is done; we have effectively
constructed a general solution to our set of DEs.

EXAMPLES:
1. Solve y' = Ay, where A = [3, 4; 3, −1]. The characteristic equation is: λ^2 − 2λ −
15 = 0 ⇒ λ1,2 = 1 ± √16 = −3 and 5. The corresponding eigenvectors (we
will call them q(1) and q(2)) are the solutions to [6, 4; 3, 2 | 0; 0] ⇒ 3q1 + 2q2 =
0 ⇒ q(1) = [2; −3] c1, and [−2, 4; 3, −6] [from now on we will assume a zero right
hand side] ⇒ q1 − 2q2 = 0 ⇒ q(2) = [2; 1] c.

The final, general solution is thus y = c1 [2; −3] e^(−3x) + c2 [2; 1] e^(5x). Or, if you
prefer, more explicitly:

y1 = 2c1 e^(−3x) + 2c2 e^(5x)
y2 = −3c1 e^(−3x) + c2 e^(5x)

where c1 and c2 can be chosen arbitrarily.
Often, they are specified via initial conditions, e.g. y1(0) = 2 and y2(0) = −3 ⇒
2c1 + 2c2 = 2 and −3c1 + c2 = −3 ⇒ c1 = 1 and c2 = 0 ⇒ y1 = 2e^(−3x), y2 = −3e^(−3x).

2. Let us now tackle a three-dimensional problem, with A = [1, −1, 1; 1, 1, −1; 2, −1, 0]. The
characteristic equation is λ^3 − 2λ^2 − λ + 2 = 0 ⇒ λ1 = −1 and the roots
of λ^2 − 3λ + 2 = 0 ⇒ λ2,3 = 1 and 2. The respective eigenvectors are:

[2, −1, 1; 1, 2, −1; 2, −1, 1] ⇒ [1, 0, 1/5; 0, 1, −3/5; 0, 0, 0] ⇒ q(1) = [−1; 3; 5] c1,

[0, −1, 1; 1, 0, −1; 2, −1, −1] ⇒ q(2) = [1; 1; 1] c2,

and [−1, −1, 1; 1, −1, −1; 2, −1, −2] ⇒ [1, 0, −1; 0, 1, 0; 0, 0, 0] ⇒ q(3) = [1; 0; 1] c3. One can
easily verify the correctness of each eigenvector by a simple multiplication, e.g.
[1, −1, 1; 1, 1, −1; 2, −1, 0] × [1; 0; 1] = [2; 0; 2] = 2 · [1; 0; 1].

The general solution is thus y = c1 [−1; 3; 5] e^(−x) + c2 [1; 1; 1] e^x + c3 [1; 0; 1] e^(2x).
5 1 1

The case of IDouble (Multiple) EigenvalueJ

For each such eigenvalue we must first find all possible solutions of the

qeλx

type (i.e. find all LI eigenvectors), then (if we get fewer eigenvectors than the
multiplicity of λ) we have to find all possible solutions having the form of

(qx + s)eλx

where q and s are two constant vectors to be found by substituting this (trial)
solution into the basic equation y' = Ay. As a result we get q = (A − λI)q x + (A −
λI)s [valid for all x, hence (A − λI)q = 0 and (A − λI)s = q] ⇒ q is such a linear
combination of the q-vectors found in the previous step which allows (A − λI)s = q
to be solved in terms of s. Note that both the
the unknowns of this problem. One thus needs to append all the q vectors found
in the previous step to A − λI (as extra columns) and reduces the whole (thus
appended) matrix to its echelon form. It is the number of ’skipped’ q-columns
which tells us how many distinct solutions there are (the ’skipped’ columns A − λI
would be adding, to s, a multiple of the qeλx solution already constructed). One
thus cannot get more solutions than in the previous step.
And, if still not done, we have to proceed to

(q x^2/2! + s x + u) e^(λx)

where q and s [in corresponding pairs] is a combination of solutions from the
previous step such that (A − λI)u = s can be solved in terms of u. Find how many
LI combinations of the s-vectors allow a nonzero u-solution [by the 'appending'
technique], solve for the corresponding u-vectors, and if necessary move on to
the next (q x^3/3! + s x^2/2! + u x + w) e^(λx) step, until you have as many solutions
as the eigenvalue's multiplicity. Note that (A − λI)^2 s = 0, (A − λI)^3 u = 0, ....

EXAMPLES:

1. A = [5, 2, 2; 2, 2, −4; 2, −4, 2] has λ^3 − 9λ^2 + 108 as its characteristic polynomial [hint:
there is a double root] ⇒ 3λ^2 − 18λ = 0 has two roots, 0 [does not check] and
6 [checks]. Furthermore, (λ^3 − 9λ^2 + 108) ÷ (λ − 6)^2 = λ + 3 ⇒ the three
eigenvalues are −3 and 6 [duplicate]. Using λ = −3 we get: [8, 2, 2; 2, 5, −4; 2, −4, 5] ⇒
[1, 0, 1/2; 0, 1, −1; 0, 0, 0] ⇒ c1 [−1; 2; 2], which, when multiplied by e^(−3x), gives the
first basic solution. Using λ = 6 yields: [−1, 2, 2; 2, −4, −4; 2, −4, −4] ⇒
[1, −2, −2; 0, 0, 0; 0, 0, 0] ⇒ c2 [2; 1; 0] + c3 [2; 0; 1] which, when multiplied by e^(6x),
supplies the remaining two basic solutions.

2. A = [1, −3, 1; 2, −1, −2; 2, −3, 0] has λ^3 − 3λ − 2 as its characteristic polynomial, with
roots: λ1 = −1 [one of our rules] ⇒ (λ^3 − 3λ − 2) ÷ (λ + 1) = λ^2 − λ − 2 ⇒
λ2 = −1 and λ3 = 2. So again, there is one duplicate root.
For λ = 2 we get: [−1, −3, 1; 2, −3, −2; 2, −3, −2] ⇒ [1, 0, −1; 0, 1, 0; 0, 0, 0] ⇒ c1 [1; 0; 1] e^(2x).
For λ = −1 we get: [2, −3, 1; 2, 0, −2; 2, −3, 1] ⇒ [1, 0, −1; 0, 1, −1; 0, 0, 0] ⇒ c2 [1; 1; 1] e^(−x)
[a single solution only]. The challenge is to construct the other (last) solution.
We have to solve [2, −3, 1; 2, 0, −2; 2, −3, 1 | 1; 1; 1] ⇒ [1, 0, −1; 0, 1, −1; 0, 0, 0 | 1/2; 0; 0],
getting s = [1/2; 0; 0] + c [1; 1; 1], where the second part just duplicates the previous
basic solution and can be discarded. The third basic solution is thus:
c3 ([1; 1; 1] x + [1/2; 0; 0]) e^(−x) ≡ c3 [x + 1/2; x; x] e^(−x).

3. A = [42, −9, 9; −12, 39, −9; −28, 21, 9] ⇒ λ^3 − 90λ^2 + 2700λ − 27000 [hint: triple root] ⇒
6λ − 180 = 0 has a single root of 30 [⇒ triple root of the original polynomial].
Finding eigenvectors: [12, −9, 9; −12, 9, −9; −28, 21, −21] ⇒ [1, −3/4, 3/4; 0, 0, 0; 0, 0, 0] ⇒
c1 [3; 4; 0] and c2 [−3; 0; 4] are the corresponding eigenvectors [only two] which, when
multiplied by e^(30x), yield the first two basic solutions. To construct the third, we
set up the equations for the individual components of s, and for the a1 and a2
coefficients of q ≡ a1 [3; 4; 0] + a2 [−3; 0; 4], thus:
[12, −9, 9; −12, 9, −9; −28, 21, −21 | 3, −3; 4, 0; 0, 4]. We bring this to its
matrix echelon form: [1, −3/4, 3/4 | 0, −1/7; 0, 0, 0 | 1, −3/7; 0, 0, 0 | 0, 0], which first
implies that a1 − (3/7)a2 = 0 ⇔ a1 = 3c3, a2 = 7c3. This results in
[1, −3/4, 3/4 | −c3; 0, 0, 0 | 0; 0, 0, 0 | 0] to be solved for s1, s2 and s3 [we need a
particular solution only] ⇒ s = [−c3; 0; 0]. The third basic solution is thus
c3 ((3 [3; 4; 0] + 7 [−3; 0; 4]) x + [−1; 0; 0]) e^(30x) = c3 [−12x − 1; 12x; 28x] e^(30x).

4. A = [−103, −53, 41; 160, 85, −100; 156, 131, −147] ⇒ λ^3 + 165λ^2 + 9075λ + 166375 [hint:
triple root] ⇒ 6λ + 330 = 0 ⇒ λ = −55 [checks] ⇒ [−48, −53, 41; 160, 140, −100; 156, 131, −92] ⇒
[1, 0, 1/4; 0, 1, −1; 0, 0, 0] ⇒ c1 [1; −4; −4] is the only eigenvector (this, multiplied by
e^(−55x), provides the first basic solution). Next,
[−48, −53, 41; 160, 140, −100; 156, 131, −92 | 1; −4; −4] ⇒ [1, 0, 1/4 | −9/220; 0, 1, −1 | 1/55; 0, 0, 0 | 0]
yields s = [−9/220; 1/55; 0]. The second basic solution is thus c2 (q x + s) e^(−55x) =
c2 [x − 9/220; −4x + 1/55; −4x] e^(−55x). Finally,
[−48, −53, 41; 160, 140, −100; 156, 131, −92 | −9/220; 1/55; 0] ⇒
[1, 0, 1/4 | −131/48400; 0, 1, −1 | 39/12100; 0, 0, 0 | 0] results in u = [−131/48400; 39/12100; 0]
and the corresponding third basic solution:
c3 (q x^2/2 + s x + u) e^(−55x) = c3 [x^2/2 − (9/220)x − 131/48400; −2x^2 + (1/55)x + 39/12100; −2x^2] e^(−55x). ¥
−2x2
To deal with IComplex Eigenvalues/VectorsJ
we first write the corresponding solution in a complex form, using the regular
procedure. We then replace each conjugate pair of basic solutions by the real and
imaginary part (of either solution).
EXAMPLE:

y' = [2, −1; 3, 4] y ⇒ λ^2 − 6λ + 11 = 0 ⇒ λ1,2 = 3 ± √2 i ⇒ [−1 − √2 i, −1; 3, 1 − √2 i] ⇒
[1 − √2 i; −3] is the eigenvector corresponding to λ1 = 3 + √2 i [its complex
conjugate corresponds to λ2 = 3 − √2 i]. This means that the two basic
solutions (in their complex form) are [1 − √2 i; −3] e^((3 + √2 i)x) ≡
[1 − √2 i; −3][cos(√2 x) + i sin(√2 x)] e^(3x) and its complex conjugate [i → −i].
Equivalently, we can use the real and imaginary part of either of these [up to
a sign, the same answer] to get:

y = c1 [cos(√2 x) + √2 sin(√2 x); −3 cos(√2 x)] e^(3x) + c2 [−√2 cos(√2 x) + sin(√2 x); −3 sin(√2 x)] e^(3x)

This is the fully general, real solution to the original set of DEs. ¥

Now, we extend our results to the

Non-homogeneous case
of
y' − Ay = r(x)
where r is a given vector function of x (effectively n functions, one for each equa-
tion). We already know how to solve the corresponding homogeneous version.
There are two techniques to find a particular solution y(p) to the complete
equation; the general solution is then constructed in the usual

c1 y(1) + c2 y(2) + ... + cn y(n) + y(p)

manner.
The first of these techniques (for constructing yp ) is:

IVariation of ParametersJ

As a trial solution, we use


y(T ) = Y·c
where Y is an n by n matrix of functions, with the n basic solutions of the homo-
geneous equation comprising its individual columns:

Y ≡ y(1) y(2) ... y(n)

and c being a single column of the ci coefficients: c ≡ [c1; c2; ...; cn], each now
considered a function of x [Y·c is just a matrix representation of c1 y(1) + c2 y(2) +
... + cn y(n), with the ci coefficients now being 'variable'].

Substituting in the full (non-homogeneous) equation, and realizing that Y' ≡
A · Y [Y' represents differentiating, individually, every element of Y] we obtain:
Y' · c + Y · c' − A · Y · c = r ⇒ c' = Y^(−1) · r. Integrating the right hand side
(component by component) yields c. The particular solution is thus

y(p) = Y ∫ Y^(−1) · r(x) dx

EXAMPLE:
y' = [3, 2; 1, 2] y + [4e^(5x); 0] ⇒ λ^2 − 5λ + 4 = 0 ⇒ λ1,2 = 1 and 4 with the respective
eigenvectors [easy to construct]: [1; −1] and [2; 1]. Thus Y = [e^x, 2e^(4x); −e^x, e^(4x)] ⇒
Y^(−1) = [(1/3)e^(−x), −(2/3)e^(−x); (1/3)e^(−4x), (1/3)e^(−4x)]. This matrix, multiplied by
r(x), yields [(4/3)e^(4x); (4/3)e^x]. The componentwise integration of the last vector is
trivial [the usual additive constants can be omitted to avoid duplication]:
[(1/3)e^(4x); (4/3)e^x], (pre)multiplied by Y finally results in: y(p) = [3e^(5x); e^(5x)].
The general solution is thus y = c1 [1; −1] e^x + c2 [2; 1] e^(4x) + [3e^(5x); e^(5x)].

Let us make this into an initial-value problem: y1(0) = 1 and y2(0) = −1 ⇔
y(0) = Y(0)c + y(p)(0) = [1; −1]. Solving for c = (1/3)[1, −2; 1, 1]([1; −1] − [3; 1]) =
[2/3; −4/3] ⇒ y = (2/3)[1; −1] e^x − (4/3)[2; 1] e^(4x) + [3e^(5x); e^(5x)]. ¥
The second technique for building yp works only for two
ISpecial CasesJ of r(x)
B When the non-homogeneous part of the equation has the form of

(a_k x^k + a_{k−1} x^(k−1) + ... + a1 x + a0) e^(βx)

we use the following 'trial' solution (which is guaranteed to work) to construct y(p):

(b_m x^m + b_{m−1} x^(m−1) + ... + b1 x + b0) e^(βx)

where m equals k plus the multiplicity of β as an eigenvalue of A (if β is not an
eigenvalue, m = k, if it is a simple eigenvalue, m = k + 1, etc.).
When β does not coincide with any eigenvalue of A, the equations to solve
to obtain b_k, b_{k−1}, ..., b0 are

(A − βI) b_k = −a_k
(A − βI) b_{k−1} = k b_k − a_{k−1}
(A − βI) b_{k−2} = (k − 1) b_{k−1} − a_{k−2}
...
(A − βI) b0 = b1 − a0

Since (A−βI) is a regular matrix (having an inverse), solving these is quite


routine (as long as we start from the top).
When β coincides with a simple (as opposed to multiple) eigenvalue of A, we
have to solve

(A−βI) bk+1 = 0
(A−βI) bk = (k + 1)bk+1 − ak
(A−βI) bk−1 = kbk − ak−1
..
.
(A−βI) b0 = b1 − a0

Thus, bk+1 must be the corresponding eigenvector, multiplied by such a constant


as to make the second equation solvable [remember that now (A−βI) is singular].
Similarly, when solving the second equation for bk , a c-multiple of the same eigen-
vector must be added to the solution, with c chosen so that the third equation is
solvable, etc. Each bi is thus unique, even though finding it is rather tricky.
We will not try extending this procedure to the case of β being a double (or
multiple) eigenvalue of A.
B On the other hand, the extension to the case of r(x) = P(x)epx cos(qx) +
Q(x)epx sin(qx), where P(x) and Q(x) are polynomials in x (with vector coeffi-
cients), and p + i q is not an eigenvalue of A is quite simple: The trial solution
has the same form as r(x), except that the two polynomials will have undeter-
mined coefficients, and will be of the same degree (equal to the degree of P(x)
or Q(x), whichever is larger). This trial solution is then substituted into the full
equation, and the coefficients of each power of x are matched, separately for the
cos(qx)-proportional and sin(qx)-proportional terms.
In addition, one can also use the superposition principle [i.e. dividing r(x)
into two or more manageable parts, getting a particular solution for each part
separately, and then adding them all up].

EXAMPLES:
1. y' = [−4, −4; 1, 2] y + [1; 0] e^(2x) + [0; −2] e^(−x) ⇒ λ^2 + 2λ − 4 = 0 ⇒ λ1,2 = −1 ± √5.
We already know how to construct the solution to the homogeneous part of
the equation, we show only how to deal with y(p) = y(p1) + y(p2) [for each of
the two r(x) terms]:

y(p1) = b e^(2x), substituted back into the equation gives [−6, −4; 1, 0] b = [−1; 0] ⇒
b = [0; 1/4].

Similarly y(p2) = b e^(−x) [a different b], substituted, gives [−3, −4; 1, 3] b = [0; 2] ⇒
b = [−8/5; 6/5].

The full particular solution is thus y(p) = [0; 1/4] e^(2x) + [−8/5; 6/5] e^(−x).

2. y' = [−1, 2, 3; 5, −1, −2; 5, 3, 3] y + [1; 0; 0] e^x + [0; 4; 0] ⇒ λ^3 − λ^2 − 24λ − 7 = 0. If we
are interested in the particular solution only, we need to check that neither β = 1
nor β = 0 are roots of the characteristic polynomial [true].

Thus y(p1) = b e^x where b solves [−2, 2, 3; 5, −2, −2; 5, 3, 2] b = [−1; 0; 0] ⇒
b = [−2/31; 20/31; −25/31].

Similarly y(p2) = b where [−1, 2, 3; 5, −1, −2; 5, 3, 3] b = [0; −4; 0] ⇒ b = [−12/7; 72/7; −52/7].

Answer: y(p) = [−2/31; 20/31; −25/31] e^x + [−12/7; 72/7; −52/7].

3. y' = [−1, 2, 3; 5, −1, −2; 5, 3, 3] y + [x − 1; 2; −2x] [characteristic polynomial same as the
previous example]. y(p) = b1 x + b0 with [−1, 2, 3; 5, −1, −2; 5, 3, 3] b1 = [−1; 0; 2] ⇒
b1 = [−5/7; 51/7; −38/7] and [−1, 2, 3; 5, −1, −2; 5, 3, 3] b0 = b1 − [−1; 2; 0] = [2/7; 37/7; −38/7] ⇒
b0 = [155/49; −1210/49; 863/49].
Thus y(p) = [−(5/7)x + 155/49; (51/7)x − 1210/49; −(38/7)x + 863/49].

4. y' = [−4, −3; 2, 1] y + [1; 2] e^(−x) ⇒ λ^2 + 3λ + 2 = 0 ⇒ λ1,2 = −1, −2. Now our
β = −1 'coincides' with a simple eigenvalue.

y(p) = (b1 x + b0) e^(−x) where [−3, −3; 2, 2] b1 = 0 ⇒ b1 = c1 [1; −1], and then
[−3, −3; 2, 2] b0 = b1 − [1; 2] = [c1 − 1; −c1 − 2]; since the matrix is singular, this is
solvable only when the right hand side components are in the same −3 : 2 proportion
as the rows, i.e. 2(c1 − 1) = −3(−c1 − 2). This fixes the value of c1 at −8 ⇒
b1 = [−8; 8] and b0 = [3; 0] + c0 [1; −1]. Being the last b, we can set c0 = 0 (not to
duplicate the homogeneous part of the solution).

Answer: y(p) = [−8x + 3; 8x] e^(−x).

5. y' = [1, 1, 1; 1, 4, −1; −5, −8, −3] y + [x; 0; 4] ⇒ λ^3 − 2λ^2 − 15λ = 0 ⇒ β = 0 is a simple
eigenvalue.

We construct y(p) = b2 x^2 + b1 x + b0 where [1, 1, 1; 1, 4, −1; −5, −8, −3] b2 = 0 ⇒
b2 = c2 [−5; 2; 3].

Next, [1, 1, 1; 1, 4, −1; −5, −8, −3] b1 = 2b2 − [1; 0; 0]; reducing the appended matrix to its
echelon form shows that this is solvable only when c2 = −2/15, and yields
b1 = [28/45; −13/45; 0] + c1 [−5; 2; 3].

Finally, [1, 1, 1; 1, 4, −1; −5, −8, −3] b0 = b1 − [0; 0; 4]; solvability now fixes c1 = −3/25,
and b0 = [1219/675; −394/675; 0] [no need for c0].

Answer: y(p) = [(2/3)x^2 + (11/9)x + 1219/675; −(4/15)x^2 − (119/225)x − 394/675; −(2/5)x^2 − (9/25)x].

ITwo Final RemarksJ

Note that an equation of the type

By' = Ay + r

can be converted to the regular type by (pre)multiplying it by B^(−1).

EXAMPLE:

2y1' − 3y2' = y1 − 2y2 + x          y1' = (7/5)y1 + (7/5)y2 + (1/5)x − 12/5
                              ⇒
y1' + y2' = 2y1 + 3y2 − 4           y2' = (3/5)y1 + (8/5)y2 − (1/5)x − 8/5

which we know how to solve.

And, finally: Many important systems of differential equations you encounter
in Physics are nonlinear, of second order. A good example is: r̈ + µ r/r^3 = 0 (µ is
a constant) describing motion of a planet around the sun. Note that r actually
stands for three dependent variables, x(t), y(t) and z(t). There is no general method
for solving such equations; specialized techniques have to be developed for each
particular case. We cannot pursue this topic here.

Chapter 6 POWER-SERIES SOLUTION


of the following equation

y'' + f(x)y' + g(x)y = 0     (MAIN)


So far we have not discovered any general procedure for solving such equations.
The technique we develop in this chapter can do the trick, but it provides the
Maclaurin expansion of the solution only (which, as we will see, is quite often
sufficient to identify the function).
We already know that once the solution to a homogeneous equation is found,
the V of P technique can easily deal with the corresponding nonhomogeneous case
(e.g. any r(x) on the right hand side of MAIN). This is why, in this chapter, we
restrict our attention to homogeneous equations only.

The main idea


is to express y as a power series in x, with yet to be determined coefficients:

y(x) = Σ_{i=0}^∞ c_i x^i

then substitute this expression into the differential equation to be solved [y' =
Σ_{i=1}^∞ i c_i x^(i−1) needs to be multiplied by f(x) and y by g(x), with y'' =
Σ_{i=2}^∞ i(i − 1) c_i x^(i−2), where both f(x) and g(x) must be expanded in the
same manner — they are usually in that form already] and make the overall
coefficient of each power of x equal to zero.
This results in (infinitely many, but regular) equations for the unknown coeffi-
cients c0, c1, c2, .... These can be solved in a recurrent [some call it recursive]
manner (i.e. by deriving a simple formula which computes ck based on c0, c1, ...,
ck−1; c0 and c1 can normally be chosen arbitrarily).

EXAMPLES:

1. y'' + y = 0 ⇒ Σ_{i=2}^∞ i(i − 1) c_i x^(i−2) + Σ_{i=0}^∞ c_i x^i ≡ 0. The main thing is
to express the left hand side as a single infinite summation, by replacing the index
i of the first term by i* + 2, thus: Σ_{i*=0}^∞ (i* + 2)(i* + 1) c_{i*+2} x^(i*) [note that
the lower limit had to be adjusted accordingly]. But i* is just a dummy index which
can be called j, k or anything else including i. This way we get (combining both
terms): Σ_{i=0}^∞ [(i + 2)(i + 1) c_{i+2} + c_i] x^i ≡ 0 which implies that the expression
in square brackets must be identically equal to zero. This yields the following
recurrent formula

c_{i+2} = −c_i / ((i + 2)(i + 1))

where i = 0, 1, 2, ...., from which we can easily construct the complete
sequence of the c-coefficients, as follows: Starting with c0 arbitrary, we get
c2 = −c0/(2×1), c4 = −c2/(4×3) = c0/4!, c6 = −c4/(6×5) = −c0/6!, ....,
c_{2k} = (−1)^k c0/(2k)! in general. Similarly, choosing an arbitrary value for c1
we get c3 = −c1/3!, c5 = c1/5!, .... The complete solution is thus

y = c0 (1 − x^2/2! + x^4/4! − x^6/6! + ...) + c1 (x − x^3/3! + x^5/5! − x^7/7! + ...)

where the infinite expansions can be easily identified as those of cos x and
sin x, respectively. We have thus obtained the expected y = c0 cos x + c1 sin x
[check].
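Such recurrences are conveniently explored by machine. An illustrative Python
fragment (ours) generates the first few coefficients from the recurrence above and
checks them against the cos-expansion:

    # Sketch: coefficients of y'' + y = 0 from c_{i+2} = -c_i / ((i+2)(i+1)).
    from math import factorial

    def series_coeffs(c0, c1, n):
        c = [0.0]*n
        c[0], c[1] = c0, c1
        for i in range(n - 2):
            c[i + 2] = -c[i] / ((i + 2)*(i + 1))
        return c

    print(series_coeffs(1, 0, 8))                        # 1, 0, -1/2, 0, 1/24, ...
    print([(-1)**k/factorial(2*k) for k in range(4)])    # matching cos-terms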
We will not always be lucky enough to identify each solution as a combina-
tion of simple functions, but do learn to recognize at least the following
expansions:

(1 − ax)^(−1) = 1 + ax + a^2 x^2 + a^3 x^3 + ...
e^(ax) = 1 + ax + a^2 x^2/2! + a^3 x^3/3! + ...
ln(1 − ax) = −ax − a^2 x^2/2 − a^3 x^3/3 − ... (no factorials)

with a being any number [often a = 1].
And realize that 1 − x^2/3! + x^4/5! − x^6/7! + ... [a power of x missing] must be
(sin x)/x, 1 − 3x^2/2! + 9x^4/4! − 27x^6/6! + ... is the expansion of cos(√3 x),
1 + x^2 + x^4/2! + x^6/3! + x^8/4! + ... must be exp(x^2), and
1 − x/3! + x^2/5! − x^3/7! + ... is sin(√x)/√x.
2. (1 − x^2) y'' − 2xy' + 2y = 0 ⇒ Σ_{i=2}^∞ i(i − 1) c_i x^(i−2) − Σ_{i=2}^∞ i(i − 1) c_i x^i −
2 Σ_{i=1}^∞ i c_i x^i + 2 Σ_{i=0}^∞ c_i x^i ≡ 0. By reindexing (to get the same x^i in each
term) we get Σ_{i*=0}^∞ (i* + 2)(i* + 1) c_{i*+2} x^(i*) − Σ_{i=2}^∞ i(i − 1) c_i x^i −
2 Σ_{i=1}^∞ i c_i x^i + 2 Σ_{i=0}^∞ c_i x^i ≡ 0. Realizing that, as a dummy index, i* can be
called i (this is the last time we introduced i*, from now on we will call it i
directly), our equation becomes:

Σ_{i=0}^∞ [(i + 2)(i + 1) c_{i+2} − i(i − 1) c_i − 2i c_i + 2c_i] x^i ≡ 0

[we have adjusted the lower limit of the second and third term down to 0
without affecting the answer — careful with this though, things are not always
that simple]. The square brackets must be identically equal to zero which
implies:

c_{i+2} = (i^2 + i − 2)/((i + 2)(i + 1)) c_i = (i − 1)/(i + 1) c_i

where i = 0, 1, 2, .... Starting with an arbitrary c0 we get c2 = −c0, c4 =
(1/3) c2 = −(1/3) c0, c6 = (3/5) c4 = −(1/5) c0, c8 = −(1/7) c0, .... Starting with c1
we get c3 = 0, c5 = 0, c7 = 0, .... The solution is thus

y = c0 (1 − x^2 − x^4/3 − x^6/5 − x^8/7 − ...) + c1 x

One of the basic solutions is thus simply equal to x; once we know that, we can
use the V of P technique to get an analytic expression for the other solution,
thus: y_T(x) = c(x) · x substituted into the original equation gives:

c'' x (1 − x^2) + 2c' (1 − 2x^2) = 0

With c' = z this gives dz/z = −2 (1 − 2x^2)/(x(1 − x^2)) dx = −2 [A/x + B/(1 − x) +
C/(1 + x)] dx (partial fractions). To solve for A, B and C we substitute 0, 1 and −1
into A(1 − x^2) + Bx(1 + x) + Cx(1 − x) = 1 − 2x^2 ⇒ A = 1, B = −1/2 and C = 1/2.
Thus ln z = −2 [ln x + (1/2) ln(1 − x) + (1/2) ln(1 + x)] + c̃ ⇒ z = ĉ/(x^2 (1 − x^2)) =
ĉ [Ā/x^2 + B̄/x + C̄/(1 − x) + D̄/(1 + x)] = (after solving in a similar manner)
ĉ [1/x^2 + 1/(2(1 − x)) + 1/(2(1 + x))] ⇒ c(x) = ĉ (−1/x + (1/2) ln((1 + x)/(1 − x))) + c1 ⇒
y = c0 (1 − (x/2) ln((1 + x)/(1 − x))) + c1 x. One can easily
verify that this agrees with the previous expansion.
3. y'' − 3y' + 2y = 0 ⇒ Σ_{i=2}^∞ i(i − 1) c_i x^(i−2) − 3 Σ_{i=1}^∞ i c_i x^(i−1) +
2 Σ_{i=0}^∞ c_i x^i ≡ 0 ⇒ Σ_{i=0}^∞ [(i + 2)(i + 1) c_{i+2} − 3(i + 1) c_{i+1} + 2c_i] x^i ≡ 0 ⇒
c_{i+2} = (3(i + 1) c_{i+1} − 2c_i)/((i + 2)(i + 1)). By choosing c0 = 1 and c1 = 0 we can
generate the first basic solution [c2 = (3×0 − 2×1)/2 = −1, c3 = (3×2×(−1) − 2×0)/(3×2) = −1, ...]:

c0 (1 − x^2 − x^3 − (7/12) x^4 − (1/4) x^5 − ...)

similarly with c0 = 0 and c1 = 1 the second basic solution is:

c1 (x + (3/2) x^2 + (7/6) x^3 + (5/8) x^4 + (31/120) x^5 + ...)

There is no obvious pattern to either sequence of coefficients. Yet we know
that, in this case, the two basic solutions should be simply e^x and e^(2x). The
trouble is that our power-series technique presents these in a hopelessly en-
tangled form of 2e^x − e^(2x) [our first basic solution] and e^(2x) − e^x [the second],
and we have no way of properly separating them.
Sometimes the initial conditions may help, e.g. y(0) = 1 and y'(0) = 1
[these are effectively the values of c0 and c1, respectively], leading to c2 =
(3 − 2)/2 = 1/2, c3 = (3 − 2)/(3×2) = 1/6, c4 = (3/2 − 1)/(4×3) = 1/24, ... from
which the pattern of the e^x-expansion clearly emerges. We can then conjecture that
c_i = 1/i! and prove it by substituting into (i + 2)(i + 1) c_{i+2} − 3(i + 1) c_{i+1} + 2c_i =
(i + 2)(i + 1)/(i + 2)! − 3(i + 1)/(i + 1)! + 2/i! = (1 − 3 + 2)/i! ≡ 0. Similarly, the
initial values of y(0) = c0 = 1 and y'(0) = c1 = 2 will lead to 1 + 2x + (2x)^2/2! +
(2x)^3/3! + ... [the expansion of e^(2x)]. Prove that c_i = 2^i/i! is also a solution of
our recurrence equation!
In a case like this, I often choose such 'helpful' initial conditions; if not,
you would be asked to present the first five nonzero terms of each basic
(entangled) solution only (without identifying the function). ¥

Optional (up to the ⊗ mark):



Sturm-Liouville eigenvalue problem


The basic equation of this chapter, namely y'' + f(x)y' + g(x)y = 0, can always be
rewritten in the form of

[p(x) y']' + q(x) y = 0

where p(x) = e^(∫f(x)dx) and q(x) = g(x)·p(x) [verify]. In Physics, we often need to
solve such an equation under the following simple conditions of y(x1 ) = y(x2 ) = 0.
A trivial solution to this boundary-value problem is always y ≡ 0, the real task is
to find non-zero solutions, if possible.
Such nonzero solutions will be there, but only under rather special circum-
stances. To give ourselves more flexibility (and more chances of finding a nonzero
solution), we multiply the q-function by an arbitrary number, say −λ, to get:
(py')' = λqy, or equivalently

(py')'/q = λy     (S-L)

One can easily verify that (py')'/q ≡ Ay defines a linear operator [an opera-
tor is a 'prescription' for modifying a function into another function; linear means
A(c1 y1 + c2 y2) = c1 Ay1 + c2 Ay2, where c1 and c2 are arbitrary constants].
The situation is now quite similar to our old eigenvalue problem of Ay = λy,
where A is also a linear operator, but acting on vectors rather than functions. The
analogy is quite appropriate: similarly to the matrix/vector case we can find a
nonzero solution to (S-L) only for some values of λ, which will also be called the
problem’s eigenvalues [the corresponding y’s will be called eigenfunctions].
The only major difference is that now we normally find infinitely many of these.
One can also prove that, for two distinct eigenvalues, the corresponding eigen-
functions are orthogonal, in the following sense:

    ∫_{x₁}^{x₂} q(x) · y₁(x) · y₂(x) dx = 0

Proof: By our assumptions: (py₁′)′ = λ₁y₁q and (py₂′)′ = λ₂y₂q. Multiply the first
equation by y₂ and the second one by y₁ and subtract, to get: y₂(py₁′)′ − y₁(py₂′)′ =
(λ₁ − λ₂)q y₁y₂. Integrate this (the left hand side by parts) from x₁ to x₂:

    y₂py₁′ |_{x₁}^{x₂} − y₁py₂′ |_{x₁}^{x₂} = (λ₁ − λ₂) ∫_{x₁}^{x₂} q y₁ y₂ dx

The left hand side is zero due to our boundary conditions, which implies the rest. ¤

Notes:

• When p(x₁) = 0, we can drop the boundary condition y(x₁) = 0 [same with
  p(x₂) = 0].

• We must always insist that each solution be integrable in the ∫_{x₁}^{x₂} q y² dx
  sense [to have a true eigenvalue problem]. From now on, we allow only such inte-
  grable functions as solutions to a S-L problem, without saying so explicitly.

• Essentially the same proof would hold for a slightly more complicated equation

      (py′)′ + r y = λ q y

  where r is yet another specific function of x (we are running out of letters —
  nothing to do with the old nonhomogeneous term, also denoted r). ¥

EXAMPLES:
1. [(1 − x²)y′]′ + λy = 0 is a so called Legendre equation. Its (integrable,
   between x₁ = −1 and x₂ = 1) solutions must meet ∫₋₁¹ y₁(x) · y₂(x) dx = 0
   [since 1 − x² = 0 at each of x₁ and x₂, we don't need to impose any boundary
   conditions on y].

2. [√(1 − x²) y′]′ + λy/√(1 − x²) = 0 is the Chebyshev equation. The solutions
   meet ∫₋₁¹ y₁(x)·y₂(x)/√(1 − x²) dx = 0 [no boundary conditions necessary].

3. [xe⁻ˣy′]′ + λe⁻ˣy = 0 is the Laguerre equation (x₁ = 0, x₂ = ∞). The solu-
   tions are orthogonal in the ∫₀^∞ e⁻ˣ y₁ y₂ dx = 0 sense [no boundary conditions
   necessary]. ⊗
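   These orthogonality relations are easy to confirm numerically; here is a short
   Python sketch of mine (the polynomials are taken from the Legendre and Chebyshev
   lists derived later in this section, and the Chebyshev integral uses x = cos t to
   avoid the endpoint singularity):

       from math import cos, pi
       from scipy.integrate import quad

       P2 = lambda x: 1 - 3*x**2
       P4 = lambda x: 1 - 10*x**2 + (35/3)*x**4
       print(quad(lambda x: P2(x)*P4(x), -1, 1)[0])        # ~0 (Legendre, q = 1)

       # T0*T2/sqrt(1-x^2) with x = cos t, dx/sqrt(1-x^2) = -dt:
       print(quad(lambda t: 1 - 2*cos(t)**2, 0, pi)[0])    # ~0 (Chebyshev)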

Using our power-series technique, we are able to solve the above equations (and,
consequently, the corresponding eigenvalue problem). Let us start with the

«Legendre Equation»

    (1 − x²)y″ − 2xy′ + λy = 0

(Note that we already solved this equation with λ = 2, see Example 2 of the
’main idea’ section).
The expression to be identically equal to zero is (i+2)(i+1)c_{i+2} − i(i−1)cᵢ −
2icᵢ + λcᵢ ⇒

    c_{i+2} = − (λ − (i+1)i)/((i+2)(i+1)) · cᵢ

If we allow the cᵢ-sequence to be infinite, the corresponding function is not
integrable in the ∫₋₁¹ y² dx sense [we skip showing that, they do it in Physics];
that is why we have to insist on a finite, i.e. polynomial solution. This can be
arranged only if the numerator of the c_{i+2} = ... formula is zero for some integer
value of i, i.e. iff

    λ = (n + 1) n

[these are the eigenvalues of the corresponding S-L problem].

We then get the following polynomial solutions — Pₙ(x) being the standard
notation:

    P₀(x) ≡ 1
    P₁(x) = x
    P₂(x) = 1 − 3x²
    P₃(x) = x − (5/3)x³
    P₄(x) = 1 − 10x² + (35/3)x⁴
    ....

i.e., in general,

    Pₙ(x) = 1 − n(n+1)/2! x² + (n−2)n(n+1)(n+3)/4! x⁴ − (n−4)(n−2)n(n+1)(n+3)(n+5)/6! x⁶ + ...

when n is even, and

    Pₙ(x) = x − (n−1)(n+2)/3! x³ + (n−3)(n−1)(n+2)(n+4)/5! x⁵ − (n−5)(n−3)(n−1)(n+2)(n+4)(n+6)/7! x⁷ + ...

when n is odd.
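One does not have to memorize these formulas; the recurrence itself generates the
coefficients. A small Python sketch of mine, using exact fractions:

    from fractions import Fraction

    def legendre_coeffs(n):
        """Coefficients of the notes' P_n, from the recurrence above."""
        lam = n*(n + 1)
        c = [Fraction(0)]*(n + 1)
        c[n % 2] = Fraction(1)          # start from 1 (n even) or x (n odd)
        for i in range(n % 2, n - 1, 2):
            c[i + 2] = -Fraction(lam - (i + 1)*i, (i + 2)*(i + 1)) * c[i]
        return c                        # c[k] is the coefficient of x^k

    print(legendre_coeffs(4))   # [1, 0, -10, 0, 35/3]  ->  1 - 10x^2 + (35/3)x^4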
Realize that, with each new value of λ, the corresponding Pₙ-polynomial solves
a slightly different equation (that is why we have so many solutions). But we also
know that, for each n, the (single) equation should have a second solution. It does,
but the form of these is somewhat more complicated, namely: Qₙ₋₁⁽¹⁾ + Qₙ⁽²⁾ · ln((1+x)/(1−x)),
where Qₙ₋₁⁽¹⁾ and Qₙ⁽²⁾ are polynomials of degree n − 1 and n, respectively (we have
constructed the full solution for the n = 1 case). These so called Legendre
functions of second kind are of lesser importance in Physics [since they don't
meet the integrability condition, they don't solve the eigenvalue problem]. We will
not go into further details.

Optional: «Associated Legendre Equation»

    (1 − x²)y″ − 2xy′ + [(n+1)n − m²/(1 − x²)] y = 0

can be seen as an eigenvalue problem with q(x) = 1/(1 − x²) and λ = m² [the solutions
will thus be orthogonal in the ∫₋₁¹ y₁·y₂/(1 − x²) dx = 0 sense, assuming that they
share the same n but the m's are different].
    By using the substitution y(x) = (1 − x²)^{m/2} · u(x), the equation is converted
to (1 − x²)u″ − 2(m+1)xu′ + [(n+1)n − (m+1)m]u = 0, which has the following
polynomial solution: Pₙ⁽ᵐ⁾(x) [the mth derivative of the Legendre polynomial Pₙ].

Proof: Differentiate the Legendre equation (1 − x²)Pₙ″ − 2xPₙ′ + (n+1)nPₙ = 0
m times, getting: (1 − x²)Pₙ⁽ᵐ⁺²⁾ − 2mxPₙ⁽ᵐ⁺¹⁾ − m(m−1)Pₙ⁽ᵐ⁾ − 2xPₙ⁽ᵐ⁺¹⁾ −
2mPₙ⁽ᵐ⁾ + (n+1)nPₙ⁽ᵐ⁾ = (1 − x²)Pₙ⁽ᵐ⁺²⁾ − 2x(m+1)Pₙ⁽ᵐ⁺¹⁾ − (m+1)mPₙ⁽ᵐ⁾ +
(n+1)nPₙ⁽ᵐ⁾ = 0 ¤
(m)

«Chebyshev Equation»

    (1 − x²)y″ − xy′ + λy = 0

[verify that this is the same equation as in the S-L discussion].
    It is easy to see that the Legendre recurrence formula needs to be modified to
read (i+2)(i+1)c_{i+2} − i(i−1)cᵢ − icᵢ + λcᵢ = 0 [only the icᵢ coefficient has changed,
from −2 to −1] ⇒

    c_{i+2} = − (λ − i²)/((i+2)(i+1)) · cᵢ

To make the expansion finite [and the resulting function square-integrable] we have
to choose

    λ = n²     (Eigenvalues)

This leads to the following Chebyshev polynomials:

    T₀ ≡ 1
    T₁ = x
    T₂ = 1 − 2x²
    T₃ = x − (4/3)x³
    ....

i.e. 1 − n²/2! x² + n²(n² − 2²)/4! x⁴ − n²(n² − 2²)(n² − 4²)/6! x⁶ + ... in the even case,
and x − (n² − 1)/3! x³ + (n² − 1)(n² − 3²)/5! x⁵ − (n² − 1)(n² − 3²)(n² − 5²)/7! x⁷ + ... in
the odd case.
    The corresponding set of second basic solutions would consist of functions of
the √(1 − x²) Qₙ(x) type, where Qₙ is also a polynomial of degree n.

Method of Frobenius
The power-series technique described so far is applicable only when both f(x)
and g(x) of the MAIN equation can be expanded at x = 0. This condition is
violated when either f or g (or both) involve a division by x or its power [e.g.
y″ + (1/x)y′ + (1 − 1/(4x²))y = 0].
    To make the power-series technique work in some of these cases, we must extend
it in a manner described shortly (the extension is called the method of Frobenius).
The new restriction is that the singularity of f is (at most) of the first degree in
x, and that of g is no worse than of the second degree. We can thus rewrite the
main equation as

    y″ + a(x)/x · y′ + b(x)/x² · y = 0                      (Frobenius)

where a(x) and b(x) are regular [i.e. 'expandable': a(x) = a₀ + a₁x + a₂x² +
a₃x³ + ...., and b(x) = b₀ + b₁x + b₂x² + b₃x³ + ....].
    The trial solution now has the form of

    y⁽ᵀ⁾ = Σ_{i=0}^∞ cᵢ x^{r+i} = c₀xʳ + c₁x^{r+1} + c₂x^{r+2} + ....

where r is a number (not necessarily an integer) yet to be found. When substituted
into the above differential equation (which is normally simplified by multiplying it
by x²), the overall coefficient of the lowest (rth) power of x is [r(r−1) + a₀r + b₀]c₀.
This must (as all the other coefficients) be equal to zero, yielding the so called
indicial equation for r

    r² + (a₀ − 1)r + b₀ = 0

Even after ignoring the possibility of complex roots [assume this never happens
to us], we have to categorize the solution of the indicial (simple quadratic) equation
into three separate cases:

1. Two distinct real roots which don’t differ by an integer

2. A double root

3. Two roots which differ by an integer, i.e. r2 − r1 is a nonzero integer (zero is


covered by Case 2). ¥

We have to develop our technique separately for each of the three cases:

«Distinct Real Roots»

The trial solution is substituted into the differential equation with r having
the value of one of the roots of the indicial equation. Making the coefficients of
each power of x cancel out, one gets the usual recurrence formula for the sequence
of the c-coefficients [this time we get two such sequences, one with the first root
r1 and the other, say c∗i , with r2 ; this means that we don’t have to worry about
intermingling the two basic solutions — the technique now automatically separates
them for us]. Each of the two recurrence formulas allows a free choice of the first c
(called c₀ and c₀*, respectively); the rest of each sequence must uniquely follow.

EXAMPLE:
x²y″ + (x² + 5/36)y = 0 [later on we will see that this is a special case of the so
called Bessel equation]. Since a(x) ≡ 0 and b(x) = x² + 5/36, the indicial
equation reads r² − r + 5/36 = 0 ⇒ r₁,₂ = 1/6 and 5/6 [Case 1]. Substituting our
trial solution into the differential equation yields

    Σ_{i=0}^∞ cᵢ(r+i)(r+i−1)x^{r+i} + (5/36) Σ_{i=0}^∞ cᵢx^{r+i} + Σ_{i=0}^∞ cᵢx^{r+i+2} = 0

Introducing a new dummy index i* = i + 2 we get Σ_{i=0}^∞ cᵢ[(r+i)(r+i−1) + 5/36]x^{r+i} +
Σ_{i*=2}^∞ c_{i*−2}x^{r+i*} = 0 [as always, i* can now be replaced by i]. Before we can
combine the two sums together, we have to deal with the exceptional i = 0 and 1
terms. The first (i = 0) term gave us our indicial equation and was made to
disappear by taking r to be one of the equation's two roots. The second one has the
coefficient c₁[(r+1)r + 5/36], which can be eliminated only by c₁ ≡ 0. The rest of
the left hand side is

    Σ_{i=2}^∞ { cᵢ[(r+i)(r+i−1) + 5/36] + cᵢ₋₂ } x^{r+i}  ⇒  cᵢ = −cᵢ₋₂ / ((r+i)(r+i−1) + 5/36)

So far we have avoided substituting a specific root for r [to be able to deal with
both cases at the same time]; now, to build our two basic solutions, we have to set

1. r = 1/6, getting cᵢ = −cᵢ₋₂/(i(i − 2/3)) ⇒ c₂ = −c₀/(2 × 4/3), c₄ =
   c₀/(4 × 2 × 4/3 × 10/3), c₆ = −c₀/(6 × 4 × 2 × 4/3 × 10/3 × 16/3), .....
   [the odd-indexed coefficients must be all equal to zero]. Even though the ex-
   pansion has an obvious pattern, the function cannot be identified as a 'known'
   function. Based on this expansion, one can introduce a new function [even-
   tually a whole set of them], called Bessel, as we do in full detail later on.
   The first basic solution is thus y₁ = c₀x^{1/6}(1 − (3/8)x² + (9/320)x⁴ − (9/10240)x⁶ + ...).
   [For those of you who know the Γ-function, the solution can be expressed in
   a more compact form of c̃ Σ_{k=0}^∞ (−1)ᵏ(x/2)^{2k+1/6}/(k! Γ(k + 2/3)).]

2. r = 5/6, getting cᵢ* = −cᵢ₋₂*/(i(i + 2/3)) ⇒ c₂* = −c₀*/(2 × 8/3), c₄* =
   c₀*/(4 × 2 × 8/3 × 14/3), c₆* = −c₀*/(6 × 4 × 2 × 8/3 × 14/3 × 20/3), ..... ⇒
   y₂ = c₀*x^{5/6}(1 − (3/16)x² + (9/896)x⁴ − (9/35840)x⁶ + ...)
   [= c̃* Σ_{k=0}^∞ (−1)ᵏ(x/2)^{2k+5/6}/(k! Γ(k + 4/3))].
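   A quick numerical cross-check of the r = 1/6 series (a sketch of mine; the
   identification with scipy's Bessel function of order −1/3 is my extra
   assumption, suggested by the compact Γ-form just quoted):

       import numpy as np
       from scipy.special import jv

       def y1(x, terms=30):
           c, s = 1.0, 1.0
           for i in range(2, 2*terms, 2):     # c_i = -c_{i-2}/(i(i - 2/3))
               c /= -(i*(i - 2/3))
               s += c * x**i
           return x**(1/6) * s

       # if y1 is proportional to sqrt(x)*J_{-1/3}(x), this ratio is a constant:
       print(y1(1.7)/(np.sqrt(1.7)*jv(-1/3, 1.7)))
       print(y1(0.9)/(np.sqrt(0.9)*jv(-1/3, 0.9)))   # same number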

«Double Root»

The first basic solution is constructed in the usual manner of y₁ = c₀xʳ +
c₁x^{r+1} + c₂x^{r+2} + .... The second basic solution has the form [guaranteed to work]
of:

    y₂ = y₁ ln x + c₀*xʳ + c₁*x^{r+1} + c₂*x^{r+2} + .....

where y₁ is the first basic solution (with c₀ set equal to 1, i.e. removing the
multiplicative constant). The corresponding recurrence formula (for the cᵢ*'s) will
offer us a free choice of c₀*, which we normally set equal to 0 (a nonzero choice
would only add c₀*y₁ to our second basic solution). After that, the rest of the cᵢ*'s
uniquely follows (they may turn out to be all 0 in some cases).

EXAMPLES:
• (1 + x)x²y″ − (1 + 2x)xy′ + (1 + 2x)y = 0 [a(x) = −(1+2x)/(1+x) and b(x) = (1+2x)/(1+x)].
  The indicial equation is r² − 2r + 1 = 0 ⇒ r₁,₂ = 1 ± 0 [double]. Substituting
  Σ_{i=0}^∞ cᵢx^{i+1} for y yields Σ cᵢ(i+1)i x^{i+1} + Σ cᵢ(i+1)i x^{i+2} − Σ cᵢ(i+1)x^{i+1} −
  2Σ cᵢ(i+1)x^{i+2} + Σ cᵢx^{i+1} + 2Σ cᵢx^{i+2} = 0. Combining terms with like
  powers of x: Σ_{i=0}^∞ cᵢi²x^{i+1} + Σ_{i=0}^∞ cᵢi(i−1)x^{i+2} = 0. Adjusting the index of the
  second sum: Σ_{i=0}^∞ cᵢi²x^{i+1} + Σ_{i=1}^∞ cᵢ₋₁(i−1)(i−2)x^{i+1} = 0. The 'exceptional'
  i = 0 term must equal zero automatically; our indicial equation takes care
  of that [check]. The rest implies cᵢ = −(i−1)(i−2)/i² · cᵢ₋₁ for i = 1, 2, 3, ....,
  yielding c₁ = 0, c₂ = 0, .... The first basic solution is thus c₀x [i.e. y₁ = x, verify!].
  Once we have identified the first basic solution as a simple function [when
  lucky] we have two options:

  (a) Use V of P: y(x) = c(x)·x ⇒ (1 + x)xc″ + c′ = 0 ⇒ dz/z = (1/(1+x) − 1/x) dx ⇒
      ln z = ln(1 + x) − ln x + c̃ ⇒ c′ = c₀*(1+x)/x ⇒ c(x) = c₀*(ln x + x) + c₀. This
      makes it clear that the second basic solution is x ln x + x².

  (b) Insist on using Frobenius: Substitute y⁽ᵀ⁾ = x ln x + Σ_{i=0}^∞ cᵢ*x^{i+1} into the
      original equation. The sum will give you the same contribution as before;
      the x ln x term (having no unknowns) yields an extra, non-homogeneous
      term of the corresponding recurrence equation. There is a bit of an
      automatic simplification when substituting y₁ ln x (our x ln x) into the
      equation, as the ln x-proportional terms must cancel. What we need is
      thus y → 0, y′ → y₁/x and y″ → 2y₁′/x − y₁/x². This substitution results in
      the same old [except for c → c*] Σ_{i=0}^∞ cᵢ*i²x^{i+1} + Σ_{i=1}^∞ cᵢ₋₁*(i−1)(i−2)x^{i+1} on
      the left hand side of the equation, and −(1 + x)x² · (1/x) + (1 + 2x)x = x²
      [don't forget to reverse the sign] on the right hand side. This yields
      the same set of recurrence formulas as before, except at i = 1 [due
      to the nonzero right-hand-side term]. Again we get a 'free choice' of
      c₀* [the indicial equation takes care of that], which we utilize by setting c₀*
      equal to zero (or anything which simplifies the answer), since a nonzero
      c₀* would only add a redundant c₀*y₁ to our second basic solution. The
      x²-part of the equation (i = 1) then reads: c₁*x² + 0 = x² ⇒ c₁* = 1.
      The rest of the sequence follows from cᵢ* = −(i−1)(i−2)/i² · cᵢ₋₁*, i = 2, 3,
      4, ... ⇒ c₂* = c₃* = .... = 0 as before. The second basic solution is thus
      y₁ ln x + c₁*x² = x ln x + x² [check].
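      Either way, the answer is easy to confirm with a computer-algebra system;
      a small sketch of mine:

          import sympy as sp

          x = sp.symbols('x', positive=True)
          y = x*sp.log(x) + x**2
          expr = ((1 + x)*x**2*sp.diff(y, x, 2)
                  - (1 + 2*x)*x*sp.diff(y, x) + (1 + 2*x)*y)
          print(sp.simplify(expr))   # 0, so y = x ln x + x^2 indeed solves it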

• x(x−1)y″ + (3x−1)y′ + y = 0 [a(x) = (3x−1)/(x−1), b(x) = x/(x−1)] ⇒ r² = 0 [double root
  of 0]. Substituting y⁽ᵀ⁾ = Σ_{i=0}^∞ cᵢx^{i+0} yields Σ i(i−1)cᵢxⁱ − Σ i(i−1)cᵢx^{i−1} +
  3Σ icᵢxⁱ − Σ icᵢx^{i−1} + Σ cᵢxⁱ = 0 ⇔ Σ_{i=0}^∞ [i² + 2i + 1]cᵢxⁱ − Σ_{i=0}^∞ i²cᵢx^{i−1} = 0 ⇔
  Σ_{i=0}^∞ (i+1)²cᵢxⁱ − Σ_{i=−1}^∞ (i+1)²cᵢ₊₁xⁱ = 0. The lowest, i = −1 coefficient
  is zero automatically, thus c₀ is arbitrary. The remaining coefficients are
  (i+1)²[cᵢ − cᵢ₊₁], set to zero ⇒ cᵢ₊₁ = cᵢ for i = 0, 1, 2, .... ⇒ c₀ = c₁ = c₂ =
  c₃ = .... ⇒ 1 + x + x² + x³ + .... = 1/(1−x) is the first basic solution. Again, we
  can get the second basic solution by either the V of P or Frobenius technique.
  We demonstrate only the latter: y⁽ᵀ⁾ = ln x/(1−x) + Σ_{i=0}^∞ cᵢ*x^{i+0}, getting the same left
  hand side and the following right hand side: x(x − 1)[2/(x(1−x)²) − 1/(x²(1−x))] +
  (3x − 1) · 1/(x(1−x)) = 0 [not typical, but it may happen]. This means that not
  only c₀*, but all the other c*-coefficients can be set equal to zero. The second
  basic solution is thus ln x/(1−x) [which can be verified easily by direct substitution].

«r₁ − r₂ Equals a Positive Integer»

(we choose r₁ > r₂).

The first basic solution can be constructed, based on y⁽ᵀ⁾ = Σ_{i=0}^∞ cᵢx^{i+r₁}, in the
usual manner (don't forget that r₁ should be the bigger root). The second basic
solution will then have the form of

    K y₁ ln x + Σ_{i=0}^∞ cᵢ* x^{i+r₂}

where K becomes one of the unknowns (on par with the cᵢ*'s), but it may turn
out to have a zero value. Note that we will first have a free choice of c₀* (must be
non-zero) and then, when we reach it, we will also be offered a free choice of c*_{r₁−r₂}
(to simplify the solution, we usually set it equal to zero — a nonzero choice would
only add an extra multiple of y₁).

EXAMPLES:
• (x² − 1)x²y″ − (x² + 1)xy′ + (x² + 1)y = 0 [a(x) = −(x²+1)/(x²−1) and b(x) = (x²+1)/(x²−1)] ⇒
  r² − 1 = 0 ⇒ r₁,₂ = 1 and −1. Using y⁽ᵀ⁾ = Σ_{i=0}^∞ cᵢx^{i+1} we get: Σ (i+1)icᵢx^{i+3} −
  Σ (i+1)icᵢx^{i+1} − Σ (i+1)cᵢx^{i+3} − Σ cᵢ(i+1)x^{i+1} + Σ cᵢx^{i+3} + Σ cᵢx^{i+1} =
  0 ⇔ Σ_{i=0}^∞ i²cᵢx^{i+3} − Σ_{i=0}^∞ i(i+2)cᵢx^{i+1} = 0 ⇔ Σ_{i=0}^∞ i²cᵢx^{i+3} − Σ_{i=−2}^∞ (i+2)(i+4)cᵢ₊₂x^{i+3} = 0.
  The lowest i = −2 term is zero automatically [⇒ c₀ can have any value]; the
  next i = −1 term [still 'exceptional'] disappears only when c₁ = 0. The rest of
  the c-sequence follows from cᵢ₊₂ = i²cᵢ/((i+2)(i+4)) with i = 0, 1, 2, ... ⇒
  c₂ = c₃ = c₄ = .... = 0. The first basic solution is thus c₀x [y₁ = x, discarding
  the constant]. To construct the second basic solution, we substitute
  Kx ln x + Σ_{i=0}^∞ cᵢ*x^{i−1} for y, getting: Σ (i−1)(i−2)cᵢ*x^{i+1} − Σ (i−1)(i−2)cᵢ*x^{i−1} −
  Σ (i−1)cᵢ*x^{i+1} − Σ (i−1)cᵢ*x^{i−1} + Σ cᵢ*x^{i+1} + Σ cᵢ*x^{i−1} =
  Σ_{i=0}^∞ (i−2)²cᵢ*x^{i+1} − Σ_{i=0}^∞ i(i−2)cᵢ*x^{i−1} = Σ_{i=0}^∞ (i−2)²cᵢ*x^{i+1} − Σ_{i=−2}^∞ (i+2)icᵢ₊₂*x^{i+1}
  on the left hand side, and −(x² − 1)x² · (K/x) + (x² + 1)x · K = 2Kx on the
  right hand side (the contribution of Kx ln x). The i = −2 term allows c₀* to
  be arbitrary, i = −1 requires c₁* = 0, and i = 0 [due to the right hand side,
  the x¹-terms must be also considered 'exceptional'] requires 4c₀* = 2K ⇒
  K = 2c₀*, and leaves c₂* free for us to choose (we take c₂* = 0). After that,
  cᵢ₊₂* = (i−2)²cᵢ*/((i+2)i) where i = 1, 2, 3, .... ⇒ c₃* = c₄* = c₅* = ..... = 0. The second
  basic solution is thus c₀*(2x ln x + 1/x) [verify!].

• x²y″ + xy′ + (x² − 1/4)y = 0 ⇒ r² − 1/4 = 0 ⇒ r₁,₂ = 1/2 and −1/2. Substituting y⁽ᵀ⁾ =
  Σ_{i=0}^∞ cᵢx^{i+1/2}, we get Σ (i + 1/2)(i − 1/2)cᵢx^{i+1/2} + Σ (i + 1/2)cᵢx^{i+1/2} + Σ cᵢx^{i+5/2} −
  (1/4) Σ cᵢx^{i+1/2} = 0 ⇔ Σ_{i=0}^∞ (i+1)icᵢx^{i+1/2} + Σ_{i=0}^∞ cᵢx^{i+5/2} = 0 ⇔ Σ_{i=−2}^∞ (i+3)(i+2)cᵢ₊₂x^{i+5/2} +
  Σ_{i=0}^∞ cᵢx^{i+5/2} = 0, which yields: a free choice of c₀, c₁ = 0 and
  cᵢ₊₂ = −cᵢ/((i+2)(i+3)) where i = 0, 1, ..... ⇒ c₂ = −c₀/3!, c₄ = c₀/5!, c₆ = −c₀/7!, ..., and
  c₃ = c₅ = ... = 0. The first basic solution thus equals c₀x^{1/2}(1 − x²/3! + x⁴/5! − x⁶/7! +
  ...) = c₀ sin x/√x [⇒ y₁ = sin x/√x]. Substituting Ky₁ ln x + Σ_{i=0}^∞ cᵢ*x^{i−1/2} for y similarly
  reduces the equation to Σ_{i=0}^∞ (i−1)icᵢ*x^{i−1/2} + Σ_{i=0}^∞ cᵢ*x^{i+3/2} on the left hand side
  and −x² · (−y₁/x² + 2y₁′/x) − x · y₁/x = −2xy₁′ = K(−x^{1/2} + 5x^{5/2}/3! − 9x^{9/2}/5! + ...) on the
  right hand side or, equivalently, Σ_{i=−2}^∞ (i+1)(i+2)cᵢ₊₂*x^{i+3/2} + Σ_{i=0}^∞ cᵢ*x^{i+3/2} =
  K(−x^{1/2} + 5x^{5/2}/3! − 9x^{9/2}/5! + ...). This implies that c₀* can have any value (i = −2),
  c₁* can also have any value (we make it 0), K must equal zero (i = −1), and
  cᵢ₊₂* = −cᵢ*/((i+1)(i+2)) for i = 0, 1, 2, ... ⇒ c₂* = −c₀*/2!, c₄* = c₀*/4!, c₆* = −c₀*/6!, ... ⇒
  y₂ = x^{−1/2}(1 − x²/2! + x⁴/4! − ....) = cos x/√x. ¥

In each of the previous examples the second basic solution could have been
constructed by V of P — try it.
Also note that so far we have avoided solving a truly non-homogeneous re-
currence formula — K never appeared in more than one of its (infinitely many)
equations.

A few Special functions of Mathematical Physics


This section demonstrates various applications of the Frobenius technique.

«Laguerre Equation»

    y″ + (1 − x)/x · y′ + n/x · y = 0

or, equivalently:

    (xe⁻ˣy′)′ + nye⁻ˣ = 0

which identifies it as an eigenvalue problem, with the solutions being orthogonal
in the ∫₀^∞ e⁻ˣ Lₙ₁(x) · Lₙ₂(x) dx sense.
    Since a(x) = 1 − x and b(x) = nx we get r² = 0 [duplicate roots]. Substituting
Σ cᵢxⁱ for y in the original equation (multiplied by x) results in Σ_{i=0}^∞ i²cᵢx^{i−1} +
Σ_{i=0}^∞ (n−i)cᵢxⁱ = 0 ⇔ Σ_{i=−1}^∞ (i+1)²cᵢ₊₁xⁱ + Σ_{i=0}^∞ (n−i)cᵢxⁱ = 0 ⇒ cᵢ₊₁ = −(n−i)/(i+1)² · cᵢ for
i = 0, 1, 2, .... Only polynomial solutions are square integrable in the above sense
(relevant to Physics), so n must be an integer, to make cₙ₊₁ and all subsequent
cᵢ-values equal to 0 and thus solve the eigenvalue problem.
    The first basic solution is thus Lₙ(x) [the standard notation for Laguerre
polynomials] =

    1 − (n/1)x + n(n−1)/(2!)² x² − n(n−1)(n−2)/(3!)² x³ + .... ± xⁿ/n!

The second basic solution does not solve the eigenvalue problem (it is not square
integrable), so we will not bother to construct it [not that it should be difficult —
try it if you like].
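A sketch of mine generating Lₙ from this recurrence (scipy uses the same
Lₙ(0) = 1 normalization, so its values should agree):

    from fractions import Fraction
    from scipy.special import eval_laguerre

    def laguerre_coeffs(n):
        c = [Fraction(1)]
        for i in range(n):                     # c_{i+1} = -(n-i)/(i+1)^2 c_i
            c.append(-Fraction(n - i, (i + 1)**2) * c[i])
        return c

    print(laguerre_coeffs(3))   # [1, -3, 3/2, -1/6]: 1 - 3x + (3/2)x^2 - x^3/6
    x = 0.7
    print(sum(float(ck)*x**k for k, ck in enumerate(laguerre_coeffs(3))),
          eval_laguerre(3, x))  # the two numbers match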

Optional: Based on the Laguerre polynomials, one can develop the following solu-
    tion to one of the most important problems in Physics (Quantum-Mechanical
    treatment of the Hydrogen atom):

    We know that xL″ₙ₊ₘ + (1 − x)L′ₙ₊ₘ + (n + m)Lₙ₊ₘ = 0 [n and m are two
    integers; this is just a restatement of the Laguerre equation]. Differentiating
    2m + 1 times results in xL⁽²ᵐ⁺³⁾ₙ₊ₘ + (2m+1)L⁽²ᵐ⁺²⁾ₙ₊ₘ + (1 − x)L⁽²ᵐ⁺²⁾ₙ₊ₘ −
    (2m+1)L⁽²ᵐ⁺¹⁾ₙ₊ₘ + (n+m)L⁽²ᵐ⁺¹⁾ₙ₊ₘ = 0, clearly indicating that L⁽²ᵐ⁺¹⁾ₙ₊ₘ is a
    solution to xy″ + (2m + 2 − x)y′ + (n − m − 1)y = 0. Introducing a new
    dependent variable u(x) = x^{m+1}e^{−x/2}y, i.e. substituting y = x^{−m−1}e^{x/2}u into
    the previous equation, leads to u″ + [−1/4 + n/x − m(m+1)/x²]u = 0. Introducing a
    new independent variable z = (n/2)x [⇒ u′ = (n/2)u̇ and u″ = (n/2)²ü where each
    dot implies a z-derivative] results in ü + [−1/n² + 2/z − m(m+1)/z²]u = 0.
    We have thus effectively solved the following (S-L) eigenvalue problem:

        ü + [λ + 2/z − m(m+1)/z²]u = 0

    [m considered fixed], proving that the eigenvalues are λ = −1/n² and con-
    structing the respective eigenfunctions [the so called orbitals]: u(z) =
    (2z/n)^{m+1} e^{−z/n} L⁽²ᵐ⁺¹⁾ₙ₊ₘ(2z/n). Any two such functions with the same m but dis-
    tinct n₁ and n₂ will be orthogonal, thus: ∫₀^∞ u₁(z)u₂(z) dz = 0 [recall the
    general S-L theory relating to (pu′)′ + (λq + r)u = 0]. Understanding this
    short example takes care of a nontrivial chunk of modern Physics. ⊗

«Bessel Equation»

    x²y″ + xy′ + (x² − n²)y = 0

where n has any (non-negative) value.

The indicial equation is r² − n² = 0, yielding r₁,₂ = n, −n.
    To build the first basic solution we use y⁽ᵀ⁾ = Σ_{i=0}^∞ cᵢx^{i+n} ⇒ Σ_{i=0}^∞ i(i+2n)cᵢx^{i+n} +
Σ_{i=0}^∞ cᵢx^{i+n+2} = 0 ⇔ Σ_{i=0}^∞ i(i+2n)cᵢx^{i+n} + Σ_{i=2}^∞ cᵢ₋₂x^{i+n} = 0 ⇒ c₀ arbitrary, c₁ =
c₃ = c₅ = ... = 0 and cᵢ = −cᵢ₋₂/(i(2n+i)) for i = 2, 4, 6, ... ⇒ c₂ = −c₀/(2(2n+2)), c₄ =
c₀/(4×2×(2n+2)×(2n+4)), c₆ = −c₀/(6×4×2×(2n+2)×(2n+4)×(2n+6)), ...,

    c₂ₖ = (−1)ᵏc₀ / (2²ᵏ(n+1)(n+2).....(n+k)k!)

in general, where k = 0, 1, 2, .... When n is an integer, the last expression can be
written as c₂ₖ = (−1)ᵏn!c₀/(2²ᵏ(n+k)!k!) ≡ (−1)ᵏc̃₀/(2²ᵏ⁺ⁿk!(n+k)!). The first basic
solution is thus

    Σ_{k=0}^∞ (−1)ᵏ(x/2)^{2k+n} / (k!(n+k)!)

It is called the Bessel function of the first kind of 'order' n [note that the 'order'
has nothing to do with the order of the corresponding equation, which is always
2], the standard notation being Jₙ(x); its values (if not on your calculator) can be
found in tables.
When n is a non-integer, one has to extend the definition of the factorial
function to non-integer arguments. This extension is called a Γ-function, and is
'shifted' with respect to the factorial function, thus: n! ≡ Γ(n + 1). For positive α
(= n + 1) values, it is achieved by the following integral

    Γ(α) ≡ ∫₀^∞ x^{α−1} e⁻ˣ dx

[note that for integer α this yields (α − 1)!]; for negative α values the extension is
done with the help of

    Γ(α − 1) = Γ(α) / (α − 1)

[its values can often be found on your calculator].
    Using this extension, the previous Jₙ(x) solution (of the Bessel equation) be-
comes correct for any n [upon the (n + k)! → Γ(n + k + 1) replacement].
    When n is not an integer, the same formula with n → −n provides the second
basic solution [easy to verify].
Of the non-integer cases, the most important are those with a half-integer
value of n. One can easily verify [you will need Γ(½) = √π] that the corresponding
Bessel functions are elementary, e.g.

    J_{1/2}(x) = √(2/(πx)) · sin x
    J_{−1/2}(x) = √(2/(πx)) · cos x
    J_{3/2}(x) = √(2/(πx)) · (sin x/x − cos x)
    ...
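A numerical spot-check of these formulas (my own sketch):

    import numpy as np
    from scipy.special import jv

    x = 2.3
    print(jv(0.5, x),  np.sqrt(2/(np.pi*x))*np.sin(x))
    print(jv(-0.5, x), np.sqrt(2/(np.pi*x))*np.cos(x))
    print(jv(1.5, x),  np.sqrt(2/(np.pi*x))*(np.sin(x)/x - np.cos(x)))
    # each pair of numbers agrees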
Unfortunately, the most common is the case of n being an integer.
    Constructing the second basic solution is then a lot more difficult. It
has, as we know, the form of Ky₁ ln x + Σ_{i=0}^∞ cᵢ*x^{i−n}. Substituting this into the
Bessel equation yields Σ_{i=0}^∞ i(i−2n)cᵢ*x^{i−n} + Σ_{i=2}^∞ cᵢ₋₂*x^{i−n} on the left hand side and

    −K[x²·(2y₁′/x − y₁/x²) + x·(y₁/x)] = −2K Σ_{k=0}^∞ (−1)ᵏ(2k+n)(x/2)^{2k+n}/(k!(n+k)!)
                                       ≡ −2K Σ_{k=n}^∞ (−1)^{k−n}(2k−n)(x/2)^{2k−n}/((k−n)!k!)

on the right hand side of the recurrence formula.
    One can solve it by taking c₀* to be arbitrary, c₁* = c₃* = c₅* = .... = 0, and
c₂* = c₀*/(2(2n−2)), c₄* = c₀*/(4×2×(2n−2)×(2n−4)), c₆* = c₀*/(6×4×2×(2n−2)×(2n−4)×(2n−6)), ...,

    c₂ₖ* = c₀* / (2²ᵏ(n−1)(n−2).....(n−k)k!) ≡ c₀*(n−k−1)! / (2²ᵏ(n−1)!k!)

up to and including k = n − 1 [i = 2n − 2]. When we reach i = 2n the right hand
side starts contributing! The overall coefficient of xⁿ requires c₂ₙ₋₂* = −2K/(2ⁿ(n−1)!) ⇒

    K = −c₀* / (2^{n−1}(n−1)!)

allowing a free choice of c₂ₙ*.


To solve the remaining part of the recurrence formula (truly non-homogeneous)
is more difficult, so we only quote (and verify) the answer:

    c₂ₖ* = c₀* (−1)^{k−n}(h_{k−n} + hₖ) / (2²ᵏ(k−n)!k!(n−1)!)

for k ≥ n, where hₖ = 1 + 1/2 + 1/3 + .... + 1/k.

Proof: Substituting this c₂ₖ* ≡ cᵢ* into the recurrence formula and cancelling the
    common part of (−1)^{k−n}c₀*/(2²ᵏ(k−n)!k!(n−1)!) · x^{2k−n} yields: 2k(2k−2n)(h_{k−n} + hₖ) −
    4k(k−n)(h_{k−n−1} + h_{k−1}) = 4(2k−n). This is a true identity, as h_{k−n} − h_{k−n−1} +
    hₖ − h_{k−1} = 1/(k−n) + 1/k = (2k−n)/(k(k−n)) [multiply by 4k(k−n)]. ¤

The second basic solution is usually written (a slightly different normalizing
constant is used, and a bit of Jₙ(x) is added) as:

    Yₙ(x) = (2/π) Jₙ(x) [ln(x/2) + γ] + (1/π) Σ_{k=n}^∞ (−1)^{k+1−n}(h_{k−n} + hₖ)/((k−n)!k!) · (x/2)^{2k−n}
            − (1/π) Σ_{k=0}^{n−1} (n−k−1)!/k! · (x/2)^{2k−n}

where γ is the Euler constant ≈ 0.5772 [the reason for the extra term is that the last
formula is derived based on yet another, possibly more elegant approach than ours,
namely: lim_{ν→n} (J_ν cos(νπ) − J_{−ν})/sin(νπ)]. Yₙ(x) is called the Bessel function of second
kind of order n.
More on Bessel functions:
    To deal with initial-value and boundary-value problems, we have to be able to
evaluate Jₙ, Jₙ′, Yₙ and Yₙ′ [concentrating on integer n]. The tables on page A97
of your textbook provide only J₀ and J₁; the rest can be obtained by repeated
application of

    Jₙ₊₁(x) = (2n/x) Jₙ(x) − Jₙ₋₁(x)

and/or

    Jₙ′(x) = [Jₙ₋₁(x) − Jₙ₊₁(x)] / 2

[with the understanding that J₋₁(x) = −J₁(x) and lim_{x→0} Jₙ(x) = 0 for n = 1, 2,
3, ...], and the same set of formulas with Y in place of J.
Proof: [(x/2)ⁿJₙ]′ = (n/2)(x/2)^{n−1}Jₙ + (x/2)ⁿJₙ′ = Σ_{k=0}^∞ (−1)ᵏ(x/2)^{2k+2n−1}/(k!(n+k−1)!) =
    (x/2)ⁿJₙ₋₁ and [(x/2)⁻ⁿJₙ]′ = −(n/2)(x/2)^{−n−1}Jₙ + (x/2)⁻ⁿJₙ′ =
    Σ_{k=1}^∞ (−1)ᵏ(x/2)^{2k−1}/((k−1)!(n+k)!) = Σ_{k=0}^∞ (−1)^{k+1}(x/2)^{2k+1}/(k!(n+k+1)!) =
    −(x/2)⁻ⁿJₙ₊₁. Divided by (x/2)ⁿ and (x/2)⁻ⁿ respectively, these give

        (n/x) Jₙ + Jₙ′ = Jₙ₋₁

    and

        −(n/x) Jₙ + Jₙ′ = −Jₙ₊₁

    Adding and subtracting the two yields the rest [for the Y functions the proof
    would be slightly more complicated, but the results are the same]. ¤

EXAMPLES:

1. J₃(1.3) = (4/1.3)J₂(1.3) − J₁(1.3) = (4/1.3)[(2/1.3)J₁(1.3) − J₀(1.3)] − J₁(1.3) =
   (4/1.3)[(2/1.3) × 0.52202 − 0.62009] − 0.52202 = 0.0411

2. J₂′(1.3) = ½[J₁(1.3) − J₃(1.3)] = ½[0.52202 − 0.0411] = 0.2405 ¥
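   Both numbers are easy to cross-check (a sketch of mine):

       from scipy.special import jv, jvp
       print(jv(3, 1.3))    # ~0.0411
       print(jvp(2, 1.3))   # ~0.2405  (jvp gives the derivative of J_n)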

Modified Bessel equation:

    x²y″ + xy′ − (x² + n²)y = 0

[differs from the Bessel equation by a single sign]. The two basic solutions can be
developed in an almost identical manner to the 'unmodified' Bessel case [the results
differ only by an occasional sign]. We will not duplicate our effort, and only mention
the new notation: the two basic solutions are now Iₙ(x) and Kₙ(x) [modified
Bessel functions of first and second kind]. Only I₀ and I₁ need to be tabulated,
as Iₙ₊₁(x) = Iₙ₋₁(x) − (2n/x)Iₙ and Iₙ′ = (Iₙ₋₁ + Iₙ₊₁)/2 (with the analogous
formulas, up to signs, holding for Kₙ).

Transformed Bessel equation:

    x²y″ + (1 − 2a)xy′ + (b²c²x²ᶜ − n²c² + a²)y = 0

where a, b, c and n are arbitrary constants [the equation could have been written as
x²y″ + Axy′ + (B²xᶜ − D)y = 0, but the above parametrization is more convenient].
    To find the solution we substitute y(x) = xᵃ·u(x) [introducing a new dependent
variable u], getting: a(a−1)u + 2axu′ + x²u″ + (1 − 2a)(au + xu′) + (b²c²x²ᶜ −
n²c² + a²)u =

    x²u″ + xu′ + (b²c²x²ᶜ − n²c²)u = 0

Then we introduce z = bxᶜ as a new independent variable [recall that u′ → (du/dz) ·
bcx^{c−1} and u″ → (d²u/dz²)·(bcx^{c−1})² + (du/dz)·bc(c−1)x^{c−2}] ⇒ x²·[(d²u/dz²)·(bcx^{c−1})² +
(du/dz)·bc(c−1)x^{c−2}] + x·(du/dz)·bcx^{c−1} + (b²c²x²ᶜ − n²c²)u = [after cancelling c²]

    z² · d²u/dz² + z · du/dz + (z² − n²) u = 0

which is the Bessel equation, having u(z) = C₁Jₙ(z) + C₂Yₙ(z) [or C₂J₋ₙ(z) when
n is not an integer] as its general solution.
    The solution to the original equation is thus

    C₁xᵃJₙ(bxᶜ) + C₂xᵃYₙ(bxᶜ)

EXAMPLES:

1. xy″ − y′ + xy = 0 [same as x²y″ − xy′ + x²y = 0] ⇒ a = 1 [from 1 − 2a =
   −1], c = 1 [from b²c²x²ᶜy = x²y], b = 1 [from b²c² = 1] and n = 1 [from
   a² − n²c² = 0] ⇒
       y(x) = C₁xJ₁(x) + C₂xY₁(x)

2. x²y″ − 3xy′ + 4(x⁴ − 3)y = 0 ⇒ a = 2 [from 1 − 2a = −3], c = 2 [from
   b²c²x²ᶜy = 4x⁴y], b = 1 [from b²c² = 4] and n = 2 [from a² − n²c² = −12] ⇒

       y = C₁x²J₂(x²) + C₂x²Y₂(x²)

3. x²y″ + ((81/4)x³ − 35/4)y = 0 ⇒ a = ½ [from 1 − 2a = 0], c = 3/2 [from x³], b = 3
   [from b²c² = 81/4] and n = 2 [from a² − n²c² = −35/4] ⇒

       y = C₁√x J₂(3x^{3/2}) + C₂√x Y₂(3x^{3/2})

4. x²y″ − 5xy′ + (x + 35/4)y = 0 ⇒ a = 3 [1 − 2a = −5], c = ½ [xy], b = 2 [b²c² = 1]
   and n = 1 [a² − n²c² = 35/4] ⇒

       y = C₁x³J₁(2√x) + C₂x³Y₁(2√x)
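   A numerical sanity check of Example 1 (my own sketch): plug y = xJ₁(x) into
   the left hand side at an arbitrary point and confirm it vanishes.

       from scipy.special import jv, jvp
       x = 1.7
       y   = x*jv(1, x)
       yp  = jv(1, x) + x*jvp(1, x)
       ypp = 2*jvp(1, x) + x*jvp(1, x, 2)     # second derivative via jvp(.., 2)
       print(x*ypp - yp + x*y)                # ~0, up to rounding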

«Hypergeometric Equation»

    x(1 − x)y″ + [c − (a + b + 1)x]y′ − aby = 0

⇒ r² + (c − 1)r = 0 ⇒ r₁,₂ = 0 and 1 − c.
    Substituting y⁽ᵀ⁾ = Σ_{i=0}^∞ cᵢxⁱ yields: Σ_{i=−1}^∞ (i+1)(i+c)cᵢ₊₁xⁱ − Σ_{i=0}^∞ (i+a)(i+b)cᵢxⁱ = 0 ⇒
c₁ = ab/(1·c) c₀, c₂ = a(a+1)b(b+1)/(1·2·c(c+1)) c₀, c₃ = a(a+1)(a+2)b(b+1)(b+2)/(1·2·3·c(c+1)(c+2)) c₀, ....,
which shows that the first basic solution is

    1 + ab/(1·c) x + a(a+1)b(b+1)/(1·2·c(c+1)) x² + a(a+1)(a+2)b(b+1)(b+2)/(1·2·3·c(c+1)(c+2)) x³ + ....

The usual notation for this series is F(a, b; c; x), and it is called the hyper-
geometric function. Note that a and b are interchangeable. Also note that
when either of them is a negative integer (or zero), F(a, b; c; x) is just a simple
polynomial (of the corresponding degree) — please learn to identify it as such!
    Similarly, when c is a noninteger [to avoid Case 3], we can show [skipping the
details now] that the second basic solution is

    x^{1−c} F(a + 1 − c, b + 1 − c; 2 − c; x)

[this may be correct even in some Case 3 situations, but don't forget to verify it].

EXAMPLE:

1. x(1 − x)y″ + (3 − 5x)y′ − 4y = 0 ⇒ ab = 4, a + b + 1 = 5 ⇒ b² − 4b + 4 = 0 ⇒
   a = 2, b = 2, and c = 3 ⇒ C₁F(2, 2; 3; x) + C₂x⁻²F(0, 0; −1; x) [the second
   part is subject to verification]. Since F(0, 0; −1; x) ≡ 1, the second basic
   solution is x⁻², which does meet the equation [substitute].
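   scipy knows this function as hyp2f1; here is a sketch of mine comparing it
   with the series built from the recurrence cᵢ₊₁ = (i+a)(i+b)/((i+1)(i+c)) · cᵢ:

       from scipy.special import hyp2f1

       a, b, c, x = 2.0, 2.0, 3.0, 0.3
       term, total = 1.0, 1.0
       for i in range(40):
           term *= (i + a)*(i + b)/((i + 1)*(i + c)) * x
           total += term
       print(total, hyp2f1(a, b, c, x))   # the two agree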

Transformed Hypergeometric equation:

    (x − x₁)(x₂ − x)y″ + [D − (a + b + 1)x]y′ − aby = 0

where x₁ and x₂ (in addition to a, b, and D) are specific numbers.
    One can easily verify that changing the independent variable to z = (x − x₁)/(x₂ − x₁)
transforms the equation to

    z(1 − z) d²y/dz² + [ (D − (a + b + 1)x₁)/(x₂ − x₁) − (a + b + 1)z ] dy/dz − aby = 0

which we know how to solve [hypergeometric].

EXAMPLES:

1. 4(x² − 3x + 2)y″ − 2y′ + y = 0 ⇒ (x − 1)(2 − x)y″ + ½y′ − ¼y = 0 ⇒ x₁ = 1,
   x₂ = 2, ab = ¼ and a + b + 1 = 0 ⇒ b² + b + ¼ = 0 ⇒ a = −½ and b = −½,
   and finally c = (½ − (a+b+1)x₁)/(x₂ − x₁) = ½. The solution is thus

       y = C₁F(−½, −½; ½; x − 1) + C₂(x − 1)^{1/2} F(0, 0; 3/2; x − 1)

   [since z = x − 1]. Note that F(0, 0; 3/2; x − 1) ≡ 1 [some hypergeometric
   functions are elementary or even trivial, e.g. F(1, 1; 2; x) ≡ −ln(1−x)/x, etc.].

2. 3x(1 + x)y″ + xy′ − y = 0 ⇒ (x + 1)(0 − x)y″ − ⅓xy′ + ⅓y = 0 ⇒ x₁ = −1
   [note the sign!], x₂ = 0, ab = −⅓ and a + b + 1 = ⅓ ⇒ a = ⅓ and b = −1 ⇒
   c = (0 − ⅓·(−1))/1 = ⅓ ⇒

       y = C₁F(⅓, −1; ⅓; x + 1) + C₂(x + 1)^{2/3} F(1, −⅓; 5/3; x + 1)

   [the first F(...) equals −x; coincidentally, even the second F(...) can be
   converted to a rather lengthy expression involving ordinary functions]. ¥
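   The 'trivial' identity quoted in Example 1 is also a one-liner to spot-check
   (my own sketch):

       import numpy as np
       from scipy.special import hyp2f1
       x = 0.42
       print(hyp2f1(1, 1, 2, x), -np.log(1 - x)/x)   # the two agree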
Part II
VECTOR ANALYSIS


Chapter 7 FUNCTIONS IN THREE DIMENSIONS — DIFFERENTIATION
3-D Geometry (overview)
It was already agreed (see Prerequisites) that everyone understands the concept
of Cartesian (right-handed) coordinates, and is able to visualize points and (free)
vectors within this framework. Don't forget that both vectors and points are repre-
sented by a triplet of numbers, e.g. (2, 1, −4). For their names, I will normally use
small boldface letters (e.g. a, b, c, ... in these notes, but a, b, c on the board).

• Notation and terminology


|a| = √(aₓ² + a_y² + a_z²) is the vector's length or magnitude (aₓ, a_y and a_z are
the vector's three components; sometimes I may also call them a₁, a₂ and a₃).
    When |u| = 1, u is called a unit vector (representing a direction). e₁ (e₂,
e₃) is the unit vector of the +x (+y, +z) direction, respectively.
(0, 0, 0) is called a zero vector.
Multiplying every component of a vector by the same scalar (single) number
is called scalar multiplication, e.g. 3 · (2, −1, 4) = (6, −3, 12). Geometrically,
this represents modifying the vector’s length according to the scalar’s magnitude,
without changing direction [a negative value of the scalar also changes the vector’s
orientation].
Addition of two vectors is the corresponding component-wise operation, e.g.:
(3, −1, 2) + (4, 0, −3) = (7, −1, −1). It is clearly commutative, i.e. a + b ≡ b + a
[be able to visualize this].

• Dot (inner) [scalar] product

of two vectors is defined by

    a • b ≡ |a| · |b| · cos γ

(a scalar result), where γ is the angle between the directions of a and b [anywhere
from 0 to π]. Geometrically, this corresponds to the length of the projection of a
into the direction of b, multiplied by |b| (or, equivalently, the reverse). It is usually
computed based on

    a • b ≡ a₁b₁ + a₂b₂ + a₃b₃

[e.g. (2, −3, 1) • (4, 2, −3) = 8 − 6 − 3 = −1], and it is obviously commutative [i.e.
a • b ≡ b • a].

To prove the equivalence of the two definitions, your textbook starts with |a| · |b| ·
cos γ and reduces it to a1 b1 +a2 b2 +a3 b3 . The crucial part of their proof is the
following distributive law, which they don’t justify: (a + b) • c ≡ a • c + b • c.
To see why it is correct, think of the two projections of a and b (individually)
into the direction of c, and why their sum must equal the projection of a + b
into the same c-direction. ¤
86

An alternate proof (of the original equivalence) would put the butts of a and b
    into the origin (they are free vectors, i.e. free to 'slide'), and out of all points
    along b [i.e. t(b₁, b₂, b₃), where t is arbitrary], find the one which is closest to
    the tip of a (resulting in the a → b projection). This leads to minimizing the
    corresponding distance, namely √((a₁ − tb₁)² + (a₂ − tb₂)² + (a₃ − tb₃)²). The
    smallest value is achieved with tₘ = (a₁b₁ + a₂b₂ + a₃b₃)/(b₁² + b₂² + b₃²) [by the
    usual procedure]. Thus, the length of this projection is tₘ|b| =
    (a₁b₁ + a₂b₂ + a₃b₃)/√(b₁² + b₂² + b₃²) = (a₁b₁ + a₂b₂ + a₃b₃)/|b|.
    This must equal |a| cos γ, as we wanted to prove. ¤
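Both formulas are convenient to use numerically; a small sketch of mine,
recovering the angle between the two example vectors above:

    import numpy as np
    a, b = np.array([2, -3, 1]), np.array([4, 2, -3])
    print(a @ b)                                           # -1, as in the text
    gamma = np.arccos((a @ b)/(np.linalg.norm(a)*np.linalg.norm(b)))
    print(np.degrees(gamma))                               # ~92.8 degrees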
• Cross (outer) [vector] product

(notation: a × b) is defined as a vector whose length is |a|·|b|·sin γ (i.e. the area of
a parallelogram with a and b as two of its sides), whose direction is perpendicular
(orthogonal) to both a and b, and whose orientation is such that a, b and
a × b follow the right-handed pattern [this makes the product anti-commutative,
i.e. a × b = −b × a].
    One way of visualizing its construction is this: project a into the plane perpen-
dicular to b (≡ the blackboard, b is pointing into the board), rotate this projection by
+90° (counterclockwise) and multiply the resulting vector by |b|.
    Also note that this product is not associative: (a × b) × c ≠ a × (b × c).
    The cross product is usually computed based on the following symbolic scheme:

            | e₁ e₂ e₃ |
    a × b = | a₁ a₂ a₃ | = (a₂b₃ − a₃b₂)e₁ + (a₃b₁ − a₁b₃)e₂ + (a₁b₂ − a₂b₁)e₃
            | b₁ b₂ b₃ |
          ≡ (a₂b₃ − a₃b₂, a₃b₁ − a₁b₃, a₁b₂ − a₂b₁)

[e.g. (1, 3, −2) × (4, −2, 1) = (−1, −9, −14)].
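(The same example in numpy, as a quick check of mine:

    import numpy as np
    print(np.cross([1, 3, -2], [4, -2, 1]))   # [ -1  -9 -14]
)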

The proof that the two definitions are identical rests on the validity of the dis-
tributive law: (a + b) × c ≡ a × c + b × c, which can be understood by vi-
sualizing a and b projected into a plane perpendicular to c, constructing the
vectors on each side of the equation and showing that they are identical. ¤

Optional: Another way of expressing the kth component of (a × b) is:

    (a × b)ₖ = Σ_{i=1}³ Σ_{j=1}³ aᵢbⱼ ε_{ijk}

(for k = 1, 2, 3), where ε_{ijk} [called a fully antisymmetric tensor] changes sign
when any two indices are interchanged (⇒ ε = 0 unless i, j, k are distinct) and ε₁₂₃ = 1
(this defines the rest).
    One can show that

    Σ_{k=1}³ ε_{ijk} ε_{ℓmk} = δ_{iℓ}δ_{jm} − δ_{jℓ}δ_{im}

(where δ_{ij} = 1 when i = j and δ_{ij} = 0 when i ≠ j; this is Kronecker's delta).

Based on this result, one can prove several useful formulas such as, for example:

    (a × b) × c = (a • c)b − (b • c)a

Proof: The mth component of the left hand side is Σ_{i,j,k,ℓ} ε_{ijk} aᵢbⱼ ε_{kℓm} cₗ =
    Σ_{i,j,ℓ} (δ_{iℓ}δ_{jm} − δ_{jℓ}δ_{im}) aᵢbⱼcₗ = Σ_{ℓ} (aₗbₘcₗ − aₘbₗcₗ) [the mth component of the
    right hand side]. ¤

and

    (a × b) • (c × d) = (a • c)(b • d) − (a • d)(b • c)

having a similar proof. ⊗

• Triple product

of a, b and c is, by definition, equal to a • (b × c).
    Computationally, this is identical to the following determinant

    | a₁ a₂ a₃ |
    | b₁ b₂ b₃ |
    | c₁ c₂ c₃ |

and it represents the volume of the parallelepiped with a, b and c being three of its
sides (further multiplied by −1 if the three vectors constitute a left-handed set).
This implies that a • (b × c) = b • (c × a) = c • (a × b) = −b • (a × c) =
−c • (b × a) = −a • (c × b) [its value does not change under cyclic permutation
of the three vectors].
    A useful application of the triple product is the following test: a, b and c are
in the same plane (co-planar) iff a • (b × c) = 0.
    Another one is to compute the volume of an (arbitrary) tetrahedron. Note that
if you use the three vectors as sides of the tetrahedron (instead of parallelepiped),
its base will be half of the parallelepiped's, and its volume will thus be a • (b × c)/6.
    In general: if you slide a (planar) base along a straight line to create a 3-D
volume, this volume can be computed as the area of the base times the perpen-
dicular height; if instead you create a 'cone' by running a straight line from each
point of the base's boundary to the tip of the object, the corresponding volume
will be 3 times smaller.
Proof of the last assertion: Volume is computed by ∫₀ʰ A(x) dx where A(x) is the
    area of the cross-section at height x, and h is the total height of a 3-D object.
    In our case A(x) = A₀ · (x − h)²/h² where A₀ is the base area [right?]. This implies
    that the total volume equals (A₀/h²) ∫₀ʰ (x − h)² dx = (A₀/h²)[(x − h)³/3]₀ʰ = A₀h/3. ¤
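The triple product and the tetrahedron volume, in numpy (my own sketch, on
arbitrarily chosen vectors):

    import numpy as np
    a, b, c = np.array([1, 3, -2]), np.array([4, -2, 1]), np.array([2, 0, 5])
    print(a @ np.cross(b, c))                    # -72: a . (b x c)
    print(np.linalg.det(np.array([a, b, c])))    # -72: the same determinant
    print(abs(a @ np.cross(b, c))/6)             # 12: tetrahedron volume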

Optional: «Rotation»

of a coordinate system [to match it to someone else's, who uses the same origin
but places the axes differently]. Suppose that his coordinates of our point (x, y, z)
are (x′, y′, z′) — what is the relationship between the two?

What is needed is some mathematical description of the corresponding rota-
tion (to move our coordinates to his). At first one may (incorrectly) assume that a
rotation is best represented by a vector [its direction being the axis of rotation, its
length being the rotation angle]. The problem with such a description is this: one
of the main operations we want to correctly describe is performing composition
(applying one after the other) of two or more rotations, and we need the corre-
sponding mathematical 'machinery'. If we use the proposed vector representation
of a rotation, the only 'composition' of two vectors we learned about is taking
their cross product, which does not correspond to the composition of two rotations
[which, unlike the cross product, is associative].
    To find the proper way of representing rotations, we first realize that a rotation
is a transformation (mapping) of points, symbolically: r′ = R(r) [as in Physics,
we now use r ≡ (x, y, z) as a general notation for a point]. This transformation is
obviously linear [meaning R(cr) = cR(r) and R(r₁ + r₂) = R(r₁) + R(r₂), where
c is a scalar]. We already know (from Linear Algebra) that a linear transformation
of r corresponds to multiplying r (in its column form) by a 3 × 3 matrix [say R],
thus: r′ = Rr.
But a rotation is a special case of a linear transformation; it preserves both
lengths and angles (between vectors), which implies that it also preserves our dot
product, i.e. (r₁′)ᵀr₂′ ≡ r₁ᵀr₂ (a matrix representation of the dot product) for any
r₁ and r₂. This is the same as r₁ᵀRᵀRr₂ ≡ r₁ᵀr₂ ⇒ RᵀR ≡ I (such matrices are
called orthogonal).
    All this implies that rotations must be represented by orthogonal matrices.
Now in reverse: Does each orthogonal matrix represent a rotation? The answer is
'no'; orthogonal matrices allow the possibility of a reflection (with respect to a
plane), since it also preserves lengths and angles. To eliminate reflections (and be
left with 'pure' rotations only), we have to further insist that det(R) = +1 (and
not −1).
The matrix representation enables us to 'compose' two rotations by a simple
matrix multiplication of the corresponding R₁ and R₂ (in reverse order), thus:
r′ = R₂R₁r. This operation is associative (even though non-commutative), in full
agreement with what we already know about rotations.
    Finding the orthogonal matrix which corresponds to a specific rotation is a
fairly complicated procedure.
    There is a recent mathematical formalism which simplifies all this (a rotation is
represented by a vector), but it requires a rudimentary knowledge of quaternion
algebra (that is why it has not become widely used yet). ⊗
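For a concrete instance of the two defining properties, here is a sketch of mine
(a rotation by angle t about the z-axis):

    import numpy as np
    t = 0.7
    R = np.array([[np.cos(t), -np.sin(t), 0],
                  [np.sin(t),  np.cos(t), 0],
                  [0,          0,         1]])
    print(np.allclose(R.T @ R, np.eye(3)), np.linalg.det(R))   # True 1.0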

«Straight Lines and Planes»

(later to be extended to curved lines and surfaces).

There are two ways of defining a straight line:

(i) parametric representation, i.e. a + b·t where a is an arbitrary point on


the straight line and b is a vector along its direction, and t (the actual
parameter) is a scalar allowed to vary from −∞ to +∞
(ii) by two linear equations, e.g.

         2x + 3y − 4z = 6
         x − 2y + z = −2

     (effectively an intersection of two planes). ¥

Neither description is unique (a headache when marking assignments).

Similarly, there are two ways of defining a plane:

(i) parametric, i.e. a + b · u + c·v where a is an arbitrary point in the plane, b


and c are two nonparallel vectors within the plane, and u and v are scalar
parameters varying over all possible real values

(ii) by a single linear equation, e.g. 2x + 3y − 4z = 6 [note that (2, 3, −4)


is a vector perpendicular to the plane, its so called normal — to prove it
substitute two distinct points into the equation and subtract, getting the dot
product of the connecting vector and (2, 3, −4), always equal to zero]. ¥

Again, neither description is unique.

EXAMPLES:
1. Convert

       3x + 7y − 4z = 5
       2x − 3y + z = −4

   to its parametric representation.
   Solution: The cross product of the two normals must point along the straight
   line, giving us b = (3, 7, −4) × (2, −3, 1) = (−5, −11, −23). Solving the two
   equations with an arbitrary value of z (say = 0) yields a = (−13/23, 22/23, 0).
   Answer: (−13/23 − 5t, 22/23 − 11t, −23t).

2. Find an equation of an (infinite) cylindrical surface with (3 − 2t, 1 + 3t, −4t)
   as its axis, and with the radius of 5.
   Solution: Let us first find an expression for the (shortest) distance from a point
   r ≡ (x, y, z) to a straight line a + b·t [bypassing minimization]. Visualize the
   vector r − a. We know that |r − a| is its length, and that (r − a) • b/|b| is the
   length of its projection into the straight line. By Pythagoras, the direct dis-
   tance is √(|r − a|² − [(r − a) • b/|b|]²) = √((x − 3)² + (y − 1)² + z² − (−2x + 3y − 4z + 3)²/29)
   (in our case). Making this equal to 5 yields the desired equation (square it
   to simplify).
   Answer: (x − 3)² + (y − 1)² + z² − (−2x + 3y − 4z + 3)²/29 = 25.

3. What is the (shortest) distance from r = (6, 2, −4) to 3x − 4y + z = 7 [bypass
   minimization].
   Solution: n • (r − a), where n is the unit normal and a is an arbitrary point
   of the plane [found, in this case, by setting x = y = 0 ⇒ (0, 0, 7)].
   Answer: (3, −4, 1)/√(9 + 16 + 1) • (6, 2, −11) = −1/√26 [the minus sign establishes
   on which side of the plane we are].

4. Find the (shortest) distance between a₁ + b₁·t and a₂ + b₂·t [bypassing
   minimization, as always].
   Solution: To find it, we have to move perpendicularly to both straight lines,
   i.e. along b₁ × b₂. We also know that a₂ − a₁ is an arbitrary connection
   between the two lines. The projection of this vector into the direction of
   b₁ × b₂ supplies (up to the sign) the answer: (a₂ − a₁) • (b₁ × b₂)/|b₁ × b₂| [visualize
   the situation by projecting the two straight lines into the blackboard so that
   they look parallel — always possible]. ¥
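   Both distance formulas are one-liners in numpy; a sketch of mine (the two
   skew lines in the second part are my own made-up example):

       import numpy as np
       # Example 3: point (6,2,-4) to the plane 3x - 4y + z = 7
       n, r, a = np.array([3, -4, 1]), np.array([6, 2, -4]), np.array([0, 0, 7])
       print(n @ (r - a)/np.linalg.norm(n))         # -1/sqrt(26) ~ -0.196
       # Example 4: the x-axis versus the line through (0,0,1) along y
       a1, b1 = np.array([0., 0, 0]), np.array([1., 0, 0])
       a2, b2 = np.array([0., 0, 1]), np.array([0., 1, 0])
       cr = np.cross(b1, b2)
       print((a2 - a1) @ cr/np.linalg.norm(cr))     # 1.0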

«Curves»

are defined via their parametric representation r(t) ≡ [x(t), y(t), z(t)], where
x(t), y(t) and z(t) are arbitrary (continuous) functions of t (the parameter, ranging
over some interval of real numbers).

EXAMPLE: r(t) = [cos(t), sin(t), t] is a helix centered on the z-axis, whose
    radius (when projected into the x-y plane) equals 1, with one full loop per 2π
    of vertical distance. The same r(t) can also be seen as a motion of a point-like
    particle, where t represents time. Note that [cos(2t), sin(2t), 2t] represents a
    different motion (the particle is moving twice as fast), but the same curve
    (i.e. the parametrization of a curve is far from unique). ¥

B Arc’s length

(’arc’ meaning a specific segment of the curve). The three-component (vector)


distance travelled between time t and t + dt (dt infinitesimal) is r(t + dt) − r(t) ≈
r(t) + ṙ(t) dt + .... − r(t) = ṙ(t) dt + ...., where the dots stand for terms propor-
tional to dt2 and higher [these give zero contribution in the dt → 0 limit], and
ṙ(t) represents the componentwise differentiation with respect to t (the particle’s
velocity). This converts to |ṙ(t)| dt + ... in terms of the actual scalar distance
(length). Adding all these infinitesimal distances (from time a to time b — these
should correspond to the arc’s end points) results in

Zb
|ṙ(t)| dt
a

which is the desired formula for the total length.

EXAMPLES:

1. Consider the helix of the previous example. The length of one of its com-
   plete loops (say from t = 0 to t = 2π) is thus ∫₀²ᵖⁱ |[−sin(t), cos(t), 1]| dt =
   ∫₀²ᵖⁱ √(sin(t)² + cos(t)² + 1) dt = 2π√2.

2. The intersect of x² + y² = 9 (a cylinder) and 3x − 4y + 7z = 2 (a plane) is
   an ellipse. How long is it?
   Solution: First we need to parametrize it, thus:
   r(t) = [3 cos(t), 3 sin(t), (2 − 9 cos(t) + 12 sin(t))/7] where t ∈ [0, 2π).
   Answer: ∫₀²ᵖⁱ |ṙ| dt = ∫₀²ᵖⁱ √(9 + ((9 sin t + 12 cos t)/7)²) dt, which is an integration
   we cannot carry out analytically (just to remind ourselves that this can frequently
   happen). Numerically (using Maple), this equals 21.062.
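   The same number falls out of any numerical integrator; a sketch of mine with
   scipy instead of Maple:

       import numpy as np
       from scipy.integrate import quad

       speed = lambda t: np.sqrt(9 + ((9*np.sin(t) + 12*np.cos(t))/7)**2)
       print(quad(speed, 0, 2*np.pi)[0])   # ~21.062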

• A tangent (straight) line

to a curve, at a point r(t₀) [t₀ being a specific value of the parameter], passes through
r(t₀) and has the direction of ṙ(t₀) [the velocity]. Its parametric representation
will thus be

    r(t₀) + ṙ(t₀) · u

[where u is the parameter now, just to differentiate it from t].

EXAMPLE: Using the same helix, at t = 0 its tangent line is [1, u, u]. ¥
• When r(t) is seen as a motion of a particle, ṙ(t) ≡ v(t) gives the particle's (in-
stantaneous, 3-D) velocity. |ṙ(t)| then yields its (scalar) speed [the speedometer
reading]. It is convenient to rewrite v(t) as |ṙ(t)| · ṙ(t)/|ṙ(t)| ≡

    |ṙ(t)| · u(t)

[a product of its speed and unit direction].
    The corresponding (3-D) acceleration is simply a(t) ≡ r̈(t) [a double t-derivative].
It is more meaningful to decompose it into its 'tangential' [the one observed on the
speedometer, pushing you back into your seat] and 'normal' [observed even at
constant speeds, pushing you sideways — perpendicular to the motion] components.
This is achieved by the product rule: dv(t)/dt = d|ṙ(t)|/dt · u(t) + |ṙ(t)| · du(t)/dt
[tangential and normal, respectively]. d|ṙ(t)|/dt can be simplified to
d/dt √(ẋ(t)² + ẏ(t)² + ż(t)²) = ½ · (2ẋẍ + 2ẏÿ + 2żz̈)/√(ẋ(t)² + ẏ(t)² + ż(t)²) = ṙ • r̈/|ṙ(t)| =

    u • r̈     (tangential magnitude)

The normal acceleration is then most easily computed from

    r̈ − (u • r̈)u     (normal)

[full minus tangential]. In this form it is trivial to verify that the normal accelera-
tion is perpendicular to u.

EXAMPLE: For our helix at t = 0, the speed is √2, u = [0, 1/√2, 1/√2] and r̈ =
    [−1, 0, 0] ⇒ zero tangential acceleration and [−1, 0, 0] normal acceleration. ¥

• When interested in the geometric properties of a curve only, it is convenient
to make its parametrization unique by introducing a special parameter s (instead
of t) which measures the actual length travelled along the curve, i.e.

    s(t) = ∫₀ᵗ |ṙ(t)| dt

where r(t) is the old parametrization.
    Unfortunately, carrying out the details of such a 'reparametrization' is normally
too difficult [to eliminate t, we would have to solve the previous equation for t — but
we don't know how to solve general equations]. Yet the idea of this new 'uniform'
(in the sense of the corresponding motion) parameter s is still quite helpful, when
we realize that the previous equation is equivalent to

    ds/dt = |ṙ(t)|

This further implies that, even though we don't have an explicit formula for s(t),
we know how to differentiate with respect to s, as

    d/ds ≡ (d/dt)/(ds/dt) = (d/dt)/|ṙ(t)|

Note that our old u = ṙ(t)/|ṙ(t)| [the unit velocity direction] can thus be defined simply
as dr/ds ≡ r′ [prime will imply s-differentiation].
    Using this new parameter s, we now define a few interesting geometrical prop-
erties (describing a curve and its behavior in space); we will immediately 'translate'
these into the t-'language', as we normally parametrize curves by t and not s:

• Curvature

Let us first compute du/ds ≡ r″, which corresponds to the rate of change of the
unit direction per (scalar) distance travelled. The result is a vector which is always
perpendicular to u, as we will show shortly.
    Curvature κ is the magnitude of this r″, and corresponds, geometrically, to the
reciprocal of the radius of a tangent circle to the curve at a point [a circle with
the same r, r′ and r″ — 6 independent conditions].
    The main thing now is to figure out: how do we compute curvature when
our curve has the usual t-parametrization? This is not too difficult, as

    du(s)/ds = (du(t)/dt)/|ṙ(t)| = r̈/|ṙ|² − ṙ/|ṙ|³ · (u • r̈)   [since d|ṙ|/dt = (ẋẍ + ẏÿ + żz̈)/√(ẋ² + ẏ² + ż²) = (u • r̈)]
             = (r̈(ṙ • ṙ) − ṙ(ṙ • r̈)) / |ṙ|⁴

[this is easily seen to be ṙ-perpendicular, as claimed].
    To get κ, we need the corresponding magnitude:

    √( ((r̈ • r̈)(ṙ • ṙ)² + (ṙ • ṙ)(ṙ • r̈)² − 2(ṙ • r̈)²(ṙ • ṙ)) / (ṙ • ṙ)⁴ ) =
    √( ((ṙ • ṙ)(r̈ • r̈) − (ṙ • r̈)²) / (ṙ • ṙ)³ )

This is the final formula for computing curvature.

EXAMPLE: For the same old helix, (ṙ • ṙ) = 2, (r̈ • r̈) = 1, and (ṙ • r̈) = 0 ⇒
    κ = √(2/2³) = ½ [the same for all points of the helix — that seems to make sense;
    the tangent circles all have a radius of 2]. ¥
• A few related definitions

From what we already know, r″ = κ·p where p is a unit vector we will call the
principal normal, automatically orthogonal to u and pointing towards the tan-
gent circle's center. Furthermore, b = u × p must thus be yet another unit vector,
orthogonal to both u and p. It is called the binormal vector (perpendicular to
the tangent circle's plane).
    One can show that the rate of change of b (per unit distance travelled), namely
b′, is a vector in the direction of p, i.e. b′ = −τ·p, where τ defines the so called
torsion ('twist') of the curve at the corresponding point [τ is thus either + or −
of the corresponding magnitude; the extra minus sign is just a convention].
    Note that knowing a curve's curvature and torsion, we can 'reconstruct' the
curve (by solving the corresponding set of differential equations), but we will not
go into that.
    We now derive a formula for computing τ based on the usual r(t)-parametrization.
    First: b′ = u′ × p + u × p′ = 0 + u × (u′/κ)′ [since u′ ≡ κp].
    Then: τ = −p • b′ = −(u′/κ) • [u × (u′/κ)′] = −u′ • (u × u″)/κ² = u • (u′ × u″)/κ².
    And finally: u = r′ = ṙ(dt/ds), u′ = r″ = r̈(dt/ds)² + ṙ(d²t/ds²) and u″ = r‴ =
r⃛(dt/ds)³ + 3r̈(dt/ds)·(d²t/ds²) + ṙ(d³t/ds³) [r⃛ being the third t-derivative].
    Putting it together [and realizing that, whenever identical vectors 'meet' in a
triple product, the result is zero], we get τ = ṙ • (r̈ × r⃛)(dt/ds)⁶/κ² = [since dt/ds = 1/|ṙ|]

    τ = ṙ • (r̈ × r⃛) / ((ṙ • ṙ)(r̈ • r̈) − (ṙ • r̈)²)

which is our final formula for computing torsion.
    Both the original definition and the final formula clearly imply that a planar
curve has a zero torsion (identically).

EXAMPLE: For the helix, r⃛ = (sin t, −cos t, 0) ⇒ τ = ½. ¥
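Both boxed formulas are easy to evaluate numerically; a sketch of mine for the
helix, at an arbitrary t:

    import numpy as np
    t = 0.9
    rd   = np.array([-np.sin(t),  np.cos(t), 1.0])   # r-dot
    rdd  = np.array([-np.cos(t), -np.sin(t), 0.0])   # r-double-dot
    rddd = np.array([ np.sin(t), -np.cos(t), 0.0])   # r-triple-dot
    denom = (rd @ rd)*(rdd @ rdd) - (rd @ rdd)**2
    print(np.sqrt(denom/(rd @ rd)**3))               # kappa = 0.5
    print(rd @ np.cross(rdd, rddd)/denom)            # tau   = 0.5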

In the next chapter we will introduce surfaces (two-dimensional structures in


3-D; curves are of course one-dimensional) and explore the related issues. But now
we interrupt this line of development to introduce

Fields
A scalar field is just a fancy name for a function of x, y and z [i.e. to each point
in space we attach a single value, say its temperature], e.g. f(x, y, z) = x(y + 3)/z.
    A vector field assigns, to each point in space, a vector value (i.e. three num-
bers rather than just one). Mathematically, this corresponds to having three
functions of x, y and z which are seen as three components of a vector, thus:
g(x, y, z) ≡ [g₁(x, y, z), g₂(x, y, z), g₃(x, y, z)], e.g. [xy, (z − 3)/x, y(x − 4)/z²]. Physically,
this may represent a field of some force, permeating the space.
An operator is a ’prescription’ which takes a field and modifies it (usually, by
computing its derivatives, in which case it is called a differential operator) to return
another field. To avoid further difficulties relating to differential operators, we have
to assume that our fields are sufficiently ’smooth’ (i.e. not only continuous, but
also differentiable at each point).
The most important cases of operators (acting in 3-D space) are:
«Gradient»

which converts a scalar field f(x, y, z) into the following vector field

    (∂f/∂x, ∂f/∂y, ∂f/∂z)

≡ [notation] ∇f(x, y, z). The ∇-operator is usually called 'del' (sometimes
'nabla'), and has three components, ∂/∂x, ∂/∂y and ∂/∂z, i.e. it can be considered to
have vector attributes.
    It yields the direction of the fastest increase in f(x, y, z) when starting at
(x, y, z); its magnitude provides the corresponding rate (per unit length). This can
be seen by rewriting the generalized Taylor expansion of f at r, thus: f(r + h) =
f(r) + h • ∇f(r) + quadratic (in h-components) and higher-order terms. When h
is a unit vector, h • ∇f(r) provides a so called directional derivative of f, i.e.
the rate of its increase in the h-direction [obviously the largest when h and ∇f
are parallel].
    An interesting geometrical application is this: f(x, y, z) = c [constant] usu-
ally defines a surface (a 3-D 'contour' of f — a simple extension of the f(x, y) = c
idea). The gradient, evaluated at a point of such a surface, is obviously normal
(perpendicular) to the surface at that point.

EXAMPLE: Find the normal direction to z² = 4(x² + y²) [a cone] at (1, 0, 2)
    [this must lie on the given surface, check].
    Solution: f ≡ 4(x² + y²) − z² = 0 defines the surface. ∇f = (8x, 8y, −2z),
    evaluated at (1, 0, 2), yields (8, 0, −4), which is the answer. One may like to
    convert it to a unit vector, and spell out its orientation (either inward or
    outward). ¥
Application to Physics: If r(t) represents a motion of a particle and f(x, y, z)
a temperature of the 3-D medium in which the particle moves, ṙ • ∇f[r(t)] is the
rate of change (per unit of time) of temperature as the particle experiences it
[nothing but a chain rule]. To convert this into a spatial (per unit length) rate,
one would have to divide the previous expression by |ṙ|.
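The cone example, done symbolically (a sketch of mine):

    import sympy as sp
    x, y, z = sp.symbols('x y z')
    f = 4*(x**2 + y**2) - z**2
    grad = [sp.diff(f, v) for v in (x, y, z)]
    print([g.subs({x: 1, y: 0, z: 2}) for g in grad])   # [8, 0, -4]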

«Divergence»

converts a vector field g(r) to the following scalar field:

    ∂g₁/∂x + ∂g₂/∂y + ∂g₃/∂z

≡ [symbolically] ∇ • g(r).
    Its significance (to Physics) lies in the following interpretation: If g repre-
sents some flow [the direction and rate of a motion of some continuous sub-
stance in space; the rate being established by measuring mass/sec./cm.² through
an infinitesimal area perpendicular to its direction], then the divergence tells us
the rate of mass loss from an (infinitesimal) volume at each point, per volume
[mass/sec./cm.³]. This can be seen by surrounding the point by an (infinitesimal)
cube, and figuring out the in/out flow through each of its sides [h²g₁(x + h/2, y, z) is
the outflow from one of them, etc.].

Optional: A flow (also called flux) is usually expressed as a product of the
    substance's density ρ(r) [measured as mass/cm.³, obviously a scalar field] and
    its velocity v(r) [measured in cm./sec., obviously a vector field]. The equation

        ∇ • [ρv] + ∂ρ/∂t = 0

    then expresses the conservation-of-mass law — no mass is being lost or created
    (no sinks nor sources); any outflow of mass results in a corresponding reduction
    of density. Here we have assumed that our ρ and v fields are functions of not
    only x, y and z, but also of t (time), as often done in Physics. ⊗

EXAMPLE: Find ∇ • (x², y², z²). Answer: 2x + 2y + 2z. ¥
«Curl»

(sometimes also called rotation), applied to a vector field g, converts it to yet
another vector field, symbolically defined by ∇ × g, i.e.

    [∂g₃/∂y − ∂g₂/∂z, ∂g₁/∂z − ∂g₃/∂x, ∂g₂/∂x − ∂g₁/∂y]

If g represents a flow, Curl(g) can then be visualized by holding an imaginary
paddle-wheel at each point to see how fast the wheel rotates (its axis at the fastest
rotation yields the curl's direction; the torque establishes the corresponding mag-
nitude).

EXAMPLE: Curl(x, yz, −x² − z²) = (−y, 2x, 0). ¥


B One can easily prove the following trivial identities:

Curl [Grad (f )] ≡ 0
Div [Curl (g)] = 0

There are also several nontrivial identities, for illustration we mention one only:

Div(g1 × g2 ) = g2 • Curl(g1 ) − g1 • Curl(g2 )


96

Optional: Divergence and gradient are frequently applied, consecutively, to a


scalar field f, to create a new scalar field Div[Grad(f )] ≡ 4f =

∂2f ∂2f ∂2f


+ + 2
∂x2 ∂y 2 ∂z

(where 4 is the so called Laplace operator). It measures how much the


value of f (at each point) deviates from its average over some infinitesimal
surface [visualize a cube] centered at the point, per the surface’s area (the
exact answer is obtained in the limit, as the size of the cube approaches zero).

Optional: Curvilinear coordinates


(such as, for example, the spherical coordinates r, θ and ϕ) is a set of three
new independent variables [replacing the old (x, y, z)], each expressed as a func-
tion of x, y and z, and used to locate points in (3-D) space [by inverting the
transformation].
Varying only the first of the new ’coordinates’ (keeping the other two fixed)
results in a corresponding coordinate curve (the changing coordinate becomes
its parameter — the old t) whose unit direction is labelled er (similarly for the
other two), and whose ’speed’ (i.e. distance travelled per unit change of the new
coordinate) is called hr [both er and hr are functions of location; they can be easily
established geometrically].
When er , eθ and eϕ remain perpendicular to each other at every point, the new
coordinate system is called orthogonal (the case of spherical coordinates). All
our subsequent results apply to orthogonal coordinates only.
Fields can be easily transformed to new coordinates, all it takes is to express
x, y and z in terms of r, θ and ϕ. How do we compute Grad, Div and Curl in the
new coordinates, so that they agree with the old, rectangular-coordinate results?
First of all, Grad and Curl will be expressed in terms of the new, curvilinear
axes er , eθ and eϕ , instead of the original e1 , e2 and e3 . Each component of the
new, curvilinear Grad(f ) should express the (instantaneous) rate of increase of f,
when moving along the respective e, per distance travelled (let us call this distance
d). Thus, when we increase the value of r and start moving along er , we obtain:
∂f
∂f ∂f ∂r
= · ≡ ∂r , by our previous definition of the h-functions. [Similarly for
∂d ∂r ∂d hr
θ and ϕ.] The full gradient is thus

er ∂f eθ ∂f eϕ ∂f
Grad(f ) = · + · + ·
hr ∂r hθ ∂θ hϕ ∂ϕ

[note that, for spherical coordinates, hr = 1, hθ = r and hϕ = r sin θ].


To get Div(g), we note that an infinitesimal volume (’near-cube’) built by
increasing r to r + dr, θ to θ + dθ and ϕ to ϕ + dϕ has sides of length hr dr,
hθ dθ and hϕ dϕ, faces of area hr hθ drdϕ, hr hθ drdθ and hθ hϕ dθdϕ, and volume of
size hr hθ hϕ drdθdϕ. One can easily see that hθ hϕ gr dθdϕ is the total flow (flux)
97


through one of the sides; ∂r (hθ hϕ gr ) dθdϕdr is then the corresponding flux differ-
ence between the two opposite sides. Adding the three contributions and dividing
by the total volume yields:
· ¸
1 ∂ ∂ ∂
Div(g) = (hθ hϕ gr ) + (hr hϕ gθ ) + (hr hθ gϕ )
hr hθ hϕ ∂r ∂θ ∂ϕ

In thehcase of spherical coordinates this reduces i to:


1 ∂ 2 ∂ ∂
r2 sin θ ∂r
(r sin θgr ) + ∂θ (r sin θgθ ) + ∂ϕ (rgϕ ) , yielding, for the corresponding Lapla-
cian
µ ¶ µ ¶
1 ∂ 2 ∂f 1 ∂ ∂f 1 ∂2f
Div[Grad(f )] = 2 r + 2 sin θ + 2 2
r ∂r ∂r r sin θ ∂θ ∂θ r sin θ ∂ϕ2

Understanding this is essential when computing (quantum mechanically) the en-


ergy levels (eigenvalues) of Hydrogen atom.
Similarly one can derive a formula for Curl(g), which we do not quote here
(see your textbook). ⊗
98
99

Chapter 8 FUNCTIONS IN 3-D —


INTEGRATION
Line Integrals
are of two types:

IScalar (Type I) IntegralsJ

where we are given a (scalar) function f (x, y, z) and a curve r(t), and need to
integrate f over an arc of the curve (which now assumes the rôle of the x-axis). All
it takes is to add the areas of the individual ’rectangles’ of base |ṙ| dt and ’height’
[which, unfortunately, has to be pictured in an extra 4th dimension] f [r(t)], ending
up with
Zb
f [r(t)] · |ṙ(t)| dt (LI)
a

which is just an ordinary (scalar) integral of a single variable t. Note that the result
must be independent of the actual curve parametrization.
In this context we should mention that all our curves are piece-wise smooth,
i.e. continuous, and consisting of one or more differentiable pieces (e.g. a square).
This kind of integration can used for (spacial) averaging of the f -values (over
a segment of a curve). All we have to do is to divide the above integral by the
Rb
arc’s length |ṙ(t)| dt:
a
Rb
f [r(t)] · |ṙ(t)| dt
a
f sp =
Rb
|ṙ(t)| dt
a

To average in time (taking r(t) to be a motion of a particle) one would do

Rb
f [r(t)] dt
a
f tm =
b−a
instead.
The symbolic notation for this integral is
Z
f (r) ds
C

s being the special unique parameter which corresponds to the ’distance travelled’,
and C stands for a specific segment of a curve. To evaluate this integral, we
normally use a convenient (arbitrary) parametrization of the curve (the result must
be the same), and carry out the integration in terms of t, using (LI).
100

Two other possible applications are:

1. Center of mass of a wire-like object of uniform mass density:


R R R 
x ds y ds z ds
 CR , CR , CR 
ds ds ds
C C C

[the denominator is the total length L].

2. Moment of inertia of any such an object:


Z
M
d2 · ds
L
C

where d(x, y, z) is distance from the axis of rotation. [Angular acceleration


is torque divided by moment of inertia]. ¥

EXAMPLES:
R
• Evaluate (x2 + y 2 + z 2 )2 ds where C ≡ (cos t, sin t, 3t) with t ∈ (0, 2π) [one
C
loop of a helix].
R2π p √ R2π
Solution: (cos2 t + sin2 t + 9t2 )2 (− sin t)2 + (cos t)2 + 9 dt = 10 (1 +
0 0
√ £ ¤
5 2π
18t2 + 81t4 ) dt = 10 t + 6t3 + 81 5
t 0
= 5.0639 × 10 5
.

• Find the center of mass of a half circle (the circumference only) of radius a.
R Rπ
Solution: r(t) = [a cos t, a sin t, 0] ⇒ y ds = a2 sin t dt = 2a2 .
C 0
2a2
Answer: The center of mass is at [0, πa
, 0] = [0, 0.63662a, 0].

• Find the moment of inertia of a circle (circumference) of mass M and radius


a with respect to an axis passing through its center and two of its points.
R
Solution: Using r(t) = [a cos t, a sin t, 0] and y as the axis, we get (x2 +
C
R2π
z 2 ) ds = a3 cos2 t dt = πa3 .
0
M Ma2
Answer: 2πa
· πa3 = 2
.

IVector (Type II) IntegralsJ

Here, we are given a vector function g(x, y, z) [i.e. effectively three functions
g1 , g2 and g3 ] which represents a force on a point particle at (x, y, z), and a curve
r(t) which represents the particle’s ’motion’. We know (from Physics) that, when
the particle is moved by an infinitesimal amount dr, the energy it extracts from
the field equals g • dr [when negative, the magnitude is the amount of work needed
101

to make it move]. This is independent of the actual speed at which the move is
made.
The total energy thus extracted (or, with a minus sign, the work needed)
when a particle moves over a segment C is, symbolically,
Z
g(r) • dr
C
R
[ g1 dx+ g2 dy+ g3 dz is an alternate notation] and can be computed by parametriz-
C
ing the curve (any way we like — the result is independent of the parametrization,
i.e. the actual motion of the particle) and finding

Zb
g[r(t)] • ṙ(t) dt (LII)
a
R
EXAMPLE: Evaluate (5z, xy, x2 z) • dr where C ≡ (t, t, t2 ), t ∈ (0, 1).
C
R1 R1
Solution: (5t2 , t2 , t4 ) • (1, 1, 2t) dt = (6t2 + 2t5 ) dt = 7
3
= 2.3333. ¥
0 0

Note that, in general, the integral is path dependent, i.e. connecting the
same two points by a different curve results in two different answers.
R
EXAMPLE: Compute the same (5z, xy, x2 z) • dr, where now C ≡ (t, t, t),
C
t ∈ (0, 1).
R1 R1
Solution: (5t, t2 , t3 ) • (1, 1, 1) dt = (5t + t2 + t3 ) dt = 37
12
= 3.0833. ¥
0 0

Could there be a special type of vector fields to make all such vector integrals

BPath Independent

The answer is yes, this happens for any g which can be written as

∇f (x, y, z)

[a gradient of a scalar field, which is called the corresponding potential; g is


then called a conservative vector field].

R Rb Rb df [r(t)]
Proof: (∇f ) • dr = (∇f [r(t)]) • ṙ(t) dt = [← chain rule] dt =
C a a dt
f [r(b)] − f [r(a)]. ¤

But how can we establish whether a given g is conservative? Easily, the suffi-
cient and necessary condition is

Curl(g) ≡ 0
102

Proof: g = ∇f clearly implies that Curl(g) ≡ 0.


Now the reverse: Given such a g, we³construct (as discussed in the subsequent
R R R R ∂g1 ´ R R ¡R ∂g1 ¢
example) f = g1 dx + g2 dy − ∂y
dx dy + g3 dz − ∂z
dx dz −
R ¡R ∂g2 ¢ R hR ³R ∂ 2 g1 ´ i
∂z
dy dz + ∂y∂z
dx dy dz.
R ∂g1 R ∂g1 R ∂g1 R ∂g1 R ³R ∂ 2 g1 ´
This implies: ∂f
∂x
= g1 + ∂y
dy− ∂y
dy+ ∂z
dz− ∂z
dz− ∂y∂z
dy dz+
R hR ∂ 2 g1 i
∂y∂z
dy dz ≡ g1 .
Similarly, we can show ∂f
∂y
= g2 and ∂f
∂z
= g3 . ¤
Note that when g is conservative, all we need to specify is the starting and
final point of the arc (how you connect them is irrelevant, as long as you avoid an
occasional singularity). We can then use the following notation:
Zb
g(r)•dr
a

which gives you a strong hint that g is conservative (the notation would not make
sense otherwise).
(1, π4 ,2)
R
EXAMPLE: Evaluate 2xyz 2 dx+ [x2 z 2 +z cos(yz)]dy+ [2x2 yz+y cos(yz)]dz.
(0,0,0)

Solution: This is what we used to call ’exact differential form’, extended to


three independent variables. We solve it by integrating g1 with respect to x
[calling the result f1 ], adding g2 − ∂f
∂y
1
integrated with respect to y [call the
overall answer f2 ], then adding the z integral of g3 − ∂f ∂z
2
, to get the final f.
2 2 2 2
In our case, this yields x y z for f1 , x yz + sin(yz) for f2 ≡ f , as nothing
is added in the last step. Thus f (x, y, z) = x2 yz 2 + sin(yz) [check].
Answer: f (1, π4 , 2) − f (0, 0, 0) = 1 + π = 4.1416. ¥
Optional: We mention in passing that, similarly, Div(g) = 0 ⇔ there is a
vector field h say such that g ≡ Curl(h) [g is then called purely rotational].
Any vector field g can be written as Grad(f ) + Curl(h), i.e. decomposed into its
conservative and purely rotational part. ⊗

Double integrals
can be evaluated by two consecutive (univariate) integrations, the first with respect
to x, over its conditional range given y, the second with respect to y, over its
marginal range (or the other way round, the two answers must agree).
EXAMPLES:

 x>0 R
1−y
• To integrate over the y>0 triangle, we first do ... dx followed
 0
x+y <1
R1 R
1−x R1
by ... dy (or ... dy followed by ... dx).
0 0 0
103

1
R3 Rx
• To integrate over 0 < y < x
, where 1 < x < 3, we can do either ... dy dx
1 0
1 1
R1 Ry R3 R3
or ... dx dy + ... dx dy [only a graph of the region can reveal why it
1 1 0 1
3
is so].
√ 
1−y 2 p
RR R1
R R1
• y 2 dx dy =  y 2 dx dy = 2y 2 1 − y 2 dy =
x2 +y 2 <1 −1
√ −1
− 1−y 2
h p i
3 1
1
4
arcsin y + 14 y 1 − y 2 − 12y(1 − y 2 ) 2 = π4 . ¥
y=−1

The last of these double integrals can be simplified by introducing


IPolar CoordinatesJ
(effectively a change of variables, from the old x, y, to a new pair of r, ϕ) by:
x = r cos ϕ
y = r sin ϕ
One has to remember that dx dy of the double integration must be replaced by
dr dϕ, further multiplied by the Jacobian of the transformation, namely the ab-
solute value of ¯ ∂x ∂y ¯
¯ ¯
¯ ∂x
∂r dr ¯
¯ ∂y ¯
∂ϕ ∂ϕ
¯ ¯
¯ cos ϕ −r sin ϕ ¯
¯
In our case (of polar coordinates) this equals to ¯ ¯ = r.
sin ϕ r cos ϕ ¯
RR R2π R1
EXAMPLE: y 2 dx dy = r2 sin2 ϕ · r dr dϕ [note how, in polar coordi-
x2 +y 2 <1 0 0
nates, the region of integration is a simple ’rectangle’ and the double integral
R1 R2π £ ¤2π
becomes separable] = r3 dr × sin2 ϕ dϕ = 14 × ϕ2 − sin42ϕ 0 = π4 [a lot
0 0
easier than the direct integration above]. ¥
Similarly to polar coordinates, one can introduce any other set of new vari-
ables to simplify the integration (the actual form of the transformation would be
normally suggested to you).
RR
EXAMPLE: y2 dx dy where R is a square with corners at (0, 1), (1, 0), (0, −1)
R
and (−1, 0).
Introducing u, v by x = u + v and y = u − v, we will cover the same square
with −12
< u < 12 and −1 2
< v < 12 . Furthermore, the Jacobian of this
transformation equals to 2.
R2 h u3 i 12
1 1 1 1
R2 R2 2 u2
R2 1
Solution: (u − v) du dv = 2 3
− 2 2 v + uv 1
dv = 2 ( 12 +
u=− 2
− 12 − 12 − 12 − 12
v 2 ) dv = 1
2( 12 1
+ 12 ) = 13 . ¥
104

An important special case is integrating a constant, say c, which can often be


done geometrically. i.e. ZZ
c dx dy = c · Area(R)
R

IApplicationsJ

of two-dimensional integrals to geometry and physics:

B An area

of a 2-D region R is computed by


ZZ
dx dy
R

B Center of mass

of a 2-D object (lamina) is computed by


RR
xρ(x, y) dx dy
RR
ρ(x, y) dx dy

[x component] and RR
yρ(x, y) dx dy
RR
ρ(x, y) dx dy
[y component], where ρ(x, y) is the corresponding mass density. When the object
is of uniform density (ρ ≡ const.), the formulas simplify to
RR
x dx dy
RR
dx dy
and RR
y dx dy
RR
dx dy

B Moment of inertia

with respect to some axis (this is needed when computing angular acceleration as
torque/moment-of-inertia):
ZZ
d(x, y)2 · ρ(x, y) dx dy

where d(x, y) is the (perpendicular) distance of (x, p


y) from the axis [when the axis
is x, d ≡ y and vice versa; when the axis is z, d = x2 + y 2 ].

B 3-D volume ZZ
h(x, y) dx dy
105

where h(x, y) is the object’s ’thickness’ (height) at (x, y).

EXAMPLES:

1. Find the center of mass of a half disk of radius R and uniform mass density.
Solution: We position the object in the upper half plane with its center at the
Rπ RR
r sin ϕ · r dr dϕ R3
0 0 ·(cos 0−cos π)
origin, and use polar coordinates to evaluate: Rπ RR
= 3
R2
=
·π
r dr dϕ 2
0 0
4R

= 0.42441R [its y component]. From symmetry, its x component must be
equal to zero.

2. Find the volume of a cone with circular base of radius R and height H.
We do this in polar coordinates where the formula for h(r, ϕ) simplifies to
H · R−r
R
.

H
R2π RR 2πH 2 r3 R πR2 H
Answer: R
(R − r) · r dr dϕ = R
· [R r2 − ]
3 r=0
= 3
(check).
0 0

3. Find the volume of a sphere of radius R.


Solution:
√ Introducing polar coordinates in x, y, the z-thickness is h(x, y) =
2 R2 − r2 [Pythagoras]. Integrating this over the sphere’s x, y projection (a
R2π RR √ h i
3 R
circle of radius R) yields 2 R2 − r2 · r dr dϕ = 4π − 13 (R2 − r2 ) 2 =
0 0 r=0
4
3
πR3 (check).

4. Find the volume of the (solid) cylinder x2 + z 2 < 1 cut along y = 0 and z = y
[i.e. 0 < y < z].
Solution: Its x, z projection is a half -circle x2 + z 2 < 1 with z > 0, its
thickness along y is h(x, z) = z. Replacing x and z by polar coordinates, we
Rπ RR
can readily integrate r sin ϕ · r dr dϕ = 13 · [− cos ϕ]πϕ=0 = 23 . There are
0 0
two alternate ways of computing the volume, integrating the z-thickness over
the (x, y) projection, or the x-thickness over dy dz [try both of them].

5. Find the volume of the 3-D region defined by x2 + y 2 < 1 and y 2 + z 2 < 1
[the common part of two cylinders crossing each other at the right angle].
Solution: The (x, y) projection of the region is describe by x2 + y 2 <p 1 (now a
circle, not a cylinder), the corresponding z-thickness is h(x, y) = 2 1 − y 2 .
R2π R1 p R2π h i
3 1
Answer: 2 1 − r2 sin2 ϕ·r dr dϕ = − 3 sin22 ϕ (1 − r2 sin2 ϕ) 2 dϕ =
0 0 0 r=0
3 π 3
π π
R2π 2(1−| cos ϕ|3 ) R
2
2(1−| cos ϕ|3 ) R2
2(1−cos3 ϕ) R
2
2(1+cos3 ϕ)
3 sin2 ϕ
dϕ = 3 sin2 ϕ
dϕ = 3 sin2 ϕ
dϕ + 3 sin2 ϕ
dϕ =
0 − π2 − π2 π
2
16
3
[The integration is quite tricky, later on we learn how to deal with it more
.
efficiently].
106

An alternate way is to use the (x, z) projection (a unit square, divided by


its two diagonals into four sections of identical volume), and then integrate
over one of these sections (say the right-most) the corresponding y-thickness
√ R1 √ Rx R1 √
h(x, z) = 2 1 − x2 , thus: 2 1 − x2 dz dx = 2 1 − x2 · 2x dx =
0 −x 0
h i
3 1
4 4
− 3 (1 − x2 ) 2 = 3 . The total volume is four times bigger (check). The
x=0
integration was now a lot easier.

In these type of questions, it is important to first identify each side of the


3-D object (and the corresponding equation), and each of its edges (described by
two equations). To project a specific edge into, say, the (x, y) plane, one must
eliminate z from one of the two equations and substitute into the other (getting a
single x-y equation).

Surfaces in 3-D
There are two ways of defining a 2-D surface:

1. By an equation: f (x, y, z) = c [c being a constant].

2. Parametrically: r(u, v) ≡ [x(u, v), y(u, v), z(u, v)] (three arbitrary func-
tions of two parameters u and v; restricting these to a 2-D region selects a
section of the surface). ¥

EXAMPLES:

• Parametrize a sphere of radius a.


Answer: r(u, v) = [a sin v cos u, a sin v sin u, a cos v] where 0 ≤ u < 2π and
0 ≤ v ≤ π [later on we introduce the so called spherical coordinates in
almost the same manner — they are usually called r, θ and ϕ rather than a,
v and u]. The curves we get by fixing v and varying u (or vice versa) are
called ’coordinate’ curves [latitude circles and longitude half-circles in
this case].

• Identify r(u, v) = [u cos v, u sin v, u].


Answer: a 45o cone centered on z.

• Parametrize the cylinder x2 + y 2 = a2 .


Solution: r(u, v) = [a cos u, a sin u, v].

• Identify [u cos v, u sin v, u2 ].


Answer: A paraboloid centered on +z.
107

Surface integrals
∂r
Let us consider a specific parametrization of a surface. It is obvious that ∂u
[componentwise operation, keeping v fixed] is a tangent direction to the correspond-
ing coordinate curve and consequently tangent to the surface itself. Similarly, so
∂r
is ∂v (note that these two don’t have to be orthogonal). Constructing the corre-
sponding tangent plane is then quite trivial.
Consequently,
∂r ∂r
×
∂u ∂v
¯ ∂r ∂r ¯
yields a direction normal (perpendicular) to the surface, and its magnitude ¯ ∂u × ∂v ¯ ,
multiplied by du dv, provides the area of the corresponding (infinitesimal) parallel-
∂r ∂r
ogram, obtained by increasing u by du and v by dv [ ∂u du and ∂v dv being its two
sides]. This can be seen from:
¯ ¯ ¯ ¯
¯ ∂r ∂r ¯ ¯ ∂r ∂r ¯
¯ du × dv ¯ = ¯ × ¯ du dv ≡ dA
¯ ∂u ∂v ¯ ¯ ∂u ∂u ¯

Since |a × b|2 = |a|2 |b|2 sin2 γ = |a|2 |b|2 (1 − cos2 γ) = |a|2 |b|2 − (a • b)2 , we
can simplify it to
s¯ ¯ ¯ ¯ µ ¶2
¯ ∂r ¯2 ¯ ∂r ¯2 ∂r ∂r
dA = ¯¯ ¯¯ ¯¯ ¯¯ − • du dv
∂u ∂v ∂u ∂v

which is more convenient computationally (bypassing the cross product).


To find an area of a whole surface (or its section), we need to ’add’ the
contributions from all these parallelograms, thus:
s¯ ¯ ¯ ¯ µ ¶2
ZZ ZZ
¯ ∂r ¯2 ¯ ∂r ¯2 ∂r ∂r
¯ ¯ ¯ ¯
Area = dA = ¯ ∂u ¯ ¯ ∂v ¯ − ∂u • ∂v du dv
S R

where R is the (u, v) region needed to cover the (section of the) surface S. Needless
to say, the answer must be the same, regardless of the parametrization.

EXAMPLES:

1. Find the tangent plane to the ellipsoid 3x2 + 2y 2 + z 2 = 20 at (1, 2, 3).


Solution: First one can easily check that the point is on the ellipsoid (just
in case). We can parametrize the upper half √ of the ellipsoid (which is
2 2 ∂r
sufficient in this case) by r(u, v) = (u, v, 20 − 3u − 2v ). Then ∂u =
3u ∂r 2v 4
(1, 0, − √20−3u2 −2v 2 ) = (1, 0, −1) and ∂v = (0, 1, − 20−3u2 −2v 2 ) = (0, 1, − 3 ).

The corresponding cross product (1, 0, −1) × (0, 1, − 43 ) = (1, 43 , 1) yields the
tangent plane’s normal; we also know that the plane has to pass through
(1, 2, 3).
Answer: 3x + 4y + 3z = 20.
108

2. Find the area of a surface of a sphere of radius a.


Solution: Using the r(u, v) = (a sin v cos u, a sin v sin u, a cos v) parametriza-
∂r
tion, we get: ∂u = (−a sin v sin u, a sin v cos u, 0) and
∂r
∂v
= (a cos v cos u, a cos v sin u, −a sin v) ⇒ dA ≡ a2 |sin v| du dv.
R2π Rπ
Answer: a2 sin v dv du = 4πa2 .
0 0

3. Find the surface area of a torus (donut) of dough-radius equal to b and


hole-radius equal to a − b.
Solution: We make z its axis, and [0, a + b cos v, b sin v] its cross section
with the (y, z) plane. The full parametrization is then: r(u, v) = [(a +
b cos v) cos u, (a + b cos v) sin u, b sin v], where both u and v vary from 0 to
∂r ∂r
2π. This yields ∂u = [−(a + b cos v) sin u, (a + b cos v) cos u, 0], ∂v =
[−b sin v cos u, b sin v sin u, −b cos v] ⇒ dA ≡ b(a + b cos v).
R2π R2π
Answer: b (a + cos v) dv du = 4π 2 ab.
0 0

Computing areas is just a special case of a


ISurface Integral of Type IJ
(’scalar’ type). In general, weRRcan integrate any scalar function f (x, y, z) over
a surface S [symbolic notation f (x, y, z) dA] by parametrizing the surface and
S
computing s¯ ¯ ¯ ¯
ZZ µ ¶2
¯ ∂r ¯2 ¯ ∂r ¯2 ∂r ∂r
¯ ¯ ¯
f [r(u, v)] · ¯ ¯ ¯ ¯ − ¯ • du dv
∂u ∂v ∂u ∂v
R
[the answer is independent of parametrization].
When divided by the corresponding surface area, this represents the average
of f (x, y, z) over S.
Other applications to Physics are:
1. Moment of inertia of a shell-like structure (lamina) of surface density
ρ(x, y, z): ZZ
d2 · ρ · dA
S
where d(x, y, z) is the distance from the rotation axis. For a lamina of uniform
density, ρ = MA
(total mass over total area).
2. Center of mass
 RR RR RR 
x · ρ · dA y · ρ · dA z · ρ · dA
 S RR , S RR , SRR 
ρ · dA ρ · dA ρ · dA
S S S
RR
(ρ cancels out when constant, i.e. uniform mass density). Note that ρ · dA
S
is the total mass. ¤
109

EXAMPLE: Find the moment of inertia of a spherical shell of radius a and


total mass M (uniformly distributed) with respect to an axis going through
its center.
Solution: ’Borrowing’ the parametrization (and dA) from the previous Ex-
RR Rπ R2π 2 2
ample 2, and using z as the axis, we get ρ (x2 + y 2 ) dA = ρ a sin v ·
S 0 0
= 23 Ma2 . ¥
3
a2 sin v du dv = 2πρa4 [ cos3 v M
− cos v]πv=0 = 2π 4πa 4
2a ·
4
3

ISurface integrals of Type IIJ

(’vector’ type): When integrating a vector field g(x, y, z) [representing some


stationary flow] over an orientable (having two sides) surface S, we are usually
interested in computing the total flow (flux) through this surface, in a chosen
direction.
The flow through an ’infinitesimal’ area [our parallelogram] of the surface is
given by the dot product
g • n dA
where n is a unit direction normal [perpendicular] to the area, since the flow is
obviously proportional to the area’s size dA, to the magnitude of g (the flow’s
speed), and to the cosine of the n-g angle.
’Adding’ these, one gets
ZZ ZZ ZZ
g • n dA ≡ g•dA ≡ (g1 dy dz + g2 dz dx + g3 dx dy)
S S S

introducing two more alternate, symbolic notations (I usually use the middle
one).
We can convert this to a regular double-integral (in u and v), by parametriz-
ing the surface [different parametrizations must give the same correct answer] and
∂r ∂r
replacing n dA by ∂u × ∂v [having both the correct area and direction], getting:
ZZ · ¸
∂r ∂r
g[r(u, v)] • × du dv
∂u ∂v
R

∂r ∂r
where R is the (u, v) region corresponding to S. Note that ∂u × ∂v does not neces-
sarily have the correct (originally prescribed) orientation; when that happens, we
fix it by reversing the sign of the result.
∂r ∂r
EXAMPLES (to simplify our notation, we use ∂u
≡ ru and ∂v
≡ rv ):
RR
1. Evaluate (x, y, z − 3) • dA where S is the upper (i.e. z > 0) half of the
S
x2 + y 2 + z 2 = 9 sphere, oriented upwards.
Solution: Here
£ we
√ can bypass¤ spherical coordinates (why?) and use instead
r(u, v) = u, v, 9 − u − v with u + v2 < 9 [defining the two-dimensional
2 2 2

region R over which we integrate]. Furthermore, ru = [1, 0, − √9−uu2 −v2 ] and


110

rv = [0, 1, − √9−uv2 −v2 ] ⇒ ru × rv = [ √9−uu2 −v2 , √9−uv2 −v2 , 1] [correct orienta-


u2 +v2

tion!] ⇒ g • (ru ×rv ) = √9−u 2 −v 2 + 9 − u2 − v2 −3 = √9−u92 −v2 −3. The ac-
R2π R3 ³ 9 ´
tual integration will be done in polar coordinates: √
9−r2
− 3 r dr dϕ =
0 0
h √ i3
2 r2
2π −9 9 − r − 3 2 = 27π.
r=0
RR
2. Evaluate (yz, xz, xy)•dA where S is the full x2 +y 2 +z 2 = 1 sphere oriented
S
outwards. Using the usual parametrization: r(u, v) = (cos u sin v, sin u sin v, cos v)
⇒ ru = (− sin u sin v, cos u sin v, 0), rv = (cos u cos v, sin u cos v, − sin v) and
ru × rv = (− cos u sin2 v, − sin u sin2 v, − sin v cos v) [wrong orientation, re-
verse its sign!], we get g • (−ru × rv ) = 3 cos u sin u sin3 v cos v.
R2π Rπ
Answer: 3 sin u cos u du × sin3 v cos v dv = 0 ¥
0 0

Shortly we learn a shortcut for evaluating Type II integrals over a closed surface
which will make the last example trivial. But first we need to discuss

’Volume’ integrals
which are only of the scalar type (there is no natural direction to associate with
an infinitesimal volume, say a cube; contrast this with the tangent direction for a
curve and the normal direction for a surface).
The other difference is that the 3-D integration can be carried out directly in
terms of x, y and z, which are in a sense direct ’parameters’ of the corresponding
3-D region (sometimes called ’volume’). This is not to say that we can not try
different (more convenient) ways of ’parametrizing’ it, but this will now be referred
to as a ’change of variables’ (or introducing generalized coordinates).
The most typical example of these are the spherical coordinates r, θ and ϕ
(to simplify integrating over a sphere). When using spherical coordinates, dx dy dz
(≡ dV ) needs to be replaced
¯ by dr dθ dϕ multiplied by the Jacobian
¯ of the trans-
¯ sin θ cos ϕ r cos θ cos ϕ −r sin θ sin ϕ ¯
¯ ¯
formation, namely: ¯¯ sin θ sin ϕ r cos θ sin ϕ r sin θ cos ϕ ¯¯ =
¯ cos θ −r sin θ 0 ¯

r2 sin θ

[this expression can be derived and understood geometrically].


Similarly to double integration of a constant, some triple integrals can be also
evaluated ’geometrically’, by
ZZZ
c dV = c · V olume(V)
V

whenever the 3-D region is of a simple enough shape, and we remember a formula
for the corresponding volume.
111

IPossible ApplicationsJ
of volume integrals include computing the actual volume of a 3-D body
ZZZ
V = dV
V

averaging a scalar function f (x, y, z) over a 3-D region


RRR
f (x, y, z) dV
V
V
computing the center of mass of a 3-D object of mass density ρ(x, y, z) [it cancels
out when constant]
 RRR RRR RRR 
x ρ(x, y, z) dV y ρ(x, y, z) dV z ρ(x, y, z) dV
 RRR
V V
, RRR V
, RRR 
ρ(x, y, z) dV ρ(x, y, z) dV ρ(x, y, z) dV
V V V

and computing the corresponding moment of inertia


ZZZ
d2 ρ dV
V
M
where d(x, y, z) is distance from the rotational axis, and ρ ≡ V
when the mass
density is uniform.
EXAMPLE: Find the moment of inertia of a uniform sphere of radius a with
an axis going through its center.

M
RRR R2π Rπ Ra 2 2 5
Solution: 4
πa3
(x2 + y 2 )dV = 4 M
πa3 r sin θ · r2 sin θ dr dθ dϕ = 4
M
πa3
· a5 ·
3 3 3
h 3 V i
π
0 0 0
cos θ
3
− cos θ · 2π = 25 Ma2 . ¥
θ=0

There is an interesting and useful relationship between a Type II integral


over a closed (outward oriented) surface Sc , and a volume integral over the 3-D
region V enclosed by this Sc , called
IGauss TheoremJ
ZZ ZZZ
g•dA ≡ Div(g) dV
Sc V

[Div(g) must have no singularities throughout V].


Indication of Proof: We have already seen this to be true for an infinitesimal
volume dx dy dz when we introduced divergence of a vector field. When
the contributions of all these infinitesimal volumes are added together (to
build the surface integral on the left hand side of our formula) the adjacent-
side flows cancel out and we are left with the overall surface only; adding
the divergences (each multiplied by the corresponding infinitesimal volume)
results in the right-hand-side integral. ¤
112

EXAMPLES:
1. The integral of Example 2 from the previous section thus becomes quite
trivial, as Div ([yz, xz, xy]) ≡ 0.
½ 2
RR 3 2 2 x + y 2 < a2
2. Evaluate (x , x y, x z) • n dA, where S is the surface of
S 0<z<b
(a cylinder of radius a and height b), oriented outwards.
RRR RR
Solution: Using the Gauss theorem, we get 5x2 dV = 5b x2 dx dy =
x2 +y 2 < a2 x2 +y 2 <a2
0< z< b
R2π Ra a4
5b r2 cos2 ϕ · r dr dϕ [going polar] = 5b2 · 4
· π = 54 a4 bπ.
0 0
Let us verify this by recomputing the original surface integral directly (note
that now we have to deal with three distinct surfaces: the top disk, the bot-
tom disk, and the actual cylindrical walls): The top can be parametrized by
RR R2π Ra 2
r(u, v) = [u, v, b], contributing [u3 , u2 v, u2 b]•(0, 0, 1) du dv = b r cos2 ϕ·
u2 +v 2 <a2 0 0
4
r dr dϕ [polar] = b a4 π.
The bottom is parametrized by r(u,
RR v) = [u, v, 0],
contributing minus (because of the wrong orientation) [u3 , u2 v, 0] •
u2 +v2 <a2
(0, 0, 1) du dv ≡
RR 0. Finally, the sides are parametrized by r(u, v) = [a cos u, a sin u, v],
contributing [a cos u, a3 cos2 u sin u, a2 v cos2 u]•[a cos u, a sin u, 0] du dv =
3 3
0<u<2π
0<v<b
Rb R2π
a4 cos2 u du dv = a4 bπ. Adding the three contributions gives 54 a4 bπ [check].
0 0
¥

Similarly, there is an interesting relationship between the Type II line in-


tegral over a closed curve Cd and a Type II surface integral over any surface S
having Cd as its boundary, called

IStokes’ TheoremJ
ZZ I
Curl(g) • n dA ≡ g•dr
S Cd

where the orientation of Cd and that of n dA follow the right-handed pattern.


[When Cd and S lie in the (x, y) plane, this is known as the Green’s Theorem].

Indication of Proof (which is, this time, a lot more complicated):


We parametrize S and then, using the corresponding coordinate
H lines, we
divide S into many infinitesimal parallelograms and evaluate g•dr for each
of these. When adding these together, the contributions of any two adjacent
sides cancel out, and we end up with the integral on the right hand side of
the Stokes’ formula.
On the other hand, the contribution of each of these integrals can be approx-
imated (the approximation becomes exact in the appropriate limit) by the
113

difference in g between two opposite sides of the parallelogram ( ∂g ∂u


du and
∂g ∂r
dv
dv) dot-multiplied by the vector representation of the two sides ( ∂v dv and
∂r
∂u
du, respectively). These two are then subtracted (since one runs with, and
the other one against, the counterclockwise orientation of the boundary) to
get
I
∂g ∂r ∂g ∂r
g•dr ' ( • − • ) du dv
∂u ∂v ∂v ∂u
∂r ∂r
Using the chain rule for expanding and expressing the dot prod-
∂v
and ∂u
,
P
3
∂g ∂r ∂g
ucts in terms of individual components, we get ( ∂rji · ∂r
∂u
i
· ∂vj − ∂rji · ∂r
∂v
i
·
i,j=1
∂rj P
3
∂gj ∂rk ∂r P
3
∂gj ∂rk ∂r
∂u
) du dv = ∂ri
· ∂u
· ∂v
(δ ik δ j − δi δ jk ) = ∂ri
· ∂u
· ∂v
·
i,j,k, =1 i,j,k, ,m=1
¡ ∂r ∂r ¢
= Curl(g) • ∂v
ijm k m × ∂u . These, when added together, result in the
left hand side of our formula. ¤
½ 2
H x + y2 = 4
EXAMPLE: Evaluate (y, xz , −zy )•dr, where Cd is defined by
3 3
z = −3
,
Cd
counterclockwise when viewed from the top.
RR
Solution: Using Stokes’ Theorem we replace this integral by [−3zy 2 −
S
3xz 2 , 0, z 3 −1]•n dA, where S is the corresponding (flat) disk. Parametrizing
RR
S by r(u, v) ≡ [u, v, −3] ⇒ n dA = [0, 0, 1] du dv, this converts to (−28) du dv =
u2 +v 2 <4
−28 · 4π = −112π [note that we did not need to know the first two compo-
nents of Curl(g) in this case, i.e. it pays to do the n dA first].
We will verify the answer by performing the original line integral, directly:
r(t) = [2 cos t, 2 sin t, −3] is the parametrization of Cd , which converts the
R2π R2π
integral to [2 sin t, −54 cos t, 24 sin3 t]•[−2 sin t, 2 cos t, 0] dt = (−4 sin2 t−
0 0
108 cos2 t) dt = −112π [almost equally easily]. ¥

Unless Curl(g) ≡ 0, the computational simplification achieved by applying the


Stokes’ theorem is very limited (a far cry from the Gauss theorem). One exception
is when Cd is a ’broken’ planar curve (consisting of several segments), as we can
trade one surface integral for several line integrals.

Review exercises ½
z = x2 + y 2
1. Find the area of the following (truncated) paraboloid: .
z<b
2 2
Solution:
p Parametrize: r =[u, v, u +v ] ⇒ ru p = [1, 0, 2u] and rv = [0, 1, 2v] ⇒
dA = (1 + 4u )(1 + 4v ) − 16u v du dv = 1 + 4(u2 + v2 )du dv. We need
2 2 2 2

RR R2π R b √ h 3
i√b
1
dA = [going polar] 1 + 4r2 · rdr dϕ = 2π · 12 (1 + 4r2 ) 2 =
2 2 0 0 r=0
h <b
u +v
3
i
π
6
(1 + 4b) 2 − 1 .
114

RR y = x2

2. Evaluate [y, 2, xz] • n dA, where S is defined by 0 < x < 2 , and n is
S 
0<z<3
pointing in the direction of −y.
Solution: r(u, v) = [u, u2 , v] ⇒ ru ×rv = [1, 2u, 0]×[0, 0, 1] = [2u, −1, 0] (cor-
R3 R2 2
rect orientation). The integral thus converts to [u , 2, uv]•[2u, −1, 0] du dv =
0 0
R3 R2 3 h 4 i2
(2u − 2) du dv = 3 u2 − 2u = 12.
0 0 u=0


RR  x>0
2 2
3. Find [x , 0, 3y ]•n dA, where S is the y > 0 portion of the x+y+z = 1
S 
z>0
plane, and n is pointing upwards.
Solution: r = [u, v, 1−u−v] ⇒ ru ×rv = [1, 0, −1]×[0, 1, −1] = [1, 1, 1] (cor-
R1 1−v
R 2 R1 1−v
R 2
rect orientation) ⇒ [u , 0, 3v 2 ] • [1, 1, 1] du dv = (u + 3v 2 ) du dv =
0 0 0 0
R1 h u3 2
i1−v R1 h (1−v)3 2
i h
(1−v)4 v3 v4
i1
3
+ 3uv dv = 3
+ 3(1 − v)v du = − 12
+ 3 3
− 3 4
=
0 u=0 0 v=0
1
3
.

4. Parametrize a circle of radius ρ = 5, centered on a = [1, −2, 4], and normal


to n =[2, 0, −3].
Solution: In general, a circle is parametrized by: r(t) = a + ρm1 cos t +
ρm2 sin t, where m1 and m2 are unit vectors perpendicular to n and to each
other. They can be found by taking the cross product of n and an arbi-
trary vector, then taking the cross product of the resulting vector and n,
and normalizing both, thus: [2, 0, −3] × [1, 0, 0] = [0, −3, 0] and [0, −3, 0] ×
[2, 0, −3] = [9,
h 0, 6] ⇒ mi 1 = [0, −3, 0] ÷ 3 = [0, −1, 0] and m2 = [9, 0, 6] ÷
√ 3 2
2 2
9 + 6 = √13 , 0, √13 .

Answer: r(t) = [1+ √1513 sin t, −2−5 cos t, 4+ √1013 sin t] where 0 ≤ t < 2π. Sub-
sidiary: To parametrize the corresponding disk: r(u, v) = [1 + √3v13 sin u, −2 −
v cos u, 4 + √2v13 sin u] where 0 ≤ u < 2π and 0 ≤ v < 5.

5. Find the moment of inertia (with respect to the z axis) of a shell-like torus
(parametrized earlier) of uniform mass density and total mass M.
Solution: Recall that r(u, v) = [(a+b cos v) cos u, (a+b cos v) sin u, b sin v] ⇒
R2π R2π
dA = b(a + b cos v) du dv [done earlier] and d2 = (a + b cos v)2 ⇒ ρ (a +
0 0
R2π
b cos v)2 b(a+b cos v) du dv = ρb2π (a3 + 3a2 b cos v+ 3ab2 cos2 v+ b3 cos3 v) dv =
0
M
4π 2 ab
b2π[2πa3 + 3ab2 π] = M (a2 + 32 b2 ).

6. Repeat with a solid torus.


115

Solution: We replace r by [(a + r cos v) cos u, (a + r cos v) sin u, r sin v], where
the new variables u, v and r (0 ≤ r < b) can be also seen as orthogonal
coordinates. For any orthogonal coordinates it is easy to find the Jacobian,
geometrically, by dx dy dz → r dv ·(a+r cos v) du·dr = r (a+r cos v) du dv dr
Rb R2π R2π Rb
⇒ρ (a + r cos v)2 r (a + r cos v) du dv dr = ρ2π 2 r (2a3 + 3ar2 ) dr =
0 0 0 0
ρ2π2 (a3 b2 + 34 ab4 ).
Rb R2π R2π 2
Similarly, the total volume is r (a + r cos v) du dv dr = (2π)2 a b2 .
0 0 0
M 3
Answer: 2π 2 ab2
2π 2 (a3 b2 + 4
ab4 ) = M (a2 + 34 b2 ).
An alternate approach would introduce polar coordinates in the (x, y)-plane,
p R2π a+b
R 2
use 2 b2 − (r − a)2 for the z-thickness and r2 for d2 , leading to ρ r ·
0 a−b
p
2 b2 − (r − a)2 · r dr dϕ = ... [verify that this leads to the same answer].


 x>0

y>0
7. Consider the following solid of uniform density. Find:

 z>0

x+y+z <1

(a) Center of mass.

R1 1−z
R 1−y−z
R
Solution: To find its x-component we need to divide x dx dy dz =
0 0 0
R1 1−z
R (1−y−z)2 R1 (1−z)3 1
R1 1−z
R 1−y−z
R
2
dy dz = 6
dz = 24
by the volume dx dy dz =
0 0 0 0 0 0
R1 1−z
R R1 (1−z)2
(1 − y − z)dy dz = 2
dz = 16 .
0 0 0

Answer: [ 14 , 14 , 14 ],
as the y and z-components must have the same value as
the x-component [obvious from symmetry].

(b) Moment of inertial with respect to [t, t, t] (the axis).

Solution: To find d2 we project [x, y, z] into [ √13 , √13 , √13 ] (unit direction of the
axis), getting [x, y, z] • [ √13 , √13 , √13 ] = x+y+z

3
. By Pythagoras, d2 = x2 + y 2 +
h i2
z 2 − x+y+z

3
.

M
R1 1−z R h 2
R 1−y−z (x+y+z)2
i
Answer: V
x + y2 + z 2 − 3
dx dy dz =
0 0 0
R1 1−z
R 1−y−z
R
4M [x2 + y 2 + z 2 − xy − xz − yz] dx dy dz =
0 0 0
R1 1−z
R 1−y−z
R
12M [x2 − xz] dx dy dz [due to symmetry] =
0 0 0
116

R h (1−y−z)3 (1−y−z)2 i
R1 1−z R1
12M 3
− 2
z dy dz = M [(1 − z)4 − 2(1 − z)3 z] dz =
h 0 0 5 i1 0
(1−z) (1−z)4 (1−z)5 M
M − 5 + 2 4 z + 2 20 = 10 .
0

8. A container is made of a spherical shell of radius 1 and height h. Find:

(a) The shell’s surface area.



Solution: r(u, v) = [u, v, − 1 − u2 − v2 ] ⇒ ru = [1, 0, √1−uu2 −v2 ] and rv =
q¡ ¢¡ ¢
2 2 2 2
[0, 1, √1−uv2 −v2 ] ⇒ dA = 1 + 1−uu2 −v2 1 + 1−uv2 −v2 − (1−uu2 v−v2 )2 du dv ≡
q
1
1−u2 −v2
du dv.

RR R
2π h(2−h)
R √ √
du dv r dr h(2−h)
Answer: √ = √ dϕ = 2π[− 1 − r 2 ] =
2
1−u −v 2 1−r 2 r=0
u2 +v 2 <h(2−h) 0 0
p
2π[1 − 1 − h(2 − h)] = 2πh.

(b) The container’s volume:


p
Solution: Since the z-thickness (depth) equals 1 − x2 − y 2 − (1 − h), all we

RR hp i R2π h(2−h)
R √
need is 1 − x2 − y 2 − (1 − h) dx dy = [ 1 − r2 −
x2 +y2 <h(2−h) 0 0
h 3
i√ 2
h(2−h)
1 + h] · r dr dϕ = 2π − 13 (1 − r2 ) 2 − (1 − h) r2 = 2π[− 13 (1 − h)3 −
¡ ¢ r=0
(1 − h) h(2−h)
2
+ 1
3
] = πh2
1 − h
3
.
H
9. Evaluate [(x + y) dx + (2x − z) dy + (y + z) dz], where C is the closed curve
C
consisting of three straight-line segments connecting [2, 0, 0] to [0, 3, 0], that
to [0, 0, 6], and back to [2, 0, 0].
Solution: Applying the Stokes’ Theorem, which enables us to trade three
line integrals (the three segments would require individual parametrization)
for one surface integral, we first compute Curl(g) = [2, 0, 1], then r(u, v) =
[u, v, 6−3u−2v] (note that x2 + y3 + z6 = 1 is the equation of the corresponding
plane) ⇒ ru × rv = [1, 0, −3] × [0, 1, −2] = [3, 2, 1] which has the correct
orientation.
RR
Answer: [2, 0, 1] • [3, 2, 1] du dv = 7 × Area = 7 × 2×3
2
= 21
u>0
v>0
3u+2v<6

[Verify by computing the line integral (broken onto three parts) directly].
H
10. Evaluate [yz dx + xz dy + xy dz], where C is the intersection of x2 + 9y 2 = 9
C
and z = 1 + y 2 oriented counterclockwise when viewed from above (in terms
of z).
Solution: Applying the same Stokes’ Theorem, we get Curl(g) ≡ [0, 0, 0].
117

Answer: 0.
We will verify this by evaluating the line integral directly: r(t) = [3 cos t, sin t, 1+
R2π
sin2 t] parametrizes the curve (0 < t < 2π) ⇒ [(1 + sin2 t) sin t, 3(1 +
0
R2π
sin2 t) cos t, 3 sin t cos t]• [−3 sin t, cos t, 2 sin t cos t] dt = [−3 sin2 t(1+sin2 t)+
0
R2π
3(1 + sin2 t)(1 − sin2 t) + 6 sin2 t(1 − sin2 t)] dt = (3 + 3 sin2 t − 12 sin4 t) dt =
¡ ¢ 0
2π × 3 + 3 × 12 − 12 × 38 = 0 [check].
R2π R2π
Note that sin2n t dt = cos2n t dt = 2π × 12 × 34 × 56 × 78 × .... × 2n−1
2n
.
0 0

11. In Physics we learned that the gravitational force of a ’solid’ (i.e. 3-D) body
exerted on a point-like particle at R ≡ [X, Y, Z] is given by
ZZZ
r−R
µ ρ(r) dV
|r − R |3
V

where µ is a constant, ρ is the body’s mass density, and V is its ’volume’ (i.e.
3-D extent). [Here we are integrating a vector field in the componentwise
(scalar) sense, i.e. these are effectively three volume integrals, not one].
Prove that, when the body is spherical (of radius a) and ρ is a function of r
only (placing the coordinate origin at the body’s center), this force equals

−R
µM ·
|R |3

where M is the body’s total mass.


r−R 1 ∂ ∂ ∂
Solution: First we notice that |r−R 3 ≡ ∇R |r−R | , where ∇R ≡ [ ∂X , ∂Y
, ∂Z
].
RRR r−R
|
This implies that µ ρ(r) |r−R |3
dV ≡
V
ZZZ
1
µ∇R ρ(r) dV
|r − R |
V

leading to a lot easier integration (also, now we need one, not three integrals).
RRR 1
Evaluating ρ(r) |r−R |
dV (the so called gravitational potential) in spheri-
V
Ra Rπ R2π 2 p
cal coordinates yields ρ(r) √r sin θ·dϕ dθ dr [note that |r − R| = x2 + y 2 + (z − R)2 ,
r2 +R2 −2Rr cos θ
0 0 0
where we have conveniently chosen the direction of R (instead of the usual z)
Ra £√ ¤π
to correspond to θ = 0]. This further equals 2π
R
ρ(r)·r· r2 + R2 − 2Rr cos θ θ=0 dr =
0

Ra 4π
Ra M
R
ρ(r) · r · [(R + r) − (R − r)] dr = R
ρ(r) r2 dr = R
.
0 0
118

This proves our assertion, as ∇R R1 = ∇R √X 2 +Y1 2 +Z 2 =


· ¸
−X −Y −Z
2 2 2
3 ,
2 2 2
3 ,
2 2 2
3 = −R
R3
.
(X +Y +Z ) 2 (X +Y +Z ) 2 (X +Y +Z ) 2

Now try to prove the original statement directly (bypassing the potential),
you should not find it too difficult.

12. Optional: Using Gauss Theorem (somehow indirectly, because of the singu-
larity at R = r, but this need not concern us here), one can show that
ZZZ µ ¶
r−R
DivR dVR = −4π
|r − R |3
V

for any V containing r, and equals 0 otherwise [note that the variable of both
the integration, and the divergence operator, ³is R, whereas
´ r is considered
r−R
a fixed parameter]. This implies that DivR |r−R |3 [as a function of R]
must be equal to zero everywhere except at R = r, where its value becomes
minus infinity (this can be easily verified by direct differentiation). This
infinite ’blip’ of its value contributes an exact, finite amount of −4π when
the function is integrated. We can change this to +1 by a simple division,
thus: µ ¶
1 r−R
DivR 3 ≡ δ (3) (R − r)
−4π |r − R |
defining the so called (3-D) Dirac’s delta function. Its basic property is
ZZZ
f (R)·δ (3) (R − r) dVR = f (r)
V

We can now understand why


ZZZ
r−R
F(R) =µ ρ(r) dV
|r − R |3
V

(the gravitational force of the previous example) implies

DivR (F(R)) = −4πµρ(R)

When studying partial differential equations, one learns that the last equal-
ity also implies the previous one, and there are thus two equivalent ways
of expressing the same law of Physics [in the so called integral and dif-
ferential form, respectively]. This is essential for understanding Maxwell
equations and their experimental basis. ⊗
Part III
COMPLEX ANALYSIS

119
121

Chapter 9 COMPLEX FUNCTIONS —


DIFFERENTIATION
Preliminaries
We already know how to add, subtract, multiply [e.g. (4 + 3i)(2 − 5i) = 8 + 6i −
20i + 15 = 23 − 14i] and divide [e.g. 4+3i2−5i
= (4+3i)(2+5i)
(2−5i)(2+5i)
= −7+26i
29
7
= − 29 + 26
29
i]
complex numbers. In this chapter, we will learn how to evaluate most of the usual
functions using a complex argument (getting, in general, a complex answer). We
will also investigate the issue of taking a derivative of any such function.

IBasic DefinitionsJ

We reserve the letter z = x + iy for a complex number (soon to become a


complex variable), where x = Re(z) is its real part and y = Im(z) is its
(purely) imaginary part [these are already two simple examples of functions of
z].
Similarly, z̄ ≡ x − iy (some books use z ∗ ) is the complex conjugate of z [yet
another function of z]. It is obvious that z̄ ≡ z and easy to prove

z1 · z2 = z̄1 · z̄2

Proof: (x1 + iy1 )(x2 + iy2 ) = x1 x2 −y1 y2 −i(x1 y2 +x2 y1 ) and (x1 −iy1 )(x2 +iy2 ) =
x1 x2 − y1 y2 − i(x1 y2 + x2 y1 ), which agree ¤
and µ ¶
z1 z̄1
=
z2 z̄2
[proof similar]. Also note that zz = x2 + y 2 .
Geometrically, complex numbers are often represented as points of the x-y
plane, leading to their so called polar representation:
p
r = |z| ≡ x2 + y 2

[the magnitude] and


θ = ± arctan(y/x)
[the argument, where the sign is chosen according to the quadrant of θ]. The
value of θ is usually chosen from the (−π, π] interval (the so called principal
value of the argument), but it has obviously infinitely many potential values
[θ ± 2πk, where k is an integer]. Conversely,

z = r(cos θ + i sin θ)

Using this representation, one can easily show that the product z1 · z2 =
r1 r2 (cos θ1 +i sin θ1 )(cos θ2 +i sin θ2 ) = r1 r2 (cos θ1 cos θ2 −sin θ1 sin θ2 )+ ir1 r2 (sin θ1 cos θ2 +
sin θ2 cos θ1 ) =
r1 r2 [cos(θ1 + θ2 ) + i sin(θ1 + θ2 )]
122

z1
and similarly, the ratio z2
=
r1
[cos(θ1 − θ2 ) + i sin(θ1 − θ2 )]
r2
The first of these two formulas can be extended to any number of factors, further
implying that an integer power of a complex number can be computed from

z n = rn [cos(nθ) + i sin(nθ)]

This also enables us to derive formulas of the following type: cos 5θ + i sin 5θ =
(cos θ+i sin θ)5 = cos5 θ−10 cos3 θ sin2 θ+5 cos θ sin4 θ+ i(5 cos4 θ sin θ−10 cos2 θ sin3 θ+
sin5 θ).

EXAMPLES: Find the region (of complex plane) which corresponds to:
1. |z| ≤ 1.
p
Solution: x2 + y 2 ≤ 1, i.e. the unit disk centered on (0, 0).

2. |z − 1| + |z + 1| = 3.
Solution: Square |z −1| = 3−|z +1| to get (z −1)(z̄ −1) = 9−6|z +1|+ (z +
1)(z̄ + 1) ⇔ 6|z + 1| = 9 + 2(z + z̄). Square again getting: 36[(x + 1)2 + y 2 ] =
81 + 72x + 16x2 ⇔ 20x2 + 36y 2 = 45 [ellipse centered on (0, 0)].

3. 0 < Im( 1z ) < 1.


Solution: Since Im( 1z ) = Im( xx−iy y 2 2
2 +y 2 ) = − x2 +y 2 , we get 0 < −y < x + y , or

y < 0 and x2 + (y + 12 )2 > 14 , i.e. a set of points below the x-axis and outside
the disk of radius 12 centered on (0, − 12 ). ¥

Introducing complex functions


Any expression involving z defines a complex function, e.g.. f (z) = z 2 + 3z. In
general, any such function will have complex values and can be thus expressed in
terms of two (real) function, the real part of f (z) and its (purely) imaginary part.
These are usually called u(x, y) and v(x, y) respectively, each being a function of
x and y (real arguments), i.e.

f (z) ≡ u(x, y) + i v(x, y)

EXAMPLE: f (z) ≡ z 2 + 3z = x2 + 2ixy − y2 + 3x + 3iy = (x2 − y2 + 3x) +


i(2xy + 3y) ≡ u(x, y) + iv(x, y). ¥

IDerivativeJ

of a complex function is a fairly difficult concept, even though, its definition is


seemingly the same as in the real case, namely

f (z + ∆) − f (z)
f 0 (z) = lim
∆→0 ∆
123

where ∆ can approach zero from any (complex) direction. And only when all
these limits agree, the function is called differentiable (at z), the value of the
resulting derivative equal to this common limit.
Is there a simple way to establish that a given function is differentiable [we don’t
want to compare infinitely many limits]? The answer is yes, the two real functions
u and v must meet the following, so called Cauchy-Riemann conditions:
∂u ∂v

∂x ∂y
∂v ∂u
≡ −
∂x ∂y
Proof: f (z + ∆) ≡ u(x + ∆x , y + ∆y ) + iv(x + ∆x , y + ∆y ) where ∆ ≡ ∆x + i∆y .
This can be expanded (generalized Taylor) as u(x, y) + ∂u ∆ + ∂u
∂x x
∆ + ... +
∂y y
∂v ∂v
iv(x, y) + i ∂x ∆x + i ∂y ∆y + ... ⇒
∂u ∂u ∂v ∂v
f (z + ∆) − f (z) ∆
∂x x
+ ∆
∂y y
+ i ∂x ∆x + i ∂y ∆y

∆ ∆x + i∆y
Furthermore [we know from real analysis], all limits will agree when the ’hori-
∂u ∂v
∆ +i ∂x
∂x x
∆x
zontal’ limit lim and the ’vertical’ limit lim do. This implies that lim ∆x
=
∆x →0 ∆y →0 ∆x →0
∆y =0 ∆x =0
∂u
∂u ∂v ∆ +i ∂v
∂y y

∂y y ∂v
∂x
+ i ∂x must equal lim i∆y
= ∂y
− i ∂u
∂y
, from which the Cauchy-
∆y →0
Riemann conditions easily follow. ¤

Note that
∂u ∂v ∂v ∂u
f 0 (z) =+i ≡ −i
∂x ∂x ∂y ∂y
when the function is differentiable.

EXAMPLE:
• f (z) = z 2 = (x + iy)2 = x2 − y 2 + 2ixy. Find f 0 (z) [first check whether it
exists].
∂ ∂ √ ∂ ∂
Solution (checking C-R): ∂x (x2 − y 2 ) ≡ ∂y (2xy) and ∂x (2xy) ≡ − ∂y (x2 −

y2) .
Answer: f 0 (z) is then equal to ∂u
∂x
∂v
+ i ∂x = 2x + 2iy ≡ 2z. Note that we are
getting the same answer as if z were real. ¥

Along these lines, one can show that in general all polynomial functions are
differentiable, and that the corresponding derivative can be obtained by applying
the usual (z n )0 = nz n−1 rule.
This follows from the fact that, when f1 and f2 are differentiable, so is f1 + f2
[quite trivial to prove] and f1 · f2 ≡ (u1 u2 − v1 v2 ) + i(u1 v2 + u2 v1 ).

Proof (of the latter): ∂u 1


∂x 2
u + u1 ∂u
∂x
2
− ∂v1
∂x 2
v − v1 ∂v2
∂x
≡ ∂u1
∂y 2
v +u1 ∂v
∂y
2
+ ∂v1
∂y 2
u +v1 ∂u
∂y
2

and ∂u 1
∂y 2
u + u1 ∂u
∂y
2
− ∂v1
∂y 2
v − v1 ∂v
∂y
2
= ∂u 1
∂x 2
v + u1 ∂v
∂x
2
+ ∂v1
∂x 2
u + v1 ∂u∂x
2
(⇒ the
product rule still applies). ¤
124

1 u v
Similarly, if f is differentiable, so is f
≡ u2 +v 2
− i u2 +v 2.

Proof: ∂u∂x
(u2 + v2 ) − 2u( ∂u
∂x
∂v
u + ∂x ∂v
v) = − ∂y (u2 + v2 ) + 2v( ∂u
∂y
u + ∂v
∂y
v) and ∂u
∂y
(u2 +
v 2 ) −2u( ∂u∂y
∂v
u+ ∂y v) = ∂x∂v
(u2 +v 2 ) −2v( ∂u
∂x
∂v
u+ ∂x v) [each divided by (u2 +v 2 )2 ]
(⇒ the quotient rule still applies). ¤
And finally a composition [f1 (f2 (z)) ≡ u1 (u2 , v2 ) + iv1 (u2 , v2 ), sometimes
denoted f1 ◦ f2 ] of two differentiable functions is also differentiable.
Proof: ∂u1 ∂u2
∂x ∂x
+ ∂u1 ∂v2
∂y ∂x
= ∂v1 ∂u2
∂x ∂y
+ ∂v1 ∂v2
∂y ∂y
and ∂u1 ∂u2
∂x ∂y
+ ∂u1 ∂v2
∂y ∂y
= ∂v1 ∂u2
∂x ∂x
+ ∂v1 ∂v2
∂y ∂x
where both u1 and v1 have (u2 , v2 ) as arguments (⇒ the chain rule still
applies). ¤
B In summary:
All rational expressions (in z) are differentiable (everywhere, except when di-
viding by zero — the so called singularities), and the old differentiation formulas
still apply (after the x → z replacement). This provides us with a huge collection
of differentiable functions (later to be extended further). ¤
The natural question to ask now is: Are there any complex functions which are
not differentiable? The answer is yes, aplenty as well.
EXAMPLE:
• f (z) = z̄ ≡ x − iy. The first C-R condition requires ∂x
∂x
≡ ∂(−y)
∂y
which is
obviously not met. This function is nowhere differentiable. ¥
Similarly one can verify that |z|, Re(z) and Im(z) are nowhere differentiable
(since they have zero v(x, y) component). Thus, any expression involving any of
these function is nowhere differentiable in consequence.
If a function is differentiable at a point, and also at all points of its (open)
neighborhood, the function is called analytic at that point [⇒ a function can be
differentiable at a single, isolated point, but it can be analytic only in some open
region]. This subtle distinction is not going to have much impact on us, we will
simply take ’analytic’ as another name for ’differentiable’.
2 2 2 2
One interesting implication of C-R is that ∂∂xu2 + ∂∂yu2 = 0 and ∂x ∂ v ∂ v
2 + ∂y 2 =

0 (proof is trivial). Functions which meet the corresponding partial differential


equation are called harmonic. This means that both u(x, y) and v(x, y) of an
analytic function are harmonic, and reverse: given a harmonic function, we can
make it a u(x, y) [or v(x, y)] of an analytic function by deriving the corresponding
v(x, y) [or u(x, y)].
EXAMPLE:
Given u(x, y) = ex cos(y), find the corresponding f (z).
Solution: First we can easily check that the function is harmonic. Then we
∂v
compute ∂x ≡ − ∂u
∂y
= ex sin y and ∂y∂v
≡ ∂u
∂x
= ex cos y, from which v(x, y)
follows [by the procedure of solving exact differential equation]: v(x, y) =
ex sin y + c1 (x) = ex sin y + c2 (y). Making these ’compatible’ we get v(x, y) =
ex sin y. Thus f (z) = ex (cos y + i sin y) ≡ ex · eiy = ez .
125

We have thus proved that the exponential function ez ≡ exp(z) is also


analytic, with a derivative given by (ez )0 = ez [which follows easily from our last
example].

IRoots of z J
√ 1
The function f (z) = n z ≡ z n , where n is an integer, can in general have n
possible (distinct) values, all given by

n
θ θ
r(cos + i sin )
n n
(depending on the choice of θ). We normally select its principal value, to make
the answer unique.
This (principal-value) function is analytic everywhere except 0 and the nega-
tive x-axis (due to the discontinuity of θ when crossing −x). The old
1 1 1 −1
(z n )0 = zn
n
formula still applies, in the analytic region.
We now return to our

IExponential Function f (z) = ez J

It is periodic in the following sense: f (z ± 2kπi) ≡ f (z). The complex strip


−π < y ≤ π is called its fundamental region (everywhere else, the values of
ez are just repeated). Note that now the function can have negative values, e.g.
eiπ = cos π + i sin π = −1. Its derivative equals to ez , as already mentioned.

Proof: In the previous example we saw that ez was analytic everywhere. Its
derivative when z is real is ex . The only way to extend this to an analytic
expression is to add i y to x (making it z), as any other combination of x and
y would not be analytic. (This argument applies to any analytic function,
which means that we can always use the old formulas for differentiation, just
replacing x by z). ¤

The corresponding inverse function

(to ez ) is w(z) ≡ ln z, a solution to z = ew or, more explicitly, to

z = eu+iv = eu (cos v + i sin v)

where w ≡ u + iv. To solve this equation for u and v, we must express z in its
polar form, thus: r(cos θ + i sin θ) = eu (cos v + i sin v) ⇒

u = ln r

(this is the usual, real logarithm) and

v=θ
126

ln z is thus a multivalued function of z; to fix that, we define its principal


value by taking −π < θ ≤ π [in which case we call the function Ln(z)]. It is
analytic everywhere except at 0 and negative real values [its derivative is given by
the old dLn(z)
dz
= z1 ]. Since, in this manner, one can take a logarithm of any complex
number, we can now find Ln(−1) = Ln(1 · eiπ ) [express the number in its polar
form] = 0 + iπ (purely imaginary).
Using the Ln function, we can now define

IGeneral ExponentiationJ

f (z) ≡ z a

where a is also complex. This equals to (eLn(z) )a = eaLn(z) which is well (and
uniquely) defined [in terms of its principal value — one could also define the corre-
sponding multivalued function].
This function is thus analytic everywhere except the negative x axis and 0
[when a is an integer, we need to exclude only 0 when a < 0, and nothing when
a > 0]. Its derivative is of course the old (z a )0 = az a−1 .
π π
Using this definition we can compute ii = ei(i 2 ) = e− 2 = 0.20788 [real!].
Similarly, one can also define the usual ITrigonometric FunctionsJ
1 iz
sin z ≡ (e − e−iz )
2i
and
1 iz
cos z ≡ (e + e−iz )
2
and the corresponding inverse functions arcsin(z) and arccos(z). Since these are of
lesser importance to us, we are skipping the respective sections of your textbook.

Chapter summary
Complex differentiation is trivial for expressions contains z only; we differentiate
them as if z were real.
This similarity of real and complex differentiation is a nontrivial (certainly not
an automatic) consequence of the algebra of complex numbers; it is also the main
reason why extending Calculus to complex numbers is so fruitful (as we will see in
the next chapter).
As soon as we find z̄, |z|, Re(z) or Im(z) in a definition of a complex function,
the function is nowhere differentiable.
127

Chapter 10 COMPLEX FUNCTIONS —


INTEGRATION
Similar to differentiation, complex integration of analytic functions will be shown
to follow the formulas of real integration. But there is an extra bit of good news:
the so called contour integration will make complex integration even easier,
so that many real integrals can be simplified by going complex.

IDefinitionJ

of a complex integral is similar to that of a line integral in a plane, with a


small but essential modification: instead of taking the dot product of f and dr,
the complex function f (z) and the(two-component) infinitesimal element dz are
multiplied using complex algebra, resulting in a complex (i.e. two-component)
answer, thus:
Z Z Z Z
f (z)dz ≡ (u + iv)(dx + idy) = (udx − vdy) + i (vdx + udy)
C C C C

where C is come complex curve.


Note that this definition does not require f (z) to be analytic; we can thus
integrate all (not just analytic) complex function.
The actual integration can be carried out by parametrizing C and performing
the implied single-variable (dt) integration (we need two of them now).

EXAMPLES:
Evaluate:
R dz
1. z
, where C is the unit circle centered at 0, traversed counterclockwise.
C

Solution: Parametrize z by z = eit where 0 ≤ t < 2π [this form is more


convenient than the more explicit but equivalent z = cos t + i sin t]. Since
dz
dt
= ieit , we can replace dz by ieit dt [and z1 by e−it ].
R2π R2π
Answer: e−it · ieit dt = i dt = 2πi.
0 0

R2π − sin t+i cos t


[Using the more explicit form of z, we would have to struggle with cos t+i sin t
dt
0
to get the same answer].
R
2. Re(z)dz, where C is the straight-line segment from 0 to 1 + i.
C

Solution: z = t + it with t ∈ (0, 1) ⇒ Re(z) = t and dz = (1 + i)dt.


R1 h 2 i1
Answer: t(1 + i)dt = (1 + i) t2 = 12 + 2i .
0 t=0
128
R
3. (z −z0 )m dz, where m is an integer (of either sign), z0 is a complex constant,
C
and C is a counterclockwise circle of radius ρ > 0 centered at z0 (this is an
extension of Example 1).
R2π R2π
Solution: z = z0 +ρeit with t ∈ (0, 2π) ⇒ ρm eimt ·ρeit dt = ρm+1 ei(m+1) t dt =
0 0
R2π
ρm+1 (cos[(m + 1) t] + i sin[(m + 1) t]) dt = 0, with one important excep-
0
R2π
tion: when m = −1, we get ρ0 (cos 0 + i sin 0) dt = 2πi [remember this
0
result, it is of special importance].

Integrating analytic functions


An analytic function f(z) can be integrated by first finding its anti-derivative (as if z were real, i.e. all of the old formulas and techniques still apply), then evaluating it at the first and the last point of C (each being a complex number), and finally subtracting the former from the latter (the same old procedure).

Proof: $\int_C(u+iv)(dx+i\,dy) = \int_C(u\,dx-v\,dy)+i\int_C(v\,dx+u\,dy)$. The last two integrals are, effectively, line integrals in a plane (of the vector fields $[u,-v]$ and $[v,u]$, respectively). They are both path independent iff the C-R conditions are met. Furthermore, when integrated via their 'potentials', they yield
$$g(x,y)\Big|_{(x_0,y_0)}^{(x_1,y_1)} + i\,h(x,y)\Big|_{(x_0,y_0)}^{(x_1,y_1)}$$
where g and h clearly meet the C-R conditions as well [we use 0 and 1 as indices of the first and last point of C, respectively]. We can thus write the result as $F(z)\big|_{z_0}^{z_1}$ where F is analytic, and must agree with the usual antiderivative when z is real. The only way to extend a real function F(x) to an analytic function is by $x\to z$. □
EXAMPLE: Find $\int_C z^2\,dz$, where C is a straight-line segment from 0 to $2+i$.

Solution: $\left.\frac{z^3}3\right|_{z=0}^{2+i} = \frac{(2+i)^3}3 = \frac{8+3\times4i+3\times2i^2+i^3}3 = \frac23+\frac{11}3i$ [note that the result is path independent]. ■
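As a sanity check of the path independence just claimed, here is a short numerical sketch in the same spirit as the earlier one (the contour_integral helper is repeated so the snippet stands alone): the entire integrand gives the same answer along two quite different paths with the same endpoints.

```python
import numpy as np

def contour_integral(f, z):
    # Same helper as in the earlier sketch.
    return np.sum(0.5 * (f(z[:-1]) + f(z[1:])) * np.diff(z))

s = np.linspace(0.0, 1.0, 100001)
straight = s * (2.0 + 1.0j)          # straight segment from 0 to 2+i
detour = 2.0 * s + 1.0j * s ** 5     # a curved path with the same endpoints
for path in (straight, detour):
    # both prints are approx 0.66667 + 3.66667j, i.e. 2/3 + 11i/3
    print(contour_integral(lambda z: z ** 2, path))
```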
Integration is thus very simple for fully analytic functions (i.e. analytic everywhere; they are also called entire functions). Things get a bit more interesting when the function has one or more

Singularities
EXAMPLE: Integrating $\int_C\frac{dz}z$, where C is a curve starting at $-1-i$ and ending at $1+i$, yields two different results [$-\pi i$ and $\pi i$, respectively] depending on whether we pass to the left or to the right of 0 [we have to avoid 0, since the function is singular there].

Thus, we have to conclude that the anti-derivative technique $\int_C\frac{dz}z = \operatorname{Ln}(z)\big|_{z=-1-i}^{1+i}$ returns the correct answer only when C does not cross the $-x$ axis (the so-called cut), since $\operatorname{Ln}(z)$ is not analytic there.

To correctly evaluate the other integral (when C passes to the left of 0), we must define our own $\ln(z)$ function whose cut follows $+x$ (rather than $-x$). This simply means selecting θ from the $[0,2\pi)$ interval, rather than the 'principal' $(-\pi,\pi]$. Using this function, $\ln(z)\big|_{z=-1-i}^{1+i}$ returns the correct answer (for all paths to the left of 0). ■
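The ±πi claim can also be confirmed numerically; a sketch under the same assumptions as before, sending the path around either side of the singularity along the circle $|z| = \sqrt2$:

```python
import numpy as np

def contour_integral(f, z):
    # Same helper as in the earlier sketches.
    return np.sum(0.5 * (f(z[:-1]) + f(z[1:])) * np.diff(z))

r = np.sqrt(2.0)    # both endpoints -1-i and 1+i lie on this circle
t_right = np.linspace(-3.0 * np.pi / 4.0, np.pi / 4.0, 100001)  # passes right of 0
t_left = np.linspace(5.0 * np.pi / 4.0, np.pi / 4.0, 100001)    # passes left of 0
print(contour_integral(lambda z: 1.0 / z, r * np.exp(1j * t_right)))  # approx +pi*i
print(contour_integral(lambda z: 1.0 / z, r * np.exp(1j * t_left)))   # approx -pi*i
```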
In general: $\int_C f(z)\,dz$ is path-independent as long as C is modified without crossing any singularity of f(z).

This implies: when C is closed,
$$\oint_C f(z)\,dz \equiv 0$$
provided that there are no singularities of f(z) inside C.

$\oint$ is the notation we use when integrating over a closed curve; this is also called contour integration.

Now, the main question is: how do we evaluate $\oint_C f(z)\,dz$ with some singularities inside a closed curve C? This leads to a very important technique of the so-called
Contour integration
We are in a good position to figure out the answer:

Each such 'contour' (C) can be subdivided into as many pieces (also closed curves) as there are singularities (with one singularity in each).

Each of these pieces can then be continuously modified (without crossing any singularity) until its singularity is encircled. This will not change the value of the original integral, which has now become a sum of several 'circle' integrals.

And the value of each of these can be computed by expanding f(z) at $z_0$ [the corresponding singular point], thus:
$$f(z) = \dots+\frac{a_{-3}}{(z-z_0)^3}+\frac{a_{-2}}{(z-z_0)^2}+\frac{a_{-1}}{z-z_0}+a_0+a_1(z-z_0)+a_2(z-z_0)^2+\dots$$
[the so-called Laurent series, where the $a_i$ are constant coefficients], and then integrating term by term.

We already know that only the $\frac{a_{-1}}{z-z_0}$ term will contribute a nonzero value, of $2\pi i\,a_{-1}$. The $a_{-1}$ coefficient of the Laurent expansion is thus of a rather special importance. It is called the residue of f(z) at $z_0$, and is usually denoted by $\operatorname{Res}_{z=z_0}(f)$.

► The final result: $\oint_C f(z)\,dz$ is equal to $2\pi i$, multiplied by the sum of residues of all singular points of f(z) inside C [no actual integration is thus necessary].
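Before developing systematic ways of computing residues, the statement itself can be spot-checked numerically (same assumptions and helper as in the earlier sketches); for instance, $f(z) = \frac1{z(z-2)}$ has residues $-\frac12$ at 0 and $\frac12$ at 2.

```python
import numpy as np

def contour_integral(f, z):
    # Same helper as in the earlier sketches.
    return np.sum(0.5 * (f(z[:-1]) + f(z[1:])) * np.diff(z))

f = lambda z: 1.0 / (z * (z - 2.0))
t = np.linspace(0.0, 2.0 * np.pi, 200001)
print(contour_integral(f, np.exp(1j * t)))        # encloses 0 only: -pi*i
print(contour_integral(f, 2.0 + np.exp(1j * t)))  # encloses 2 only: +pi*i
print(contour_integral(f, 3.0 * np.exp(1j * t)))  # encloses both: 0
```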
Computing Residues
► A special (but very common and important) case is
$$f(z) = \frac{g(z)}{(z-z_0)^m}$$
where g(z) is analytic at $z_0$ [i.e. the singularity at $z_0$ is due to an explicit division by $(z-z_0)^m$]. We can thus expand g(z) at $z_0$ in a regular (Taylor) manner, divide the result by $(z-z_0)^m$, and clearly see that
$$\operatorname{Res}_{z=z_0}(f) = \frac{g^{(m-1)}(z_0)}{(m-1)!}$$
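This recipe is easy to verify with a computer-algebra system; a small sketch (sympy assumed; the particular g, z0 and m are arbitrary choices of ours):

```python
import sympy as sp

z = sp.symbols('z')
g, z0, m = sp.exp(z) * (1 + z**2), 2, 3   # f(z) = g(z)/(z - 2)**3, say
# g''(2)/2! per the formula above...
print(sp.diff(g, z, m - 1).subs(z, z0) / sp.factorial(m - 1))
# ...against sympy's built-in residue computation:
print(sp.residue(g / (z - z0)**m, z, z0))
```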
EXAMPLES:

1. The residue of $\frac{e^z}{z^2}$ at $z=0$ is $(e^z)'|_{z=0} = 1$. We can thus easily evaluate $\oint_C\frac{e^z}{z^2}\,dz$ [where C is any contour encircling 0 counterclockwise] as $2\pi i$.

2. Find $\oint_C\frac{1+z^2}{z^2-1}\,dz$, where C is (counterclockwise):

(a) The unit circle centered at 1.

Solution: The only singularities of $\frac{1+z^2}{(z+1)(z-1)}$ are at $z=-1$ (residue equal to $\frac{1+(-1)^2}{-1-1} = -1$) and at $z=1$ (residue equal to $\frac{1+1^2}{1+1} = 1$); only the latter lies inside this C.

Answer: $2\pi i$.

(b) The unit circle centered at $-1$.

Answer: $2\pi i\times(-1) = -2\pi i$.

(c) The unit circle centered at i.

Answer: 0 (no singularity is inside this C).

(d) The circle of radius 3, centered at 0.

Answer: $2\pi i\times(1-1) = 0$ [both singularities are inside this C].

3. Identify the singularities (and find the corresponding residues) of $\frac{z^2-1}{z^2+1}$.

Solution: $\frac{z^2-1}{(z+i)(z-i)}$ has singularities at $z=-i$ (residue: $\frac{(-i)^2-1}{-i-i} = -i$) and at $z=i$ (residue: $\frac{i^2-1}{i+i} = i$).

4. Same for $\frac{z^2+1}{(4z-1)^2}$.

Solution: Writing the function as $\frac{(z^2+1)/16}{(z-\frac14)^2}$, the residue at $z=\frac14$ is $\left[\frac{z^2+1}{16}\right]'_{z=\frac14} = \frac1{32}$.

5. Same for $\frac{(z+4)^3}{z^4+5z^3+6z^2}$.

Solution: $\frac{(z+4)^3}{z^2(z+2)(z+3)}$ has singularities at $z=-2$ (residue: $\frac{(-2+4)^3}{(-2)^2(-2+3)} = 2$), at $z=-3$ (residue: $\frac{(-3+4)^3}{(-3)^2(-3+2)} = -\frac19$) and at $z=0$ (residue: $\left[\frac{(z+4)^3}{z^2+5z+6}\right]'_{z=0} = \left.\frac{3(z+4)^2(z^2+5z+6)-(2z+5)(z+4)^3}{(z^2+5z+6)^2}\right|_{z=0} = \frac{3\times4^2\times6-5\times4^3}{6^2} = -\frac89$). ■
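All of these residues can be double-checked symbolically; a brief sketch (sympy assumed), shown here for Example 5:

```python
import sympy as sp

z = sp.symbols('z')
f = (z + 4)**3 / (z**4 + 5*z**3 + 6*z**2)   # Example 5 above
for z0 in (-2, -3, 0):
    print(z0, sp.residue(f, z, z0))          # 2, -1/9, -8/9
```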
► The general case of
$$f(z) = \frac{g(z)}{h(z)}$$
where $h(z_0) = 0$. The corresponding residue (at $z=z_0$) equals
$$\lim_{z\to z_0}\frac{\left[f(z)(z-z_0)^m\right]^{(m-1)}}{(m-1)!}$$
where m is the order of the $z_0$ root. If this cannot be established in advance, one has to try $m=1$, $m=2$, $m=3$, ..., until the limit is finite (a larger value of m would still yield the correct answer, but with a lot more effort).
EXAMPLES:
Find the residue of
1. $\frac{e^{z^2}}{\cos\pi z}$ at $z=\frac12$.

Solution: $\lim_{z\to\frac12}(z-\frac12)\cdot\frac{e^{z^2}}{\cos\pi z}$ equals, by L'Hôpital's rule, $\left.\frac{e^{z^2}+(z-\frac12)\,2z\,e^{z^2}}{-\pi\sin\pi z}\right|_{z=\frac12} = -\frac{e^{1/4}}{\pi} = -0.40872$.

2. $\frac{2z+1}{(1-e^z)^2}$ at $z=0$.

Solution: This looks like a second-order singularity, so we first find
$$\left(\frac{z^2(2z+1)}{(1-e^z)^2}\right)' = 2\left(\frac z{1-e^z}\right)^2+2\cdot\frac z{1-e^z}\cdot\frac{1-e^z+ze^z}{(1-e^z)^2}\cdot(1+2z)$$
and then take the $z\to0$ limit. Using L'Hôpital's rule gives (individually) $\frac z{1-e^z}\to\frac1{-e^z}\to-1$ and $\frac{1-e^z+ze^z}{(1-e^z)^2}\to\frac{ze^z}{-2e^z(1-e^z)}\to\frac{e^z+ze^z}{-2(e^z-2e^{2z})}\to\frac12$ (the limits taken at $z=0$).

Answer: $2(-1)^2+2\times(-1)\times\frac12\times1 = 1$.

An alternate, and in many cases easier, approach is to directly expand:
$$\frac{1+2z}{(-z-\frac{z^2}2-\frac{z^3}6-\dots)^2} = (1+2z)\frac1{z^2}\left(1+\frac z2+\frac{z^2}6+\dots\right)^{-2} = (1+2z)\frac1{z^2}\left(1-z-\frac{z^2}3+\frac34z^2+\dots\right) = \frac1{z^2}+\frac1z-\frac{19}{12}+\dots$$

3. $\frac1{1-\cos z}$ at $z=0$.

Solution: Since $\cos z = 1-\frac{z^2}2+\frac{z^4}{4!}-\dots$ we can deduce that the singularity is of second order. The residue is thus $\left(\frac{z^2}{1-\cos z}\right)' = \frac{2z(1-\cos z)-z^2\sin z}{(1-\cos z)^2}$, after we take the $z\to0$ limit. This requires applying L'Hôpital's rule four times (relatively easy if we differentiate and substitute at the same time), resulting in 0 for the numerator, and $2\times\binom31\times\cos0\times\cos0 = 6$ for the denominator (we need to know only that it is non-zero).

Answer: 0.

Alternately, we expand $\left(\frac{z^2}2-\frac{z^4}{24}-\dots\right)^{-1} = \frac2{z^2}\left(1-\frac{z^2}{12}+\dots\right)^{-1} = \frac2{z^2}\left(1+\frac{z^2}{12}+\dots\right) = \frac2{z^2}+\frac16+\dots$, yielding the same result.

4. $\frac1{1+z^3}$ at $z_s = \frac12+i\frac{\sqrt3}2$.

Solution: Even though this is a rational function, we may now prefer to do $\lim_{z\to z_s}\frac{z-z_s}{1+z^3} = $ [L'Hôpital] $\left.\frac1{3z^2}\right|_{z=z_s} = -\frac{z_s}3$ [since $z_s^3 = -1$] $= -\frac16-i\frac{\sqrt3}6$.

5. $\frac1{e^z-1-z}$ at $z=0$.

Solution: Here we would rather expand: $\frac1{\frac{z^2}2+\frac{z^3}6+\dots} = \frac2{z^2}\cdot\left(1+\frac z3+\dots\right)^{-1} = \frac2{z^2}\cdot\left(1-\frac z3+\dots\right)$ ⇒

Answer: $-\frac23$ [coefficient of $\frac1z$]. ■
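The 'direct expansion' route lends itself to automation as well; in the sketch below (sympy assumed), each residue is the $\frac1z$ coefficient of the Laurent expansion at 0, reproducing Examples 2, 3 and 5:

```python
import sympy as sp

z = sp.symbols('z')
for f in ((2*z + 1) / (1 - sp.exp(z))**2,   # Example 2: residue 1
          1 / (1 - sp.cos(z)),              # Example 3: residue 0
          1 / (sp.exp(z) - 1 - z)):         # Example 5: residue -2/3
    print(sp.residue(f, z, 0))
```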
Applications
to evaluating real integrals of several special types:
Rational Functions of sin t and/or cos t

integrated over a full-period interval, i.e. from c to $c+2\pi$ where c is any real number (usually equal to 0).

We introduce $z = e^{it}$ ($\Rightarrow\sin t = \frac{z-z^{-1}}{2i}$, $\cos t = \frac{z+z^{-1}}2$ and $dt = \frac{dz}{iz}$) and integrate the corresponding complex function over $C_0$ [the counterclockwise unit circle centered at 0] via contour integration (i.e. by finding all singularities inside this circle and adding their residues, times $2\pi i$; note that the final answer must be real).
EXAMPLES:
1. $\int_0^{2\pi}\frac{dt}{5-3\cos t} = \oint_{C_0}\frac{dz}{iz\left(5-3\,\frac{z+z^{-1}}2\right)} = \oint_{C_0}\frac{dz}{i\left(5z-\frac32z^2-\frac32\right)} = -\frac2{3i}\oint_{C_0}\frac{dz}{z^2-\frac{10}3z+1} = -\frac2{3i}\oint_{C_0}\frac{dz}{(z-3)\left(z-\frac13\right)} = -\frac2{3i}\times\frac1{\frac13-3}\times2\pi i = \frac\pi2$ [the only singularity inside $C_0$ is at $z=\frac13$, $\frac1{\frac13-3}$ being the corresponding residue].

Note that $\int_0^{4\pi}\frac{dt}{5-3\cos t}$ would be simply twice as large. Similarly, $\int_0^\pi\frac{dt}{5-3\cos t}$ would yield half the value, because $\frac1{5-3\cos t}$ is an even function of t.

2. $\int_0^{2\pi}\frac{1+\sin\theta}{3+\cos\theta}\,d\theta$ [θ plays the rôle of t] $= \oint_{C_0}\frac{1+\frac{z-z^{-1}}{2i}}{3+\frac{z+z^{-1}}2}\cdot\frac{dz}{iz} = -\oint_{C_0}\frac{z^2+2iz-1}{z(z^2+6z+1)}\,dz = -\oint_{C_0}\frac{z^2+2iz-1}{z\left(z+3+\sqrt8\right)\left(z+3-\sqrt8\right)}\,dz$.

The singularities inside $C_0$ are at $z=0$ [residue: $-1$] and $z=-3+\sqrt8$ [residue: $\frac{(-3+\sqrt8)^2+2i(-3+\sqrt8)-1}{(-3+\sqrt8)\times2\sqrt8} = 1+\frac i{\sqrt8}$].

Answer: $-\frac i{\sqrt8}\times2\pi i = \frac\pi{\sqrt2}$.

3. $\int_0^\pi\frac{\sin^2t-2\cos t}{2+\cos t}\,dt = \frac12\int_0^{2\pi}\frac{\sin^2t-2\cos t}{2+\cos t}\,dt$ [even function] $= \frac12\oint_{C_0}\frac{\left(\frac{z-z^{-1}}{2i}\right)^2-2\,\frac{z+z^{-1}}2}{2+\frac{z+z^{-1}}2}\cdot\frac{dz}{iz} = -\frac1{4i}\oint_{C_0}\frac{z^4+4z^3-2z^2+4z+1}{z^2(z^2+4z+1)}\,dz$.

The singularities are at $z=0$ (residue: $\left[\frac{z^4+4z^3-2z^2+4z+1}{z^2+4z+1}\right]'_{z=0} = \frac{4\times1-1\times4}{1^2} = 0$) and at $z=-2+\sqrt3$ (residue: $\frac{z_s^4+4z_s^3-2z_s^2+4z_s+1}{z_s^2\left(z_s+2+\sqrt3\right)} = \frac{-z_s^2-2z_s^2-z_s^2}{z_s^2\times2\sqrt3} = \frac{-2}{\sqrt3}$, where $z_s\equiv-2+\sqrt3$; note that $z_s^2+4z_s\equiv-1$ and $4z_s+1\equiv-z_s^2$).

Answer: $-\frac1{4i}\times\frac{-2}{\sqrt3}\times2\pi i = \frac\pi{\sqrt3}$.

4. $\int_0^\pi\frac{dx}{a+\cos^2x}$ where $a>0$.

Solution: $\int_0^\pi\frac{dx}{a+\cos^2x} = \frac12\int_{-\pi}^\pi\frac{dx}{a+\left(\frac{e^{ix}+e^{-ix}}2\right)^2} = \frac12\oint_{C_0}\frac{\frac{dz}{iz}}{a+\frac{(z+z^{-1})^2}4} = \frac2i\oint_{C_0}\frac{z\,dz}{4az^2+(1+z^2)^2}$.

The singularities are the roots of $z^4+(2+4a)z^2+1 = 0$, namely $z^2 = -(1+2a)\pm2\sqrt{a+a^2}$ [the minus sign puts us outside the unit circle, the plus sign results in a negative value between 0 and $-1$; the proof is simple].

The roots we need are thus $z_{1,2} = \pm i\sqrt{1+2a-2\sqrt{a+a^2}}$; the corresponding factorization of the function's denominator results in
$$\frac z{(z-z_1)(z-z_2)\left(z^2+1+2a+2\sqrt{a+a^2}\right)}$$
The residues are thus $\frac{z_1}{(z_1-z_2)\cdot4\sqrt{a+a^2}}$ and $\frac{z_2}{(z_2-z_1)\cdot4\sqrt{a+a^2}}$. Their sum is $\frac1{4\sqrt{a+a^2}}$, which, multiplied by $2\pi i\times\frac2i$, equals $\frac\pi{\sqrt{a+a^2}}$. ■
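A numerical spot-check of the four answers above (a sketch, scipy assumed):

```python
import numpy as np
from scipy.integrate import quad

print(quad(lambda t: 1 / (5 - 3*np.cos(t)), 0, 2*np.pi)[0], np.pi/2)
print(quad(lambda t: (1 + np.sin(t)) / (3 + np.cos(t)), 0, 2*np.pi)[0],
      np.pi/np.sqrt(2))
print(quad(lambda t: (np.sin(t)**2 - 2*np.cos(t)) / (2 + np.cos(t)), 0, np.pi)[0],
      np.pi/np.sqrt(3))
a = 2.0   # any a > 0 will do
print(quad(lambda x: 1 / (a + np.cos(x)**2), 0, np.pi)[0], np.pi/np.sqrt(a + a**2))
```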
Rational Function of x

(the denominator must be a polynomial of degree at least two higher than that of the numerator), integrated from $-\infty$ to $\infty$.

We replace any such integral by a complex integral over z ($\equiv x$), from $-R$ to R, extended by the half circle $Re^{it}$ [$t\in(0,\pi)$] to make the contour closed. One can show that, in the $R\to\infty$ limit, the half circle's contribution tends to 0, and one thus obtains the correct answer to the original integral.

Proof: One can easily show that the magnitude of a complex integral cannot exceed the maximum magnitude of the integrand, multiplied by the length of C. In our case, the length of the half circle is $\pi R$, and the function's magnitude has an upper bound (based on the triangle inequality) equal to a polynomial in R over another polynomial of degree at least two higher. The product of the $\pi R$ length and this upper bound tends to 0 as $R\to\infty$. □

► Algorithm: To find the value of the original integral, one thus has to replace x by z in the integrand, find all its singularities in the upper half plane ($y>0$), add their residues, and multiply by $2\pi i$.

Note that the answer must be real.
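For readers who like to automate such computations, here is a sympy-based sketch of the algorithm (real_line_integral is our own illustrative helper, not a library routine); it reproduces, for instance, the $\frac\pi{\sqrt2}$ of Example 1 below.

```python
import sympy as sp

z = sp.symbols('z')

def real_line_integral(f):
    # 2*pi*i times the sum of residues at the upper-half-plane poles of f.
    poles = sp.roots(sp.denom(sp.together(f)), z)
    upper = [p for p in poles if sp.im(p) > 0]
    return sp.simplify(2 * sp.pi * sp.I * sum(sp.residue(f, z, p) for p in upper))

print(real_line_integral(1 / (1 + z**4)))      # sqrt(2)*pi/2, i.e. pi/sqrt(2)
print(real_line_integral(1 / (1 + z**2)**3))   # 3*pi/8 (over the whole real line)
```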
EXAMPLES:
Evaluate:
1. $\int_{-\infty}^\infty\frac{dx}{1+x^4}$.

Solution: $\sqrt[4]{-1} = \frac{\pm1\pm i}{\sqrt2}$ are the function's singularities, $\frac{1+i}{\sqrt2}$ and $\frac{-1+i}{\sqrt2}$ being in the upper half plane. The corresponding residues are found from $\lim_{z\to z_s}\frac{z-z_s}{1+z^4} = $ [by L'Hôpital] $\frac1{4z_s^3} \equiv -\frac{z_s}4$ [since $z_s^4\equiv-1$], where $z_s$ is either one of the two singularities. Substituting $z_s = \frac{1+i}{\sqrt2}$ this yields $-\frac{1+i}{4\sqrt2}$; substituting $z_s = \frac{-1+i}{\sqrt2}$ we get $\frac{1-i}{4\sqrt2}$.

Answer: $\left(-\frac{1+i}{4\sqrt2}+\frac{1-i}{4\sqrt2}\right)\times2\pi i = \frac\pi{\sqrt2}$.

2. $\int_{-\infty}^\infty\frac{dx}{1+x^6}$.

Solution: The relevant singularities are $\sqrt[6]{-1} = \frac{\pm\sqrt3+i}2$ and i. The corresponding residues are given by $\lim_{z\to z_s}\frac{z-z_s}{1+z^6} = \frac1{6z_s^5} = -\frac{z_s}6$ [since $z_s^6\equiv-1$]. The sum of the three residues is thus $-\frac{i+i}6 = -\frac i3$ [the real parts cancel].

Answer: $-\frac i3\times2\pi i = \frac{2\pi}3$.

3. $\int_0^\infty\frac{dx}{(1+x^2)^3}$.

Solution: This equals $\frac12\int_{-\infty}^\infty\frac{dx}{(1+x^2)^3}$ since the integrand is even. Furthermore, $\frac1{(z-i)^3(z+i)^3}$ has only one $y>0$ singularity ($z_s = i$), which is of the third order. The corresponding residue is thus $\frac12\left(\frac1{(z+i)^3}\right)''_{z=i} = \left.6(z+i)^{-5}\right|_{z=i} = \frac6{(2i)^5} = \frac3{16i}$.

Answer: $\frac12\times\frac3{16i}\times2\pi i = \frac{3\pi}{16}$ [the initial factor of $\frac12$ must not be forgotten].

4. $\int_0^\infty\frac{1+x^2}{1+x^4}\,dx$.

Solution: $= \frac12\int_{-\infty}^\infty\frac{1+x^2}{1+x^4}\,dx$ [even]. The relevant singularities are at $\frac{\pm1+i}{\sqrt2}$; the corresponding residues are given by $\lim_{z\to z_s}\frac{(1+z^2)(z-z_s)}{1+z^4} = \frac{1+z_s^2}{4z_s^3} = -\frac{(1+z_s^2)\,z_s}4$ [as $z_s^4\equiv-1$]. Their sum equals
$$-\frac{\left(1+\frac{(1+i)^2}2\right)\frac{1+i}{\sqrt2}}4-\frac{\left(1+\frac{(-1+i)^2}2\right)\frac{-1+i}{\sqrt2}}4 = -\frac{(1+i)^2}{4\sqrt2}-\frac{(1-i)(-1+i)}{4\sqrt2} = -\frac{2i}{4\sqrt2}-\frac{2i}{4\sqrt2} = \frac{-i}{\sqrt2}$$

Answer: $\frac12\cdot\frac{-i}{\sqrt2}\cdot2\pi i = \frac\pi{\sqrt2}$. ■
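The same four answers, spot-checked by ordinary numerical integration (a sketch, scipy assumed); note in particular the $\frac{3\pi}{16}$ of Example 3:

```python
import numpy as np
from scipy.integrate import quad

print(quad(lambda x: 1 / (1 + x**4), -np.inf, np.inf)[0], np.pi / np.sqrt(2))
print(quad(lambda x: 1 / (1 + x**6), -np.inf, np.inf)[0], 2 * np.pi / 3)
print(quad(lambda x: 1 / (1 + x**2)**3, 0, np.inf)[0], 3 * np.pi / 16)
print(quad(lambda x: (1 + x**2) / (1 + x**4), 0, np.inf)[0], np.pi / np.sqrt(2))
```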
We now consider integrating, from $-\infty$ to $\infty$, the same type of rational expression as in the previous case, further

Multiplied by sin kx or cos kx

Now we replace $\sin kx$ (or $\cos kx$) by $e^{ikz}$, find the value of the corresponding integral in the same manner as before, and then take the imaginary (or real) part of the answer.
EXAMPLES:
Find the value of:
1. $\int_{-\infty}^\infty\frac{\sin 2x}{x^2+x+1}\,dx$.

Solution: $= \operatorname{Im}\int_{-\infty}^\infty\frac{e^{2iz}}{z^2+z+1}\,dz$. The last integral can be evaluated by adding the $y>0$ residues of the integrand and multiplying their sum by $2\pi i$. The only contributing singularity is at $z_s = -\frac12+i\frac{\sqrt3}2$; the corresponding residue equals $\frac{e^{2iz_s}}{z_s-\bar z_s} = \frac{e^{-\sqrt3}(\cos1-i\sin1)}{i\sqrt3}$. Multiplying this by $2\pi i$ and keeping the imaginary part only results in $-\frac{2\pi}{\sqrt3}e^{-\sqrt3}\sin1 = -0.54006$.

2. Similarly, $\int_{-\infty}^\infty\frac{\cos2x}{(x^2+4)^2}\,dx = \operatorname{Re}\int_{-\infty}^\infty\frac{e^{2iz}}{(z^2+4)^2}\,dz$, with $z_s = 2i$ being (the only relevant) second-order singularity, having a residue of $\left[\frac{e^{2iz}}{(z+2i)^2}\right]'_{z=2i} = \left.\frac{2ie^{2iz}(z+2i)^2-2(z+2i)e^{2iz}}{(z+2i)^4}\right|_{z=2i} = \frac{2i(4i)^2-2(4i)}{(4i)^4}\cdot e^{-4} = -\frac{5i}{32}\cdot e^{-4}$. Multiplying this by $2\pi i$ and keeping the real part of the result yields $\frac{5\pi}{16}\cdot e^{-4} = 0.017981$. ■
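Both values can be confirmed numerically; in the sketch below (scipy assumed), quad's weight='sin'/'cos' options handle the oscillatory factor, with the infinite range truncated to [-50, 50] (the neglected tails are of order 10⁻⁴):

```python
import numpy as np
from scipy.integrate import quad

v1 = quad(lambda x: 1 / (x**2 + x + 1), -50, 50, weight='sin', wvar=2)[0]
print(v1, -2 * np.pi / np.sqrt(3) * np.exp(-np.sqrt(3)) * np.sin(1))  # approx -0.5401
v2 = quad(lambda x: 1 / (x**2 + 4)**2, -50, 50, weight='cos', wvar=2)[0]
print(v2, 5 * np.pi / 16 * np.exp(-4))                                # approx 0.017981
```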
Other Cases

EXAMPLES:

1. $I \equiv \int_{-\infty}^\infty\frac{e^{mx}}{1+e^x}\,dx$ where $0<m<1$.

Solution: We make x complex ($x\to z$) and integrate the same function over the following contour [a collection of four straight-line segments which we call $C_1$, $C_2$, $C_3$ and $C_4$]: $-R$ to R (real), R to $R+2\pi i$, $R+2\pi i$ to $-R+2\pi i$, and $-R+2\pi i$ to $-R$. One can show that, on $C_3$, the integrand equals $\frac{e^{mx+2\pi mi}}{1+e^{x+2\pi i}} \equiv e^{2\pi mi}\cdot\frac{e^{mx}}{1+e^x}$. Since $\left|\frac{e^{mz}}{1+e^z}\right|\le\frac{e^{mR}}{e^R-1}$ on $C_2$ and $\left|\frac{e^{mz}}{1+e^z}\right|\le\frac{e^{-mR}}{1-e^{-R}}$ on $C_4$, these two contributions disappear in the $R\to\infty$ limit. In the same limit, the contribution of $C_1$ yields I, and that of $C_3$ results in $-e^{2\pi mi}\cdot I$ [since we are going backwards].

The contour integral has only one singularity, at $z=\pi i$, with the residue equal to $\lim_{z\to\pi i}\left(\frac{(z-\pi i)\,e^{mz}}{1+e^z}\right) = \frac{e^{m\pi i}}{e^{\pi i}} = -e^{\pi mi}$. Its value is thus $-2\pi ie^{\pi mi}$; the value of our I must be this, divided by $1-e^{2\pi mi}$, i.e.
$$I = \frac{-2\pi ie^{\pi mi}}{1-e^{2\pi mi}} = \frac{2\pi i}{e^{\pi mi}-e^{-\pi mi}} = \frac\pi{\sin(m\pi)}$$

2. $I \equiv \int_0^\infty\frac{x^{p-1}}{1+x}\,dx$ where $0<p<1$. [Note that this integral can be converted to the previous one by an $x=e^u$ substitution, but we will pretend not to notice.]

Solution: This time we use $C_1$: $ri$ to $R+ri$ (straight line), $C_2$: $R+ri$ to $R-ri$ (nearly a full circle centered on 0), $C_3$: $R-ri$ to $-ri$ (straight line), and $C_4$: $-ri$ to $ri$ (a semicircle centered at 0). Since $\left|\frac{z^{p-1}}{1+z}\right|\le\frac{R^{p-1}}{R-1}$ on $C_2$, this contribution disappears in the $R\to\infty$ limit; since $\left|\frac{z^{p-1}}{1+z}\right|\le\frac{r^{p-1}}{1-r}$ on $C_4$, whose length is proportional to r [making the contribution of order $r^p$], ditto for the $r\to0$ limit.

Secondly, on $C_3$, $\frac{z^{p-1}}{1+z} = \frac{e^{(p-1)(\ln x+2\pi i)}}{1+x} \equiv e^{2\pi pi}\cdot\frac{x^{p-1}}{1+x}$ [in the $r\to0$ limit], so its contribution is $-e^{2\pi pi}\times I$. The only singularity inside the contour is at $z=-1$, with the residue of $(-1)^{p-1} = e^{i\pi(p-1)}$.

Answer: $I = \frac{2\pi ie^{i\pi(p-1)}}{1-e^{2\pi pi}} = \frac{-2\pi ie^{i\pi p}}{1-e^{2\pi pi}} = \frac\pi{\sin(p\pi)}$.

3. Compute $\int_0^\infty\cos(x^2)\,dx$ and $\int_0^\infty\sin(x^2)\,dx$ as the real and imaginary part of $\int_0^\infty e^{ix^2}\,dx$.

Solution: Our segments are now: $C_1$ [0 to R, a straight line], $C_2$ [$Re^{i\frac\pi4t}$ with $0\le t\le1$, an eighth of a circle], and $C_3$ [$Re^{i\pi/4}$ to 0, a straight line]. On $C_2$ [using $\sin\frac\pi2t\ge t$ on this interval]:
$$\left|\int_{C_2}e^{iz^2}\,dz\right| \le \int_{C_2}\left|e^{iz^2}\right||dz| = \int_0^1 e^{-R^2\sin\frac\pi2t}\,R\,\tfrac\pi4\,dt \le \tfrac\pi4R\int_0^1 e^{-R^2t}\,dt = \tfrac\pi4R\left[\frac{e^{-R^2t}}{-R^2}\right]_{t=0}^1 \le \tfrac\pi4\cdot\frac1R \to 0\ \text{ as }R\to\infty$$

Since our integrand has no singularities, the contributions of $C_1$ and $C_3$ must be identical, with opposite signs (to cancel out). Parametrizing $C_3$ by $z = t(1+i)$ with $0\le t<\infty$ (taking the $R\to\infty$ limit at the same time) results in $(1+i)\int_0^\infty e^{-2t^2}\,dt$ [as $z^2 = 2it^2$, and $dz = (1+i)\,dt$] $= (1+i)\cdot\sqrt{\frac\pi8}$. This implies that $\int_0^\infty\cos(x^2)\,dx = \int_0^\infty\sin(x^2)\,dx = \sqrt{\frac\pi8} = \frac{\sqrt{2\pi}}4$.
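As a final check, scipy tabulates the closely related Fresnel integrals; with the substitution $x = t\sqrt{\pi/2}$ they turn into our two integrals, and both indeed approach $\sqrt{\pi/8}$ (a sketch, scipy assumed):

```python
import numpy as np
from scipy.special import fresnel

S, C = fresnel(30.0)               # S(z), C(z) with the sin(pi*t**2/2) convention
scale = np.sqrt(np.pi / 2.0)       # substitution x = t*sqrt(pi/2)
print(scale * C, scale * S)        # both oscillate toward sqrt(pi/8)
print(np.sqrt(np.pi / 8.0))        # 0.62666...
```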