Analytic Equations

4.1 Power Series
A power series is an expression of the form
\[
a(z) = \sum_{k=0}^{\infty} a_k z^k ,
\]
where the coefficients {a_k} are prescribed complex numbers and z is a complex variable. Any such power series has a radius of convergence R. R may be zero, in which case the series fails to converge except in the uninteresting case when z = 0. If R is not zero, then the series converges for each z with |z| < R and diverges for any z with |z| > R. It is also possible to have R = ∞, meaning that the series converges for all z in the complex plane. The reasoning leading to these conclusions rests on comparison with a geometric series.
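Although the comparison argument is not reproduced in this excerpt, the radius of convergence can be read off from the coefficients by the standard Cauchy–Hadamard (root-test) formula 1/R = lim sup_k |a_k|^{1/k}. The following Python sketch is purely illustrative and not part of the original text; the coefficient sequence it uses is a made-up example.

\begin{verbatim}
# Numerical estimate of the radius of convergence via the root test
# (Cauchy-Hadamard): 1/R = limsup_k |a_k|^(1/k).
# Illustrative sketch only; the coefficients below are invented.

def radius_estimate(coeffs):
    """Estimate R from a finite list of coefficients a_0, ..., a_N."""
    roots = [abs(a) ** (1.0 / k) for k, a in enumerate(coeffs) if k > 0 and a != 0]
    tail = roots[len(roots) // 2:]          # crude stand-in for the limsup
    return 1.0 / max(tail) if tail else float("inf")

# Example: a_k = 2**k, so a(z) = 1/(1 - 2z) and R = 1/2.
print(radius_estimate([2.0 ** k for k in range(200)]))   # approximately 0.5
\end{verbatim}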
What is more, the function a is analytic for |z| < R, and its derivative is obtained by term-by-term differentiation of the series:
\[
a'(z) = \sum_{k=1}^{\infty} k a_k z^{k-1} ,
\]
and the differentiated series again has radius of convergence R.
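As a small numerical illustration (not taken from the text), term-by-term differentiation can be checked on the geometric series a(z) = Σ z^k = 1/(1 − z), whose differentiated series should sum to 1/(1 − z)^2 inside the unit disc:

\begin{verbatim}
# Check term-by-term differentiation on the geometric series a(z) = sum z**k.
# Inside |z| < 1 the differentiated series sum k*z**(k-1) equals 1/(1 - z)**2.
# Illustrative sketch only.

z = 0.3 + 0.2j                 # any point with |z| < 1
N = 200                        # truncation order

series_derivative = sum(k * z ** (k - 1) for k in range(1, N))
closed_form = 1.0 / (1.0 - z) ** 2

print(abs(series_derivative - closed_form))   # tiny truncation error
\end{verbatim}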
We shall use power series for constructing solutions. Even if the coefficients are real and we are ultimately interested in real solutions of these equations, it is useful and illuminating to allow for solutions in the form of complex functions of a complex independent variable. To emphasize this we employ standard complex notation, z for the independent variable and w for the dependent variable, and consider the differential equation
\[
Lw \equiv w^{(n)} + a_1(z)\, w^{(n-1)} + \cdots + a_{n-1}(z)\, w' + a_n(z)\, w = 0. \tag{4.4}
\]
We have written the homogeneous equation but, as usual, we shall also be
interested in solutions of the inhomogeneous equation.
The definition of linear dependence and independence is the same as it was for real equations, with only the obvious changes: we need to allow complex coefficients, and we need to consider linear dependence or independence not on an interval but on a domain in the complex plane.
As in the case of real equations, we shall find that any solution of equa-
tion (4.4) is a linear combination of a basis of n linearly independent solu-
tions.
By a shift of the independent variable (z′ = z − z_1) we may assume without loss of generality that the expansions of the coefficients may be written
\[
a_j(z) = \sum_{k=0}^{\infty} a_{jk} z^k , \qquad j = 1, 2, \ldots, n, \tag{4.5}
\]
where each of the n series converges if |z| < r. Normally we take for r the smallest of the radii of convergence of the coefficient functions.
The important features of the theory are already in evidence if n = 2,
so we now turn to this.
are linearly independent on the real line, but their Wronskian vanishes
identically there.
Definition 4.4.1 The series $\sum_{k=0}^{\infty} A_k z^k$ majorizes the series $\sum_{k=0}^{\infty} a_k z^k$ if A_k ≥ |a_k|, k = 0, 1, . . . .
It is seen from the definition that the coefficients of the majorizing series
are real and non-negative. The basic fact about two power series related
in this way is that if the majorizing series converges for |z| < r, so also
does the other series. The proof of this remark may be based on the Cauchy convergence criterion: the infinite series of complex numbers $\sum c_n$ converges if and only if, given any positive number ε, there is an integer N such that
\[
\Bigl|\, \sum_{k=m}^{n} c_k \,\Bigr| < \varepsilon \quad \text{provided } n > m \ge N.
\]
Suppose the majorizing series has radius of convergence R and let ρ be any
real number in the interval (0, R). The majorizing series then converges for
any z such that |z| ≤ ρ. For such z we have
\[
\Bigl|\, \sum_{k=m}^{n} a_k z^k \,\Bigr| \le \sum_{k=m}^{n} |a_k|\, |z|^k \le \sum_{k=m}^{n} A_k \rho^k .
\]
Given ε > 0 we can choose N so that, for values of n and m greater than N (and n > m), the last sum on the right is less than ε. The series $\sum a_k z^k$ therefore satisfies the Cauchy criterion for every z with |z| ≤ ρ, so its radius of convergence cannot be less than ρ and, since ρ is any positive number less than R, the radius of convergence cannot be less than R.
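The tail estimate above can be seen concretely in a small numerical experiment; the following sketch is an illustration only (the coefficients are invented), with A_k = |a_k| used as the simplest majorant.

\begin{verbatim}
import cmath

# Illustration of the bound |sum_{k=m..n} a_k z**k| <= sum_{k=m..n} A_k rho**k
# for |z| <= rho.  The coefficient sequence is a made-up example.

a = [cmath.exp(1j * k) / (k + 1) for k in range(60)]   # complex coefficients
A = [abs(c) for c in a]                                # a majorizing sequence

rho = 0.9
z = 0.6 + 0.6j                                         # |z| is about 0.85 <= rho
m, n = 20, 50

lhs = abs(sum(a[k] * z ** k for k in range(m, n + 1)))
rhs = sum(A[k] * rho ** k for k in range(m, n + 1))
print(lhs <= rhs, lhs, rhs)                            # True, lhs well below rhs
\end{verbatim}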
Consider now the differential equation (4.6) and suppose its coefficients
p and q have power-series expansions
\[
q(z) = \sum_{k=0}^{\infty} q_k z^k \quad \text{and} \quad p(z) = \sum_{k=0}^{\infty} p_k z^k .
\]
Denote by w_0 and w_1 the initial data for the solution w of equation (4.6), and let P(z) and Q(z) be power series that majorize p(z) and q(z), respectively.

Lemma 4.4.1 Let W_0 ≥ |w_0| and W_1 ≥ |w_1|. Then the solution W(z) of the initial-value problem
\[
W'' = P(z)\, W' + Q(z)\, W, \qquad W(0) = W_0 , \quad W'(0) = W_1
\]
majorizes the solution w(z) of the initial-value problem consisting of equation (4.6) together with the initial data w_0, w_1.
Proof: The recursion relation for the solution of the majorant initial-value
problem is the same, except for a change of sign, as that given by equation
(4.9) above. Comparison of the successive coefficients in the two cases, and
a simple induction, provides the conclusion.
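Equations (4.6) and (4.9) are not reproduced in this excerpt. Assuming (4.6) has the form w′′ + p(z)w′ + q(z)w = 0, the coefficient recursion and its majorant counterpart can be compared numerically; the following Python sketch (an illustration under that assumption, with invented data) exhibits the inequality |a_k| ≤ A_k asserted by the lemma.

\begin{verbatim}
# Sketch of the comparison behind Lemma 4.4.1, assuming equation (4.6) reads
#     w'' + p(z) w' + q(z) w = 0,
# so that matching powers of z gives
#     a_{k+2} = -( sum_{j<=k} [(k-j+1) p_j a_{k-j+1} + q_j a_{k-j}] ) / ((k+1)(k+2)),
# while the majorant problem W'' = P W' + Q W gives the same recursion with + instead of -.

def series_coeffs(pc, qc, c0, c1, N, sign):
    c = [c0, c1]
    for k in range(N):
        s = sum((k - j + 1) * pc[j] * c[k - j + 1] + qc[j] * c[k - j] for j in range(k + 1))
        c.append(sign * s / ((k + 1) * (k + 2)))
    return c

N = 30
p = q = [(-1.0) ** k for k in range(N)]    # p(z) = q(z) = 1/(1+z)  (made-up example)
P = Q = [1.0] * N                          # majorants: P_k = Q_k = 1 >= |p_k|, |q_k|

a = series_coeffs(p, q, 1.0, -1.0, N, sign=-1.0)   # original equation, w(0)=1, w'(0)=-1
A = series_coeffs(P, Q, 1.0, 1.0, N, sign=+1.0)    # majorant problem, W0 = W1 = 1

print(all(abs(a[k]) <= A[k] for k in range(len(a))))   # True
\end{verbatim}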
Suppose now that the lesser of the radii of convergence of the power
series for p and q is R, and let r be any positive number with r < R. Then
choose ρ so that r < ρ < R. There is some M > 0 such that the series $M \sum_{k=0}^{\infty} \rho^{-k} z^k$, convergent for |z| < ρ, majorizes each of the series for p and q. If the series with coefficients Mρ^{−k} majorizes that for q, so also does the series with coefficients M²(k + 1)ρ^{−k} if M ≥ 1; we are free to assume this. Then the majorizing functions are
\[
P(z) = M \sum_{k=0}^{\infty} \frac{z^k}{\rho^k} = \frac{M}{1 - z/\rho} , \qquad
Q(z) = M^2 \sum_{k=0}^{\infty} (k+1) \frac{z^k}{\rho^k} = \frac{M^2}{(1 - z/\rho)^2} .
\]
The equation
\[
W'' = \frac{M}{1 - z/\rho}\, W' + \frac{M^2}{(1 - z/\rho)^2}\, W
\]
has the solution W = A(1 − z/ρ)^α, where
\[
\alpha = \frac{1 - \rho M - \sqrt{(1 - \rho M)^2 + 4\rho^2 M^2}}{2} < 0.
\]
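As an independent check (not part of the text), one can verify with sympy that substituting W = A(1 − z/ρ)^α into this equation forces the quadratic α² + (Mρ − 1)α − M²ρ² = 0, and that the root quoted above satisfies it and is negative:

\begin{verbatim}
import sympy as sp

# Verify that W = A*(1 - z/rho)**alpha solves
#   W'' = M/(1 - z/rho) * W' + M**2/(1 - z/rho)**2 * W
# exactly when alpha**2 + (M*rho - 1)*alpha - M**2*rho**2 = 0.

z, A, M, rho, alpha = sp.symbols('z A M rho alpha')

W = A * (1 - z / rho) ** alpha
residual = sp.diff(W, z, 2) - M / (1 - z / rho) * sp.diff(W, z) \
           - M**2 / (1 - z / rho)**2 * W

# Strip the common factor A*(1 - z/rho)**(alpha - 2) and clear rho**2:
condition = sp.expand(residual * (1 - z / rho) ** (2 - alpha) * rho**2 / A)
print(condition)                 # equals alpha**2 + (M*rho - 1)*alpha - M**2*rho**2

root = (1 - rho * M - sp.sqrt((1 - rho * M)**2 + 4 * rho**2 * M**2)) / 2
print(sp.expand(condition.subs(alpha, root)))      # 0
print(root.subs({M: 2, rho: sp.Rational(1, 2)}))   # -1, i.e. negative, as claimed
\end{verbatim}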
2. Refer to Example 4.3.1 and impose instead the initial data w(0) = 1, w′(0) = −1. Identify the solution with a known, elementary function. The same with the initial data w(0) = 1, w′(0) = 0 and w(0) = 0, w′(0) = 1.
3. Prove the correctness of the relation (4.8).
4. Prove the statement in the text that the radius of convergence of the series $\sum a_k z^k$ is at least as great as that of a majorizing series $\sum A_k z^k$.
5. In Example 4.3.3 find the coefficients {q_k} and write the recursion formula (4.9) explicitly for this example. Carry out the calculation of the coefficients {a_k} of the expansion $w = \sum a_k z^k$ via this formula for the initial data w(0) = a_0 = 1, w′(0) = a_1 = 2, and obtain the function w_1(z) of Example 4.3.3 in this way.
6. Rewrite the equation of Example 4.3.3 as (1 + z)^2 w′′ − 2w = 0. Obtain
a recursion formula by substituting the series for w in this equation
instead. Solve the initial-value problem for the initial data given in
the preceding problem using the recursion formula that you obtain this
way.
7. Find the recursion formula for the coefficients of the power-series so-
lution of the equation
\[
w'' + \frac{1}{1 + z^2}\, w = 0.
\]
What is the radius of convergence for general initial conditions?
8. Find a lower bound for the radius of convergence of power-series solutions to the equation
\[
(1 + Az + Bz^2)\, w'' + (C + Dz)\, w' + E w = 0.
\]
Here A, . . . , E are real constants. Do not work out the recursion formula. Do rely on Theorem 4.4.1.
9. Put A = C = 0 in the preceding problem. Find a condition on B, D, E
for this equation to possess a polynomial solution (a non-trivial condi-
tion, please: do not set w equal to zero).
10. Find the recursion formula for power-series solutions $\sum a_k z^k$ about the origin in the form
\[
a_{k+2} = \frac{k(k+1) - \lambda}{(k+1)(k+2)}\, a_k \tag{4.11}
\]
for the equation
\[
(1 - z^2)\, w'' - 2z\, w' + \lambda w = 0,
\]
where λ is a constant. Obtain the power-series expansion under the initial data w(0) = 1, w′(0) = 0. What is its radius of convergence?

12. Suppose, as in Problem 11, that one solution u(z) of Legendre's equation is bounded at z = 1. Assuming u(1) ≠ 0, show that the second solution v(z) becomes infinite like ln(1 − z) there.
This ensures that the initial data for the majorant system appropriately
dominate those for the system (4.12). It is easily checked that with this
choice of initial data the majorizing system reduces to the solution of the one-dimensional system
\[
v' = M n \Bigl(1 - \frac{z}{\rho}\Bigr)^{-1} v, \qquad v(0) = \sum_{i=1}^{n} |W_{0i}| ,
\]
whose solution is
\[
v(z) = \Bigl(1 - \frac{z}{\rho}\Bigr)^{-Mn\rho} \sum_{i=1}^{n} |W_{0i}| .
\]
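A quick independent check (not in the text) with sympy confirms that this v(z) solves the one-dimensional system; here C stands for the constant Σ_i |W_{0i}|:

\begin{verbatim}
import sympy as sp

# Check that v(z) = C*(1 - z/rho)**(-M*n*rho) satisfies
#   v' = M*n*(1 - z/rho)**(-1) * v,   v(0) = C.

z, M, n, rho, C = sp.symbols('z M n rho C', positive=True)

v = C * (1 - z / rho) ** (-M * n * rho)
print(sp.simplify(sp.diff(v, z) - M * n * (1 - z / rho) ** (-1) * v))   # 0
print(v.subs(z, 0))                                                     # C
\end{verbatim}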
Theorem 4.5.1 In equation (4.12) suppose the entries of the matrix A have
power series expansions about the origin with least radius of convergence R.
Then, for arbitrary initial data W0 , the solution to that equation exists and
has a power-series expansion with radius of convergence not less than R.
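Equation (4.12) itself is not reproduced in this excerpt; assuming it has the form W′ = A(z)W with A(z) = Σ A_k z^k, the power-series solution of the theorem can be generated by the recursion (k + 1)W_{k+1} = Σ_{j≤k} A_j W_{k−j}. The sketch below (illustrative only, with invented data) implements that recursion.

\begin{verbatim}
import numpy as np

# Coefficient recursion for W' = A(z) W, W(0) = W0, with A(z) = sum_k A_k z**k:
#     (k + 1) W_{k+1} = sum_{j=0}^{k} A_j W_{k-j}.
# A sketch of the construction behind Theorem 4.5.1, not the text's own code.

def series_solution(A_coeffs, W0, N):
    """Return the Taylor coefficients W_0, ..., W_N of the solution."""
    W = [np.array(W0, dtype=complex)]
    for k in range(N):
        s = sum(A_coeffs[j] @ W[k - j] for j in range(min(k + 1, len(A_coeffs))))
        W.append(s / (k + 1))
    return W

# Example: a 2x2 matrix with a two-term expansion (made-up data).
A0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.1]])
coeffs = series_solution([A0, A1], W0=[1.0, 0.0], N=8)
print(coeffs[1], coeffs[2])    # the first few Taylor coefficients of the solution
\end{verbatim}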
For the inhomogeneous problem (4.17) the majorant system
\[
V' = BV + \beta , \tag{4.18}
\]
with the same choice of B as in the preceding section, will lead to the first-order equation
\[
v' = nM \Bigl(1 - \frac{z}{\rho}\Bigr)^{-1} v + M \Bigl(1 - \frac{z}{\rho}\Bigr)^{-1} .
\]
This has the solution
\[
v = \gamma \Bigl(1 - \frac{z}{\rho}\Bigr)^{-Mn\rho} - \frac{1}{n} .
\]
The initial data must again dominate those for the problem (4.17). This
can be achieved by choosing the constant γ large enough. The conclusion
is the same: any power-series solution of the system (4.17) has radius of
convergence at least equal to R.
We consider now the initial-value problem
\[
w' = f(z, w), \qquad w(0) = 0, \tag{4.22}
\]
where
\[
f(z, w) = \sum_{k,l=0}^{\infty} a_{kl}\, z^k w^l , \tag{4.23}
\]
and it is assumed that the indicated series converges for at least one value (z∗, w∗) where neither z∗ nor w∗ vanishes. From this latter requirement it follows that there exist real, positive numbers ρ, σ such that the series
\[
\sum_{k,l} |a_{kl}|\, \rho^k \sigma^l
\]
converges; in particular there is a constant M > 0 with |a_{kl}| ≤ M ρ^{−k} σ^{−l} for all k, l. We seek a solution of equation (4.22) in the form of a power series
\[
w(z) = \sum_{k=1}^{\infty} w_k z^k .
\]
The procedure for generating the coefficients in this series is the same as
that for linear equations: substitute in equation (4.22), collect the terms
of like powers of z, and set the resulting coefficient equal to zero. There
are two important rules to verify in this procedure: the first is that it de-
termines the coefficients $\{w_k\}_{k=1}^{\infty}$ recursively, i.e., w_k should depend only on w_1, w_2, . . . , w_{k−1}; the second is that the expression for w_k should be a polynomial in w_1, . . . , w_{k−1} and the coefficients {a_{ij}}, with only positive coefficients. Let's examine the first few of these expressions:
\begin{align*}
w_1 &= a_{00}, \\
w_2 &= \tfrac{1}{2}\,(a_{10} + a_{01} w_1), \\
w_3 &= \tfrac{1}{3}\,\bigl(a_{20} + a_{11} w_1 + a_{02} w_1^2 + a_{01} w_2\bigr), \\
w_4 &= \tfrac{1}{4}\,\bigl(a_{30} + a_{21} w_1 + a_{11} w_2 + a_{01} w_3 + a_{12} w_1^2 + 2 a_{02} w_1 w_2 + a_{03} w_1^3\bigr).
\end{align*}
These first four coefficients conform to the rules, and it is not difficult to verify them generally.¹

¹ A fully detailed verification can be found in Hille, chapter 2.
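The following sympy sketch (an independent check, not part of the text) carries out the substitution for the first four coefficients, assuming the equation has the form w′ = f(z, w) with w(0) = 0 as above, and confirms the displayed expression for w_4:

\begin{verbatim}
import sympy as sp

# Generate w_1, ..., w_4 by substituting w(z) = sum_{k>=1} w_k z**k into w' = f(z, w)
# with f as in (4.23), matching powers of z, and solving order by order.

z = sp.symbols('z')
N = 4
a = {(k, l): sp.Symbol(f'a{k}{l}') for k in range(N + 1) for l in range(N + 1)}
w_syms = [sp.Symbol(f'w{k}') for k in range(N + 1)]            # w_0 = 0 is not used

w = sum(w_syms[k] * z**k for k in range(1, N + 1))             # truncated ansatz
f = sum(a[k, l] * z**k * w**l for k in range(N + 1) for l in range(N + 1))

eqs = sp.Poly(sp.expand(sp.diff(w, z) - f), z).all_coeffs()[::-1]   # coeffs of z**0, z**1, ...
solution = {}
for k in range(N):
    eq = eqs[k].subs(solution)
    solution[w_syms[k + 1]] = sp.solve(eq, w_syms[k + 1])[0]

w1, w2, w3 = (solution[w_syms[k]] for k in (1, 2, 3))
w4_text = sp.Rational(1, 4) * (a[3, 0] + a[2, 1]*w1 + a[1, 1]*w2 + a[0, 1]*w3
                               + a[1, 2]*w1**2 + 2*a[0, 2]*w1*w2 + a[0, 3]*w1**3)

print(w1, sp.factor(w2))                             # w1 = a00, w2 = (a10 + a01*a00)/2
print(sp.simplify(solution[w_syms[4]] - w4_text))    # 0: matches the displayed w4
\end{verbatim}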
Now define the function
\[
F(z, W) = \sum_{k,l=0}^{\infty} M\, (z/\rho)^k (W/\sigma)^l ; \tag{4.25}
\]
the series is absolutely convergent if |z| < ρ and |W| < σ, so the function F is defined in that domain of C². Moreover, it majorizes the corresponding series for f(z, w). The initial-value problem
\[
W' = F(z, W), \qquad W(0) = 0
\]
can be solved formally in exactly the same way that the formal series for w(z) was obtained. Because of the second of the two rules noted above, it follows by induction that the resulting coefficients satisfy W_k ≥ |w_k| for every k; that is, the series for W majorizes that for w.
Summing the double geometric series (4.25) in closed form gives
\[
F(z, W) = M \Bigl(1 - \frac{z}{\rho}\Bigr)^{-1} \Bigl(1 - \frac{W}{\sigma}\Bigr)^{-1} .
\]
The majorant equation is therefore separable: integrating (1 − W/σ) dW = M(1 − z/ρ)^{−1} dz gives
\[
W - \frac{W^2}{2\sigma} = -M\rho \ln\Bigl(1 - \frac{z}{\rho}\Bigr),
\]
where we have used the initial condition to evaluate the constant term. A single-valued determination of the logarithm is obtained by introducing a branch cut along the real axis from z = ρ to infinity. Solving this quadratic for W gives
\[
W = \sigma \pm \bigl(\sigma^2 + 2M\sigma\rho \ln(1 - z/\rho)\bigr)^{1/2} .
\]
With a determination of the square-root function that makes it positive
when its argument is real and positive, we need to choose the minus sign in
order to satisfy the initial condition:
\[
W = \sigma - \bigl(\sigma^2 + 2M\sigma\rho \ln(1 - z/\rho)\bigr)^{1/2} .
\]
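Assuming, as above, that the majorant problem is W′ = F(z, W) with W(0) = 0, the chosen branch can be checked directly with sympy (an independent verification, not from the text):

\begin{verbatim}
import sympy as sp

# Check that W = sigma - sqrt(sigma**2 + 2*M*sigma*rho*log(1 - z/rho)) satisfies
#   W' = F(z, W) = M*(1 - z/rho)**(-1)*(1 - W/sigma)**(-1),   W(0) = 0.

z = sp.symbols('z')
M, rho, sigma = sp.symbols('M rho sigma', positive=True)

W = sigma - sp.sqrt(sigma**2 + 2 * M * sigma * rho * sp.log(1 - z / rho))
F = M * (1 - z / rho) ** (-1) * (1 - W / sigma) ** (-1)

print(sp.simplify(sp.diff(W, z) - F))   # 0
print(W.subs(z, 0))                     # 0
\end{verbatim}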
For |z| < ρ this function is analytic as long as the argument of the square
root does not lie on the negative, real axis. The smallest value of z for which
this fails occurs with z real, say z = r where
\[
\sigma + 2M\rho \ln(1 - r/\rho) = 0,
\]
or
\[
r = \rho\,\bigl(1 - e^{-\sigma/(2M\rho)}\bigr). \tag{4.27}
\]
The majorizing series therefore converges with radius of convergence r. This proves that the series for w(z) converges at least for |z| < r, so that the initial-value problem has a solution analytic in that disc.
In the nonlinear case we can no longer expect the maximal result that was found in the linear case. In fact, the application of this estimate includes the linear case, and it is not hard to see that it provides an unnecessarily conservative estimate in this case.
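For concreteness (sample numbers, not from the text), the bound (4.27) is easy to evaluate; note that it always stays strictly below ρ, which is why it is conservative when applied to a linear equation:

\begin{verbatim}
import math

# Evaluate the lower bound (4.27): r = rho*(1 - exp(-sigma/(2*M*rho))).
# The parameter values below are made-up sample numbers.

def radius_lower_bound(M, rho, sigma):
    return rho * (1.0 - math.exp(-sigma / (2.0 * M * rho)))

print(radius_lower_bound(M=1.0, rho=1.0, sigma=1.0))   # about 0.393, well below rho = 1
\end{verbatim}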
This technique can be extended to prove the existence of analytic solu-
tions to a system of n analytic equations: if
\[
w_k' = f_k(z, w_1, w_2, \ldots, w_n), \qquad w_k(z_0) = a_k , \quad k = 1, 2, \ldots, n,
\]
3. For any autonomous equation, i.e., one of the form w′ = f(w), provide
an estimate for the radius of convergence of power-series solutions that
is independent of the parameter ρ.