Lecture Note
where p(x) and r(x) are defined and continuous in the interval a ≤ x ≤ b. Here
f(x, y) = −p(x)y + r(x). If L = max_{a≤x≤b} |p(x)|, then
Clearly f does not satisfy a Lipschitz condition near the origin. But it still has a unique
solution. Can you prove this? [Hint: Let y1 and y2 be two solutions and consider
z(x) = (y1(x) − y2(x))^2.]
Comment: The existence and uniqueness theorems are also valid for certain systems
of first order equations. These theorems are also applicable to certain higher order
ODEs, since a higher order ODE can be reduced to a system of first order ODEs.
y′ = xy − sin y, y(0) = 2.
Here f and ∂f/∂y are continuous in a closed rectangle about x0 = 0 and y0 = 2. Hence,
there exists a unique solution in a neighbourhood of (0, 2).
y′ = 1 + y^2, y(0) = 0.
y′ = x|y|, y(1) = 0.
Since f is continuous and satisfies a Lipschitz condition in a neighbourhood of (1, 0),
the IVP has a unique solution around x = 1.
y′ = y^{1/3} + x, y(1) = 0
Now
    |f(x, y1) − f(x, y2)| = |y1^{1/3} − y2^{1/3}| = |y1 − y2| / |y1^{2/3} + y1^{1/3} y2^{1/3} + y2^{2/3}|
Suppose we take y2 = 0. Then
    |f(x, y1) − f(x, 0)| = |y1 − 0| / |y1|^{2/3}.
Now we can take y1 very close to zero; then 1/|y1|^{2/3} becomes unbounded. Hence, the
relation
    |f(x, y1) − f(x, 0)| ≤ L|y1 − 0|
does not hold in any region about (1, 0).
Since f does not satisfy a Lipschitz condition, we cannot say whether a unique solution
exists or not (remember that the existence and uniqueness conditions are sufficient
but not necessary).
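This blow-up of the difference quotient is easy to see numerically. A short sketch (our addition, not part of the original notes):

```python
# Illustration: for f(x, y) = y**(1/3) + x, the ratio
# |f(x, y) - f(x, 0)| / |y - 0| equals |y|**(-2/3), which blows up as
# y -> 0, so no Lipschitz constant L can work near (1, 0).
ratios = [abs(y) ** (1.0 / 3.0) / abs(y) for y in (1e-3, 1e-6, 1e-9)]
assert ratios == sorted(ratios)   # the ratio grows as y shrinks toward 0
assert ratios[-1] > 1e5           # already larger than 10^5 at y = 1e-9
```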
On the other hand,
    y′ = y^{1/3} + x, y(1) = 1
has a unique solution around (1, 1), since f and ∂f/∂y are continuous near (1, 1).
Example 5. Discuss the existence and uniqueness of solutions for the IVP
    y′ = 2y/x, y(x0) = y0
Solution: Here f(x, y) = 2y/x and ∂f/∂y = 2/x. Clearly both of these exist and are
bounded around (x0, y0) if x0 ≠ 0. Hence, a unique solution exists in an interval about x0
for x0 ≠ 0.
For x0 = 0, nothing can be said from the existence and uniqueness theorem. Fortunately,
we can solve the actual problem and find y = Ax^2 to be the general solution. When
x0 = 0, there exists no solution if y0 ≠ 0. If y0 = 0, then we have infinitely many
solutions y = αx^2 (α any real number) that satisfy the IVP y′ = 2y/x, y(0) = 0.
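The non-uniqueness at x0 = 0 can be checked directly; a small sketch (our addition; the α values are arbitrary):

```python
# Verify that y = alpha*x^2 solves y' = 2y/x with y(0) = 0 for any alpha,
# so the IVP with x0 = 0, y0 = 0 has infinitely many solutions.

def y(alpha, x):
    return alpha * x * x

def dy(alpha, x):          # exact derivative of alpha*x^2
    return 2.0 * alpha * x

for alpha in (-3.0, 0.0, 0.5, 7.0):
    assert y(alpha, 0.0) == 0.0                  # initial condition holds
    for x in (-2.0, -0.1, 0.1, 2.0):             # the ODE holds for x != 0
        assert abs(dy(alpha, x) - 2.0 * y(alpha, x) / x) < 1e-12
```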
2 Picard iteration for IVP
This method gives approximate solutions to the IVP (1). Note that the IVP (1) is
equivalent to the integral equation
    y(x) = y0 + ∫_{x0}^{x} f(t, y(t)) dt    (3)
A rough approximation to the solution y(x) is given by the constant function y0(x) = y0,
which is simply a horizontal line through (x0, y0) (don't confuse the function y0(x) with
the constant y0). We insert this into the RHS of (3) in order to obtain a (perhaps) better
approximate solution, say y1(x). Thus,
    y1(x) = y0 + ∫_{x0}^{x} f(t, y0(t)) dt = y0 + ∫_{x0}^{x} f(t, y0) dt
The next step is to use this y1(x) to generate another (perhaps even better) approximate solution y2(x):
    y2(x) = y0 + ∫_{x0}^{x} f(t, y1(t)) dt
At the n-th stage we find
    yn(x) = y0 + ∫_{x0}^{x} f(t, yn−1(t)) dt
Theorem 3. If the function f(x, y) satisfies the existence and uniqueness theorem for
the IVP (1), then the successive approximations yn(x) converge to the unique solution y(x)
of the IVP (1).
Example 6. Apply Picard iteration for the IVP
    y′ = 2x(1 − y), y(0) = 2.
Solution: Here y0(x) = 2. Now
    y1(x) = 2 + ∫_{0}^{x} 2t(1 − 2) dt = 2 − x^2
    y2(x) = 2 + ∫_{0}^{x} 2t(t^2 − 1) dt = 2 − x^2 + x^4/2
    y3(x) = 2 + ∫_{0}^{x} 2t(t^2 − t^4/2 − 1) dt = 2 − x^2 + x^4/2 − x^6/3!
    y4(x) = 2 + ∫_{0}^{x} 2t(t^6/3! − t^4/2 + t^2 − 1) dt = 2 − x^2 + x^4/2 − x^6/3! + x^8/4!
By induction, it can be shown that
    yn(x) = 2 − x^2 + x^4/2! − x^6/3! + ··· + (−1)^n x^{2n}/n!
Hence, yn(x) → 1 + e^{−x^2} as n → ∞. Now y(x) = 1 + e^{−x^2} is the exact solution of the
given IVP. Thus, the Picard iterates converge to the unique solution of the given IVP.
Comment: Picard iteration has more theoretical value than practical value. It is
used in the proof of the existence and uniqueness theorem. On the other hand, finding
approximate solutions by this method is almost impractical for a complicated function
f(x, y).
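Even so, for a polynomial right-hand side the iteration is easy to mechanize. This sketch (our addition) reruns Example 6 above, storing each iterate as a list of polynomial coefficients:

```python
import math

# Picard iteration for y' = 2x(1 - y), y(0) = 2 (Example 6), with each
# iterate stored as a coefficient list [c0, c1, c2, ...] meaning
# c0 + c1*x + c2*x^2 + ...

def picard_step(p):
    """One update: y_new(x) = 2 + integral_0^x 2t*(1 - p(t)) dt."""
    q = [-c for c in p]                      # -p(t)
    q[0] += 1.0                              # 1 - p(t)
    r = [0.0] + [2.0 * c for c in q]         # 2t*(1 - p(t))
    # term-by-term integration from 0 to x, then add y0 = 2
    return [2.0] + [c / (k + 1) for k, c in enumerate(r)]

def poly_eval(p, x):
    return sum(c * x ** k for k, c in enumerate(p))

y = [2.0]                                    # y_0(x) = 2
for _ in range(10):
    y = picard_step(y)

# the iterates approach the exact solution 1 + exp(-x^2)
assert abs(poly_eval(y, 1.0) - (1.0 + math.exp(-1.0))) < 1e-6
```

The first step reproduces y1(x) = 2 − x^2 from the worked example.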
Lemma 3.1.
Let x ↦ φ(x) be a function with continuous derivative, defined in the interval Ih(x0) = [x0 − h, x0 + h], with values in [y0 − b, y0 + b].
Then φ satisfies the initial value problem
    φ′(x) = F(x, φ(x)),  φ(x0) = y0    (3.1)
if and only if φ satisfies the integral equation
    φ(x) = y0 + ∫_{x0}^{x} F(t, φ(t)) dt.
Proof. Let us first assume that φ is differentiable (and φ′ continuous) so that φ satisfies (3.1). Then
we integrate and deduce that for |x − x0 | ≤ h
    ∫_{x0}^{x} φ′(t) dt = ∫_{x0}^{x} F(t, φ(t)) dt;
however the left hand side is equal to φ(x) − φ(x0 ) by the fundamental theorem of calculus. Thus,
since φ(x0 ) = y0 we get
    φ(x) − y0 = ∫_{x0}^{x} F(t, φ(t)) dt
IV. Proof of the uniqueness part of the theorem. Here we show that the problem (3.1) (and
thus (1.1)) has at most one solution (we have not yet proved that it has a solution at all).
Let Φ, Ψ be two functions with values in [y0 − b, y0 + b] satisfying the integral equation (3.1),
thus
    Φ(x) = y0 + ∫_{x0}^{x} F(t, Φ(t)) dt,
    Ψ(x) = y0 + ∫_{x0}^{x} F(t, Ψ(t)) dt,
say for x ∈ [x0 − γ, x0 + γ].
We wish to establish that Φ(x) = Ψ(x) for x in Ih(x0). We shall show that Φ(x) = Ψ(x) for x in
[x0, x0 + γ]; an analogous argument shows Φ(x) = Ψ(x) for x ∈ [x0 − γ, x0]. (Carry out this
modification of the argument yourself!)
where for the last inequality we used the mean value theorem for derivatives.2 Indeed
    F(t, Φ(t)) − F(t, Ψ(t)) = ∂F/∂y (t, ξ) (Φ(t) − Ψ(t))
where ξ is between Φ(t) and Ψ(t), and if we use the bound (1.4), we obtain
Then clearly
Now (4.1) is rewritten as U′(x) ≤ K U(x), or equivalently U′(x) − K U(x) ≤ 0, which is also
equivalent to [U′(x) − K U(x)] e^{−K(x−x0)} ≤ 0. But the last inequality just says that
    d/dx [U(x) e^{−K(x−x0)}] ≤ 0
for x0 ≤ x ≤ x0 + h.
Integrating from x0 to x yields
    U(x) e^{−K(x−x0)} − U(x0) = ∫_{x0}^{x} d/dt [U(t) e^{−K(t−x0)}] dt ≤ 0
But the integrand is continuous and one can show that therefore Φ(x) = Ψ(x) (for x ≥ x0 ).
A similar argument shows that also Φ(x) = Ψ(x) for x ≤ x0 .
2 The mean value theorem for derivatives says that for a differentiable function g we have g(s1) − g(s2) = g′(ξ)(s1 − s2) where ξ lies between s1 and s2. We apply this to g(s) = F(t, s) (for fixed t).
V. Existence of the solution via iterations. Let g be a continuous function in [x0 − h, x0 + h].
Define the function T g by
    (5.1)    T g(x) = y0 + ∫_{x0}^{x} F(t, g(t)) dt.
Proof. It is here where the crucial condition h ≤ b/M is used. If |g(x) − y0| ≤ b and |x − x0| ≤ h < min(a, b/M), then for x0 ≤ x
    |T g(x) − y0| ≤ |∫_{x0}^{x} F(t, g(t)) dt|
                 ≤ ∫_{x0}^{x} |F(t, g(t))| dt
                 ≤ ∫_{x0}^{x} M dt = M|x − x0| ≤ Mh ≤ b,
and a similar argument goes for x ≤ x0 (just write ∫_{x}^{x0} |···| dt instead of ∫_{x0}^{x} |···| dt in this case).
    (5.4)    |Φn+1(x) − Φn(x)| ≤ M0 K^n |x − x0|^{n+1}/(n+1)!    for all x ∈ [x0 − h, x0 + h],
One can deduce that the sequence Φn converges to a limit Φ which is a solution of (3.1) and
therefore a solution of the initial value problem (1.1). The precise information is contained in
Lemma 5.3. The sequence Φn defined in (5.3) converges to a limit function Φ, for all |x − x0 | ≤ h.
That is, Φ is defined in [x0 − h, x0 + h] and, moreover, Φ has a continuous derivative and satisfies
the initial value problem (1.1). The following error estimate is true for |x − x0 | ≤ h:
    (5.5)    |Φ(x) − Φn(x)| ≤ M0 K^n (|x − x0|^{n+1}/(n+1)!) e^{K|x−x0|}.
    (∗n−1)    |Φn(x) − Φn−1(x)| ≤ M0 K^{n−1} |x − x0|^n/n!
implies the inequality
    (∗n)    |Φn+1(x) − Φn(x)| ≤ M0 K^n |x − x0|^{n+1}/(n+1)!.
3 Carry this out yourselves. If x < x0 write ∫_{x}^{x0} |···| instead of ∫_{x0}^{x} |···| in the argument below, ...
We write
    |Φn+1(x) − Φn(x)| = |T Φn(x) − T Φn−1(x)| = |∫_{x0}^{x} [F(t, Φn(t)) − F(t, Φn−1(t))] dt|.
By the mean value theorem of differential calculus (with respect to the y-variable, as in §III),
    F(t, Φn(t)) − F(t, Φn−1(t)) = ∂F/∂y (t, s) [Φn(t) − Φn−1(t)]
where s is some value between Φn(t) and Φn−1(t). Therefore
    |F(t, Φn(t)) − F(t, Φn−1(t))| ≤ K |Φn(t) − Φn−1(t)|,
but since we assume the validity of (∗n−1 ) we have
    |Φn(t) − Φn−1(t)| ≤ M0 K^{n−1} |t − x0|^n/n!.
Thus we get
    |∫_{x0}^{x} [F(t, Φn(t)) − F(t, Φn−1(t))] dt|
    ≤ ∫_{x0}^{x} |F(t, Φn(t)) − F(t, Φn−1(t))| dt
    ≤ K ∫_{x0}^{x} M0 K^{n−1} (t − x0)^n/n! dt = M0 K^{n−1} K ∫_{x0}^{x} (t − x0)^n/n! dt
    = M0 K^n (x − x0)^{n+1}/(n+1)!.
Putting these inequalities together we get (∗n), which we wanted to verify.
VII. Discussion of Lemma 5.3. The proof here has to be somewhat incomplete as we have to
use various results from advanced calculus to justify some convergence results. However you can
still understand how the argument goes if you take those results for granted.
If m > n then
|Φm (x) − Φn (x)|
= |Φm (x) − Φm−1 (x) + Φm−1 (x) − Φm−2 (x) · · · + Φn+1 (x) − Φn (x)|
≤ |Φm (x) − Φm−1 (x)| + |Φm−1 (x) − Φm−2 (x)| + · · · + |Φn+1 (x) − Φn (x)|
and from Lemma 5.2 we get
    |Φm(x) − Φn(x)|
    ≤ M0 K^{m−1} |x − x0|^m/m! + M0 K^{m−2} |x − x0|^{m−1}/(m−1)! + ··· + M0 K^n |x − x0|^{n+1}/(n+1)!
    = M0 K^n (|x − x0|^{n+1}/(n+1)!) [1 + K|x − x0|/(n+2) + K^2 |x − x0|^2/((n+2)(n+3)) + ··· + K^{m−n−1} |x − x0|^{m−n−1}/((n+2)(n+3)···m)]
    ≤ M0 K^n (|x − x0|^{n+1}/(n+1)!) [1 + K|x − x0|/1! + K^2 |x − x0|^2/2! + K^3 |x − x0|^3/3! + ···]
    = M0 K^n (|x − x0|^{n+1}/(n+1)!) e^{K|x−x0|}.    (7.1)
4 In other words, assuming (∗n−1) we have to show that (∗n) holds too, and this implication has to be valid for all n.
The last formula follows from the power series expansion of the exponential function. By a theorem
from advanced calculus the estimate (7.1) shows the convergence of Φn to a limiting function Φ, in
fact we get
    max_{|x−x0|≤h} |Φm(x) − Φ(x)| → 0 as m → ∞.
Moreover, letting m → ∞ in (7.1) we obtain
    |Φ(x) − Φn(x)| ≤ M0 K^n (|x − x0|^{n+1}/(n+1)!) e^{K|x−x0|}.
Therefore, by Lemma 3.1, Φ is also differentiable with continuous derivative and solves (3.1) which
is our original initial value problem.
Exercise: For the example in §II above, discuss the error estimate in Lemma 5.3. Give some
explicit bounds for the maximal error in the interval [−2.001, −1.999].
More examples.
(i) Consider the initial value problem y′ = 2 sin(3xy), y(x0) = y0. We use our theorem to show that this problem has a unique solution in (−∞, ∞). To do this it
suffices to show that it has a unique solution on every interval [−L, L].
This is because if we have a unique solution on the interval [−L1, L1] and a unique solution on
the interval [−L2, L2] with L2 > L1, then by the uniqueness part the two solutions have to agree on the
smaller interval [−L1, L1].
Now fix L. Define R = {(x, y) : −L ≤ x ≤ L, y0 − b ≤ y ≤ y0 + b} for very large b. Note that
the function F defined by F(x, y) = 2 sin(3xy) satisfies |F(x, y)| ≤ 2 and |∂F/∂y| ≤ 6L for (x, y) in
R; in particular observe that these bounds are independent of b. By the existence and uniqueness
theorem there is a unique solution for the problem on the interval [−h, h] where h = min{L, b/2}.
Since our bounds are independent of b we may choose b large; in particular we may choose b larger
than 2L, so that h = L. Thus we get a unique solution on [−L, L].
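These two bounds can be spot-checked numerically. A small sketch (our addition; the value L = 4 and the sample points are arbitrary):

```python
import math

# Spot-check the bounds used above for F(x, y) = 2 sin(3xy) on |x| <= L:
# |F(x, y)| <= 2 and |dF/dy| = |6x cos(3xy)| <= 6L, uniformly in y.
L = 4.0
for x in (-L, -1.0, 0.0, 2.5, L):
    for y in (-100.0, -1.0, 0.0, 3.0, 1e4):
        assert abs(2.0 * math.sin(3.0 * x * y)) <= 2.0
        assert abs(6.0 * x * math.cos(3.0 * x * y)) <= 6.0 * L
```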
(ii). The next example is really a counterexample to show what happens if a hypothesis in Theorem
1 does not hold. Consider the problem
    y′(x) = √|y(x)|,  y(0) = 0.
Clearly y(x) ≡ 0 is a solution. Another solution is given by
    Y(x) = x^2/4 if x > 0,  Y(x) = 0 if x ≤ 0.
(Check that this is indeed a function which satisfies the equation and initial value condition!).
Yet another solution of the initial value problem is
    Ỹ(x) = x^2/4 if x > 0,  Ỹ(x) = −x^2/4 if x ≤ 0.
So, clearly, uniqueness does not hold. Which hypothesis of Theorem 1 is not satisfied?
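The claimed solutions can be verified pointwise; here is a short sketch (our addition) checking the initial condition and the equation at sample points:

```python
import math

# Verify at sample points that Y(x) = x^2/4 (x > 0), Y(x) = 0 (x <= 0)
# satisfies y' = sqrt(|y|) with y(0) = 0, alongside the zero solution,
# so uniqueness fails.

def Y(x):
    return x * x / 4.0 if x > 0 else 0.0

def dY(x):                 # exact derivative of Y
    return x / 2.0 if x > 0 else 0.0

assert Y(0.0) == 0.0       # initial condition
for x in (0.5, 1.0, 3.0):  # x > 0: Y' = x/2 = sqrt(x^2/4)
    assert abs(dY(x) - math.sqrt(abs(Y(x)))) < 1e-12
for x in (-2.0, -0.5):     # x <= 0: both sides vanish
    assert dY(x) == 0.0 == math.sqrt(abs(Y(x)))
```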
(iii). If instead you consider the initial value problem
    y′(x) = √|y(x)|,  y(0) = y0,
with y0 ≠ 0, then you can apply Theorem 1. Show from Theorem 1 that there is a unique solution in
some interval containing 0. For this problem also find the explicit solution, by the usual methods
for first order separable equations.
(iv). Consider the initial value problem
    y′(x) = 1 + B^2 y(x)^2,  y(0) = 0.
Here B is a parameter and we are interested in what happens when B gets large.
Use Theorem 1 to show that there is an interval containing 0 where the problem has a solution.
What is the length of this interval (predicted by your application of Theorem 1)?
Then find the explicit solution and determine the maximal interval containing 0 for which a
solution exists.
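As a check on your answer (this anticipates the exercise), separation of variables gives y(x) = tan(Bx)/B on the maximal interval (−π/(2B), π/(2B)). A sketch (our addition) verifying the ODE and illustrating how the interval shrinks like 1/B:

```python
import math

# y' = 1 + B^2 y^2, y(0) = 0 has solution y(x) = tan(Bx)/B on
# (-pi/(2B), pi/(2B)); the solution blows up at the endpoints.

def y(B, x):
    return math.tan(B * x) / B

def dy(B, x):              # exact derivative: sec^2(Bx)
    return 1.0 / math.cos(B * x) ** 2

for B in (1.0, 5.0, 20.0):
    assert y(B, 0.0) == 0.0
    half = math.pi / (2.0 * B)   # half-width of the maximal interval
    for x in (-0.9 * half, 0.3 * half, 0.9 * half):
        # check the ODE via the identity sec^2 = 1 + tan^2
        assert abs(dy(B, x) - (1.0 + B ** 2 * y(B, x) ** 2)) < 1e-6
```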
VIII. A global variant of the existence and uniqueness theorem.
We consider again an initial value problem
    y′(x) = F(x, y(x)),  y(x0) = y0,    (8.1)
but now assume that F is a function of (x, y) defined in an entire strip
    (8.2)    S = {(x, y) : x0 − a ≤ x ≤ x0 + a, −∞ < y < ∞},
and we assume that F is continuous and has a continuous and bounded y-derivative ∂F/∂y in S. In
particular we assume that
    (8.3)    |∂F/∂y (x, y)| ≤ K    for all (x, y) ∈ S.
Theorem 8.1. Suppose that F satisfies the assumptions above. Then there is a unique function
x 7→ y(x), defined in [x0 − a, x0 + a] with continuous first derivative, such that
    y′(x) = F(x, y(x)),  y(x0) = y0
The conclusion is somewhat stronger than in Theorem 1, since we get a unique solution on the
full interval [x0 − a, x0 + a]; moreover there is no analogue of the assumption (1.3). However the
assumption on ∂F/∂y is much more restrictive, as boundedness is required on the entire infinite strip (such
an assumption fails for F(x, y) = y^2, for example).
The assumption (1.3) on the size of F is not needed since the iteration in section V above always
makes sense (as F is defined on the entire strip). For the proof of the convergence result we need
only the boundedness of F at height y0 (i.e. an inequality |F (t, y0 )| ≤ M0 which holds if the function
F is continuous) and, most importantly, the bound for ∂F/∂y on the entire strip (i.e. (8.3)).
Examples: (i) The example in (7.2) is also valid here, that is, Theorem 8.1 can be applied.
Check this.
(ii) Another example is
    y′(x) = y(x)^3/(1 + x^2 + y(x)^2),  y(0) = y0,
which has a unique solution on every interval [−L, L]. Check the hypothesis of Theorem 8.1 for the
function F(x, y) = y^3 (1 + x^2 + y^2)^{−1}.
IX. A global variant of the existence and uniqueness theorem for systems.
It is very useful to have a variant of Theorem 8.1 for systems. The proof requires perhaps more
mathematical maturity but is not really harder, and the result is very useful.
Let’s formulate this result first for systems with two equations.
We now consider an initial value problem for two unknown functions y1, y2:
    y1′(x) = F1(x, y1(x), y2(x)),
    y2′(x) = F2(x, y1(x), y2(x)),    (9.1)
    y1(x0) = y1,0,  y2(x0) = y2,0,
and we assume that F1 and F2 are continuous and have continuous and bounded y1- and y2-derivatives
∂Fi/∂y1, ∂Fi/∂y2 in S; in particular we assume that for all (x, y1, y2) ∈ S
    (9.3)    |∂Fi/∂y1 (x, y1, y2)| ≤ K,  |∂Fi/∂y2 (x, y1, y2)| ≤ K    (i = 1, 2).
Theorem 9.1. Suppose that F1 and F2 satisfy the assumptions above. Then there is a unique pair of
functions y1, y2 defined in [x0 − a, x0 + a] with continuous first derivatives, such that (9.1) holds for
all x in [x0 − a, x0 + a].
The proof of Theorem 9.1 is very similar to the proofs of Theorems 1 and 8.1. We just write out
the iteration procedure. This time we have to simultaneously compute approximations for y1 and
y2, and we let Φn,1 and Φn,2 be the n-th iterates.
Then we set
Extensions to larger systems. The extension to systems with m equations and m unknown
functions is also possible. Then we are working with m functions F1 , . . . , Fm of the variables x and
y1 , . . . , ym and we are trying to determine unknown functions y1 , . . . , ym (x) so that
    y1′(x) = F1(x, y1(x), y2(x), . . . , ym(x)),
    y2′(x) = F2(x, y1(x), y2(x), . . . , ym(x)),    (9.4)
    ...
    ym′(x) = Fm(x, y1(x), y2(x), . . . , ym(x)),
with an initial condition for each function
Here we assume, besides the continuity of the Fi, the boundedness of all partial y-derivatives, i.e.
the boundedness of all functions ∂Fi/∂yk for i = 1, . . . , m and k = 1, . . . , m; all of this is supposed
to hold in the region where x0 − a ≤ x ≤ x0 + a and −∞ < yi < ∞, i = 1, . . . , m. The straightforward
generalization of Theorem 9.1 to systems with m equations holds true:
Theorem 9.2. Under the assumptions just stated the problem (9.4), (9.5) has a unique solution
(y1 (x), y2 (x), . . . , ym (x)) in [x0 − a, x0 + a].
Linear systems.
What will be important in our class is the example of linear systems
    y1′(x) = a11(x) y1(x) + a12(x) y2(x) + ··· + a1m(x) ym(x) + g1(x),
    y2′(x) = a21(x) y1(x) + a22(x) y2(x) + ··· + a2m(x) ym(x) + g2(x),    (9.6)
    ...
    ym′(x) = am1(x) y1(x) + am2(x) y2(x) + ··· + amm(x) ym(x) + gm(x),
which we consider subject to the initial conditions (9.7).
Theorem 9.3. Suppose the coefficient functions aij and the functions gi are continuous on the
interval [x0 − a, x0 + a]. Then the problem (9.6), (9.7) has a unique solution (y1(x), y2(x), . . . , ym(x))
in [x0 − a, x0 + a].
The coefficient functions aij and also the gi are bounded on the interval [x0 − a, x0 + a], and
∂Fi/∂yj (x, y) = aij(x).
We shall see later in class that the problem of solving equations with higher derivatives is
equivalent to a problem of solving first order equations (see the following section).
(10.1) y (m) (x) + am−1 (x)y (m−1) (x) + · · · + a1 (x)y ′ (x) + a0 (x)y(x) = g(x)
and we assume that the functions a0, . . . , am−1 and g are continuous on the interval
[x0 − a, x0 + a].
Theorem 10.1. There is exactly one function y which has continuous derivatives up to order m in
[x0 − a, x0 + a] so that (10.1) and (10.2) are satisfied.
This follows from Theorem 9.3 and the following Lemma which is not hard to check.
Lemma 10.2.
(i) Suppose that y solves (10.1) and (10.2). Then set y1(x) = y(x), y2(x) = y′(x), . . . , ym(x) =
y^{(m−1)}(x). Then the functions y1, . . . , ym satisfy the first order system
    y1′(x) = y2(x)
    y2′(x) = y3(x)
    ...    (10.3)
    y′_{m−1}(x) = ym(x)
    ym′(x) = −a0(x) y1(x) − a1(x) y2(x) − ··· − am−1(x) ym(x) + g(x)
In other words the problem of solving the initial value problem for the higher order equation
(10.1), (10.2) is equivalent to solving the initial value problem for the first order system (10.3),
(10.4).
We now consider how the solution of the differential equation dy/dx = f (x, y)
depends upon a slight change in the initial conditions or upon a slight change in the
function f. We will show that under suitable restrictions such slight changes would
cause only slight changes in the solution.
Suppose that the initial value problem
    dy/dx = f(x, y),  y(x0) = y0
has a unique solution φ defined on some sufficiently small interval |x – x0| ≤ h0. Now
suppose the initial y value is changed from y0 to Y0. If Y0 is such that |Y0 – y0| is
sufficiently small, then we can be certain that the new initial value problem
    dy/dx = f(x, y),  y(x0) = Y0    (1)
also has a unique solution on some sufficiently small interval |x – x0| ≤ h1.
In fact, let the rectangle R: |x − x0| ≤ a, |y − y0| ≤ b, lie in D and let Y0 be such
that |Y0 − y0| ≤ b/2. Then by the existence and uniqueness theorem, this problem (1) has a
unique solution ψ which is defined and contained in R for |x − x0| ≤ h1, where h1 =
min(a, b/2M) and M = max |f(x, y)| for (x, y) ∈ R. Thus we may assume that there
exists δ > 0 and h > 0 such that for each Y0 satisfying |Y0 − y0| ≤ δ, problem (1)
possesses a unique solution φ(x, Y0) on |x − x0| ≤ h (see Figure).
Figure
We are now in a position to state the basic theorem concerning the dependence
of solutions on initial conditions.
Theorem 4.4
Hypothesis
2. Assume there exists δ > 0 and h > 0 such that for each Y0 satisfying |Y0 − y0|
≤ δ, the initial value problem
    dy/dx = f(x, y),  y(x0) = Y0    (1)
Conclusion
If φ denotes the unique solution of (1) when Y0 = y0, and φ̄ denotes the unique solution of (1)
for a Y0 with |Y0 − y0| = δ1 ≤ δ, then
    |φ(x) − φ̄(x)| ≤ δ1 e^{kh}    on |x − x0| ≤ h.
Here
    φ = lim_{n→∞} φn,
where
    φn(x) = y0 + ∫_{x0}^{x} f[t, φn−1(t)] dt    (n = 1, 2, 3, . . .)
and φ0(x) = y0. Similarly,
    φ̄ = lim_{n→∞} φ̄n,
where
    φ̄n(x) = Y0 + ∫_{x0}^{x} f[t, φ̄n−1(t)] dt    (n = 1, 2, 3, . . .)
and φ̄0(x) = Y0, for |x − x0| ≤ h.
    |φn(x) − φ̄n(x)| ≤ δ1 Σ_{j=0}^{n} k^j (x − x0)^j/j!    (2)
on [x0, x0 + h], where k is the Lipschitz constant. We assume that on [x0, x0 + h],
    |φn−1(x) − φ̄n−1(x)| ≤ δ1 Σ_{j=0}^{n−1} k^j (x − x0)^j/j!    (3)
Then
    |φn(x) − φ̄n(x)| = |y0 + ∫_{x0}^{x} f[t, φn−1(t)] dt − Y0 − ∫_{x0}^{x} f[t, φ̄n−1(t)] dt|
    ≤ |y0 − Y0| + ∫_{x0}^{x} |f[t, φ̄n−1(t)] − f[t, φn−1(t)]| dt.
By the Lipschitz condition,
    |f[x, φn−1(x)] − f[x, φ̄n−1(x)]| ≤ k |φn−1(x) − φ̄n−1(x)|,
and since |y0 − Y0| = δ1, we get
    |φn(x) − φ̄n(x)| ≤ δ1 + k ∫_{x0}^{x} |φn−1(t) − φ̄n−1(t)| dt.
Using (3),
    |φn(x) − φ̄n(x)| ≤ δ1 + k ∫_{x0}^{x} δ1 Σ_{j=0}^{n−1} k^j (t − x0)^j/j! dt
    = δ1 + k δ1 Σ_{j=0}^{n−1} (k^j/j!) ∫_{x0}^{x} (t − x0)^j dt
    = δ1 [1 + Σ_{j=0}^{n−1} k^{j+1} (x − x0)^{j+1}/(j+1)!].
Since
    δ1 [1 + Σ_{j=0}^{n−1} k^{j+1} (x − x0)^{j+1}/(j+1)!] = δ1 Σ_{j=0}^{n} k^j (x − x0)^j/j!,
we have
    |φn(x) − φ̄n(x)| ≤ δ1 Σ_{j=0}^{n} k^j (x − x0)^j/j!,
For the base case,
    |φ1(x) − φ̄1(x)| = |y0 + ∫_{x0}^{x} f[t, y0] dt − Y0 − ∫_{x0}^{x} f[t, Y0] dt|
    ≤ |y0 − Y0| + ∫_{x0}^{x} |f[t, Y0] − f[t, y0]| dt
    ≤ δ1 + ∫_{x0}^{x} k |Y0 − y0| dt = δ1 + k δ1 (x − x0).
Thus (2) holds for n = 1. Hence the induction is complete and (2) holds on
[x0, x0 + h]. Using similar arguments on [x0 – h, x0], we have
    |φn(x) − φ̄n(x)| ≤ δ1 Σ_{j=0}^{n} k^j |x − x0|^j/j! ≤ δ1 Σ_{j=0}^{n} (kh)^j/j!
on |x − x0| ≤ h. Letting n → ∞ gives
    |φ(x) − φ̄(x)| ≤ δ1 Σ_{j=0}^{∞} (kh)^j/j!.
But Σ_{j=0}^{∞} (kh)^j/j! = e^{kh}, and so we have the desired inequality
    |φ(x) − φ̄(x)| ≤ δ1 e^{kh}    on |x − x0| ≤ h.
Remark. Thus, under the conditions stated, if the initial values of the two solutions φ
and φ̄ differ by a sufficiently small amount, then their values will differ by an
arbitrarily small amount at every point of |x − x0| ≤ h. Geometrically, this means that if
the corresponding integral curves are sufficiently close to each other initially, then
they will be arbitrarily close to each other for all x such that |x − x0| ≤ h.
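For the equation y′ = 2x(1 − y) from Example 6, this dependence can be made completely explicit, since the solution with y(0) = c is y = 1 + (c − 1) e^{−x^2}. A sketch (our addition; the choices δ1 = 10^{−3} and h = 1 are arbitrary):

```python
import math

# Two solutions of y' = 2x(1-y) with initial values 2 and 2 + delta1:
# their gap is delta1 * exp(-x^2), comfortably below the theorem's
# bound delta1 * exp(k*h) on |x| <= h.

def sol(c, x):                     # exact solution with y(0) = c
    return 1.0 + (c - 1.0) * math.exp(-x * x)

delta1, h = 1e-3, 1.0
k = 2.0 * h                        # Lipschitz constant: |df/dy| = |2x| <= 2h
bound = delta1 * math.exp(k * h)
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    gap = abs(sol(2.0, x) - sol(2.0 + delta1, x))
    assert gap <= bound
```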
We now consider how the solution of dy/dx = f(x, y) will change if the function f is
slightly changed. In this connection we have the following theorem.
Theorem 4.5
Hypothesis
(ii) F is continuous.
    dy/dx = f(x, y),  y(x0) = y0,
    dy/dx = F(x, y),  y(x0) = y0
Conclusion. Then
    |φ(x) − ψ(x)| ≤ (ε/k)(e^{kh} − 1)    on |x − x0| ≤ h.
    φn(x) = y0 + ∫_{x0}^{x} f[t, φn−1(t)] dt,  |x − x0| ≤ h    (n = 1, 2, 3, . . .)
By Hypothesis 1(i), the initial-value problem dy/dx = f(x, y), y(x0) = y0 has a unique
solution on |x − x0| ≤ h; the Picard iterates φn converge to this solution, and so
lim_{n→∞} φn = φ.
    ψ(x) = y0 + ∫_{x0}^{x} F[t, ψ(t)] dt,  |x − x0| ≤ h.
    |φn−1(x) − ψ(x)| ≤ ε Σ_{j=1}^{n−1} k^{j−1} (x − x0)^j/j!    (2)
Then
    |φn(x) − ψ(x)| = |y0 + ∫_{x0}^{x} f[t, φn−1(t)] dt − y0 − ∫_{x0}^{x} F[t, ψ(t)] dt|
    ≤ ∫_{x0}^{x} |f[t, φn−1(t)] − F[t, ψ(t)]| dt.
Inserting ± f[t, ψ(t)], applying the triangle inequality and then the Lipschitz condition
satisfied by f, we have
    |φn(x) − ψ(x)| ≤ ∫_{x0}^{x} |f[t, φn−1(t)] − f[t, ψ(t)]| dt + ∫_{x0}^{x} |f[t, ψ(t)] − F[t, ψ(t)]| dt
    ≤ k ∫_{x0}^{x} |φn−1(t) − ψ(t)| dt + ∫_{x0}^{x} |f[t, ψ(t)] − F[t, ψ(t)]| dt.
Using (2) and the bound |f − F| ≤ ε, we obtain
    |φn(x) − ψ(x)| ≤ k ε Σ_{j=1}^{n−1} (k^{j−1}/j!) ∫_{x0}^{x} (t − x0)^j dt + ∫_{x0}^{x} ε dt
    = ε Σ_{j=1}^{n−1} k^j (x − x0)^{j+1}/(j+1)! + ε (x − x0)
    = ε Σ_{j=1}^{n} k^{j−1} (x − x0)^j/j!.
Thus (2) holds with (n − 1) replaced by n. Also, Hypothesis 1(iii) shows that
    |φ1(x) − ψ(x)| ≤ ∫_{x0}^{x} |f[t, ψ(t)] − F[t, ψ(t)]| dt ≤ ∫_{x0}^{x} ε dt = ε (x − x0)
on [x0, x0 + h]. Thus (2) holds for n = 1. Hence the induction is complete and so
(2) holds on [x0, x0 + h] for n = 1, 2, 3, . . . .
Letting n → ∞, we obtain
    |φ(x) − ψ(x)| ≤ (ε/k) Σ_{j=1}^{∞} (kh)^j/j!.
But Σ_{j=1}^{∞} (kh)^j/j! = e^{kh} − 1. Thus we obtain the desired inequality
    |φ(x) − ψ(x)| ≤ (ε/k)(e^{kh} − 1)    on |x − x0| ≤ h.