
Lectures 1-6


DIFFERENTIAL EQUATIONS (2302-MTL102)

ANANTA KUMAR MAJEE

1. Introduction: ODEs
A differential equation is an equation involving an unknown function and its derivatives. In
general the unknown function may depend on several variables and the equation may include
various partial derivatives.
Definition 1.1. A differential equation involving only ordinary derivatives is called an ordinary
differential equation (ODE).
• A most general ODE has the form
F(x, y, y', . . . , y^(n)) = 0 ,    (1.1)
where F is a given function of (n + 2) variables and y = y(x) is an unknown function of
a real variable x.
• The maximum order n of the derivative y^(n) appearing in (1.1) is called the order of the ODE.
Applications: Di↵erential equations play a central role not only in mathematics but also in
almost all areas of science and engineering, economics, and social sciences:
• Flow of current in a conductor: Consider an RC circuit with resistance R and capacitance
C with no external current source. Let x(t) be the capacitor voltage and I(t) the current
circulating in the circuit. Then according to Kirchhoff's law, R I(t) + x(t) = 0. Moreover,
the constitutive law of the capacitor yields I(t) = C dx(t)/dt. Hence, we get the first order
differential equation
x'(t) + x(t)/(RC) = 0 .
• Population dynamics: Let x(t) be the number of individuals of a population at time t,
b the birth rate of the population and d the death rate of the population. Then,
according to the simple "Malthus model", the growth rate of the population is proportional
to the number of newborn individuals minus the number of deaths. Hence we get the
first order ODE
x'(t) = kx(t), where k = b - d.
• An example of a second order equation is y'' + y = 0, which arises naturally in the study
of electrical and mechanical oscillations.
• Motion of a missile; the behaviour of a mixture; the spread of disease, etc.
Definition 1.2. A function y : I → R, where I ⊂ R is an open interval, is said to be a solution
of the n-th order ODE (1.1) if it is n-times differentiable and satisfies (1.1) for all x ∈ I.
Example 1.1. Consider the ODE y' = y. Let us first find all positive solutions. Note that
y'/y = (ln y)' and therefore we obtain (ln y)' = 1. Thus,
ln y = x + C  ⟹  y = C_1 e^x where C_1 = e^C > 0, C ∈ R.
If y(x) < 0 for all x, then use y'/y = (ln(-y))', and obtain y = -C_1 e^x where C_1 > 0. Combining
these two cases together (and noting that y ≡ 0 is also a solution), any solution y(x) has the form
y(x) = Ce^x , C ∈ R.
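The conclusion of Example 1.1 is easy to sanity-check numerically. The following sketch (plain Python, with a central difference standing in for the derivative; the tolerance and sample points are arbitrary choices) verifies that y(x) = Ce^x satisfies y' = y for positive, negative, and zero C:

```python
import math

def residual(C, x, h=1e-6):
    """Central-difference residual of y' - y for y(x) = C * e^x."""
    y = lambda t: C * math.exp(t)
    dy = (y(x + h) - y(x - h)) / (2 * h)  # numerical y'(x)
    return dy - y(x)

# Any real constant C gives a solution of y' = y.
for C in (2.0, -3.0, 0.0):
    for x in (-1.0, 0.0, 1.5):
        assert abs(residual(C, x)) < 1e-4
```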

2. Certain classes of nonlinear first order ODE


An n-th order linear ODE is a relation of the form
a_n(t)u^(n)(t) + a_{n-1}(t)u^(n-1)(t) + . . . + a_1(t)u'(t) + a_0(t)u(t) = b(t)  ∀ t ∈ I, with a_n ≠ 0.
Definition 2.1. We say that the linear ODE is homogeneous if b(t) = 0 for all t 2 I. Otherwise
we say that it is non-homogeneous.
Theorem 2.1. Consider the linear homogeneous ODE
u^(m)(t) + a_{m-1}(t)u^(m-1)(t) + . . . + a_1(t)u'(t) + a_0(t)u(t) = 0 , t ∈ I.    (2.1)
Let X = { u : I → R : u is a solution of (2.1) }. Then X is a real vector space with the usual
addition of functions and scalar multiplication by real numbers.
2.1. 1st order linear ODE. Let us first consider the linear ODE of 1st order of the form
y' + α(x)y = b(x) ,    (2.2)
where α and b are given functions defined on I. A linear ODE can be solved as follows:
Theorem 2.2 (The method of variation of parameter). For α, b ∈ C(I), the general solution of
(2.2) has the form
y(x) = e^{-A(x)} [ C + ∫ b(x)e^{A(x)} dx ] ,
where A(x) is a primitive of α(x) on I, i.e., A'(x) = α(x).


Proof. We want to find a differentiable function µ(x) > 0 such that
µ(x)y'(x) + µ(x)α(x)y(x) = ( µ(x)y(x) )' .
Note that µ(x) = e^{A(x)} does this work (check!). This µ(x) is called an integrating factor. Let
us make the change of the unknown function
u(x) = y(x)e^{A(x)} ,  y(x) = u(x)e^{-A(x)} .
Substituting this in the given ODE, we obtain
( u e^{-A} )' + α u e^{-A} = b  ⟹  u'e^{-A} - u e^{-A} A' + α u e^{-A} = b .
Since A' = α, we have
u' = b e^{A}  ⟹  u(x) = C + ∫ b(x)e^{A(x)} dx  ⟹  y(x) = e^{-A(x)} [ C + ∫ b(x)e^{A(x)} dx ] .  □

Example 2.1. Find the general solution of x'(t) + 4tx(t) = 8t.
Here α(t) = 4t, and hence we have A(t) = 2t^2. Therefore, using the method of variation of
parameter, the general solution of the given ODE is given by
x(t) = e^{-2t^2} [ C + ∫ 8t e^{2t^2} dt ] = e^{-2t^2} [ C + 2e^{2t^2} ] = 2 + Ce^{-2t^2} .
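As a quick numerical check of Example 2.1 (a sketch, not part of the original notes; the values of C and t are arbitrary), one can verify that x(t) = 2 + Ce^{-2t^2} satisfies x' + 4tx = 8t:

```python
import math

def x_sol(t, C):
    """General solution of x' + 4t x = 8t obtained by variation of parameters."""
    return 2.0 + C * math.exp(-2.0 * t * t)

def residual(t, C, h=1e-6):
    """x'(t) + 4t x(t) - 8t, with x' computed by a central difference."""
    dx = (x_sol(t + h, C) - x_sol(t - h, C)) / (2 * h)
    return dx + 4.0 * t * x_sol(t, C) - 8.0 * t

for C in (-1.0, 0.0, 3.0):
    for t in (-1.5, 0.0, 0.7):
        assert abs(residual(t, C)) < 1e-4
```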

1st order linear IVP: Suppose we are interested in solving the initial value problem
y' + α(x)y = b(x), y(x_0) = y_0 , where x_0 ∈ I.
From the previous calculation, we see that u' = b e^{A} with A' = α. Integrating from x_0 to x and
taking A(x) = ∫_{x_0}^{x} α(s) ds, we have
u(x) - u(x_0) = ∫_{x_0}^{x} b(s) e^{∫_{x_0}^{s} α(r) dr} ds .
Note that u(x_0) = y(x_0)e^{A(x_0)} = y(x_0)e^0 = y_0. Thus,
u(x) = y_0 + ∫_{x_0}^{x} b(s) e^{∫_{x_0}^{s} α(r) dr} ds
⟹ y(x) = e^{-∫_{x_0}^{x} α(s) ds} [ y_0 + ∫_{x_0}^{x} b(s) e^{∫_{x_0}^{s} α(r) dr} ds ] .    (2.3)

Example 2.2. Find the solution of x'(t) + kx(t) = h, x(0) = x_0, where h and k are constants.
This equation arises in the RC circuit when there is a generator of constant voltage h.
Solution: Using (2.3), we get
x(t) = e^{-kt} [ x_0 + ∫_0^t h e^{ks} ds ] = e^{-kt} [ x_0 + h (e^{kt} - 1)/k ] = ( x_0 - h/k ) e^{-kt} + h/k .
Notice that x(t) → h/k as t → ∞, from below if x_0 < h/k, and from above if x_0 > h/k. Moreover, the
capacitor voltage x(t) does not decay to 0 but tends to the constant voltage h/k.
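The formula of Example 2.2 can be exercised numerically; the following sketch (illustrative parameter values x_0 = 0.5, h = 3, k = 1.5, so h/k = 2) checks the initial condition, the ODE residual, and the convergence to h/k:

```python
import math

def x_sol(t, x0, h, k):
    """Solution of x' + k x = h, x(0) = x0, from formula (2.3)."""
    return (x0 - h / k) * math.exp(-k * t) + h / k

def residual(t, x0, h, k, eps=1e-6):
    """x'(t) + k x(t) - h, with x' computed by a central difference."""
    dx = (x_sol(t + eps, x0, h, k) - x_sol(t - eps, x0, h, k)) / (2 * eps)
    return dx + k * x_sol(t, x0, h, k) - h

for t in (0.0, 0.5, 2.0):
    assert abs(residual(t, 0.5, 3.0, 1.5)) < 1e-4
assert abs(x_sol(0.0, 0.5, 3.0, 1.5) - 0.5) < 1e-12   # initial condition
assert abs(x_sol(20.0, 0.5, 3.0, 1.5) - 2.0) < 1e-12  # x(t) -> h/k = 2
```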
2.2. General 1st order ODEs. We consider the general first order ODE of the form
y' = f̄(x, y) ,
where f̄ is some continuous function. We have seen that if f̄(x, y) = -α(x)y + b(x) for some
α, b ∈ C(I), then the above ODE has a solution in explicit form (cf. the method of variation of
parameter).
2.2.1. Equations with variables separated: Let us consider a separable ODE
y' = f(x)g(y) ,    (2.4)
where f and g are given continuous functions with f(x) ≠ 0. If y = k is any zero of g, then
y(x) = k is a constant solution of (2.4). On the other hand, if y(x) = k is a constant solution,
then g(k) = 0. Therefore y(x) = k is a constant solution if and only if g(k) = 0.
• y(x) = k is called an equilibrium solution if and only if g(k) = 0.
Hence if y(x) is a non-constant solution, then g(y(x)) ≠ 0 for any x. Any separable equation
can be solved by means of the following theorem.
Theorem 2.3 (The method of separation of variables). Let f and g be continuous functions
on some intervals I and J respectively such that g ≠ 0 on J. Let F resp. G be a primitive
function of f resp. 1/g on I resp. J. Then a function y defined on some subinterval of I solves
the equation (2.4) if and only if
G(y(x)) = F(x) + C ,    (2.5)
for all x in the domain of y, where C is a real constant.
Proof. Let y(x) solve (2.4). Since F' = f and G' = 1/g, the equation (2.4) is equivalent to
y'G'(y) = F'(x)  ⟹  ( G(y(x)) )' = F'(x)  ⟹  G(y(x)) = F(x) + C .

Conversely, if a function y satisfies (2.5) and is known to be differentiable in its domain, then
differentiating (2.5) in x, we obtain y'G'(y) = F'(x). Arguing backwards, we arrive at (2.4).
Let us show that y is differentiable. Since g(y) ≠ 0, either g(y) > 0 or g(y) < 0 in the whole
domain. Then G is either strictly increasing or strictly decreasing in the whole domain. In both
cases, the inverse function G^{-1} is well-defined and differentiable. It follows from (2.5) that
y(x) = G^{-1}( F(x) + C ) .
Since both F and G^{-1} are differentiable, we conclude that y is differentiable. □
Example 2.3. Consider the ODE
y'(x) = y(x) , x ∈ R , y(x) > 0 .
Then f(x) ≡ 1 and g(y) = y ≠ 0. Note that F(x) = x and G(y) = log(y). The equation (2.5)
becomes
log(y) = x + C  ⟹  y(x) = C_1 e^x ,
where C_1 = e^C is an arbitrary positive constant.
Example 2.4. Consider the equation
y' = √|y| ,
which is defined for all y ∈ R. Note that y = 0 is a trivial solution. In the domains y > 0 and
y < 0, the equation can be solved using separation of variables. In the domain y > 0, we obtain
∫ dy/√y = ∫ dx  ⟹  2√y = x + C  ⟹  y = (x + C)^2 / 4 .
Since y > 0, we must have x > -C, which follows from the second expression. Similarly, in the
domain y < 0, we have
y = -(x + C)^2 / 4 , x < -C .
We see that the integral curves in the domain y > 0 touch the curve y = 0, and so do the integral
curves in the domain y < 0.
Example 2.5. The logistic equation
y'(x) = y(x)( α - βy(x) ) , α, β > 0.
In this model, y(x) represents the population of some species and therefore y(x) ≥ 0. Note that
y(x) = 0 and y(x) = α/β are two equilibrium solutions. Such solutions play an important role
in analyzing the trajectories of solutions in general. In order to solve the logistic equation, we
separate the variables and obtain (assuming that y ≠ 0 and y ≠ α/β)
dy/( y(α - βy) ) = dx  ⟹  (1/α)[ dy/y + β dy/(α - βy) ] = dx  ⟹  (1/α)[ log|y| - log|α - βy| ] = x + c
⟹  (1/α) log| y/(α - βy) | = x + c  ⟹  | y/(α - βy) | = k e^{αx} ,
where k = e^{cα}. This is a general solution in implicit form. To solve for y, consider the case
0 < y(x) < α/β. Then y(x) = αk e^{αx} / ( 1 + βk e^{αx} ). For the case y > α/β, the solution takes the form
y(x) = αk e^{αx} / ( βk e^{αx} - 1 ). In either case, lim_{x→∞} y(x) = α/β. This shows that all non-constant
solutions approach the equilibrium solution y(x) = α/β as x → ∞, some from above the line y = α/β and
others from below.
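The closed-form logistic solution above can be checked numerically. The sketch below (illustrative parameters α = 1, β = 2, k = 0.3, so the equilibrium is α/β = 0.5) verifies the ODE residual by a central difference and the convergence to α/β:

```python
import math

ALPHA, BETA, K = 1.0, 2.0, 0.3   # sample parameters; equilibrium at ALPHA/BETA = 0.5

def y_sol(x):
    """Closed-form logistic solution with 0 < y < alpha/beta."""
    e = K * math.exp(ALPHA * x)
    return ALPHA * e / (1.0 + BETA * e)

def residual(x, h=1e-6):
    """y'(x) - y(x) (alpha - beta y(x)), with y' via a central difference."""
    dy = (y_sol(x + h) - y_sol(x - h)) / (2 * h)
    return dy - y_sol(x) * (ALPHA - BETA * y_sol(x))

for x in (-2.0, 0.0, 1.0, 3.0):
    assert abs(residual(x)) < 1e-4
# Non-constant solutions approach the equilibrium alpha/beta as x grows.
assert abs(y_sol(30.0) - ALPHA / BETA) < 1e-9
```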

2.2.2. Exact equations: Suppose that the first order equation y' = f̄(x, y) is written in the
form
M(x, y) + N(x, y)y' = 0 ,    (2.6)
where M, N are real-valued functions defined for real x, y on some domain Ω.
Definition 2.2. We say that the equation (2.6) is exact in Ω if there exists a function F having
continuous first partial derivatives such that
∂F/∂x = M ,  ∂F/∂y = N .    (2.7)
Theorem 2.4. Suppose the equation (2.6) is exact in a domain Ω ⊂ R^2, i.e., there exists F
such that ∂F/∂x = M, ∂F/∂y = N in Ω. Then every continuously differentiable function φ defined
implicitly by a relation
F(x, φ(x)) = c  (c = constant)
is a solution of (2.6), and every solution of (2.6) whose graph lies in Ω arises in this way.
Proof. Under the assumptions of the theorem, equation (2.6) becomes
∂F/∂x (x, y) + ∂F/∂y (x, y) y' = 0 .
If φ is any solution on some interval I, then
∂F/∂x (x, φ(x)) + ∂F/∂y (x, φ(x)) φ'(x) = 0 , ∀ x ∈ I .    (2.8)
If ψ(x) = F(x, φ(x)), then from the above equation we see that ψ'(x) = 0, and hence
F(x, φ(x)) = c, where c is some constant. Thus the solution φ must be a function which is
given implicitly by the relation F(x, φ(x)) = c. Conversely, if φ is a differentiable function on
some interval I defined implicitly by the relation F(x, y) = c, then
F(x, φ(x)) = c , ∀ x ∈ I .
Differentiation along with the property ∂F/∂x = M, ∂F/∂y = N yields that φ is a solution of (2.6).
This completes the proof. □
We will say that F (x, y) = c is the general solution of (2.6).
Example 2.6. Consider the equation
x - (y^4 - 1)y'(x) = 0 .
Here M = x and N = 1 - y^4. Define F(x, y) = (1/2)x^2 + y - (1/5)y^5. Then the above equation is exact.
Hence the solution is given by
F(x, y) = c  ⟹  2y^5 - 10y = 5x^2 + c .
Example 2.7. Find the general solution of the ODE 2ye^{2x} + 2x cos(y) + ( e^{2x} - x^2 sin(y) ) y' = 0.
Solution: The equation is of the form M(x, y) + N(x, y)y' = 0 with M(x, y) = 2ye^{2x} + 2x cos(y)
and N(x, y) = e^{2x} - x^2 sin(y). Define F(x, y) = ye^{2x} + x^2 cos(y). Then F has continuous first
partial derivatives on R^2 and ∂F/∂x (x, y) = M(x, y), ∂F/∂y (x, y) = N(x, y). Hence the given ODE is
exact. Thus, the general solution is given by the formula
ye^{2x} + x^2 cos(y) = c, c ∈ R.
How do we recognize when an equation is exact? The following theorem gives a necessary
and sufficient condition.

Theorem 2.5. Let M, N be two real-valued functions which have continuous first partial deriva-
tives on some rectangle
R := { (x, y) ∈ R^2 : |x - x_0| ≤ a , |y - y_0| ≤ b } .
Then the equation (2.6) is exact in R if and only if
∂M/∂y = ∂N/∂x    (2.9)
in R.
Proof. It is easy to see that if the equation (2.6) is exact, then (2.9) holds. Now suppose that
(2.9) holds in the rectangle R. We want to find a function F having continuous first partial
derivatives such that ∂F/∂x = M and ∂F/∂y = N. If we had such a function, then
F(x, y) - F(x_0, y_0) = ∫_{x_0}^{x} ∂F(s, y)/∂x ds + ∫_{y_0}^{y} ∂F(x_0, t)/∂y dt = ∫_{x_0}^{x} M(s, y) ds + ∫_{y_0}^{y} N(x_0, t) dt .
Similarly, by writing F(x, y) - F(x_0, y_0) = F(x, y) - F(x, y_0) + F(x, y_0) - F(x_0, y_0), we could
have
F(x, y) - F(x_0, y_0) = ∫_{x_0}^{x} M(s, y_0) ds + ∫_{y_0}^{y} N(x, t) dt .    (2.10)
We now define F by the formula
F(x, y) = ∫_{x_0}^{x} M(s, y) ds + ∫_{y_0}^{y} N(x_0, t) dt .    (2.11)
Then F(x_0, y_0) = 0 and ∂F/∂x (x, y) = M(x, y) for all (x, y) in R. From (2.10), we can also
define F by the formula
F(x, y) = ∫_{x_0}^{x} M(s, y_0) ds + ∫_{y_0}^{y} N(x, t) dt .    (2.12)
It is clear from (2.12) that ∂F/∂y (x, y) = N(x, y) for all (x, y) in R. Therefore, we need to show
that (2.12) is valid, where F is defined by (2.11). Now, by using the condition (2.9), we have
F(x, y) - [ ∫_{x_0}^{x} M(s, y_0) ds + ∫_{y_0}^{y} N(x, t) dt ]
= ∫_{x_0}^{x} [ M(s, y) - M(s, y_0) ] ds - ∫_{y_0}^{y} [ N(x, t) - N(x_0, t) ] dt
= ∫_{x_0}^{x} ∫_{y_0}^{y} [ ∂M/∂y (s, t) - ∂N/∂x (s, t) ] dt ds = 0 .
This completes the proof. □
Example 2.8. Find the general solution of the ODE
2xy dx + (x^2 + y^2) dy = 0 .
Here, M(x, y) = 2xy and N(x, y) = x^2 + y^2. Note that ∂M/∂y = ∂N/∂x = 2x. Thus, the equation is
exact. Define the function F by (taking (x_0, y_0) = (0, 0))
F(x, y) = ∫_0^x M(s, y) ds + ∫_0^y N(x_0, t) dt = ∫_0^x 2sy ds + ∫_0^y t^2 dt = yx^2 + y^3/3 .
Therefore, the general solution is given by the formula yx^2 + y^3/3 = c, where c is an arbitrary real
constant.
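The constructive formula (2.11) can be imitated numerically. The sketch below (a simple midpoint-rule quadrature; the sample points and tolerance are arbitrary) builds F for Example 2.8 and compares it with the closed form yx^2 + y^3/3:

```python
def integrate(fn, lo, hi, n=2000):
    """Midpoint-rule quadrature; accurate enough for this sanity check."""
    h = (hi - lo) / n
    return h * sum(fn(lo + (i + 0.5) * h) for i in range(n))

M = lambda x, y: 2 * x * y
N = lambda x, y: x * x + y * y

def F(x, y, x0=0.0, y0=0.0):
    """F built as in (2.11): integrate M in x at fixed y, then N along x = x0."""
    return integrate(lambda s: M(s, y), x0, x) + integrate(lambda t: N(x0, t), y0, y)

# Compare with the closed form y*x^2 + y^3/3 found in Example 2.8.
for (x, y) in [(0.5, 1.0), (1.0, -0.7), (-1.2, 0.3)]:
    exact = y * x * x + y ** 3 / 3.0
    assert abs(F(x, y) - exact) < 1e-6
```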

Example 2.9. Solve the ODE (x^2 - 2y)y' = 3x^2 - 2xy.
Solution: The given ODE can be written as
(3x^2 - 2xy)dx + (2y - x^2)dy = 0, i.e., M(x, y)dx + N(x, y) dy = 0.
A simple calculation shows that ∂M/∂y (x, y) = ∂N/∂x (x, y) = -2x. Hence the ODE is exact for all
x, y ∈ R. To find F, we know that ∂F/∂x = M, ∂F/∂y = N. Thus, F satisfies
F(x, y) = x^3 - x^2 y + f(y), where f is independent of x.
Now, ∂F/∂y = N gives
f'(y) - x^2 = 2y - x^2  ⟹  f'(y) = 2y.
Taking f(y) = y^2, we obtain F(x, y) = x^3 - x^2 y + y^2. Thus, the general solution is
x^3 - x^2 y + y^2 = c, c ∈ R.
2.2.3. The integrating factor: Sometimes, if the equation (2.6) is NOT exact, one can find a
function u, nowhere zero, such that the equation
u(x, y)M(x, y)dx + u(x, y)N(x, y) dy = 0
is exact. Such a function is called an integrating factor. For example, y dx - x dy = 0 (x > 0, y > 0)
is not exact, but multiplying the equation by u(x, y) = 1/y^2 makes it exact. Note that all
three functions
1/(xy) ,  1/x^2 ,  1/y^2
are integrating factors of the above ODE. Thus, integrating factors need not be unique.
Remark 2.1. In view of Theorem 2.5, we see that a function u on a rectangle R, having
continuous partial derivatives, is an integrating factor of the equation (2.6) if and only if
u( ∂M/∂y - ∂N/∂x ) = N ∂u/∂x - M ∂u/∂y .    (2.13)
i) If u is an integrating factor which is a function of x only, then
p = (1/N)( ∂M/∂y - ∂N/∂x )
is a continuous function of x alone, provided N(x, y) ≠ 0 in R.
ii) If u is an integrating factor which is a function of y only, then
q = (1/M)( ∂N/∂x - ∂M/∂y )
is a continuous function of y alone, provided M(x, y) ≠ 0 in R.
Example 2.10. Find an integrating factor of
(2y^3 + 2) dx + 3xy^2 dy = 0 , x ≠ 0 , y ≠ 0 ,
and solve the ODE.
Here M(x, y) = 2y^3 + 2 and N(x, y) = 3xy^2. Note that the equation is not exact. Now
∂M/∂y - ∂N/∂x = 3y^2, and hence (1/N)( ∂M/∂y - ∂N/∂x ) = 1/x is a continuous function of x alone. Thus the
integrating factor should be a function of x only. Note that u(x) = x satisfies the relation (2.13).
After multiplication by the integrating factor, the equation becomes
M̃(x, y) dx + Ñ(x, y) dy = 0 , where M̃(x, y) = 2xy^3 + 2x , Ñ(x, y) = 3x^2 y^2 .

To find F̃, we know that ∂F̃/∂x = 2xy^3 + 2x, and hence
F̃(x, y) = x^2 y^3 + x^2 + f(y) ,
where f is independent of x. Again, ∂F̃/∂y = Ñ gives
f'(y) + 3x^2 y^2 = 3x^2 y^2  ⟹  f'(y) = 0, i.e., f is a constant.
Thus, the general solution is given implicitly by
x^2( y^3 + 1 ) = c , c ∈ R.
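The effect of the integrating factor in Example 2.10 can be seen directly by testing the exactness condition (2.9) with numerical partial derivatives (a sketch; the test point (1.3, 0.8) is arbitrary):

```python
def dMdy(M, x, y, h=1e-6):
    """Numerical partial derivative of M with respect to y."""
    return (M(x, y + h) - M(x, y - h)) / (2 * h)

def dNdx(N, x, y, h=1e-6):
    """Numerical partial derivative of N with respect to x."""
    return (N(x + h, y) - N(x - h, y)) / (2 * h)

M = lambda x, y: 2 * y ** 3 + 2          # original M
N = lambda x, y: 3 * x * y * y           # original N
Mt = lambda x, y: x * M(x, y)            # after multiplying by u(x) = x
Nt = lambda x, y: x * N(x, y)

pt = (1.3, 0.8)
# The original equation is not exact ...
assert abs(dMdy(M, *pt) - dNdx(N, *pt)) > 0.1
# ... but becomes exact after multiplication by the integrating factor x.
assert abs(dMdy(Mt, *pt) - dNdx(Nt, *pt)) < 1e-5
```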
2.2.4. Bernoulli Equations: We are interested in ODEs of the form
y' + p(x)y = q(x)y^n ,
where p and q are continuous functions. Note that for n = 0 or n = 1, the equation is linear,
and we already know how to solve it. For n ≠ 0, 1, we first divide the ODE by y^n to get
y^{-n} y' + p(x)y^{1-n} = q(x). Take v = y^{1-n}. Then v' = (1 - n)y^{-n} y', and plugging this in, we have
( 1/(1 - n) ) v' + p(x)v = q(x) .
Since this is a linear ODE, we can use the method of variation of parameter to find its solution, and
hence find the solution of the given Bernoulli equation.
Example 2.11. We wish to find all solutions of the ODE 3y^2 y' + y^3 = e^{-x}. We first write
down the given ODE in the Bernoulli form: y' + (1/3)y = (e^{-x}/3) y^{-2}. Here p(x) = 1/3, q(x) = e^{-x}/3 and
n = -2. Thus, taking v = y^3, we have the following linear ODE:
v' + v = e^{-x} .
Its general solution is given by v(x) = e^{-x}(c + x), where c is an arbitrary constant. Thus, the
general solution of the given ODE is given by
y(x) = (c + x)^{1/3} e^{-x/3} , ∀ x ∈ R.
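The solution of Example 2.11 can be verified numerically; the sketch below restricts to c + x > 0 so that the real cube root is unambiguous (the choices of c and x are arbitrary):

```python
import math

def y_sol(x, c):
    """y(x) = (c + x)^(1/3) * e^(-x/3), on a domain where c + x > 0."""
    return (c + x) ** (1.0 / 3.0) * math.exp(-x / 3.0)

def residual(x, c, h=1e-6):
    """3 y^2 y' + y^3 - e^(-x), with y' from a central difference."""
    dy = (y_sol(x + h, c) - y_sol(x - h, c)) / (2 * h)
    y = y_sol(x, c)
    return 3 * y * y * dy + y ** 3 - math.exp(-x)

for c in (1.0, 5.0):
    for x in (0.0, 1.0, 2.5):
        assert abs(residual(x, c)) < 1e-4
```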
Example 2.12. Find the general solution of the ODE y' - 2xy = xy^2.
Solution: Note that the given ODE is a Bernoulli equation with p(x) = -2x, q(x) = x and n = 2.
Taking v = y^{-1}, we have the following linear ODE:
v' + 2xv = -x.
The general solution is given by v(x) = -1/2 + ce^{-x^2}. Hence the general solution of the given
ODE is
y(x) = ( -1/2 + ce^{-x^2} )^{-1} , wherever -1/2 + ce^{-x^2} ≠ 0.

3. Existence and uniqueness of solutions to first order ODE

3.1. The method of successive approximations: Let us consider the initial value problem
y' = f(x, y) , y(x_0) = y_0 ,    (3.1)
where f is any continuous real-valued function defined on some rectangle
R := { (x, y) ∈ R^2 : |x - x_0| ≤ a , |y - y_0| ≤ b } ,  (a, b > 0) .
Our aim is to show that on some interval I containing x_0, there is a solution of the above
initial value problem. Let us first show the following.

Theorem 3.1. A function is a solution of (3.1) on some interval I if and only if it is a solution
of the integral equation (on I)
y(x) = y_0 + ∫_{x_0}^{x} f(s, y(s)) ds .    (3.2)

Proof. Suppose φ is a solution to the initial value problem on I. Then
φ'(t) = f(t, φ(t))  ⟹  φ(x) = φ(x_0) + ∫_{x_0}^{x} f(t, φ(t)) dt = y_0 + ∫_{x_0}^{x} f(t, φ(t)) dt ,
where in the middle equality we used the fact that f(t, φ(t)) is continuous on I. Hence φ is a
solution of (3.2). Conversely, suppose that φ is a solution to (3.2), i.e.,
φ(x) = y_0 + ∫_{x_0}^{x} f(s, φ(s)) ds , ∀ x ∈ I .
Then φ(x_0) = y_0 and φ'(x) = f(x, φ(x)) for all x ∈ I. Thus φ is a solution to the initial value
problem (3.1). □
We now want to solve the integral equation via approximation. Define Picard's approxima-
tions by
φ_0(x) = y_0 ,
φ_{k+1}(x) = y_0 + ∫_{x_0}^{x} f(s, φ_k(s)) ds  (k = 0, 1, . . .) .    (3.3)
First we show that all the functions φ_k, k = 0, 1, 2, . . ., exist on some interval.

Theorem 3.2. The approximate functions φ_k exist as continuous functions on
I := { x ∈ R : |x - x_0| ≤ α := min{ a, b/M } }
and (x, φ_k(x)) ∈ R for all x ∈ I, where M > 0 is such that |f(x, y)| ≤ M for all (x, y) in R.
Indeed,
|φ_k(x) - y_0| ≤ M|x - x_0| , ∀ x ∈ I .    (3.4)
Note that for x ∈ I, |x - x_0| ≤ b/M, and hence the points (x, φ_k(x)) are in R for all x ∈ I.
Proof. We will prove it by induction. Clearly φ_0 exists on I and satisfies (3.4). Now
|φ_1(x) - y_0| = | ∫_{x_0}^{x} f(s, y_0) ds | ≤ M|x - x_0| ,
and φ_1 is continuous on I. Now assume that the claim is valid for the functions φ_0, φ_1, . . . , φ_k. Define
the function F_k(t) = f(t, φ_k(t)), which exists for t ∈ I and is continuous on I. Therefore φ_{k+1}
exists as a continuous function on I. Moreover,
|φ_{k+1}(x) - y_0| ≤ | ∫_{x_0}^{x} F_k(t) dt | ≤ M|x - x_0| . □

Definition 3.1. Let f be a function defined for (x, y) in a set S. We say that it is Lipschitz in
y if there exists a constant K > 0 such that
|f(x, y_1) - f(x, y_2)| ≤ K|y_1 - y_2| , ∀ (x, y_1), (x, y_2) ∈ S .
Exercise 3.1. Show that if the partial derivative f_y exists and is bounded in a rectangle R, then
f is Lipschitz in y in R.
We now show that the approximate solutions φ_k converge on I to a solution of the initial value
problem under certain assumptions on f.

Theorem 3.3. Let f be a continuous function defined on the rectangle R. Further suppose that
f is Lipschitz in y. Then {φ_k} converges to a solution of the initial value problem (3.1) on I.
Proof. Note that
φ_k(x) = φ_0(x) + Σ_{i=1}^{k} [ φ_i(x) - φ_{i-1}(x) ] , ∀ x ∈ I .
Therefore, it suffices to show that the series
φ_0(x) + Σ_{i=1}^{∞} [ φ_i(x) - φ_{i-1}(x) ]
converges. Let us estimate the terms φ_i(x) - φ_{i-1}(x). Observe that, since f satisfies the Lipschitz
condition in R,
|φ_2(x) - φ_1(x)| = | ∫_{x_0}^{x} [ f(t, φ_1(t)) - f(t, φ_0(t)) ] dt | ≤ K | ∫_{x_0}^{x} |φ_1(t) - φ_0(t)| dt |
≤ KM | ∫_{x_0}^{x} |t - x_0| dt | ≤ KM |x - x_0|^2 / 2 .
Claim:
|φ_i(x) - φ_{i-1}(x)| ≤ ( M K^{i-1} / i! ) |x - x_0|^i = (M/K) ( K^i |x - x_0|^i / i! ) , i = 1, 2, . . .    (3.5)
We shall prove (3.5) via induction. Note that (3.5) is true for i = 1 and i = 2. Assume now that
(3.5) holds for i = m. Let us assume that x ≥ x_0 (the proof is similar for x ≤ x_0). By using the Lipschitz
condition and the induction hypothesis, we have
|φ_{m+1}(x) - φ_m(x)| ≤ K ∫_{x_0}^{x} |φ_m(t) - φ_{m-1}(t)| dt ≤ K ∫_{x_0}^{x} ( M K^{m-1} / m! ) |t - x_0|^m dt
= ( M K^m / (m+1)! ) |x - x_0|^{m+1} .
Hence (3.5) holds for i = 1, 2, . . .. It follows that the i-th term of the series |φ_0(x)| + Σ_{i=1}^{∞} |φ_i(x) - φ_{i-1}(x)|
is less than or equal to M/K times the i-th term of the power series for e^{K|x - x_0|}. Hence
the series φ_0(x) + Σ_{i=1}^{∞} [ φ_i(x) - φ_{i-1}(x) ] converges for all x ∈ I, and therefore the sequence
{φ_k(x)} converges to a limit φ(x) as k → ∞.
Properties of the limit function φ: We first show that φ is continuous on I. Indeed, for any
x, x̃ ∈ I, we have, by using the boundedness of f,
|φ_{k+1}(x) - φ_{k+1}(x̃)| = | ∫_{x̃}^{x} f(t, φ_k(t)) dt | ≤ M|x - x̃|
⟹ |φ(x) - φ(x̃)| ≤ M|x - x̃| .
This shows that φ is continuous on I. Moreover, taking x̃ = x_0 in the above estimate, we see
that
|φ(x) - y_0| ≤ M|x - x_0| , ∀ x ∈ I ,
and hence (x, φ(x)) is in R for all x ∈ I.
We now estimate |φ(x) - φ_k(x)|. Since φ(x) is the limit of the series φ_0(x) +
Σ_{i=1}^{∞} [ φ_i(x) - φ_{i-1}(x) ] and φ_k(x) = φ_0(x) + Σ_{i=1}^{k} [ φ_i(x) - φ_{i-1}(x) ], we see that
|φ(x) - φ_k(x)| = | Σ_{i=k+1}^{∞} [ φ_i(x) - φ_{i-1}(x) ] | ≤ (M/K) Σ_{i=k+1}^{∞} K^i |x - x_0|^i / i! ≤ (M/K) Σ_{i=k+1}^{∞} K^i α^i / i!
= (M/K) ( (Kα)^{k+1} / (k+1)! ) { 1 + Kα/(k+2) + (Kα)^2/((k+2)(k+3)) + . . . }
≤ (M/K) ( (Kα)^{k+1} / (k+1)! ) Σ_{i=0}^{∞} K^i α^i / i! = (M/K) ( (Kα)^{k+1} / (k+1)! ) e^{Kα} .    (3.6)
Note that (Kα)^{k+1} / (k+1)! → 0 as k → ∞.
Now we will show that φ is a solution to the integral equation (3.2). Note that
φ_{k+1}(x) = y_0 + ∫_{x_0}^{x} f(t, φ_k(t)) dt ,
and φ_{k+1}(x) → φ(x) as k → ∞. Thus it suffices to show that ∫_{x_0}^{x} f(t, φ_k(t)) dt → ∫_{x_0}^{x} f(t, φ(t)) dt.
Indeed, thanks to the Lipschitz condition on f and (3.6), we have
| ∫_{x_0}^{x} f(t, φ_k(t)) dt - ∫_{x_0}^{x} f(t, φ(t)) dt | ≤ K | ∫_{x_0}^{x} |φ(t) - φ_k(t)| dt |
≤ M ( (Kα)^{k+1} / (k+1)! ) e^{Kα} |x - x_0| → 0 , (k → ∞) .
To prove uniqueness, we first consider an interval I_δ := { |x - x_0| ≤ δ }, where δ > 0 is such
that Kδ < 1. Suppose there exist two solutions y_1 and y_2 of the IVP. Then one has
|y_1(x) - y_2(x)| = | ∫_{x_0}^{x} [ f(s, y_1(s)) - f(s, y_2(s)) ] ds | ≤ K | ∫_{x_0}^{x} |y_1(s) - y_2(s)| ds |
≤ K|x - x_0| max_{|x - x_0| ≤ δ} |y_1(x) - y_2(x)| ≤ Kδ max_{|x - x_0| ≤ δ} |y_1(x) - y_2(x)|
⟹ max_{|x - x_0| ≤ δ} |y_1(x) - y_2(x)| ≤ Kδ max_{|x - x_0| ≤ δ} |y_1(x) - y_2(x)| .
Since Kδ < 1, we get max_{|x - x_0| ≤ δ} |y_1(x) - y_2(x)| = 0, which then implies that y_1 = y_2 on
I_δ. In particular, y_1(x_0 ± δ) = y_2(x_0 ± δ). Repeating the same procedure on the intervals
[x_0 + δ, x_0 + 2δ] and [x_0 - 2δ, x_0 - δ], we get y_1(x) = y_2(x) for all x ∈ [x_0 - 2δ, x_0 + 2δ]. Since
the set {x ∈ R : |x - x_0| ≤ α} is compact, after a finite number of steps we get y_1(x) = y_2(x)
for all x ∈ I. This completes the proof. □
Remark 3.1. One can start Picard's iteration procedure with any continuous function φ_0 on
|x - x_0| ≤ a such that the points (x, φ_0(x)) are in R for |x - x_0| ≤ a. One can proceed as in the
previous theorem by showing:
i) φ_1, φ_2, . . . exist and are continuous on I, and satisfy |φ_k(x) - y_0| ≤ M|x - x_0| for all
k = 1, 2, . . .;
ii) |φ_{i+1}(x) - φ_i(x)| ≤ 2 (M/K) ( K^i |x - x_0|^i / i! ) for all i = 1, 2, . . .;
iii) φ_k tends to a limit function φ, and |φ_i(x) - φ(x)| ≤ 2 (M/K) ( (Kα)^i / i! ) e^{Kα} .
Let us illustrate this method on the following example:
y' = y , y(0) = 1 .
Here φ_0(x) = 1. Moreover, the approximate functions are given by
φ_1(x) = 1 + ∫_0^x φ_0(s) ds = 1 + x ,
φ_2(x) = 1 + ∫_0^x φ_1(s) ds = 1 + x + x^2/2! ,
φ_3(x) = 1 + ∫_0^x φ_2(s) ds = 1 + x + x^2/2! + x^3/3! ,
and by induction
φ_k(x) = 1 + x + x^2/2! + x^3/3! + . . . + x^k/k! , k = 0, 1, 2, . . . .
Clearly, φ_k(x) → e^x as k → ∞, and the function y(x) = e^x indeed solves the above IVP.
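Since each iterate here is a polynomial, Picard's iteration can be carried out exactly on coefficient lists. The sketch below (illustrative; 10 iterations is an arbitrary choice) reproduces the partial sums of e^x:

```python
from math import factorial, exp

def picard_step(coeffs):
    """One Picard iteration for y' = y, y(0) = 1:
    phi_{k+1}(x) = 1 + integral_0^x phi_k(s) ds, acting on polynomial coefficients."""
    new = [0.0] * (len(coeffs) + 1)
    new[0] = 1.0                      # the constant term y_0 = 1
    for i, c in enumerate(coeffs):
        new[i + 1] = c / (i + 1)      # integrate c x^i to c x^(i+1)/(i+1)
    return new

phi = [1.0]                           # phi_0(x) = 1
for _ in range(10):
    phi = picard_step(phi)

# phi_10 is exactly the degree-10 Taylor polynomial of e^x.
assert all(abs(c - 1.0 / factorial(i)) < 1e-12 for i, c in enumerate(phi))
value = sum(c * 0.5 ** i for i, c in enumerate(phi))
assert abs(value - exp(0.5)) < 1e-9   # close to e^0.5 already at k = 10
```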
Definition 3.2. We say that J ⊂ R is the maximal interval of definition of the solution y(x)
of the IVP (3.1) if any interval I on which y(x) is defined is contained in J, and y(x) cannot be
extended to an interval larger than J.
Example 3.1. Consider the IVP
y' = y^2 , y(x_0) = a ≠ 0 .
In view of Theorem 3.3, the above problem has a solution in a neighbourhood of x_0. Note that
the function φ(x, c) = -1/(x - c) solves y' = y^2. Thus, we need to impose the requirement that
φ(x_0, c) = a, i.e., c = x_0 + 1/a. Let c_a = x_0 + 1/a. Hence for a > 0, the solution to the above problem is
y_a(x) = -1/(x - c_a) , x < c_a .
Thus, in this case, the maximal interval of definition is (-∞, c_a). Similarly, for a < 0, one
can show the maximal interval of definition is (c_a, ∞) for the above problem.
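The blow-up in Example 3.1 is easy to observe numerically. With the illustrative choice x_0 = 0, a = 1 (so c_a = 1), the sketch below checks the initial condition, the ODE residual, and the growth of the solution as x approaches c_a:

```python
def y_exact(x, x0=0.0, a=1.0):
    """Solution of y' = y^2, y(x0) = a > 0: y = -1/(x - c_a) with c_a = x0 + 1/a."""
    c_a = x0 + 1.0 / a
    return -1.0 / (x - c_a)

# Initial condition holds, and the solution blows up as x -> c_a = 1.
assert abs(y_exact(0.0) - 1.0) < 1e-12
assert y_exact(0.999) > 900.0

def residual(x, h=1e-7):
    """y'(x) - y(x)^2, with y' computed by a central difference."""
    dy = (y_exact(x + h) - y_exact(x - h)) / (2 * h)
    return dy - y_exact(x) ** 2

for x in (-2.0, 0.0, 0.5, 0.9):
    assert abs(residual(x)) < 1e-3
```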
3.1.1. Non-local existence of solutions: Theorem 3.3 guarantees a solution only for x near
the initial point x_0. There are some cases where the solution exists on the whole interval
|x - x_0| ≤ a. For example, consider an ODE of the form y' + g(x)y = h(x), where g, h are
continuous on |x - x_0| ≤ a. Let us define the strip
S := { |x - x_0| ≤ a , |y| < ∞ } .
Since g is continuous on |x - x_0| ≤ a, there exists K > 0 such that |g(x)| ≤ K. Then the
function f(x, y) = -g(x)y + h(x) is Lipschitz continuous in y on the strip S.
Theorem 3.4. Let f be a real-valued continuous function on the strip S, and Lipschitz contin-
uous in y on S with constant K > 0. Then the Picard iterates {φ_k} for the problem (3.1) exist
on the entire interval |x - x_0| ≤ a, and converge there to a solution of (3.1).
Proof. Note that (x, φ_0(x)) ∈ S. Now since f is continuous on S, there exists M > 0 such that
|f(x, y_0)| ≤ M for |x - x_0| ≤ a, and hence
|φ_1(x)| ≤ |y_0| + | ∫_{x_0}^{x} f(t, y_0) dt | ≤ |y_0| + M|x - x_0| ≤ |y_0| + Ma < ∞ .
Moreover, each φ_k is continuous on |x - x_0| ≤ a. Now assume that the points
(x, φ_0(x)), (x, φ_1(x)), . . . , (x, φ_k(x))
are in S for |x - x_0| ≤ a. We show that (x, φ_{k+1}(x)) lies in S. Indeed,
|φ_{k+1}(x)| ≤ |y_0| + | ∫_{x_0}^{x} f(t, φ_k(t)) dt | ≤ |y_0| + M|x - x_0| + | ∫_{x_0}^{x} [ f(t, φ_k(t)) - f(t, y_0) ] dt |
≤ |y_0| + Ma + K | ∫_{x_0}^{x} ( |φ_k(t)| + |y_0| ) dt | .
Since φ_k(x) is continuous on |x - x_0| ≤ a, we see that |φ_{k+1}(x)| < ∞ for |x - x_0| ≤ a. Hence,
by induction, the points (x, φ_k(x)) are in S.

To show the convergence of the sequence {φ_k} to a limit function φ, we can mimic the proof
of Theorem 3.3, once we note that
|φ_1(x) - φ_0(x)| ≤ | ∫_{x_0}^{x} |f(t, y_0)| dt | ≤ M|x - x_0| .
Next we show that φ is continuous. Observe that, thanks to (3.5) (which holds in this case),
|φ_k(x) - y_0| = | Σ_{i=1}^{k} [ φ_i(x) - φ_{i-1}(x) ] | ≤ Σ_{i=1}^{k} |φ_i(x) - φ_{i-1}(x)| ≤ (M/K) Σ_{i=1}^{k} K^i |x - x_0|^i / i!
≤ (M/K) Σ_{i=1}^{∞} K^i |x - x_0|^i / i! = (M/K)( e^{Ka} - 1 ) := b .
Taking the limit as k → ∞, we obtain
|φ(x) - y_0| ≤ b , (|x - x_0| ≤ a) .
Note that f is continuous on R, where the rectangle R is given by
R := { (x, y) ∈ R^2 : |x - x_0| ≤ a , |y - y_0| ≤ b } ,
and hence there exists a constant N > 0 such that |f(x, y)| ≤ N for (x, y) ∈ R. Let x, x̃ be two
points in the interval |x - x_0| ≤ a. Then
|φ_{k+1}(x) - φ_{k+1}(x̃)| = | ∫_{x̃}^{x} f(t, φ_k(t)) dt | ≤ N|x - x̃|
⟹ |φ(x) - φ(x̃)| ≤ N|x - x̃| .
The rest of the proof is a repetition of the analogous parts of the proof of Theorem 3.3, with α
replaced by a everywhere. □
Example 3.2. Consider the IVP y' = λy + x^2 sin(y), y(0) = 1, where λ is a real constant
such that |λ| ≤ 1. Show that the solution of the given IVP exists for |x| ≤ 1.
Solution: Here f(x, y) = λy + x^2 sin(y). Consider the strip S = { |x| ≤ 1, |y| < ∞ }. Then f is
continuous on S and Lipschitz continuous in y on S, as |∂_y f(x, y)| ≤ 2 on S. Thus, by Theorem 3.4,
the solution of the given problem exists on the entire interval |x| ≤ 1.
In view of Theorem 3.4, we arrive at the following corollary.
Corollary 3.5. Let f be a real-valued continuous function on the plane |x| < ∞, |y| < ∞, which
satisfies a Lipschitz condition on each strip S_a defined by
S_a := { |x| ≤ a , |y| < ∞ } , (a > 0) .
Then every initial value problem
y' = f(x, y) , y(x_0) = y_0
has a solution which exists for all real x.
Proof. For any real number x, there exists a > 0 such that x is contained in the interval
|x - x_0| ≤ a. Consider now the strip
S := { |x - x_0| ≤ a , |y| < ∞ } .
Since S is contained in the strip
S̃ := { |x| ≤ |x_0| + a , |y| < ∞ } ,
f satisfies all the conditions of Theorem 3.4. Thus, {φ_k(x)} tends to φ(x), where φ is a solution
to the initial-value problem. This completes the proof. □

Example 3.3. Consider the equation
y' = h_1(x)p(cos(y)) + h_2(x)q(sin(y)) := f(x, y) ,
where h_1 and h_2 are continuous functions for all real x, and p, q are polynomials. Consider the
strip S_a := { |x| ≤ a , |y| < ∞ }, where a > 0. Note that, since h_1, h_2 are continuous, there
exists N_a > 0 such that |h_i(x)| ≤ N_a for |x| ≤ a. Again, since p, q are polynomials, there exists
a constant C > 0 such that
max{ |p'(ξ)|, |q'(ξ)| : ξ ∈ [-1, 1] } ≤ C.
Let us check that f is Lipschitz continuous in y on the strip S_a. Indeed, for any (x, y_1), (x, y_2) ∈ S_a,
by the mean value theorem,
|f(x, y_1) - f(x, y_2)| ≤ |h_1(x)||p'(ξ_1)|| cos(y_1) - cos(y_2)| + |h_2(x)||q'(ξ_2)|| sin(y_1) - sin(y_2)|
≤ 2 N_a C |y_1 - y_2| .
Thus, thanks to Corollary 3.5, every initial value problem for this equation has a solution which
exists for all real x.
Example 3.4. Show that the IVP y' = cos(y)/(1 - x^2) (|x| < 1), y(0) = 2 has a unique solution for all
|x| < 1.
Solution: The given IVP has a unique solution for all |x| < 1 if we show that f(x, y) = cos(y)/(1 - x^2) is
continuous on each strip S_a := { |x| ≤ a, |y| < ∞ }, 0 < a < 1, and Lipschitz continuous in the
second argument on S_a. It is easy to see that f is Lipschitz continuous in y on S_a with Lipschitz
constant K = 1/(1 - a^2).
Corollary 3.6. Let Ω = R × R, and let f be globally Lipschitz on Ω in its second argument. Let
(x_0, y_0) ∈ Ω be given. Then the Cauchy problem y' = f(x, y), y(x_0) = y_0 has a unique solution
defined on all of R.
Lemma 3.7. If the maximal interval of definition J is not all of R (assuming that f is defined
on R^2), then it cannot be closed.
Proof. By contradiction, let J = [α, β] or (-∞, β] or [α, ∞) for some α, β ∈ R with α < β. We
deal with the first case. Consider the IVP ỹ'(x) = f(x, ỹ) with ỹ(β) = y(β). This problem has
a solution on [β, β + δ] for some δ > 0. Consider the function
ȳ(x) = y(x) for α ≤ x ≤ β , and ȳ(x) = ỹ(x) for β ≤ x ≤ β + δ .
One can easily check that ȳ is a solution of the ODE y' = f(x, y) defined on [α, β + δ],
contradicting the fact that J is the maximal interval of definition. □
Proposition 3.8. If the solution y(x) of y' = f(x, y), where f is continuous and locally Lipschitz
on R^2, is monotone and bounded, then the maximal interval of definition J is all of R.
Proof. By contradiction, suppose J is strictly contained in R; we show that J is then closed.
If β < +∞ is the right endpoint of J, then, since y(x) is monotone and bounded,
lim_{x→β} y(x) exists and is finite. Thus y(x) is defined at x = β, and hence J contains β. Similarly,
if α > -∞ is the left endpoint of J, then J also contains α. In other words, J is closed, a
contradiction to the previous lemma. □
3.2. Continuous dependence estimate: Suppose we have two IVPs
y' = f(x, y) , y(x_0) = y_1 ,    (3.7)
y' = g(x, y) , y(x_0) = y_2 ,    (3.8)
where f and g are both real-valued continuous functions on the rectangle R, and (x_0, y_1), (x_0, y_2)
are points in R.

Theorem 3.9. Let f, g be continuous functions on R, and suppose f satisfies a Lipschitz condition
there with Lipschitz constant K. Let φ, ψ be solutions of (3.7), (3.8) respectively on an interval I
containing x_0, with graphs contained in R. Suppose that the following inequalities are valid:
|f(x, y) - g(x, y)| ≤ ε , (x, y) ∈ R ,    (3.9)
|y_1 - y_2| ≤ δ ,    (3.10)
for some non-negative constants ε, δ. Then
|φ(x) - ψ(x)| ≤ δ e^{K|x - x_0|} + (ε/K)( e^{K|x - x_0|} - 1 ) , ∀ x ∈ I .    (3.11)
Proof. Since φ, ψ are solutions of (3.7), (3.8) respectively on an interval I containing x_0, we see
that
φ(x) - ψ(x) = y_1 - y_2 + ∫_{x_0}^{x} [ f(t, φ(t)) - g(t, ψ(t)) ] dt
= y_1 - y_2 + ∫_{x_0}^{x} [ f(t, φ(t)) - f(t, ψ(t)) ] dt + ∫_{x_0}^{x} [ f(t, ψ(t)) - g(t, ψ(t)) ] dt .
Assume that x ≥ x_0. Then, in view of (3.9), (3.10), and the Lipschitz condition on f with
Lipschitz constant K, we obtain from the above expression
|φ(x) - ψ(x)| ≤ δ + K ∫_{x_0}^{x} |φ(s) - ψ(s)| ds + ε(x - x_0) .    (3.12)
Define E(x) = ∫_{x_0}^{x} |φ(s) - ψ(s)| ds. Then E'(x) = |φ(x) - ψ(x)| and E(x_0) = 0. Therefore,
(3.12) becomes
E'(x) - KE(x) ≤ δ + ε(x - x_0) .
Multiplying this inequality by e^{-K(x - x_0)}, and then integrating from x_0 to x, we have
E(x)e^{-K(x - x_0)} ≤ δ ∫_{x_0}^{x} e^{-K(t - x_0)} dt + ε ∫_{x_0}^{x} (t - x_0)e^{-K(t - x_0)} dt
= (δ/K)[ 1 - e^{-K(x - x_0)} ] + ε/K^2 - (ε/K^2)[ K(x - x_0) + 1 ] e^{-K(x - x_0)} .
Multiplying both sides of this inequality by e^{K(x - x_0)}, we have
E(x) ≤ (δ/K)[ e^{K(x - x_0)} - 1 ] - (ε/K^2)[ K(x - x_0) + 1 ] + (ε/K^2) e^{K(x - x_0)} .
We now use this estimate in (3.12) to arrive at the required result for x ≥ x_0. A similar proof
holds in the case x ≤ x_0. This completes the proof. □
Example 3.5. Consider the IVP

y' = xy + y^{10},  y(0) = 1/10.

We first show that this problem has a solution which exists for all |x| ≤ 1/2. To do so, we consider the rectangle R = {|x| ≤ 1/2, |y − 1/10| ≤ 1/10}. It is easy to see that g(x, y) = xy + y^{10} is continuous on R and |g(x, y)| < 1/5 =: M for all (x, y) ∈ R. Hence the given IVP has a unique solution, say ψ, defined on the interval I = {x : |x| ≤ α := min{1/2, b/M}}, where b = 1/10, so that α = 1/2. Observe that the IVP

y' = f(x, y) := xy,  y(0) = 1/10

has a unique solution, say φ, for |x| ≤ 1/2 such that the graph (x, φ(x)) lies in the rectangle R. The Lipschitz constant K of f on the rectangle R is 1/2, and |f(x, y) − g(x, y)| = |y|^{10} ≤ 1/5^{10} for all (x, y) ∈ R. Hence by the continuous dependence estimate, we have, for all |x| ≤ 1/2,

|φ(x) − ψ(x)| ≤ (2/5^{10}) (e^{|x|/2} − 1).
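The estimate in Example 3.5 can be checked numerically. The sketch below (an illustration, not part of the notes) integrates both IVPs with a hand-rolled classical Runge–Kutta scheme; the step count and grid are arbitrary choices.

```python
import math

def rk4(f, y0, x0, x1, n):
    """Classical RK4 integrator; returns a list of (x, y) samples."""
    h = (x1 - x0) / n
    x, y = x0, y0
    out = [(x, y)]
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
        out.append((x, y))
    return out

g = lambda x, y: x*y + y**10       # right-hand side of the original IVP
f = lambda x, y: x*y               # unperturbed right-hand side

psi = rk4(g, 0.1, 0.0, 0.5, 500)   # solution psi of the original IVP
phi = rk4(f, 0.1, 0.0, 0.5, 500)   # solution phi of y' = xy, y(0) = 1/10

K, eps = 0.5, 5.0 ** (-10)         # Lipschitz constant of f on R; sup |f - g| on R
ok = all(abs(yp - yq) <= (eps / K) * (math.exp(K * abs(x)) - 1) + 1e-12
         for (x, yp), (_, yq) in zip(phi, psi))
print(ok)  # True: the numerical gap stays below the theoretical bound
```

The observed gap is in fact far below the bound, since along the solution |y|¹⁰ ≈ 10⁻¹⁰ while the estimate uses the worst case over R.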
As a consequence of Theorem 3.9, we have:
i) Uniqueness Theorem: Let f be continuous and satisfy a Lipschitz condition on R. If φ and ψ are two solutions of the IVP (3.1) on an interval I containing x0, then φ(x) = ψ(x) for all x ∈ I.
ii) Let f be continuous and satisfy a Lipschitz condition on R, and let gk, k = 1, 2, . . ., be continuous on R such that

|f(x, y) − gk(x, y)| ≤ εk,  (x, y) ∈ R,

with εk → 0 as k → ∞. Let yk → y0 as k → ∞. Let φk be a solution to the IVP

y' = gk(x, y),  y(x0) = yk,

and let φ be a solution to the IVP (3.1) on some interval I containing x0. Then φk(x) → φ(x) on I.
Remark 3.2. The Lipschitz condition on f on the rectangle R cannot simply be dropped if we want uniqueness of the solution of the IVP (3.1). To see this, consider the IVP

y' = 3y^{2/3},  y(0) = 0.

It is easy to check that the function ψk, for any positive number k,

ψk(x) = { 0,          −∞ < x ≤ k,
        { (x − k)³,   k ≤ x < ∞,

is a solution of the above IVP. So there are infinitely many solutions on any rectangle R containing the origin. But note that f does NOT satisfy a Lipschitz condition on R.
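The non-uniqueness claim is easy to verify by direct substitution. The sketch below (an illustration, not part of the notes) checks on a sample grid that ψk satisfies both the equation and the initial condition for two sample values of k; the grid and the values of k are arbitrary choices.

```python
# psi_k(x) = 0 for x <= k and (x - k)^3 for x >= k solves y' = 3 y^(2/3), y(0) = 0.
def psi(k, x):
    return 0.0 if x <= k else (x - k) ** 3

def dpsi(k, x):  # derivative of psi_k
    return 0.0 if x <= k else 3 * (x - k) ** 2

rhs = lambda y: 3 * y ** (2 / 3)   # right-hand side f(x, y) = 3 y^(2/3), y >= 0 here

xs = [i / 10 for i in range(-20, 21)]
for k in (1.0, 0.5):               # any k > 0 gives a solution through (0, 0)
    assert psi(k, 0.0) == 0.0      # initial condition holds
    assert all(abs(dpsi(k, x) - rhs(psi(k, x))) < 1e-12 for x in xs)
print("both candidates satisfy the ODE and the initial data")
```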
Remark 3.3. We have shown the existence of a solution of the IVP under a stronger condition, namely Lipschitzness of the function f. But one can relax the Lipschitzness and guarantee the existence of a solution of the IVP under the continuity assumption on f alone. This is the content of Peano's theorem.
Theorem 3.10 (Peano Theorem). If the function f(x, y) is continuous on a rectangle R and if (x0, y0) ∈ R, then the IVP y' = f(x, y) with y(x0) = y0 has a solution in a neighborhood of x0.

4. Existence and uniqueness for systems and higher order equations:

Consider the system of differential equations in normal form

y1' = f1(x, y1, y2, . . . , yn)
y2' = f2(x, y1, y2, . . . , yn)
  ...                                        (4.1)
yn' = fn(x, y1, y2, . . . , yn)

Let Ω̃ ⊂ Rⁿ, and Ω = I × Ω̃. We introduce

~y = (y1, y2, . . . , yn)ᵀ ∈ Rⁿ;   ~f(x, ~y) = (f1(x, ~y), f2(x, ~y), . . . , fn(x, ~y))ᵀ ∈ Rⁿ.

Then ~f : Ω → Rⁿ, and the system of equations (4.1) can be written in the compact form

~y' = ~f(x, ~y).

An n-th order ODE y⁽ⁿ⁾ = f(x, y, y', . . . , y⁽ⁿ⁻¹⁾) may also be treated as a system of the type (4.1). To see this, let y1 = y, y2 = y', . . . , yn = y⁽ⁿ⁻¹⁾. Then from the ODE we have

y1' = y2;  y2' = y3;  . . . ;  y_{n−1}' = yn;
yn' = f(x, y1, y2, . . . , yn).        (4.2)

Example 4.1. Consider the initial value problem y⁽³⁾ + 2y' − (y')³ + y = x² + 1 with y(0) = 0, y'(0) = 1, y''(0) = 1. We want to convert it into a system of equations. Let y1 = y, y2 = y' and y3 = y''. Note that y' = y1' = y2. Using these, the required system takes the form

y1' = y2;  y2' = y3;  y3' = y2³ − 2y2 − y1 + x² + 1,
y1(0) = 0;  y2(0) = 1;  y3(0) = 1.        (4.3)
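This reduction is exactly what one implements when solving higher-order equations numerically. A minimal sketch of integrating the system (4.3), with a hand-rolled RK4 stepper and an arbitrarily chosen step size:

```python
def F(x, y):  # y = (y1, y2, y3) = (y, y', y''), right-hand side of system (4.3)
    y1, y2, y3 = y
    return (y2, y3, y2**3 - 2*y2 - y1 + x**2 + 1)

def rk4_step(f, x, y, h):
    """One classical RK4 step for a vector-valued right-hand side."""
    k1 = f(x, y)
    k2 = f(x + h/2, tuple(yi + h*ki/2 for yi, ki in zip(y, k1)))
    k3 = f(x + h/2, tuple(yi + h*ki/2 for yi, ki in zip(y, k2)))
    k4 = f(x + h, tuple(yi + h*ki for yi, ki in zip(y, k3)))
    return tuple(yi + h*(a + 2*b + 2*c + d)/6
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

x, y, h = 0.0, (0.0, 1.0, 1.0), 0.01   # initial data y(0)=0, y'(0)=1, y''(0)=1
for _ in range(100):                    # march to x = 1
    y = rk4_step(F, x, y, h)
    x += h
print(f"y(1) is approximately {y[0]:.6f}")
```

The first component of the state vector recovers the original unknown y.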

Definition 4.1. A solution ~φ = (φ1, φ2, . . . , φn) of the system ~y' = ~f(x, ~y) is a differentiable function on a real interval I such that
i) (x, ~φ(x)) ∈ Ω for all x ∈ I;
ii) ~φ'(x) = ~f(x, ~φ(x)) for all x ∈ I.

Theorem 4.1 (Local existence). Let ~f be a continuous vector-valued function defined on

R = {|x − x0| ≤ a, |~y − ~y0| ≤ b},  a, b > 0,

and suppose ~f satisfies a Lipschitz condition on R. Then the successive approximations {~φk}_{k=0}^{∞},

~φ0(x) = ~y0,
~φ_{k+1}(x) = ~y0 + ∫_{x0}^{x} ~f(s, ~φk(s)) ds,  k = 0, 1, 2, . . . ,

converge on the interval Icon = {|x − x0| ≤ α := min{a, b/M}} to a solution ~φ of the IVP ~y' = ~f(x, ~y); ~y(x0) = ~y0 on Icon, where M is a positive constant such that |~f(x, ~y)| ≤ M for all (x, ~y) ∈ R. Moreover,

|~φk(x) − ~φ(x)| ≤ (M/K) ((Kα)^{k+1}/(k+1)!) e^{Kα}  ∀x ∈ Icon,

where K is a Lipschitz constant of ~f on R.
Example 4.2. Consider the problem

y1' = y2,   y1(0) = 0,
y2' = −y1,  y2(0) = 1.

This can be written in the compact form ~y' = ~f(x, ~y), ~y(0) = ~y0 = (0, 1), where ~f(x, ~y) = (y2, −y1). Let us calculate the successive approximations ~φk(x):

~φ0(x) = ~y0 = (0, 1),
~φ1(x) = (0, 1) + ∫_0^x (1, 0) ds = (x, 1),
~φ2(x) = (0, 1) + ∫_0^x ~f(s, ~φ1(s)) ds = (0, 1) + ∫_0^x (1, −s) ds = (0, 1) + (x, −x²/2) = (x, 1 − x²/2),
~φ3(x) = (0, 1) + ∫_0^x (1 − s²/2, −s) ds = (x − x³/3!, 1 − x²/2),
~φ4(x) = (0, 1) + ∫_0^x (1 − s²/2, −s + s³/3!) ds = (x − x³/3!, 1 − x²/2 + x⁴/4!).

It is not difficult to show that all the ~φk exist for all real x and ~φk(x) → (sin(x), cos(x)). Thus, the unique solution of the given IVP is ~φ(x) = (sin(x), cos(x)).
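The hand computation above can be reproduced numerically: the sketch below (an illustration, not part of the notes) runs the Picard iteration for ~f(x, (u, v)) = (v, −u) on a grid over [0, 1/2], approximating each integral with the trapezoidal rule; grid size and iteration count are arbitrary choices.

```python
import math

N = 2000
xs = [i * 0.5 / N for i in range(N + 1)]          # grid on [0, 1/2]

def picard_step(phi):
    """One Picard iteration: phi_{k+1}(x) = y0 + integral_0^x f(s, phi_k(s)) ds,
    with f(x, (u, v)) = (v, -u), via the cumulative trapezoidal rule."""
    f = [(v, -u) for (u, v) in phi]
    out, Iu, Iv = [(0.0, 1.0)], 0.0, 0.0
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        Iu += h * (f[i - 1][0] + f[i][0]) / 2
        Iv += h * (f[i - 1][1] + f[i][1]) / 2
        out.append((0.0 + Iu, 1.0 + Iv))
    return out

phi = [(0.0, 1.0)] * (N + 1)                      # phi_0 == y0 = (0, 1)
for _ in range(8):                                # eight Picard iterations
    phi = picard_step(phi)

err = max(max(abs(u - math.sin(x)), abs(v - math.cos(x)))
          for x, (u, v) in zip(xs, phi))
print(err)  # small: the iterates approach (sin x, cos x)
```

With K = 1 and α = 1/2, the error bound of Theorem 4.1 after eight iterations is of order (1/2)⁹/9!, consistent with the tiny value printed.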
Theorem 4.2 (Non-local existence). Let ~f be a continuous vector-valued function defined on

S = {|x − x0| ≤ a, |~y| < ∞},  a > 0,

and satisfy there a Lipschitz condition. Then the successive approximations {~φk}_{k=0}^{∞} for the IVP ~y' = ~f(x, ~y); ~y(x0) = ~y0 exist on |x − x0| ≤ a, and converge there to a solution of the IVP.
Corollary 4.3. Let ~f be a continuous vector-valued function defined on |x| < ∞, |~y| < ∞, satisfying a Lipschitz condition on each strip

Sa = {|x| ≤ a, |~y| < ∞},  a > 0.

Then every initial value problem ~y' = ~f(x, ~y); ~y(x0) = ~y0 has a solution which exists for all x ∈ R.
Example 4.3. Consider the system

y1' = 3y1 + xy3,   y2' = y2 + x³y3,   y3' = 2xy1 − y2 + e^{−x} y3.

This system of equations can be written in the compact form ~y' = ~f(x, ~y), where

~y = (y1, y2, y3)ᵀ,  and  ~f(x, ~y) = (3y1 + xy3, y2 + x³y3, 2xy1 − y2 + e^{−x} y3)ᵀ.

Note that ~f is a continuous vector-valued function defined on |x| < ∞, |~y| < ∞. It is Lipschitz continuous on each strip Sa = {|x| ≤ a, |~y| < ∞}, a > 0, since for (x, ~y), (x, ~ỹ) ∈ Sa,

|~f(x, ~y) − ~f(x, ~ỹ)| = |3(y1 − ỹ1) + x(y3 − ỹ3)| + |(y2 − ỹ2) + x³(y3 − ỹ3)| + |2x(y1 − ỹ1) − (y2 − ỹ2) + e^{−x}(y3 − ỹ3)|
 ≤ (3 + 2|x|)|y1 − ỹ1| + 2|y2 − ỹ2| + (|x| + e^{|x|} + |x|³)|y3 − ỹ3|
 ≤ (5 + 3a + e^a + a³)|~y − ~ỹ|.

Therefore, every initial value problem for this system has a solution which exists for all real x. Moreover, the solution is unique.
Example 4.4. For any Lipschitz continuous function f on R, consider the IVP

y''(x) = f(y),  y(0) = 0,  y'(0) = 0.

Then the solution of the above IVP is even. Indeed, since f is Lipschitz continuous, by writing the above IVP in vector form, one can show that the problem has a unique solution y(x) defined on the whole real line. Let z(x) = y(−x). Note that z(x) satisfies the above IVP: z''(x) = y''(−x) = f(y(−x)) = f(z(x)), z(0) = 0 and z'(0) = −y'(0) = 0. Hence by uniqueness, y(−x) = z(x) = y(x) for all x ∈ R. In other words, y(x) is even in x.
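This evenness can be illustrated numerically for one concrete Lipschitz choice, say f(y) = cos y (an assumption made here purely for illustration): integrating the equivalent first-order system to the right and to the left of 0 produces mirror-image values.

```python
import math

def F(y):  # first-order system for (y, y'): derivative is (y', cos y)
    return (y[1], math.cos(y[0]))

def rk4(h, n):
    """Integrate from x = 0 with step h (a negative h integrates to the left)."""
    y = (0.0, 0.0)                       # initial data y(0) = 0, y'(0) = 0
    for _ in range(n):
        k1 = F(y)
        k2 = F((y[0] + h*k1[0]/2, y[1] + h*k1[1]/2))
        k3 = F((y[0] + h*k2[0]/2, y[1] + h*k2[1]/2))
        k4 = F((y[0] + h*k3[0], y[1] + h*k3[1]))
        y = (y[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
             y[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)
    return y[0]

right = rk4(0.001, 2000)    # approximation of y(2)
left = rk4(-0.001, 2000)    # approximation of y(-2)
print(abs(right - left))    # essentially 0: the solution is even
```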
Like in the first order (scalar-valued) ODE case, we have the following continuous dependence estimate and uniqueness theorem.
Theorem 4.4 (Continuous dependence estimate). Let ~f, ~g be two continuous vector-valued functions defined on a rectangle

R = {|x − x0| ≤ a, |~y − ~y0| ≤ b},  a, b > 0,

and suppose ~f satisfies a Lipschitz condition on R with Lipschitz constant K. Let ~φ, ~ψ be solutions of the problems ~y' = ~f(x, ~y), ~y(x0) = ~y1 and ~y' = ~g(x, ~y), ~y(x0) = ~y2 respectively on some interval I containing x0. If, for some ε, δ ≥ 0,

|~y1 − ~y2| ≤ δ,   |~f(x, ~y) − ~g(x, ~y)| ≤ ε  ∀(x, ~y) ∈ R,

then

|~φ(x) − ~ψ(x)| ≤ δ e^{K|x − x0|} + (ε/K)(e^{K|x − x0|} − 1)  ∀x ∈ I.

In particular, the problem ~y' = ~f(x, ~y), ~y(x0) = ~y0 has at most one solution on any interval containing x0.
Theorem 4.5 (Gronwall's lemma: II). Let u(t), p(t) and q(t) be non-negative continuous functions defined on the interval I = [a, b]. Suppose the following inequality holds:

u(t) ≤ p(t) + ∫_a^t q(s)u(s) ds,  t ∈ I.

Then we have

u(t) ≤ p(t) + ∫_a^t p(τ)q(τ) exp(∫_τ^t q(s) ds) dτ,  t ∈ I.

Proof. Let r(t) = ∫_a^t q(s)u(s) ds. Then r(·) is differentiable in (a, b) with r'(t) = q(t)u(t) and r(a) = 0. Hence, by the given condition, one has

r'(t) = q(t)u(t) ≤ q(t){p(t) + r(t)}
⟹ r'(t) − q(t)r(t) ≤ p(t)q(t)
⟹ (r'(t) − q(t)r(t)) exp(−∫_a^t q(s) ds) ≤ p(t)q(t) exp(−∫_a^t q(s) ds)
⟹ d/dt [ r(t) exp(−∫_a^t q(s) ds) ] ≤ p(t)q(t) exp(−∫_a^t q(s) ds)
⟹ r(t) exp(−∫_a^t q(s) ds) ≤ ∫_a^t p(τ)q(τ) exp(−∫_a^τ q(s) ds) dτ
⟹ r(t) ≤ ∫_a^t p(τ)q(τ) exp(∫_a^t q(s) ds − ∫_a^τ q(s) ds) dτ
⟹ r(t) ≤ ∫_a^t p(τ)q(τ) exp(∫_τ^t q(s) ds) dτ.

Hence, we get

u(t) ≤ p(t) + r(t) ≤ p(t) + ∫_a^t p(τ)q(τ) exp(∫_τ^t q(s) ds) dτ.  □


Corollary 4.6. Let u(t), p(t) and q(t) be non-negative continuous functions defined on the interval I = [a, b]. Suppose the following inequality holds:

u(t) ≤ p(t) + ∫_a^t q(s)u(s) ds,  t ∈ I.

Assume that p(·) is non-decreasing. Then

u(t) ≤ p(t) exp(∫_a^t q(s) ds),  t ∈ [a, b].

Proof. From Theorem 4.5 and the non-decreasing property of p(t), we have

u(t) ≤ p(t) + ∫_a^t p(τ)q(τ) exp(∫_τ^t q(s) ds) dτ
     ≤ p(t){1 + ∫_a^t q(τ) exp(∫_τ^t q(s) ds) dτ}
     = p(t){1 − ∫_a^t (d/dτ) exp(∫_τ^t q(s) ds) dτ} = p(t) exp(∫_a^t q(s) ds).  □
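A quick numerical sanity check of Corollary 4.6 (a sketch with arbitrarily chosen test data, not part of the notes): for u(t) = e^{2t}, p ≡ 1, q ≡ 2 on [0, 1], the hypothesis u(t) = 1 + ∫_0^t 2u(s) ds holds with equality, and the conclusion u(t) ≤ e^{2t} holds with equality as well.

```python
import math

u = lambda t: math.exp(2 * t)

def integral(f, a, b, n=2000):   # composite trapezoidal rule
    h = (b - a) / n
    return h * (f(a)/2 + f(b)/2 + sum(f(a + i*h) for i in range(1, n)))

for t in (0.25, 0.5, 1.0):
    lhs = u(t)
    hyp = 1 + integral(lambda s: 2 * u(s), 0, t)       # p(t) + integral of q*u
    bound = 1 * math.exp(integral(lambda s: 2.0, 0, t))  # p(t) * exp(integral of q)
    assert abs(lhs - hyp) < 1e-5        # the hypothesis holds (with equality)
    assert lhs <= bound + 1e-5          # the Gronwall bound holds
print("Gronwall bound verified on the test data")
```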
a d⌧ ⌧ a

Example 4.5. Consider the system

y1' = y1 + εy2,   y2' = εy1 + y2,        (4.4)

where ε is a positive constant.
i) Writing the system in vector form, we see that the vector-valued function ~f_ε(x, ~y) = (y1 + εy2, εy1 + y2) is continuous and also Lipschitz continuous on each strip Sa with Lipschitz constant Ka = 1 + ε. Thus, in view of Corollary 4.3, every initial value problem for (4.4) has a solution, which exists for all real x.
ii) Let ~φ_ε be the solution of (4.4) satisfying ~φ_ε(0) = (1, 1), and let ~ψ be the solution of y1' = y1, y2' = y2 (with right-hand side ~g(x, ~y) = (y1, y2)) satisfying ~ψ(0) = (1, 1). Then, for each real x ≥ 0, by using the Lipschitz condition of ~f_ε, we get that

|~φ_ε(x) − ~ψ(x)| = |∫_0^x ~f_ε(s, ~φ_ε(s)) ds − ∫_0^x ~g(s, ~ψ(s)) ds|
 = |∫_0^x [~f_ε(s, ~φ_ε(s)) − ~f_ε(s, ~ψ(s))] ds + ∫_0^x [~f_ε(s, ~ψ(s)) − ~g(s, ~ψ(s))] ds|
 ≤ ∫_0^x |~f_ε(s, ~φ_ε(s)) − ~f_ε(s, ~ψ(s))| ds + ∫_0^x |~f_ε(s, ~ψ(s)) − ~g(s, ~ψ(s))| ds
 ≤ (1 + ε) ∫_0^x |~φ_ε(s) − ~ψ(s)| ds + ε ∫_0^x |~ψ(s)| ds.

An application of Gronwall's lemma then implies

|~φ_ε(x) − ~ψ(x)| ≤ ε e^{(1+ε)x} ∫_0^x |~ψ(s)| ds → 0 as ε → 0.
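For this system the comparison can be made fully explicit: since (1, 1) is an eigenvector of the coefficient matrix of (4.4), ~φ_ε(x) = e^{(1+ε)x}(1, 1) while ~ψ(x) = e^x(1, 1). The sketch below (an illustration, not part of the notes) checks the componentwise gap against the Gronwall bound at x = 1 for a few values of ε.

```python
import math

x = 1.0
psi = math.exp(x)                      # each component of the unperturbed solution
for eps in (0.1, 0.01, 0.001):
    gap = abs(math.exp((1 + eps) * x) - psi)                   # |phi_eps(x) - psi(x)| per component
    bound = eps * math.exp((1 + eps) * x) * (math.exp(x) - 1)  # eps e^{(1+eps)x} * integral of e^s
    print(eps, gap, gap <= bound)      # the gap obeys the bound and shrinks with eps
```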

4.1. Existence and uniqueness for linear systems: Consider the linear system ~y' = ~f(x, ~y), where the components of ~f are given by

fj(x, ~y) = Σ_{k=1}^{n} ajk(x) yk + bj(x),  j = 1, 2, . . . , n,

and the functions ajk, bj are continuous on an interval I containing x0. Now consider the strip Sa = {|x − x0| ≤ a, |~y| < ∞}, and suppose ajk, bj are continuous on |x − x0| ≤ a. Then there exists a constant K > 0 such that Σ_{j=1}^{n} |ajk(x)| ≤ K for all k = 1, 2, . . . , n, and for all x satisfying |x − x0| ≤ a. Note that

|∂~f/∂yk (x, ~y)| = |(a1k(x), a2k(x), . . . , ank(x))| = Σ_{j=1}^{n} |ajk(x)| ≤ K.

Thus, ~f satisfies a Lipschitz condition on Sa with Lipschitz constant K. Hence we arrive at the following theorem.
Theorem 4.7. Let ~y' = ~f(x, ~y) be a linear system as described above. If ~y0 ∈ Rⁿ, there exists one and only one solution ~φ of the IVP ~y' = ~f(x, ~y), ~y(x0) = ~y0.
Note that the linear system can be written in the equivalent form

~y'(x) = A(x)~y(x) + ~b(x),

where A(x) is the n × n matrix with entries ajk(x),

A(x) = ( a11(x)  a12(x)  . . .  a1n(x)
         a21(x)  a22(x)  . . .  a2n(x)
           ...     ...   . . .    ...
         an1(x)  an2(x)  . . .  ann(x) ),   and   ~b(x) = (b1(x), b2(x), . . . , bn(x))ᵀ.
Example 4.6. Consider the homogeneous linear system yj' = Σ_{k=1}^{n} ajk(x) yk (j = 1, 2, . . . , n), where the ajk are continuous on some interval I. Then it is easy to see that ~φ = ~0 is a solution; this is called the trivial solution. Let K be such that Σ_{j=1}^{n} |ajk(x)| ≤ K. Let x0 ∈ I, and let ~φ be any solution of the linear homogeneous system. Now consider the two IVPs

~y' = ~f(x, ~y), ~y(x0) = ~0;   ~y' = ~f(x, ~y), ~y(x0) = ~φ(x0),

where the components of ~f are fj(x, ~y) = Σ_{k=1}^{n} ajk(x) yk. Then, according to the continuous dependence estimate theorem (with ε = 0),

|~φ(x)| = |~φ(x) − ~0| ≤ |~φ(x0) − ~0| e^{K|x − x0|} = |~φ(x0)| e^{K|x − x0|}  ∀x ∈ I.
For linear equations of order n, we have non-local existence.
Theorem 4.8. Let a0, a1, . . . , a_{n−1} and b be continuous real-valued functions on an interval I containing x0. If α0, α1, . . . , α_{n−1} are any n constants, there exists one and only one solution φ of the ODE

y⁽ⁿ⁾ + a_{n−1}(x) y⁽ⁿ⁻¹⁾ + . . . + a0(x) y = b(x) on I,
y(x0) = α0,  y'(x0) = α1,  . . . ,  y⁽ⁿ⁻¹⁾(x0) = α_{n−1}.

Proof. Let ~y0 = (α0, α1, . . . , α_{n−1}). The given ODE can be written as a system of linear equations

yj' = y_{j+1} (j = 1, 2, . . . , n − 1);   yn' = b(x) − a_{n−1}(x) yn − . . . − a0(x) y1.

Then, according to Theorem 4.7, the above problem has a unique solution ~φ = (φ1, φ2, . . . , φn) on I satisfying φ1(x0) = α0, φ2(x0) = α1, . . . , φn(x0) = α_{n−1}. But since

φ2 = φ1', φ3 = φ2' = φ1'', . . . , φn = φ1⁽ⁿ⁻¹⁾,

the function φ1 is the required solution on I. □
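Numerically, this reduction is the standard recipe for linear equations of order n. A sketch (the equation and step size are arbitrary choices, not from the notes) for y'' + 3y' + 2y = 0, y(0) = 1, y'(0) = 0, whose closed-form solution is y(x) = 2e^{−x} − e^{−2x}:

```python
import math

def F(y):  # companion system: y1' = y2, y2' = -3 y2 - 2 y1
    return (y[1], -3*y[1] - 2*y[0])

y, h = (1.0, 0.0), 0.001                    # initial data y(0) = 1, y'(0) = 0
for _ in range(1000):                       # RK4 march to x = 1
    k1 = F(y)
    k2 = F((y[0] + h*k1[0]/2, y[1] + h*k1[1]/2))
    k3 = F((y[0] + h*k2[0]/2, y[1] + h*k2[1]/2))
    k4 = F((y[0] + h*k3[0], y[1] + h*k3[1]))
    y = (y[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
         y[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

exact = 2*math.exp(-1) - math.exp(-2)       # y(1) from the closed form
print(abs(y[0] - exact))                    # essentially 0
```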
