Lectures 1-6
1. Introduction: ODEs
A differential equation is an equation involving an unknown function and its derivatives. In
general the unknown function may depend on several variables and the equation may include
various partial derivatives.
Definition 1.1. A differential equation involving only ordinary derivatives is called an ordinary
differential equation (ODE).
• A most general ODE has the form
F(x, y, y', . . . , y^(n)) = 0 ,    (1.1)
where F is a given function of (n + 2) variables and y = y(x) is an unknown function of
a real variable x.
• The maximum order n of the derivative y^(n) in (1.1) is called the order of the ODE.
Applications: Differential equations play a central role not only in mathematics but also in
almost all areas of science and engineering, economics, and the social sciences:
• Flow of current in a conductor: Consider an RC circuit with resistance R and capacitance
C with no external current source. Let x(t) be the capacitor voltage and I(t) the current
circulating in the circuit. Then, according to Kirchhoff's law, R I(t) + x(t) = 0. Moreover,
the constitutive law of the capacitor yields I(t) = C dx(t)/dt. Hence, we get the first order
differential equation
x'(t) + x(t)/(RC) = 0 .
• Population dynamics: Let x(t) be the number of individuals of a population at time t,
b the birth rate of the population and d the death rate of the population. Then,
according to the simple "Malthus model", the growth rate of the population is proportional
to the number of newborn individuals minus the number of deaths. Hence we get the
first order ODE
x'(t) = kx(t) , where k = b - d .
• An example of a second order equation is y'' + y = 0, which arises naturally in the study
of electrical and mechanical oscillations.
• Other examples: the motion of a missile, the behaviour of a mixture, the spread of a disease, etc.
Definition 1.2. A function y : I → R, where I ⊂ R is an open interval, is said to be a solution
of the n-th order ODE if it is n-times differentiable and satisfies (1.1) for all x ∈ I.
Example 1.1. Consider the ODE y' = y. Let us first find all positive solutions. Note that
y'/y = (ln y)' and therefore we obtain (ln y)' = 1. Thus, this implies that
ln y = x + C  ⟹  y = C₁ eˣ , where C₁ = e^C > 0, C ∈ R.
2 A. K. MAJEE
If y(x) < 0 for all x, then use y'/y = (ln(-y))' and obtain y = -C₁ eˣ where C₁ > 0. Combining
these two cases together, we obtain that any solution y(x) has the form
y(x) = C eˣ , C ∈ R.
□
Example 2.1. Find the general solution of x'(t) + 4t x(t) = 8t.
Here α(t) = 4t, and hence we have A(t) = 2t². Therefore, using the method of variation of
parameters, the general solution of the given ODE is given by
x(t) = e^{-2t²} [ C + ∫ 8t e^{2t²} dt ] = e^{-2t²} [ C + 2 e^{2t²} ] = 2 + C e^{-2t²} .
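Since the notes arrive at an explicit formula, it can be sanity-checked numerically. A minimal Python sketch (all function names are ours, not part of the notes) that verifies x(t) = 2 + Ce^{-2t²} solves x'(t) + 4t x(t) = 8t:

```python
import math

# Check the general solution of Example 2.1 by comparing a
# central-difference derivative against the ODE's right-hand side.

def x(t, C):
    return 2.0 + C * math.exp(-2.0 * t * t)

def residual(t, C, h=1e-6):
    dx = (x(t + h, C) - x(t - h, C)) / (2.0 * h)   # x'(t), approximately
    return dx + 4.0 * t * x(t, C) - 8.0 * t         # should be ~ 0

for C in (-3.0, 0.5, 2.0):
    for t in (-1.0, 0.0, 0.7, 1.5):
        assert abs(residual(t, C)) < 1e-4
```

The residual vanishes (up to discretisation error) for every choice of the constant C, as the formula predicts.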
ODES AND PDES 3
1st order linear IVP: Suppose we are interested in solving the initial value problem
y' + α(x) y = b(x) , y(x₀) = y₀ , where x₀ ∈ I.
From the previous calculation, we see that u' = b e^A with A' = α. Integrating from x₀ to x and
taking A(x) = ∫_{x₀}^{x} α(s) ds, we have
u(x) - u(x₀) = ∫_{x₀}^{x} b(s) e^{∫_{x₀}^{s} α(r) dr} ds .
Example 2.2. Find the solution of x'(t) + k x(t) = h, x(0) = x₀, where h and k are constants.
This equation arises in the RC circuit when there is a generator of constant voltage h.
Solution: Using (2.3), we get
x(t) = e^{-kt} [ x₀ + ∫₀ᵗ h e^{ks} ds ] = e^{-kt} [ x₀ + h (e^{kt} - 1)/k ] = ( x₀ - h/k ) e^{-kt} + h/k .
Notice that x(t) → h/k as t → ∞, from below if x₀ < h/k, and from above if x₀ > h/k. Moreover, the
capacitor voltage x(t) does not decay to 0 but tends to the constant voltage h/k.
2.2. General 1st order ODEs. We consider the general first order ODE of the form
y' = f̄(x, y) ,
where f̄ is some continuous function. We have seen that if f̄(x, y) = -α(x) y + b(x) for some
α, b ∈ C(I), then the above ODE has a solution in explicit form (cf. the method of variation of
parameters).
2.2.1. Equations with variables separated: Let us consider a separable ODE
y' = f(x) g(y) ,    (2.4)
where f and g are given continuous functions with f(x) ≠ 0. If y = k is any zero of g, then
y(x) = k is a constant solution of (2.4). On the other hand, if y(x) = k is a constant solution,
then g(k) = 0. Therefore y(x) = k is a constant solution if and only if g(k) = 0.
• y(x) = k is called an equilibrium solution if and only if g(k) = 0.
Hence if y(x) is a non-constant solution, then g(y(x)) ≠ 0 for any x. Any separable equation
can be solved by means of the following theorem.
Theorem 2.3 (The method of separation of variables). Let f and g be continuous functions
on some intervals I and J respectively such that g ≠ 0 on J. Let F resp. G be a primitive
function of f resp. 1/g on I resp. J. Then a function y defined on some subinterval of I solves
the equation (2.4) if and only if
G(y(x)) = F(x) + C ,    (2.5)
for all x in the domain of y, where C is a real constant.
Proof. Let y(x) solve (2.4). Since F' = f and G' = 1/g, the equation (2.4) is equivalent to
y' G'(y) = F'(x)  ⟹  (G(y(x)))' = F'(x)  ⟹  G(y(x)) = F(x) + C .
Conversely, if a function y satisfies (2.5) and is known to be differentiable in its domain, then
differentiating (2.5) in x, we obtain y' G'(y) = F'(x). Arguing backwards, we arrive at (2.4).
Let us show that y is differentiable. Since g(y) ≠ 0, either g(y) > 0 or g(y) < 0 in the whole
domain. Then G is either strictly increasing or strictly decreasing in the whole domain. In both
cases, the inverse function G⁻¹ is well-defined and differentiable. It follows from (2.5) that
y(x) = G⁻¹( F(x) + C ) .
Since both F and G⁻¹ are differentiable, we conclude that y is differentiable. □
Example 2.3. Consider the ODE
y'(x) = y(x) , x ∈ R , y(x) > 0 .
Then f(x) ≡ 1 and g(y) = y ≠ 0. Note that F(x) = x and G(y) = log(y). The equation (2.5)
becomes
log(y) = x + C  ⟹  y(x) = C₁ eˣ ,
where C₁ = e^C is any positive constant.
Example 2.4. Consider the equation
y' = √|y| ,
which is defined for all y ∈ R. Note that y = 0 is a trivial solution. In the domains y > 0 and
y < 0, the equation can be solved using separation of variables. In the domain y > 0, we obtain
∫ dy/√y = ∫ dx  ⟹  2√y = x + C  ⟹  y = (1/4)(x + C)² .
Since y > 0, we must have x > -C, which follows from the second expression. Similarly, in the
domain y < 0, we have
y = -(1/4)(x + C)² , x < -C .
We see that the integral curves in the domain y > 0 touch the curve y = 0 and so do the integral
curves in the domain y < 0.
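The parabolic branches found above can be checked directly; the following sketch (ours, illustrative) confirms that y(x) = (x + C)²/4 satisfies y' = √|y| on x > -C:

```python
import math

# On x > -C the explicit branch y(x) = (x + C)^2 / 4 should satisfy
# y' = sqrt(|y|); we compare a central-difference derivative with the RHS.

def y(x, C):
    return (x + C) ** 2 / 4.0

def residual(x, C, h=1e-6):
    dy = (y(x + h, C) - y(x - h, C)) / (2.0 * h)
    return dy - math.sqrt(abs(y(x, C)))

C = 1.0
for x in (-0.5, 0.0, 2.0, 5.0):   # all points satisfy x > -C = -1
    assert abs(residual(x, C)) < 1e-6
```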
Example 2.5. The logistic equation
y'(x) = y(x) ( α - β y(x) ) , α, β > 0 .
In this model, y(x) represents the population of some species and therefore y(x) ≥ 0. Note that
y(x) = 0 and y(x) = α/β are two equilibrium solutions. Such solutions play an important role
in analyzing the trajectories of solutions in general. In order to solve the logistic equation, we
separate the variables and obtain (assuming that y ≠ 0 and y ≠ α/β)
dy/( y(α - βy) ) = dx  ⟹  (1/α) dy/y + (β/α) dy/(α - βy) = dx  ⟹  (1/α) log|y| - (1/α) log|α - βy| = x + c
⟹  (1/α) log| y/(α - βy) | = x + c  ⟹  | y/(α - βy) | = k e^{αx} ,
where k = e^{cα}. This is a general solution in implicit form. To solve for y, consider the case
0 < y(x) < α/β. Then y(x) = α k e^{αx} / (1 + β k e^{αx}). For the case y > α/β, the solution takes the form
y(x) = α k e^{αx} / (β k e^{αx} - 1). In any case, lim_{x→∞} y(x) = α/β. This shows that all non-constant solutions
approach the equilibrium solution y(x) = α/β as x → ∞, some from above the line y = α/β and
others from below.
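The explicit logistic solution and its limiting behaviour can be verified numerically; a short sketch (ours; the parameter values α = 2, β = 1/2, k = 1 are our own choices):

```python
import math

# Explicit logistic solution in the regime 0 < y < alpha/beta:
# y(x) = alpha*k*exp(alpha*x) / (1 + beta*k*exp(alpha*x)).

alpha, beta, k = 2.0, 0.5, 1.0

def y(x):
    e = k * math.exp(alpha * x)
    return alpha * e / (1.0 + beta * e)

def residual(x, h=1e-6):
    dy = (y(x + h) - y(x - h)) / (2.0 * h)
    return dy - y(x) * (alpha - beta * y(x))      # y' - y(alpha - beta*y)

for x in (-1.0, 0.0, 1.0, 2.0):
    assert abs(residual(x)) < 1e-4

# the non-constant solution approaches the equilibrium alpha/beta
assert abs(y(20.0) - alpha / beta) < 1e-9
```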
2.2.2. Exact equations: Suppose that the first order equation y' = f̄(x, y) is written in the
form
M(x, y) + N(x, y) y' = 0 ,    (2.6)
where M, N are real-valued functions defined for real x, y on some domain Ω.
Definition 2.2. We say that the equation (2.6) is exact in Ω if there exists a function F having
continuous first partial derivatives such that
∂F/∂x = M , ∂F/∂y = N .    (2.7)
Theorem 2.4. Suppose the equation (2.6) is exact in a domain Ω ⊂ R², i.e., there exists F
such that ∂F/∂x = M, ∂F/∂y = N in Ω. Then every continuously differentiable function φ defined
implicitly by a relation
F(x, φ(x)) = c (c = constant) ,
is a solution of (2.6), and every solution of (2.6) whose graph lies in Ω arises in this way.
Proof. Under the assumptions of the theorem, equation (2.6) becomes
∂F/∂x (x, y) + ∂F/∂y (x, y) y' = 0 .
If φ is any solution on some interval I, then
∂F/∂x (x, φ(x)) + ∂F/∂y (x, φ(x)) φ'(x) = 0 , ∀x ∈ I .    (2.8)
If Φ(x) = F(x, φ(x)), then from the above equation we see that Φ'(x) = 0, and hence
F(x, φ(x)) = c, where c is some constant. Thus the solution φ must be a function which is
given implicitly by the relation F(x, φ(x)) = c. Conversely, if φ is a differentiable function on
some interval I defined implicitly by the relation F(x, y) = c, then
F(x, φ(x)) = c , ∀x ∈ I .
Differentiation along with the property ∂F/∂x = M, ∂F/∂y = N yields that φ is a solution of (2.6).
This completes the proof. □
We will say that F (x, y) = c is the general solution of (2.6).
Example 2.6. Consider the equation
x - (y⁴ - 1) y'(x) = 0 .
Here M = x and N = 1 - y⁴. Define F(x, y) = (1/2)x² + y - (1/5)y⁵. Then the above equation is exact.
Hence the solution is given by
F(x, y) = c  ⟹  2y⁵ - 10y = 5x² + c .
Example 2.7. Find the general solution of the ODE 2ye^{2x} + 2x cos(y) + ( e^{2x} - x² sin(y) ) y' = 0.
Solution: The equation is of the form M(x, y) + N(x, y) y' = 0 with M(x, y) = 2ye^{2x} + 2x cos(y)
and N(x, y) = e^{2x} - x² sin(y). Define F(x, y) = y e^{2x} + x² cos(y). Then F has continuous first
partial derivatives on R² and ∂F/∂x (x, y) = M(x, y), ∂F/∂y (x, y) = N(x, y). Hence the given ODE is
exact. Thus, the general solution is given by the formula
y e^{2x} + x² cos(y) = c , c ∈ R.
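Exactness can also be confirmed by finite differences; the following sketch (ours, illustrative) checks that F_x = M and F_y = N for this example:

```python
import math

# Finite-difference check that F(x,y) = y*exp(2x) + x^2*cos(y)
# is a potential for M = 2y*exp(2x) + 2x*cos(y), N = exp(2x) - x^2*sin(y).

def F(x, y): return y * math.exp(2 * x) + x * x * math.cos(y)
def M(x, y): return 2 * y * math.exp(2 * x) + 2 * x * math.cos(y)
def N(x, y): return math.exp(2 * x) - x * x * math.sin(y)

h = 1e-6
for (x, y) in [(0.0, 0.0), (0.3, -1.0), (-0.5, 2.0)]:
    Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)   # ~ ∂F/∂x
    Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)   # ~ ∂F/∂y
    assert abs(Fx - M(x, y)) < 1e-5
    assert abs(Fy - N(x, y)) < 1e-5
```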
How do we recognize when an equation is exact? The following theorem gives a necessary
and sufficient condition.
Theorem 2.5. Let M, N be two real-valued functions which have continuous first partial derivatives
on some rectangle
R := { (x, y) ∈ R² : |x - x₀| ≤ a , |y - y₀| ≤ b } .
Then the equation (2.6) is exact in R if and only if
∂M/∂y = ∂N/∂x    (2.9)
in R.
Proof. It is easy to see that if the equation (2.6) is exact, then (2.9) holds. Now suppose that
(2.9) holds in the rectangle R. We want to find a function F having continuous first partial
derivatives such that ∂F/∂x = M and ∂F/∂y = N. If we had such a function, then
F(x, y) - F(x₀, y₀) = ∫_{x₀}^{x} ∂F(s, y)/∂x ds + ∫_{y₀}^{y} ∂F(x₀, t)/∂y dt = ∫_{x₀}^{x} M(s, y) ds + ∫_{y₀}^{y} N(x₀, t) dt .
Accordingly, define
F(x, y) = ∫_{x₀}^{x} M(s, y) ds + ∫_{y₀}^{y} N(x₀, t) dt ,    (2.11)
so that ∂F/∂x (x, y) = M(x, y), and consider the alternative representation
F(x, y) = ∫_{x₀}^{x} M(s, y₀) ds + ∫_{y₀}^{y} N(x, t) dt .    (2.12)
It is clear from (2.12) that ∂F/∂y (x, y) = N(x, y) for all (x, y) in R. Therefore, we need to show
that (2.12) is valid, where F is defined by (2.11). Now, by using the condition (2.9), we have
F(x, y) - [ ∫_{x₀}^{x} M(s, y₀) ds + ∫_{y₀}^{y} N(x, t) dt ]
= ∫_{x₀}^{x} [ M(s, y) - M(s, y₀) ] ds - ∫_{y₀}^{y} [ N(x, t) - N(x₀, t) ] dt
= ∫_{x₀}^{x} ∫_{y₀}^{y} [ ∂M/∂y (s, t) - ∂N/∂x (s, t) ] dt ds = 0 .
This completes the proof. □
Example 2.8. Find the general solution of the ODE
2xy dx + (x² + y²) dy = 0 .
Here, M(x, y) = 2xy and N(x, y) = x² + y². Note that ∂M/∂y = ∂N/∂x = 2x. Thus, the equation is
exact. Define the function F by (taking (x₀, y₀) = (0, 0))
F(x, y) = ∫₀ˣ M(s, y) ds + ∫₀ʸ N(x₀, t) dt = ∫₀ˣ 2sy ds + ∫₀ʸ t² dt = y x² + y³/3 .
Therefore, the general solution is given by the formula y x² + y³/3 = c, where c is an arbitrary real
constant.
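The integral construction of F can be imitated numerically; a sketch (ours, illustrative) that builds F for Example 2.8 by the trapezoidal rule and compares it with the closed form y x² + y³/3:

```python
# Numerically reproduce F(x,y) = ∫₀ˣ M(s,y) ds + ∫₀ʸ N(0,t) dt
# for M = 2xy, N = x^2 + y^2, and compare with the closed form.

def M(x, y): return 2 * x * y
def N(x, y): return x * x + y * y

def trapezoid(f, a, b, n=2000):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def F_numeric(x, y):
    return trapezoid(lambda s: M(s, y), 0.0, x) + trapezoid(lambda t: N(0.0, t), 0.0, y)

def F_exact(x, y):
    return y * x * x + y ** 3 / 3.0

for (x, y) in [(1.0, 1.0), (0.5, -2.0), (-1.5, 0.3)]:
    assert abs(F_numeric(x, y) - F_exact(x, y)) < 1e-5
```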
To find F̃, we know that ∂F̃/∂x = 2xy³ + 2x, and hence
F̃(x, y) = x²y³ + x² + f(y) ,
where f is independent of x. Again, ∂F̃/∂y = Ñ gives
Our aim is to show that on some interval I containing x0 , there is a solution of the above
initial value problem. Let us first show the following.
Theorem 3.1. A function y is a solution of (3.1) on some interval I if and only if it is a solution
of the integral equation (on I)
y(x) = y₀ + ∫_{x₀}^{x} f(s, y(s)) ds .    (3.2)
Theorem 3.3. Let f be a continuous function defined on the rectangle R. Further suppose that
f is Lipschitz in y. Then the Picard iterates {φₖ} converge to a solution of the initial value problem (3.1) on I.
Proof. Note that
φₖ(x) = φ₀(x) + Σ_{i=1}^{k} [ φᵢ(x) - φᵢ₋₁(x) ] , ∀x ∈ I .
Therefore, it suffices to show that the series
φ₀(x) + Σ_{i=1}^{∞} [ φᵢ(x) - φᵢ₋₁(x) ]
converges. Let us estimate the terms φᵢ(x) - φᵢ₋₁(x). Observe that, since f satisfies a Lipschitz
condition in R,
|φ₂(x) - φ₁(x)| = | ∫_{x₀}^{x} [ f(t, φ₁(t)) - f(t, φ₀(t)) ] dt | ≤ K ∫_{x₀}^{x} |φ₁(t) - φ₀(t)| dt
≤ KM ∫_{x₀}^{x} |t - x₀| dt ≤ KM (x - x₀)²/2 .
Claim:
|φᵢ(x) - φᵢ₋₁(x)| ≤ (M K^{i-1}/i!) |x - x₀|^i = (M/K) K^i |x - x₀|^i / i! , i = 1, 2, . . .    (3.5)
We shall prove (3.5) via induction. Note that (3.5) is true for i = 1 and i = 2. Assume now that
(3.5) holds for i = m. Let us assume that x ≥ x₀ (the proof is similar for x ≤ x₀). By using the Lipschitz
condition and the induction hypothesis, we have
|φ_{m+1}(x) - φ_m(x)| ≤ K ∫_{x₀}^{x} |φ_m(t) - φ_{m-1}(t)| dt ≤ K ∫_{x₀}^{x} (M K^{m-1}/m!) |t - x₀|^m dt
= (M K^m/(m+1)!) |x - x₀|^{m+1} .
Hence (3.5) holds for i = 1, 2, . . .. It follows that the i-th term of the series |φ₀(x)| + Σ_{i=1}^{∞} |φᵢ(x) - φᵢ₋₁(x)|
is less than or equal to M/K times the i-th term of the power series for e^{K|x - x₀|}. Hence
the series φ₀(x) + Σ_{i=1}^{∞} [ φᵢ(x) - φᵢ₋₁(x) ] is convergent for all x ∈ I, and therefore the sequence
{φₖ} converges to a limit φ(x) as k → ∞.
Properties of the limit function φ: We first show that φ is continuous on I. Indeed, for any
x, x̃ ∈ I, we have, by using the boundedness of f,
|φ_{k+1}(x) - φ_{k+1}(x̃)| = | ∫_{x̃}^{x} f(t, φₖ(t)) dt | ≤ M |x - x̃|
⟹ |φ(x) - φ(x̃)| ≤ M |x - x̃| .
This shows that φ is continuous on I. Moreover, taking x̃ = x₀ in the above estimate, we see
that
|φ(x) - y₀| ≤ M |x - x₀| , ∀x ∈ I ,
and hence (x, φ(x)) is in R for all x ∈ I.
We now estimate |φ(x) - φₖ(x)|. Note that, since φ(x) is the limit of the series φ₀(x) +
Σ_{i=1}^{∞} [φᵢ(x) - φᵢ₋₁(x)] and φₖ = φ₀(x) + Σ_{i=1}^{k} [φᵢ(x) - φᵢ₋₁(x)], we see that
|φ(x) - φₖ(x)| = | Σ_{i=k+1}^{∞} [φᵢ(x) - φᵢ₋₁(x)] | ≤ (M/K) Σ_{i=k+1}^{∞} K^i |x - x₀|^i / i! ≤ (M/K) Σ_{i=k+1}^{∞} K^i α^i / i! .
Since L < 1, we get max_{|x - x₀| ≤ δ} |y₁(x) - y₂(x)| = 0, which then implies that y₁ = y₂ on
I_δ. In particular, y₁(x₀ ± δ) = y₂(x₀ ± δ). Repeating the same procedure in the intervals
[x₀ + δ, x₀ + 2δ] and [x₀ - 2δ, x₀ - δ], we get y₁(x) = y₂(x) for all x ∈ [x₀ - 2δ, x₀ + 2δ]. Since
the set {x ∈ R : |x - x₀| ≤ α} is compact, after a finite number of steps we get y₁(x) = y₂(x)
for all x ∈ [a, b]. This completes the proof. □
Remark 3.1. One can start Picard's iteration procedure with any continuous function φ₀ on
|x - x₀| ≤ a such that the points (x, φ₀(x)) are in R for |x - x₀| ≤ a. One can proceed as in the
previous theorem by showing:
i) φ₁, φ₂, . . . exist and are continuous on I, and satisfy |φₖ(x) - y₀| ≤ M |x - x₀| for all
k = 1, 2, . . ..
ii) |φ_{i+1}(x) - φᵢ(x)| ≤ 2 (M/K) K^i |x - x₀|^i / i! for all i = 1, 2, . . ..
iii) φₖ tends to a limit function φ, and |φᵢ(x) - φ(x)| ≤ 2 (M/K) ( (Kα)^i / i! ) e^{Kα} .
Let us illustrate this method on the following example:
y' = y , y(0) = 1 .
Here φ₀(x) = 1. Moreover, the approximate functions are given by
φ₁(x) = 1 + ∫₀ˣ φ₀(s) ds = 1 + x ,
φ₂(x) = 1 + ∫₀ˣ φ₁(s) ds = 1 + x + x²/2! ,
φ₃(x) = 1 + ∫₀ˣ φ₂(s) ds = 1 + x + x²/2! + x³/3! ,
and by induction
φₖ(x) = 1 + x + x²/2! + x³/3! + . . . + xᵏ/k! , k = 0, 1, 2, . . . .
Clearly, φₖ(x) → eˣ as k → ∞, and the function y(x) = eˣ indeed solves the above IVP.
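Because every iterate here is a polynomial, the Picard iteration can be implemented exactly; a minimal sketch (ours; the coefficient-list representation is an implementation choice, not part of the notes):

```python
import math

# Picard iteration for y' = y, y(0) = 1.  Each iterate is a polynomial
# stored as coefficients [c0, c1, ...]; the step
# phi_{k+1}(x) = 1 + ∫₀ˣ phi_k(s) ds integrates term by term.

def picard_step(coeffs):
    integrated = [0.0] + [c / (i + 1) for i, c in enumerate(coeffs)]
    integrated[0] = 1.0                      # the initial value y0 = 1
    return integrated

phi = [1.0]                                  # phi_0(x) = 1
for _ in range(15):
    phi = picard_step(phi)

def eval_poly(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# phi_15 is the degree-15 Taylor polynomial of e^x
assert abs(eval_poly(phi, 1.0) - math.e) < 1e-10
assert abs(eval_poly(phi, -0.5) - math.exp(-0.5)) < 1e-12
```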
Definition 3.2. We say that J ⊂ R is the maximal interval of definition of the solution y(x)
of the IVP (3.1) if any interval I on which y(x) is defined is contained in J, and y(x) cannot be
extended to an interval larger than J.
Example 3.1. Consider the IVP
y' = y² , y(x₀) = a ≠ 0 .
In view of Theorem 3.3, the above problem has a solution in a neighbourhood of x₀. Note that
the function φ(x, c) = -1/(x - c) solves y' = y². Thus, we need to impose the requirement that
φ(x₀, c) = a, i.e., c = x₀ + 1/a. Let c_a = x₀ + 1/a. Hence for a > 0, the solution to the above problem is
y_a(x) = -1/(x - c_a) , x < c_a .
Thus, in this case, the maximal interval of definition is (-∞, c_a). Similarly, for a < 0, one
can show the maximal interval of definition is (c_a, ∞) for the above problem.
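Taking x₀ = 0 and a = 1 (so c_a = 1), the blow-up can be observed numerically; a short sketch (ours, illustrative):

```python
# For y' = y^2, y(0) = 1, the solution y(x) = 1/(1 - x) satisfies the
# equation and blows up as x -> 1 from the left, so the maximal
# interval of definition is (-inf, 1).

def y(x):
    return 1.0 / (1.0 - x)

def residual(x, h=1e-7):
    dy = (y(x + h) - y(x - h)) / (2.0 * h)
    return dy - y(x) ** 2

for x in (-2.0, 0.0, 0.5, 0.9):
    assert abs(residual(x)) < 1e-3

# growth near the right endpoint of the maximal interval
assert y(0.999) > 999.0
```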
3.1.1. Non-local existence of solutions: Theorem 3.3 guarantees a solution only for x near
the initial point x₀. There are some cases where the solution exists on the whole interval
|x - x₀| ≤ a. For example, consider an ODE of the form y' + g(x)y = h(x), where g, h are
continuous on |x - x₀| ≤ a. Let us define the strip
S := { |x - x₀| ≤ a , |y| < ∞ } .
Since g is continuous on |x - x₀| ≤ a, there exists K > 0 such that |g(x)| ≤ K. Then the
function f(x, y) = -g(x)y + h(x) is Lipschitz continuous in y on the strip S.
Theorem 3.4. Let f be a real-valued continuous function on the strip S, and Lipschitz continuous
on S with constant K > 0. Then the Picard iterates {φₖ} for the problem (3.1) exist
on the entire interval |x - x₀| ≤ a, and converge there to a solution of (3.1).
Proof. Note that (x, φ₀(x)) ∈ S. Now since f is continuous on S, there exists M > 0 such that
|f(x, y₀)| ≤ M for |x - x₀| ≤ a, and hence
|φ₁(x)| ≤ |y₀| + | ∫_{x₀}^{x} f(t, y₀) dt | ≤ |y₀| + M |x - x₀| ≤ |y₀| + Ma < ∞ .
Moreover, each φₖ is continuous on |x - x₀| ≤ a. Now assume that the points
(x, φ₀(x)), (x, φ₁(x)), . . . , (x, φₖ(x))
lie in S; since the strip S places no restriction on y, the next iterate φ_{k+1} is then well-defined and
continuous on |x - x₀| ≤ a.
To show the convergence of the sequence {φₖ} to a limit function φ, we can mimic the proof
of Theorem 3.3, once we note that
|φ₁(x) - φ₀(x)| ≤ ∫_{x₀}^{x} |f(t, y₀)| dt ≤ M |x - x₀| .
Next we show that φ is continuous. Observe that, thanks to (3.5) (which holds in this case),
|φₖ(x) - y₀| = | Σ_{i=1}^{k} [φᵢ(x) - φᵢ₋₁(x)] | ≤ Σ_{i=1}^{k} |φᵢ(x) - φᵢ₋₁(x)| ≤ (M/K) Σ_{i=1}^{∞} K^i |x - x₀|^i / i!
≤ (M/K) ( e^{Ka} - 1 ) := b .
Taking the limit as k → ∞, we obtain
|φ(x) - y₀| ≤ b , (|x - x₀| ≤ a) .
Note that f is continuous on R, where the rectangle R is given by
R := { (x, y) ∈ R² : |x - x₀| ≤ a , |y - y₀| ≤ b } ,
and hence there exists a constant N > 0 such that |f(x, y)| ≤ N for (x, y) ∈ R. Let x, x̃ be two
points in the interval |x - x₀| ≤ a. Then
|φ_{k+1}(x) - φ_{k+1}(x̃)| = | ∫_{x̃}^{x} f(t, φₖ(t)) dt | ≤ N |x - x̃|
⟹ |φ(x) - φ(x̃)| ≤ N |x - x̃| .
The rest of the proof is a repetition of the analogous parts of the proof of Theorem 3.3, with α
replaced by a everywhere. □
Example 3.2. Consider the IVP y' = λy + x² sin(y), y(0) = 1, where λ is a real constant
such that |λ| ≤ 1. Show that the solution of the given IVP exists for |x| ≤ 1.
Solution: Here f(x, y) = λy + x² sin(y). Consider the strip S = { |x| ≤ 1, |y| < ∞ }. Then f is
continuous on S and Lipschitz continuous on S, as |∂_y f(x, y)| = |λ + x² cos(y)| ≤ 2 on S. Thus, by Theorem 3.4,
the solution of the given problem exists on the entire interval |x| ≤ 1.
In view of Theorem 3.4, we arrive at the following corollary.
Corollary 3.5. Let f be a real-valued continuous function on the plane |x| < ∞, |y| < ∞, which
satisfies a Lipschitz condition on each strip Sₐ defined by
Sₐ := { |x| ≤ a , |y| < ∞ } , (a > 0) .
Then every initial value problem
y' = f(x, y) , y(x₀) = y₀ ,
has a solution which exists for all real x.
Proof. For any real number x, there exists a > 0 such that x is contained inside the interval
|x - x₀| ≤ a. Consider now the strip
S := { |x - x₀| ≤ a , |y| < ∞ } .
Since S is contained in the strip
S̃ := { |x| ≤ |x₀| + a , |y| < ∞ } ,
f satisfies all the conditions of Theorem 3.4 on S. Thus, {φₖ(x)} tends to φ(x), where φ is a solution
to the initial-value problem. This completes the proof. □
Theorem 3.9. Let f, g be continuous functions on R, and suppose f satisfies a Lipschitz condition there
with Lipschitz constant K. Let φ, ψ be solutions of (3.7), (3.8) respectively on an interval I
containing x₀, with graphs contained in R. Suppose that the following inequalities are valid:
|f(x, y) - g(x, y)| ≤ ε , (x, y) ∈ R ,    (3.9)
|y₁ - y₂| ≤ δ ,    (3.10)
for some non-negative constants ε, δ. Then
|φ(x) - ψ(x)| ≤ δ e^{K|x - x₀|} + (ε/K) ( e^{K|x - x₀|} - 1 ) , ∀x ∈ I .    (3.11)
Proof. Since φ, ψ are solutions of (3.7), (3.8) respectively on an interval I containing x₀, we see
that
φ(x) - ψ(x) = y₁ - y₂ + ∫_{x₀}^{x} [ f(t, φ(t)) - g(t, ψ(t)) ] dt
= y₁ - y₂ + ∫_{x₀}^{x} [ f(t, φ(t)) - f(t, ψ(t)) ] dt + ∫_{x₀}^{x} [ f(t, ψ(t)) - g(t, ψ(t)) ] dt .
Assume that x ≥ x₀. Then, in view of (3.9), (3.10), and the Lipschitz condition on f with
Lipschitz constant K, we obtain from the above expression
|φ(x) - ψ(x)| ≤ δ + K ∫_{x₀}^{x} |φ(s) - ψ(s)| ds + ε (x - x₀) .    (3.12)
Define E(x) = ∫_{x₀}^{x} |φ(s) - ψ(s)| ds. Then E'(x) = |φ(x) - ψ(x)| and E(x₀) = 0. Therefore,
(3.12) becomes
E'(x) ≤ K E(x) + δ + ε (x - x₀) .
has a unique solution, say φ, for |x| ≤ 1/2, such that the graph (x, φ(x)) lies in the rectangle R.
The Lipschitz constant K of f on the rectangle R is 1/2, and |f(x, y) - g(x, y)| ≤ 1/10 for all
(x, y) ∈ R. Hence, by the continuous dependence estimate, we have, for all |x| ≤ 1/2,
|φ(x) - ψ(x)| ≤ (2/10) ( e^{|x|/2} - 1 ) .
As a consequence of Theorem 3.9, we have:
i) Uniqueness Theorem: Let f be continuous and satisfy a Lipschitz condition on
R. If φ and ψ are two solutions of the IVP (3.1) on an interval I containing x₀, then
φ(x) = ψ(x) for all x ∈ I.
ii) Let f be continuous and satisfy a Lipschitz condition on R, and let gₖ, k = 1, 2, . . . be
continuous on R such that
|f(x, y) - gₖ(x, y)| ≤ εₖ , (x, y) ∈ R ,
with εₖ → 0 as k → ∞. Let yₖ → y₀ as k → ∞. Let ψₖ be a solution to the IVP
y' = gₖ(x, y) , y(x₀) = yₖ ,
and φ a solution to the IVP (3.1) on some interval I containing x₀. Then ψₖ(x) → φ(x)
on I.
Remark 3.2. The Lipschitz condition on f on the rectangle R cannot be dropped if one wants uniqueness
of the solution of the IVP (3.1). To see this, consider the IVP
y' = 3y^{2/3} , y(0) = 0 .
It is easy to check that the function φₖ, for any positive number k,
φₖ(x) = 0 for -∞ < x ≤ k , φₖ(x) = (x - k)³ for k ≤ x < ∞ ,
is a solution of the above IVP. So there are infinitely many solutions on any rectangle R containing
the origin. But note that f does NOT satisfy a Lipschitz condition on R.
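The infinite family φₖ can be tested directly; a sketch (ours, illustrative) checking two members of the family against the equation and the initial condition:

```python
# For every k > 0, phi_k(x) = 0 on x <= k and (x - k)**3 on x >= k
# solves y' = 3*y**(2/3) with y(0) = 0, so uniqueness fails.

def phi(x, k):
    return 0.0 if x <= k else (x - k) ** 3

def rhs(y):
    return 3.0 * abs(y) ** (2.0 / 3.0)   # 3*y^(2/3) for y >= 0

def residual(x, k, h=1e-6):
    dy = (phi(x + h, k) - phi(x - h, k)) / (2.0 * h)
    return dy - rhs(phi(x, k))

for k in (0.5, 2.0):                      # two distinct solutions
    assert phi(0.0, k) == 0.0             # same initial condition y(0) = 0
    for x in (-1.0, k - 0.1, k + 1.0, k + 3.0):
        assert abs(residual(x, k)) < 1e-4
```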
Remark 3.3. We have shown the existence of a solution of the IVP under a stronger condition,
namely Lipschitz continuity of the function f. But one can relax the Lipschitz condition and guarantee the
existence of a solution of the IVP under the continuity assumption on f alone. This is called the
Peano theorem.
Theorem 3.10 (Peano Theorem). If the function f(x, y) is continuous on a rectangle R and
if (x₀, y₀) ∈ R, then the IVP y' = f(x, y) with y(x₀) = y₀ has a solution in a neighborhood of
x₀.
Let Ω̃ ⊂ Rⁿ, and Ω = I × Ω̃. We introduce
ỹ = ( y₁, y₂, . . . , yₙ )ᵀ ∈ Rⁿ ; f̃(x, ỹ) = ( f₁(x, ỹ), f₂(x, ỹ), . . . , fₙ(x, ỹ) )ᵀ ∈ Rⁿ .
Then f̃ : Ω → Rⁿ, and the system of equations (4.1) can be written in the compact form
ỹ' = f̃(x, ỹ) .
An equation of the n-th order, y^(n) = f(x, y, y', . . . , y^(n-1)), may also be treated as a system
of the type (4.1). To see this, let y₁ = y, y₂ = y', . . . , yₙ = y^(n-1). Then from the ODE
y^(n) = f(x, y, y', . . . , y^(n-1)), we have
y₁' = y₂ , y₂' = y₃ , . . . , y_{n-1}' = yₙ , yₙ' = f(x, y₁, y₂, . . . , yₙ) .
Example 4.1. Consider the initial value problem y^(3) + 2y' - (y')³ + y = x² + 1 with y(0) =
0, y'(0) = 1, y''(0) = 1. We want to convert it into a system of equations. Let y₁ = y, y₂ = y' and
y₃ = y''. Note that y' = y₁' = y₂. Using these, the required system takes the form
y₁' = y₂ ; y₂' = y₃ ; y₃' = y₂³ - 2y₂ - y₁ + x² + 1 ,    (4.3)
y₁(0) = 0 ; y₂(0) = 1 ; y₃(0) = 1 .
Theorem 4.1 (Local existence). Let f̃ be a continuous vector-valued function defined on
R = { |x - x₀| ≤ a , |ỹ - ỹ₀| ≤ b } , a, b > 0 .
This can be written in the compact form ỹ' = f̃(x, ỹ), ỹ(0) = ỹ₀ = (0, 1), where f̃(x, ỹ) = (y₂, -y₁).
Let us calculate the successive approximations φ̃ₖ(x):
φ̃₀(x) = ỹ₀ = (0, 1) ,
φ̃₁(x) = (0, 1) + ∫₀ˣ (1, 0) ds = (x, 1) ,
φ̃₂(x) = (0, 1) + ∫₀ˣ f̃(s, φ̃₁(s)) ds = (0, 1) + ∫₀ˣ (1, -s) ds = (0, 1) + (x, -x²/2) = (x, 1 - x²/2) ,
φ̃₃(x) = (0, 1) + ∫₀ˣ (1 - s²/2, -s) ds = (x - x³/3!, 1 - x²/2) ,
φ̃₄(x) = (0, 1) + ∫₀ˣ (1 - s²/2, -s + s³/3!) ds = (x - x³/3!, 1 - x²/2 + x⁴/4!) .
It is not difficult to show that all the φ̃ₖ exist for all real x and φ̃ₖ(x) → (sin(x), cos(x)). Thus,
the unique solution of the given IVP is φ̃(x) = (sin(x), cos(x)).
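The vector-valued Picard iteration above can also be carried out exactly in code, one polynomial per component; a minimal sketch (ours; the coefficient-list representation is an implementation choice):

```python
import math

# Vector Picard iteration for y' = (y2, -y1), y(0) = (0, 1).
# Each component is a polynomial stored as a coefficient list, so
# every integration step is exact; the iterates converge to (sin, cos).

def integrate(coeffs):
    return [0.0] + [c / (i + 1) for i, c in enumerate(coeffs)]

def picard_step(p1, p2):
    # phi_{k+1}(x) = (0, 1) + ∫₀ˣ (p2(s), -p1(s)) ds
    q1 = integrate(p2)
    q2 = integrate([-c for c in p1])
    q2[0] = 1.0                     # second component starts at 1
    return q1, q2

def eval_poly(c, x):
    return sum(ci * x ** i for i, ci in enumerate(c))

p1, p2 = [0.0], [1.0]               # phi_0 = (0, 1)
for _ in range(20):
    p1, p2 = picard_step(p1, p2)

assert abs(eval_poly(p1, 1.0) - math.sin(1.0)) < 1e-12
assert abs(eval_poly(p2, 1.0) - math.cos(1.0)) < 1e-12
```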
Theorem 4.2 (Non-local existence). Let f̃ be a continuous vector-valued function defined on
S = { |x - x₀| ≤ a , |ỹ| < ∞ } , a > 0 ,
and satisfy there a Lipschitz condition. Then the successive approximations {φ̃ₖ}_{k=0}^{∞} for the IVP
ỹ' = f̃(x, ỹ), ỹ(x₀) = ỹ₀ exist on |x - x₀| ≤ a, and converge there to a solution of the IVP.
Corollary 4.3. Let f̃ be a continuous vector-valued function defined on |x| < ∞, |ỹ| < ∞, which
satisfies a Lipschitz condition on each strip
Sₐ = { |x| ≤ a , |ỹ| < ∞ } , (a > 0) .
Then every initial value problem ỹ' = f̃(x, ỹ), ỹ(x₀) = ỹ₀ has a solution which exists for all
x ∈ R.
Example 4.3. Consider the system
y₁' = 3y₁ + x y₃ , y₂' = y₂ + x³ y₃ , y₃' = 2x y₁ - y₂ + e^{-x} y₃ .
This system of equations can be written in the compact form ỹ' = f̃(x, ỹ), where
ỹ = ( y₁, y₂, y₃ )ᵀ , and f̃(x, ỹ) = ( 3y₁ + x y₃ , y₂ + x³ y₃ , 2x y₁ - y₂ + e^{-x} y₃ )ᵀ .
Note that f̃ is a continuous vector-valued function defined on |x| < ∞, |ỹ| < ∞. It is Lipschitz
continuous on each strip Sₐ = { |x| ≤ a , |ỹ| < ∞ }, a > 0, since for (x, ỹ), (x, ŷ) ∈ Sₐ,
|f̃(x, ỹ) - f̃(x, ŷ)| = |3(y₁ - ŷ₁) + x(y₃ - ŷ₃)| + |(y₂ - ŷ₂) + x³(y₃ - ŷ₃)|
+ |2x(y₁ - ŷ₁) - (y₂ - ŷ₂) + e^{-x}(y₃ - ŷ₃)|
≤ (3 + 2|x|)|y₁ - ŷ₁| + 2|y₂ - ŷ₂| + ( |x| + |x|³ + e^{-x} )|y₃ - ŷ₃|
≤ ( 5 + 3a + eᵃ + a³ ) |ỹ - ŷ| .
Therefore, every initial value problem for this system has a solution which exists for all real x.
Moreover, the solution is unique.
Example 4.4. For any Lipschitz continuous function f on R, consider the IVP
y''(x) = f(y) , y(0) = 0 , y'(0) = 0 .
Then the solution of the above IVP is even. Since f is Lipschitz continuous, by writing the above
IVP in vector form, one can show that the above problem has a solution y(x) defined on the whole
real line. Let z(x) = y(-x). Note that z''(x) = y''(-x) = f(y(-x)) = f(z(x)), z(0) = 0 and
z'(0) = -y'(0) = 0, so z(x) satisfies the above IVP. Hence by uniqueness,
y(-x) = y(x) for all x ∈ R. In other words, y(x) is even in x.
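The evenness can be illustrated numerically; the sketch below (ours) takes f(y) = cos(y) as a concrete Lipschitz choice and integrates the equivalent first-order system (y, v)' = (v, f(y)) with a classical RK4 step in both directions from x = 0:

```python
import math

# Numerical check that the solution of y'' = f(y), y(0) = y'(0) = 0
# is even, for the illustrative choice f(y) = cos(y).

def f(y):
    return math.cos(y)

def rk4(x_end, n=4000):
    """Integrate (y, v)' = (v, f(y)) from 0 to x_end; return y(x_end)."""
    h = x_end / n
    y, v = 0.0, 0.0
    for _ in range(n):
        k1y, k1v = v, f(y)
        k2y, k2v = v + 0.5 * h * k1v, f(y + 0.5 * h * k1y)
        k3y, k3v = v + 0.5 * h * k2v, f(y + 0.5 * h * k2y)
        k4y, k4v = v + h * k3v, f(y + h * k3y)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

for x in (0.5, 1.0, 2.0):
    assert abs(rk4(x) - rk4(-x)) < 1e-8   # y(x) = y(-x)
```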
As in the first order (scalar-valued) ODE case, we have the following continuous dependence
estimate and uniqueness theorem.
Theorem 4.4 (Continuous dependence estimate). Let f̃, g̃ be two continuous vector-valued
functions defined on a rectangle
R = { |x - x₀| ≤ a , |ỹ - ỹ₀| ≤ b } , a, b > 0 ,
and suppose f̃ satisfies a Lipschitz condition on R with Lipschitz constant K. Let φ̃, ψ̃ be
solutions of the problems ỹ' = f̃(x, ỹ), ỹ(x₀) = ỹ₁ and ỹ' = g̃(x, ỹ), ỹ(x₀) = ỹ₂ respectively on
some interval I containing x₀. If, for ε, δ ≥ 0,
|ỹ₁ - ỹ₂| ≤ δ , |f̃(x, ỹ) - g̃(x, ỹ)| ≤ ε ∀(x, ỹ) ∈ R ,
then
|φ̃(x) - ψ̃(x)| ≤ δ e^{K|x - x₀|} + (ε/K) ( e^{K|x - x₀|} - 1 ) ∀x ∈ I .
In particular, the problem ỹ' = f̃(x, ỹ), ỹ(x₀) = ỹ₀ has at most one solution on any interval
containing x₀.
Theorem 4.5 (Gronwall's lemma: II). Let u(t), p(t) and q(t) be non-negative continuous functions
defined on the interval I = [a, b]. Suppose the following inequality holds:
u(t) ≤ p(t) + ∫ₐᵗ q(s) u(s) ds , t ∈ I .
Then, we have
u(t) ≤ p(t) + ∫ₐᵗ p(τ) q(τ) exp( ∫_τ^t q(s) ds ) dτ , t ∈ I .
Proof. Let r(t) = ∫ₐᵗ q(s) u(s) ds. Then r(·) is differentiable in (a, b) with r'(t) = q(t)u(t) and
r(a) = 0. Hence, by our given condition, one has
r'(t) = q(t)u(t) ≤ q(t){ p(t) + r(t) }  ⟹  r'(t) - r(t)q(t) ≤ p(t)q(t)
⟹ [ r'(t) - r(t)q(t) ] exp( -∫ₐᵗ q(s) ds ) ≤ p(t)q(t) exp( -∫ₐᵗ q(s) ds )
⟹ d/dt [ r(t) exp( -∫ₐᵗ q(s) ds ) ] ≤ p(t)q(t) exp( -∫ₐᵗ q(s) ds )
⟹ r(t) exp( -∫ₐᵗ q(s) ds ) ≤ ∫ₐᵗ p(τ)q(τ) exp( -∫ₐ^τ q(s) ds ) dτ
⟹ r(t) ≤ ∫ₐᵗ p(τ)q(τ) exp( ∫ₐᵗ q(s) ds - ∫ₐ^τ q(s) ds ) dτ
⟹ r(t) ≤ ∫ₐᵗ p(τ)q(τ) exp( ∫_τ^t q(s) ds ) dτ .
Hence, we get
u(t) ≤ p(t) + r(t) ≤ p(t) + ∫ₐᵗ p(τ)q(τ) exp( ∫_τ^t q(s) ds ) dτ .
□
Corollary 4.6. Let u(t), p(t) and q(t) be non-negative continuous functions defined on the
interval I = [a, b]. Suppose the following inequality holds:
u(t) ≤ p(t) + ∫ₐᵗ q(s) u(s) ds , t ∈ I .
Assume that p(·) is non-decreasing. Then
u(t) ≤ p(t) exp( ∫ₐᵗ q(s) ds ) , t ∈ [a, b] .
4.1. Existence and uniqueness for linear systems: Consider the linear system ỹ' = f̃(x, ỹ),
where the components of f̃ are given by
fⱼ(x, ỹ) = Σ_{k=1}^{n} aⱼₖ(x) yₖ + bⱼ(x) , j = 1, 2, . . . , n ,
and the functions aⱼₖ, bⱼ are continuous on an interval I containing x₀. Now consider the strip
Sₐ = { |x - x₀| ≤ a , |ỹ| < ∞ }. Suppose aⱼₖ, bⱼ are continuous on |x - x₀| ≤ a. Then there exists
a constant K > 0 such that Σ_{j=1}^{n} |aⱼₖ(x)| ≤ K for all k = 1, 2, . . . , n, and for all x satisfying
|x - x₀| ≤ a. Note that
| ∂f̃/∂yₖ (x, ỹ) | = | ( a₁ₖ(x), a₂ₖ(x), . . . , aₙₖ(x) ) | = Σ_{j=1}^{n} |aⱼₖ(x)| ≤ K .
Thus, f̃ satisfies a Lipschitz condition on Sₐ with Lipschitz constant K. Hence we arrive at the
following theorem.
Theorem 4.7. Let ỹ' = f̃(x, ỹ) be a linear system described as above. If ỹ₀ ∈ Rⁿ, there exists
one and only one solution φ̃ of the IVP ỹ' = f̃(x, ỹ), ỹ(x₀) = ỹ₀ on I.
Note that the linear system can be written in the equivalent form
ỹ'(x) = A(x) ỹ(x) + b̃(x) ,
where A(x) is the n × n matrix with entries aⱼₖ(x),
A(x) = ( aⱼₖ(x) )_{j,k=1}^{n} , and b̃(x) = ( b₁(x), b₂(x), . . . , bₙ(x) )ᵀ .
Pn
Example 4.6. Consider the homogeneous linear system yj0 = k=1 ajk yk (j = 1, 2, . . . , n),
where ajk are continuous on some interval I. Then it is easy to see that ~ = ~0 is a solution .
P
This is called a trivial solution. Let K be such that nj=1 |ajk (x)| K. Let x0 2 I, and ~ be
any solution of the linear homogeneous system. Now consider two IVP
~y 0 = f~(x, ~y ), ~y (x0 ) = ~0; ~y 0 = f~(x, ~y ), ~y (x0 ) = ~ (x0 )
P
where the component of f~ is fj (x, ~y ) = nk=1 ajk yk . Then according to continuous dependence
estimate theorem, we have
" = 0 K|x x0 |
| ~ (x)| = | ~ (x) ~0| | ~ (x0 ) ~0|eK|x x0 | + e 1 = | ~ (x0 )|eK|x x0 | 8x 2 I .
K
For linear equations of order n, we have non-local existence.
Theorem 4.8. Let a₀, a₁, . . . , a_{n-1} and b be continuous real-valued functions on an interval I
containing x₀. If α₀, α₁, . . . , α_{n-1} are any n constants, there exists one and only one solution φ
of the ODE
y^(n) + a_{n-1}(x) y^(n-1) + . . . + a₀(x) y = b(x) on I ,
y(x₀) = α₀ , y'(x₀) = α₁ , . . . , y^(n-1)(x₀) = α_{n-1} .
Proof. Let ỹ₀ = (α₀, α₁, . . . , α_{n-1}). The given ODE can be written as a system of linear equations
yⱼ' = yⱼ₊₁ (j = 1, 2, . . . , n-1) ; yₙ' = b(x) - a_{n-1}(x) yₙ - . . . - a₀(x) y₁ .
Then, according to Theorem 4.7, the above problem has a unique solution φ̃ = (φ₁, φ₂, . . . , φₙ) on
I satisfying φ₁(x₀) = α₀, φ₂(x₀) = α₁, . . . , φₙ(x₀) = α_{n-1}. But, since
φ₂ = φ₁' , φ₃ = φ₂' = φ₁'' , . . . , φₙ = φ₁^(n-1) ,
the function φ₁ is the required solution φ on I. □