
Differential Equations 2020/21

MA 209

Extra Notes 2
Continuity of Functions
Existence and Uniqueness of Solutions of ODEs
The Lipschitz Conditions

These notes provide some additional information related to Chapter 4 of the Lecture Notes.

2.1 Functions and their limits


Some important general definitions for functions are:
* If f : S −→ T , where S and T are non-empty sets, then S is called the domain of f and T
is the range of f .
* For R ⊆ S, the set of attainable values of f on R or the image of R under f is

f (R) = { y ∈ T | f (x) = y for some x ∈ R } = { f (x) | x ∈ R }.

• Most of you will have a notion of what it means for a function from R to R to have a
limit. Definitions for these concepts usually include notions such as “x approaches a” or “x
approaches a from above”. But if we are dealing with functions f : Rn −→ Rm , we must be
careful what we mean if we say “x approaches a” since x and a are points in some higher
dimensional space. We will use the following definition:
Definition
Given a function f : S −→ R^m where S ⊆ R^n. Then we say that f(x) −→ y as x −→ a,
where y ∈ R^m and a ∈ R^n, if for every ε > 0 there is a δ > 0 such that for all x ∈ S with
0 < ‖x − a‖ < δ we have ‖f(x) − y‖ < ε.
 
Here ‖ · ‖ is the normal norm in R^n: if x = (x_1, . . . , x_n)^T, then ‖x‖ = √(x_1^2 + · · · + x_n^2).
• For one-dimensional functions we have an additional definition:
Definition
Given a function f : R −→ R^m. Then we say that f(t) −→ y as t −→ ∞, where y ∈ R^m, if
for all ε > 0 there exists an M ∈ R such that ‖f(t) − y‖ < ε for all t > M.

• For any x ∈ R^n and real number r > 0, the open ball B(x, r) with centre x and radius r is
the set B(x, r) = { y ∈ R^n | ‖x − y‖ < r }.
If D ⊆ R^n, then a point x ∈ D is an interior point of D if there is an r > 0 so that B(x, r) ⊆ D.
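For example (a quick illustration, not from the Lecture Notes): if D = { x ∈ R^2 | ‖x‖ ≤ 1 }, then every x with ‖x‖ < 1 is an interior point of D, since B(x, r) ⊆ D for r = 1 − ‖x‖, while the points with ‖x‖ = 1 are not interior points of D.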

2.2 Continuous functions


We use the following definitions for a function to be continuous:
* A function f : S −→ R^m, where S ⊆ R^n, is said to be continuous at x_0 ∈ S if f(x) −→
f(x_0) as x −→ x_0.
* And f is continuous on S if f is continuous at every point in S.
Using the definition from the previous subsection, a more extended definition would be:
* A function f : S −→ R^m, where S ⊆ R^n, is said to be continuous at x_0 ∈ S if for all ε > 0
there is a δ > 0 such that for all x ∈ S with 0 < ‖x − x_0‖ < δ we have ‖f(x) − f(x_0)‖ < ε.
 
• If f : S −→ R^m, then we can think of f as being defined by m functions, f(x) = ( f_1(x), . . . , f_m(x) )^T,
one for each coordinate. It can be shown that this means that:
* f is continuous at x_0 ∈ S if and only if each f_i is continuous at x_0 ∈ S.
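As a simple illustration (not from the Lecture Notes): the function f : R^2 −→ R^2 given by f(x) = ( x_1 + x_2, x_1 x_2 )^T is continuous on R^2, because each of its two coordinate functions is continuous.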

2.3 Existence of Solutions


For the remainder of these notes we suppose that we are given a function f : D −→ R^n
for some D ⊆ R^{n+1} and a point (t_0, x_0) ∈ D, and we are looking for solutions x(t) to the
following initial-value problem

(1)   x′ = f(t, x)   with   x(t_0) = x_0.

(The reason to allow f to be defined on a subset D of R^{n+1} is so that we can also consider
equations like x′ = x/t for t > 0.)
• Following the definitions and observations from the previous subsection, we can write
 
f(t, x) = ( f_1(t, x_1, x_2, . . . , x_n), f_2(t, x_1, x_2, . . . , x_n), . . . , f_n(t, x_1, x_2, . . . , x_n) )^T,

and f is continuous on D if and only if each f_i(t, x_1, . . . , x_n) is continuous on D.



• The following theorem guarantees that most initial-value problems have a solution.

Theorem 2.1
If f(t, x) is continuous on D and (t_0, x_0) is an interior point of D, then there exist t_s, t_e with
t_s < t_0 < t_e and a function x : (t_s, t_e) −→ R^n that solves the initial-value problem (1) for all
t ∈ (t_s, t_e).

The above is equivalent to Theorem 4.5.1 in the Lecture Notes (although formulated a little
differently). It is a lot more general than Theorem 4.1.1 in the Lecture Notes.
The proof of Theorem 2.1 is fairly tricky. So we only give a sketch of a possible way to prove
the theorem. This in fact describes a way to find an approximate solution. The construction
is known as the Cauchy-Euler construction.
Sketch of proof   We will only show that there is a solution on the interval [t_0, t_e) for some
t_e > t_0. The same ideas can be used to show that there is a solution for t below t_0 as well.
We will assume that all points considered below are in D. This can be achieved by choosing
the value of α below appropriately.
Fix a positive number α and set t_α = t_0 + α. We will construct an approximate solution on
[t_0, t_α]. Next, for a positive integer N, divide the interval between t_0 and t_α into N equal
parts. So write ∆t = α/N and define N + 1 time points t_r = t_0 + r∆t for r = 0, 1, . . . , N. We
now form the corresponding sequence of points x_r, r = 0, 1, . . . , N, defined by

x_r = x_{r−1} + (t_r − t_{r−1}) f(t_{r−1}, x_{r−1})   for r = 1, . . . , N.

Finally, for a time t ∈ [t_0, t_α) we know that t ∈ [t_{r−1}, t_r) for exactly one r ∈ {1, 2, . . . , N}.
So for all t ∈ [t_0, t_α) we can define the approximate solution x_N(t) as follows:

x_N(t) = x_{r−1} + (t − t_{r−1}) f(t_{r−1}, x_{r−1}),   where r is the integer such that t ∈ [t_{r−1}, t_r).

In order to understand what is happening, make the following observations about x_N:

– For t ∈ [t_0, t_1), x_N(t) is nothing more than the line starting in x_0 going in the direction
f(t_0, x_0). Note that the solution of the ODE is a function x(t) with x(t_0) = x_0 and
x′(t_0) = f(t_0, x_0). So for t ∈ [t_0, t_1), x_N(t) is the linear approximation of the solution x
we are looking for.
– The linear approximation from the first step goes from t = t_0 until t = t_1. Then we are in
the point x_1 and we start using a new direction f(t_1, x_1). So for t ∈ [t_1, t_2), x_N(t) is the
linear approximation of a solution x that would start with x(t_1) = x_1.
– The process continues; between times t_{r−1} and t_r we follow a straight line starting at x_{r−1}
and with direction f(t_{r−1}, x_{r−1}).
So the function x_N(t) consists of a sequence of linear pieces, each piece chosen to give a
reasonable approximation of a possible solution of the ODE at the starting point of the piece.
This approach is often used in computer software to find approximations of solutions, or for
instance to draw the graphs of solutions when the solution itself is not explicitly known.
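To make the construction concrete, here is a minimal Python sketch of the Cauchy-Euler construction described above (an illustration only, not part of the Lecture Notes); the function name euler_approx is made up for this example, and NumPy is assumed to be available.

```python
import numpy as np

def euler_approx(f, t0, x0, alpha, N):
    """Cauchy-Euler construction: piecewise-linear approximation to the
    solution of x' = f(t, x), x(t0) = x0, on [t0, t0 + alpha], using N steps."""
    dt = alpha / N
    ts = t0 + dt * np.arange(N + 1)          # the time points t_0, t_1, ..., t_N
    xs = np.empty((N + 1, np.size(x0)))
    xs[0] = x0
    for r in range(1, N + 1):
        # x_r = x_{r-1} + (t_r - t_{r-1}) f(t_{r-1}, x_{r-1})
        xs[r] = xs[r - 1] + dt * f(ts[r - 1], xs[r - 1])
    return ts, xs

# Example: the scalar IVP x' = x^2 - 1, x(0) = 0 (discussed below), on [0, 2].
ts, xs = euler_approx(lambda t, x: x**2 - 1, t0=0.0, x0=np.array([0.0]), alpha=2.0, N=1000)
print(xs[-1, 0])   # approximately (1 - e^4)/(1 + e^4) = -0.9640...
```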
From this point, we should continue the proof by showing that for α small enough, if we let
N −→ ∞ (which is the same as ∆t ↓ 0), then x_N converges to some differentiable function x
which is the solution of (1). This part of the proof involves analysis of uniform convergence
of x_N(t), etc. We will skip that and just believe that by doing the construction above using
smaller and smaller steps ∆t we eventually get a solution.
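As an informal illustration of this convergence (a numerical check, not a proof), one can rerun euler_approx from the sketch above, reusing its import, with larger and larger N and compare the value at t = 2 with the exact solution x(t) = (1 − e^{2t})/(1 + e^{2t}) of the first example below; the error shrinks roughly in proportion to ∆t = α/N.

```python
# Illustration only: compare x_N(2) with the exact value x(2) = (1 - e^4)/(1 + e^4)
# for the IVP x' = x^2 - 1, x(0) = 0 (its exact solution is given below).
exact = (1 - np.exp(4.0)) / (1 + np.exp(4.0))
for N in (10, 100, 1000, 10000):
    _, xs = euler_approx(lambda t, x: x**2 - 1, t0=0.0, x0=np.array([0.0]), alpha=2.0, N=N)
    print(N, abs(xs[-1, 0] - exact))   # the error shrinks roughly like 1/N
```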
• Note that Theorem 2.1 only guarantees a solution over a certain interval I = (t_s, t_e) with
t_0 ∈ I. This interval very much depends on the exact form of the ODE. For instance, the
one-dimensional ODE

x′ = x^2 − 1,   x(0) = 0,

has the solution x(t) = (1 − e^{2t}) / (1 + e^{2t}) for all t ∈ R (so we can take I = R). But the very similar
looking ODE

y′ = y^2 + 1,   y(0) = 0,

has the solution y(t) = tan(t) for −π/2 < t < π/2. So here the solution is only valid for the
interval I = (−π/2, π/2).
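(For completeness, a short sketch of where y(t) = tan(t) comes from, using separation of variables: from y′ = y^2 + 1 we get ∫ dy/(1 + y^2) = ∫ dt, so arctan(y) = t + C; the initial value y(0) = 0 gives C = 0, hence y(t) = tan(t), which blows up as t −→ ±π/2.)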
• Theorem 2.1 guarantees that most ODEs have a solution. But that doesn’t mean the solution
has to be unique. For instance, the ODE

x0 = 23 x ,
3
x(0) = 0,

has solutions x(t) = 0 for all t ∈ R, but also x(t) = |t|3/2 for all t ∈ R, and many others.
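(A quick check of these claims: for x(t) = 0 both sides of the ODE are 0, while for t > 0 the function x(t) = t^{3/2} has x′(t) = (3/2) t^{1/2} = (3/2) (t^{3/2})^{1/3} = (3/2) x(t)^{1/3}, so it also satisfies the equation, with x(0) = 0. More generally, for any a ≥ 0 the function that is 0 for t ≤ a and equal to (t − a)^{3/2} for t ≥ a is a solution as well.)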
So in order to make sure that there is a unique solution, we must put some extra conditions
on the ODE, in particular on the expression f (t, x).

2.4 Uniqueness of Solutions - The Lipschitz Condition


The different forms of the Lipschitz Conditions (locally Lipschitz, globally Lipschitz, locally
Lipschitz in x uniformly with respect to t) are defined in Section 4.4 of the Lecture Notes. The
most important one is the following.
* Definition
Let D ⊆ R^{n+1} be some domain in which (t_0, x_0) is an interior point. Then the function
f(t, x) defined on D satisfies the Lipschitz condition on D if there exists a constant L such
that

‖f(t, x) − f(t, y)‖ ≤ L ‖x − y‖   for all (t, x), (t, y) ∈ D.
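(A simple example, for illustration only: f(t, x) = g(t) + 3x, for any function g, satisfies this condition with L = 3, since ‖f(t, x) − f(t, y)‖ = 3‖x − y‖. Note that the condition only compares points with the same t-value.)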

With this condition, we can formulate the most important general result on the uniqueness
of solutions of ODEs:

* Theorem 2.2
Let f(t, x) satisfy the Lipschitz Condition on some domain D in which (t_0, x_0) is an
interior point. Then there exist t_s, t_e with t_s < t_0 < t_e such that the differential equation
x′ = f(t, x), with initial value x(t_0) = x_0, has a unique solution on (t_s, t_e).
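(This also explains the earlier non-uniqueness example: near x = 0 the function f(x) = (3/2) x^{1/3} does not satisfy the Lipschitz Condition on any domain containing (0, 0) as an interior point, since |x^{1/3} − 0| / |x − 0| = |x|^{−2/3} is unbounded as x −→ 0, so Theorem 2.2 does not apply there.)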

To prove this theorem, we first would need to show that there is at least one solution. But that
follows from the earlier results, since it can be shown that if a function satisfies a Lipschitz
Condition, then it is continuous. So we only need to prove that this solution is unique.
That second part of the proof can be found in Section 4.3.2 of the Lecture Notes, but also at
the end of these extra notes. It is quite some work, although nothing incredibly complicated
is happening. Nevertheless, we won't spend much time on the proof, and hence it is not
considered examinable material.
• Although the Lipschitz Condition is not too complicated, in practice it is quite hard to check
whether a given function satisfies it. The following criterion is often useful.

* Theorem 2.3
Suppose f(t, x) is continuously differentiable on some open convex domain D ⊆ R^{n+1} with
(t_0, x_0) ∈ D and that there exists some constant K such that the partial derivatives with
respect to the x-coordinates satisfy:

| ∂f_i(t, x) / ∂x_j | ≤ K   for all i = 1, . . . , n, j = 1, . . . , n and (t, x) ∈ D.

Then f satisfies the Lipschitz Condition on D.
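(For example, take the one-dimensional function f(t, x) = t^2 + sin(x) on D = R^2, which is open and convex: f is continuously differentiable and |∂f/∂x| = |cos(x)| ≤ 1, so Theorem 2.3 applies with K = 1 and f satisfies the Lipschitz Condition on D. A check like this is usually much easier than verifying the Lipschitz inequality directly.)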

2.5 Proof of Uniqueness under the Lipschitz Condition


Before really starting with the proof, we first rewrite the standard ODE from (1) in a somewhat
different form. To find this, suppose we have a solution x(t) for (1). Integrating both
sides of the differential equation from t_0 to t we get

(2)   ∫_{t_0}^{t} x′(τ) dτ = ∫_{t_0}^{t} f(τ, x(τ)) dτ.

You must realise that the functions x and f are in fact multi-dimensional. So in reality we
have x(s) = ( x_1(s), . . . , x_n(s) ), and hence we should read

∫_{t_0}^{t} x′(τ) dτ = ∫_{t_0}^{t} ( x′_1(τ), . . . , x′_n(τ) ) dτ = ( ∫_{t_0}^{t} x′_1(τ) dτ, . . . , ∫_{t_0}^{t} x′_n(τ) dτ ),

where each of the integrals ∫_{t_0}^{t} x′_i(τ) dτ is a normal, one-dimensional integral.
Now recall that for a differentiable function f : R −→ R we have ∫_{t_0}^{t} f′(τ) dτ = f(t) − f(t_0).
Then we find that the equation in (2) is equivalent to x(t) − x(t_0) = ∫_{t_0}^{t} f(τ, x(τ)) dτ. Entering
the initial value x(t_0) = x_0, we get the so-called Volterra Integral Equation:

(3)   x(t) = x_0 + ∫_{t_0}^{t} f(τ, x(τ)) dτ.

In fact, we have shown that solving the initial-value differential equation in (1) is equivalent
to solving the integral equation in (3).
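(As a small illustration: for the one-dimensional initial-value problem x′ = x with x(0) = 1, equation (3) reads x(t) = 1 + ∫_{0}^{t} x(τ) dτ, and the solution x(t) = e^t indeed satisfies it, since 1 + ∫_{0}^{t} e^τ dτ = 1 + (e^t − 1) = e^t.)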

• We next need a preliminary lemma.

Lemma 2.4
Let ϕ : [t_0, t_e) −→ R, where t_e > t_0, be continuous on [t_0, t_e) and satisfy ϕ(t) ≥ 0 for all
t ∈ [t_0, t_e). Suppose there is some constant K ≥ 0 so that

0 ≤ ϕ(t) ≤ K ∫_{t_0}^{t} ϕ(τ) dτ   for all t ∈ [t_0, t_e).

Then ϕ(t) = 0 for all t ∈ [t_0, t_e).

Proof   If K = 0, then we immediately get 0 ≤ ϕ(t) ≤ 0, hence ϕ(t) = 0 for all t ∈ [t_0, t_e).
So from now on we assume K > 0.
For t ∈ [t_0, t_e) write Φ(t) = ∫_{t_0}^{t} ϕ(τ) dτ. Since ϕ(τ) ≥ 0 for all τ ∈ [t_0, t_e), we also have Φ(t) ≥ 0.
Also, Φ(t_0) = 0 and Φ(t) is continuously differentiable with Φ′(t) = ϕ(t). Hence the inequality
in the lemma can be written as

0 ≤ Φ′(t) ≤ K · Φ(t)   for all t ∈ [t_0, t_e).

The second part is the same as Φ′(t) − KΦ(t) ≤ 0. After multiplying with the positive value
e^{−Kt} we get e^{−Kt} Φ′(t) − e^{−Kt} K Φ(t) ≤ 0, which is the same as (d/dt)( e^{−Kt} Φ(t) ) ≤ 0. Now take
the integral from t_0 to t on both sides to get:

e^{−Kt} Φ(t) − e^{−Kt_0} Φ(t_0) = ∫_{t_0}^{t} (d/dτ)( e^{−Kτ} Φ(τ) ) dτ ≤ ∫_{t_0}^{t} 0 dτ = 0.

(Recall that for a differentiable function ψ we have ∫_{a}^{b} ψ′(τ) dτ = ψ(b) − ψ(a).) But since
Φ(t_0) = 0 we must conclude e^{−Kt} Φ(t) ≤ 0. Since e^{−Kt} is positive, it must be the case that
Φ(t) ≤ 0. Together with the inequality 0 ≤ KΦ(t), hence 0 ≤ Φ(t) (since K > 0), we must
conclude Φ(t) = 0 for all t. But then also ϕ(t) = Φ′(t) = 0 for all t ∈ [t_0, t_e).
• Proof of Theorem 2.2   We only consider the interval [t_0, t_e). The interval (t_s, t_0] can be
done similarly, but we must take care of the signs of the integrals when t < t_0.
Suppose there are two solutions x(t) and y(t) of (1) valid on [t_0, t_e) for some t_e > t_0. We will
show that if f(t, x) satisfies the Lipschitz Condition, then we must have x(t) = y(t) for all
t ∈ [t_0, t_e).
Let L be the constant corresponding to the Lipschitz Condition of f(t, x). From the integral
equation formulation in (3) we find

x(t) = x_0 + ∫_{t_0}^{t} f(τ, x(τ)) dτ   and   y(t) = x_0 + ∫_{t_0}^{t} f(τ, y(τ)) dτ.

Subtracting, we see that

x(t) − y(t) = ∫_{t_0}^{t} ( f(τ, x(τ)) − f(τ, y(τ)) ) dτ.

Taking the norm of both sides we get

0 ≤ ‖x(t) − y(t)‖ = ‖ ∫_{t_0}^{t} ( f(τ, x(τ)) − f(τ, y(τ)) ) dτ ‖.

Now we use that for integrable functions a : R −→ R^n we have ‖ ∫_{t_0}^{t} a(τ) dτ ‖ ≤ ∫_{t_0}^{t} ‖a(τ)‖ dτ.
This should require a proof, but if you recall that the integral is the limit of a large sum, and
using the triangle inequality for the norms of sums, I hope you will believe this. Anyway,
applying this inequality and the Lipschitz Condition we find

0 ≤ ‖x(t) − y(t)‖ = ‖ ∫_{t_0}^{t} ( f(τ, x(τ)) − f(τ, y(τ)) ) dτ ‖
                  ≤ ∫_{t_0}^{t} ‖ f(τ, x(τ)) − f(τ, y(τ)) ‖ dτ
                  ≤ ∫_{t_0}^{t} L ‖x(τ) − y(τ)‖ dτ = L ∫_{t_0}^{t} ‖x(τ) − y(τ)‖ dτ.

Now use Lemma 2.4 with K = L and ϕ(t) = ‖x(t) − y(t)‖, and we find ‖x(t) − y(t)‖ = 0 for
all t ∈ [t_0, t_e), hence x(t) = y(t), as required.

Exercises
1 Consider the function f : R^3 −→ R^2 given by f(t, x) = ( x_1^2 + 1, x_2 + t^2 )^T.
(a) Prove that f is locally Lipschitz.
(b) Show that f is not globally Lipschitz on R^3.

2 Suppose the function f(t, x), where f : R^{n+1} −→ R^n, satisfies the Lipschitz Condition on a
certain domain D ⊆ R^{n+1}, and let g : R −→ R^n be any function.
Prove that h defined by h(t, x) = f(t, x) + g(t) also satisfies the Lipschitz Condition on D.

3 (a) Suppose the function f : R^2 −→ R satisfies the Lipschitz Condition on the whole space R^2.
Give an example that shows that this does not guarantee that f is continuous on R^2.
(b) Suppose that the function f(t, x) is globally Lipschitz on R^2, i.e., there is a constant L
such that

|f(t, x) − f(s, y)| ≤ L ‖(t, x) − (s, y)‖   for all (t, x), (s, y) ∈ R^2.

Prove that this means that f is continuous on R^2.
