ODE How to Solve
expressible in terms of exponentials, sines and cosines, and polynomials! In
terms of the matrix A, the solution is expressed as e^{xA}. We will discuss the
meaning of the exponential of a matrix. Naturally this part of the course
involves a lot of linear algebra, especially eigenvalues and eigenvectors.
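As a concrete numerical sketch of the formula y(x) = e^{xA} y0 (the particular matrix A below, the harmonic oscillator y'' = -y rewritten as a 2 x 2 first-order system, is an illustrative choice, not one from the text):

```python
import numpy as np
from scipy.linalg import expm

# Solving dy/dx = A y via the matrix exponential: y(x) = e^{xA} y0.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
y0 = np.array([1.0, 0.0])

def solution(x):
    return expm(x * A) @ y0

# Here A^2 = -I, so e^{xA} = (cos x) I + (sin x) A is a rotation matrix,
# and the solution is (cos x, -sin x): sines and cosines, as promised.
```

For this A the exponential can be checked by hand against the series definition, since the powers of A cycle through I, A, -I, -A.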
Now the moment we consider variable coefficients, such as

    dy/dx = A(x)y,    (2)

where A(x) is a family of matrices that depend on x, there might or might
not be an explicit solution. Even worse is a nonlinear system, like this one:

    dy/dx = f(x, y),    (3)

with general “nonlinear” dependence on y. (Or better, because such systems
are much richer mathematically?)
Fortunately, we always know that there is a unique solution of the general
system (3) with any initial condition y(0) = y0 , where y0 is any given vector.
This is the central theorem of the course. The proof involves approximating
the solution by iteration, as in

    dy^m/dx = f(x, y^{m-1}),    y^m(0) = y0    (m = 1, 2, 3, . . .),

where m is a superscript. It is really simple to get our hands on the se-
quence of functions y 1 (x), y 2 (x), . . . . Then we prove that it converges and
that its limit satisfies (3). This involves a bit of analysis, namely uniform
convergence. Actually, there are a couple of caveats to this theorem. First,
the solution is only guaranteed to exist for a short interval of x (“for a short
time”). Secondly, it has to be assumed that the function f (x, y) in the equa-
tion is not too bad (continuously differentiable is enough).
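The first few steps of this iteration are easy to carry out numerically. A minimal sketch (the helper picard, the grid, and the choice f(x, y) = y with y0 = 1, whose exact solution is e^x, are all illustrative assumptions, not the course's notation):

```python
import numpy as np

# Picard iteration for the scalar IVP y' = f(x, y), y(0) = y0.
# Each step integrates the previous iterate:
#   y_m(x) = y0 + integral from 0 to x of f(t, y_{m-1}(t)) dt
def picard(f, y0, x_grid, iterations):
    y = np.full_like(x_grid, y0, dtype=float)  # y_0(x) = y0, the constant guess
    for _ in range(iterations):
        integrand = f(x_grid, y)
        # cumulative trapezoid rule approximates the integral from 0 to x
        increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x_grid)
        y = y0 + np.concatenate(([0.0], np.cumsum(increments)))
    return y

x = np.linspace(0.0, 1.0, 2001)
approx = picard(lambda t, y: y, 1.0, x, 12)
# For f(x, y) = y the m-th iterate is the degree-m Taylor polynomial of e^x,
# so after 12 iterations approx is very close to the exact solution.
```

Watching the iterates y^1, y^2, . . . build up the Taylor series of e^x term by term is a good way to see why the sequence converges.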
Okay, now we know how to solve the linear, constant-coefficient systems,
and we know that there is always a solution for any initial condition for a
little while. What happens to the more complicated equations after that
little while? Well, it can get very, very complicated. The solutions could be
chaotic, tracing out intricate, fractal-like sets.
If the system is linear but has variable coefficients, the linear structure is
still very useful but there might be no explicit solution formula. For instance,
Bessel’s equation

    y'' + (1/x) y' + (1 - n^2/x^2) y = 0

is an equation that comes up in most problems with circular or spherical
symmetry. Its space of solutions is two-dimensional. One of its solutions is
the Bessel function J_n(x), which oscillates like sin x but with gradually decreasing amplitude.
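Even without a solution formula in elementary functions, one can check numerically that the Bessel function satisfies this equation. A sketch using SciPy (jv and jvp are SciPy's Bessel function of the first kind and its derivatives):

```python
import numpy as np
from scipy.special import jv, jvp

# Plug J_n into Bessel's equation y'' + (1/x) y' + (1 - n^2/x^2) y = 0
# and check that the residual vanishes (up to rounding error).
n = 1
x = np.linspace(1.0, 20.0, 200)
residual = jvp(n, x, 2) + jvp(n, x, 1) / x + (1 - n**2 / x**2) * jv(n, x)
# residual is ~1e-15: J_n really does solve the equation on this interval
```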
A very interesting type of nonlinear system is

    dy1/dx = f(y1, y2),    dy2/dx = g(y1, y2)    (4)

where f and g are functions that depend on y1 and y2 but not on x. This is
called an autonomous 2 × 2 system. The nice thing about it is that you can
draw pictures of the solutions in the y1 , y2 plane. This is called the phase
plane and the pictures of the solutions are called the phase portraits. Typical
behaviors near equilibrium points are called nodes, saddles, and centers.
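For a linear system dy/dx = Jy, the type of the equilibrium at the origin can be read off from the eigenvalues of J. A sketch (the function classify and its tolerance are illustrative, not a standard API):

```python
import numpy as np

# Classify the origin of the linear 2x2 system dy/dx = J y
# by the real and imaginary parts of the eigenvalues of J.
def classify(J):
    ev = np.linalg.eigvals(np.asarray(J, dtype=float))
    re, im = ev.real, ev.imag
    if np.all(np.abs(im) > 1e-12):          # complex eigenvalues: rotation
        if np.all(np.abs(re) < 1e-12):
            return "center"                 # pure rotation, closed orbits
        return "stable spiral" if np.all(re < 0) else "unstable spiral"
    if re[0] * re[1] < 0:
        return "saddle"                     # one direction in, one out
    return "stable node" if np.all(re < 0) else "unstable node"

classify([[0, 1], [-1, 0]])   # eigenvalues +-i: a center
classify([[1, 0], [0, -2]])   # eigenvalues of opposite sign: a saddle
```

Near an equilibrium of the nonlinear system (4), the same classification applied to the Jacobian matrix of (f, g) usually predicts the local phase portrait.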
Some of them are stable and some are unstable. “Stable” means that
when you wiggle the initial condition a little, the solution only changes a
little. Stability is an important concept because it tells you something about
what happens beyond that guaranteed little interval of x. Stability is also
a crucial concept in most applications. For instance, the planets satisfy a
differential equation coming from Newton’s law of motion under the influence
of gravity. They move in stable orbits around the sun; if they didn’t, they
would fly off and disappear from the solar system or crash into another
planet or the sun. For system (4) there may be a limit cycle, which is a
closed orbit (a periodic solution) that is a limit of spiraling solutions. The
Poincaré-Bendixson theorem says that a bounded solution of a 2 × 2 autonomous
system that does not approach an equilibrium must approach such a closed
orbit; limit cycles are therefore the most complicated long-term behavior
possible in the plane.
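The van der Pol oscillator is the standard example of a limit cycle: solutions starting near the origin spiral outward, solutions starting far away spiral inward, and both approach the same closed orbit. A rough numerical sketch (fixed-step RK4 with illustrative parameters, not production code):

```python
import numpy as np

# van der Pol system: y1' = y2, y2' = mu (1 - y1^2) y2 - y1
mu = 1.0
f = lambda y: np.array([y[1], mu * (1 - y[0]**2) * y[1] - y[0]])

# classical fourth-order Runge-Kutta with fixed step h
def rk4(f, y, h, steps):
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# one solution starts inside the cycle, one well outside; by x = 200
# both have settled onto the limit cycle (amplitude roughly 2)
a = rk4(f, np.array([0.1, 0.0]), 0.01, 20000)
b = rk4(f, np.array([4.0, 0.0]), 0.01, 20000)
```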
It is when we get to 3×3 systems that we may find chaos. In n dimensions,
that is, where there are n dependent variables, the solutions are curves in
n-dimensional Euclidean space. Then other, more sophisticated geometrical
and topological concepts become useful.
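The Lorenz system is the classic 3 × 3 example of such chaos: solutions stay bounded, yet two solutions with nearly identical initial conditions separate rapidly. A sketch (fixed-step RK4; the parameters are the standard sigma = 10, rho = 28, beta = 8/3):

```python
import numpy as np

# Lorenz system with the classical chaotic parameter values
def lorenz(y):
    x, yy, z = y
    return np.array([10.0 * (yy - x),
                     x * (28.0 - z) - yy,
                     x * yy - (8.0 / 3.0) * z])

# classical fourth-order Runge-Kutta with fixed step h
def rk4(f, y, h, steps):
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# two solutions whose initial conditions differ by 1e-8 in one coordinate
a = rk4(lorenz, np.array([1.0, 1.0, 1.0]), 0.001, 20000)          # x = 20
b = rk4(lorenz, np.array([1.0, 1.0, 1.0 + 1e-8]), 0.001, 20000)
# both stay bounded on the attractor, but the tiny initial difference
# has been amplified enormously: sensitive dependence on initial conditions
```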