L12 Chaos
12 August 2021
Readings:
• Chapter 12 of Taylor. Our librarian was able to make this chapter available on Quercus.
• Baker & Gollub is a whole textbook dedicated to the topic, and probably too long for you to read
entirely. It also assumes a somewhat higher level than you probably have. Use it as a side resource: if
there is something you don’t understand in the notes, look up the concept in their table of contents and
go directly to the corresponding sub-section.
1 Introduction
We keep going with our description of the damped-driven pendulum, though in this lecture we focus on what makes the system chaotic, introducing more adapted visualization tools: Poincaré sections and bifurcation diagrams. In the second half of this lecture, we focus on a simpler system that exhibits chaos: the logistic map. It is used in population dynamics rather than classical mechanics, but we know by now that this course uses classical examples as a pretext to address universal questions, so this should not come as a shock. Because the logistic map is easier to simulate, it will allow me to introduce in more detail two features of chaos: the subharmonic cascade as a route to chaos, and the Lyapunov exponent as a measure of the sensitivity to initial conditions.
Perhaps the most defining feature of chaotic systems is their sensitivity to initial conditions: only if we knew the initial conditions perfectly could we predict the outcome equally perfectly. This is what Pierre-Simon Laplace was writing about in 1814 in A Philosophical Essay on Probabilities:

> We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
This “intellect” came to be known as “Laplace’s demon”. Technically, had quantum mechanics not happened, Laplace would probably have been right, even about classical chaotic systems. But it is fundamentally impossible for such an intellect to exist: even this formidable intellect would need a measuring device, and there is no version of the future in which the perfect measuring device exists (again, QM will always be a hard limit on which hopes and dreams crash). Therefore, chaos.
In order to study nonlinear systems, we are going to write our equations in a specific form. Any
system of ODEs, of whatever order, can be written as series of first-order ODEs for appropriately
defined xn ’s:
$$\dot{x}_1 = F_1(x_1, x_2, \ldots, x_N)$$
$$\dot{x}_2 = F_2(x_1, x_2, \ldots, x_N)$$
$$\vdots$$
$$\dot{x}_N = F_N(x_1, x_2, \ldots, x_N)$$
If all of the functions Fn are linear, there can be no chaos in the system: an exponential ansatz will give you the full solution. For chaos to be possible, two conditions are necessary: at least three variables (N ≥ 3), and at least one nonlinear Fn.
Also, those conditions are necessary, not sufficient: the non-linear wave examples featured a large (pendulum chain) or infinite (water waves) number of degrees of freedom and non-linearity, and yet their soliton solutions were perfectly predictable and not chaotic, which is what made them so special.
To solidify this concept, here is another counter-example: something that is unpredictable, yet not chaotic. Brownian motion, which is an example of a random walk, is experienced by, e.g., a piece of pollen on the surface of water. The pollen gets tossed around by collisions with surrounding water molecules, making the motion erratic and unpredictable. Yet this motion is not chaotic: it is unpredictable because of external circumstances, namely the collisions with the water molecules. The motion itself is very linear; it is just projectile motion between two collisions. But in the absence of information about the water molecules, the motion is not deterministic: it is as if the next move were drawn from the roll of a die. This is especially important in QM, where outcomes of given processes are inherently drawn from distributions.
The DDP, on the other hand, was started at a very specific angle and initial velocity, and yet it was unpredictable. Chaos is about a system being internally, or inherently, unpredictable. And yet it was deterministic: given initial conditions led to unambiguous outcomes.
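To see the contrast in code, here is a minimal sketch of a 1D random walk (the function and parameter names are mine, not from the course scripts): the unpredictability comes entirely from the random draws, not from the dynamics.

import numpy as np

rng = np.random.default_rng(seed=0)

def random_walk(n_steps):
    """ 1D random walk: each step is +1 or -1, drawn at random,
    like a pollen grain being kicked around by water molecules """
    steps = rng.choice([-1, 1], size=n_steps)
    return np.cumsum(steps)

# two walks with the exact same initial condition (x = 0) still differ:
# the motion is unpredictable but not deterministic, hence not chaotic
walk1, walk2 = random_walk(1000), random_walk(1000)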
3 The damped driven pendulum (DDP)

We return to the damped driven pendulum, driven at frequency ωd (not the damped pseudo-frequency of lecture 4; I have redefined ωd to be the driving frequency, because I will need to keep ω as a free variable for Fourier visualizations).
The equation of motion is
$$\ddot{\theta} + 2\gamma\dot{\theta} + \omega_0^2 \sin\theta = \omega_0^2\, \beta \cos(\omega_d t),$$
with β some measure (in rad) of how hard we drive it.
Define x1 ≡ θ, x2 ≡ θ̇ = ẋ1, and x3 ≡ ωd t. Then we can write this system as follows:
$$\dot{x}_1 = x_2 \quad \text{(linear)}$$
$$\dot{x}_2 = -2\gamma x_2 - \omega_0^2 \sin x_1 + \beta\omega_0^2 \cos x_3 \quad \text{(nonlinear)}$$
$$\dot{x}_3 = \omega_d \quad \text{(linear)}$$
We now have a system of three 1st order equations for x1 , x2 , x3 and there is nonlinearity in the
system (because of the cos and sin terms). So there is a possibility of chaos.
In lecture 11, we took g = 9.8 m/s², ` = 1 m, γ = ω0/4, ωd = 2ω0/3 and saw what happened for β = 0.2 (linear), β = 0.9 (appearance of super-harmonics due to the internal dynamics of the non-linear oscillator, but still a single-period, predictable cycle) and β = 1.2 (messy). Before analyzing this system further, let us define parameters and main routines. I will use the notation Td = 2π/ωd for the driving period.
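The actual integrator lives in ddp_routines.py, which I am not reproducing here. For concreteness, below is a minimal sketch of what a generate_time_series-like routine could look like with scipy; the tolerances and the resolution of 200 points per driving cycle are illustrative choices, not necessarily those of the module.

import numpy as np
from scipy.integrate import solve_ivp

g, ell = 9.8, 1.0
omega0 = np.sqrt(g/ell)    # natural frequency
gamma = omega0/4.          # damping coefficient
omegad = 2.*omega0/3.      # driving frequency
Td = 2.*np.pi/omegad       # driving period

def ddp_rhs(t, x, omegad, omega0, beta, gamma):
    """ right-hand side of the 1st-order system, with x = (theta, dtheta/dt);
    x3 = omegad*t is handled through t directly """
    theta, dottheta = x
    return [dottheta,
            -2.*gamma*dottheta - omega0**2*np.sin(theta)
            + beta*omega0**2*np.cos(omegad*t)]

def generate_time_series_sketch(theta0, dottheta0, omegad, omega0, beta, gamma, time):
    sol = solve_ivp(ddp_rhs, (time[0], time[-1]), [theta0, dottheta0],
                    t_eval=time, args=(omegad, omega0, beta, gamma),
                    rtol=1.e-9, atol=1.e-9)
    return sol.y[0], sol.y[1]

time = np.arange(0., 40.*Td, Td/200.)  # 200 points per driving cycle
theta, dottheta = generate_time_series_sketch(0.2, 0., omegad, omega0, 0.2, gamma, time)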
Writing down all the routines in cells takes a lot of space, and is impractical if we need to change them. I have therefore copied and pasted last lecture’s routines (generating time series, plotting time series, phase plots...) into a script called ddp_routines.py. We can load the functions as a Python module as below.
import ddp_routines as ddp
# Here is an example we saw last lecture.
beta = 0.2  # driving amplitude = F0/(m L omega0**2) with F0 the driving torque
theta, dottheta = ddp.generate_time_series(
    theta0, dottheta0, omegad, omega0, beta, gamma, time)

ddp.plot_spectrum(theta, omegad, omega0, beta, gamma, time, 4., ftsz)
3.1.2 Poincaré section
Let’s now look at a new way to visualize the dynamics. Now that we know that we have three
equations, we know that our phase plots should have been 3D (or maybe we didn’t know it, and
I’m telling you): the appropriate phase plot should have been in (θ, ωd t, θ̇ )-space. And because
the ωd t axis is also associated with periodic motion (the forcing), we would also fold it on itself
every ωd t = 2π, just like the θ axis. Like the sketch below.
So, the dynamics can be pretty complicated to visualize if phase trajectories start going in all directions. What we have done so far in our phase plots was to take all points and collapse them onto the (θ, θ̇) plane. But this approach showed its limits in the last case of the last lecture, with β = 1.2.
Another way to simplify the visualization is to only plot one point every forcing period, which
visually looks like slicing the phase-space across one value of ωd t. This is called a Poincaré section.
Let’s see what it would look like in our first three examples.
I reproduce the routine to draw a Poincaré section below, but the one I actually use is in the ddp
module. They should be identical.
def Poincare_section(n_cyc, n_per_Td, it_min, th, dth, t, wd, w0, g, b, ftsz):
    """ draw Poincare section; n_cyc is how many forcing cycles we computed,
    n_per_Td is how many iterations per driving cycle, it_min is where we start
    (to skip transient), th is theta, dth is dtheta/dt
    """
    # keep one point per driving period, starting at iteration it_min
    Pc_th = th[it_min::n_per_Td]
    Pc_dth = dth[it_min::n_per_Td]
    Pc_t = t[it_min::n_per_Td]
    Pc_th_wrapped = ddp.wrap_theta(Pc_th)
    ddp.plot_phase(Pc_th_wrapped, Pc_dth, wd, w0, b, g, Pc_t, ftsz, nconts=32)
    return
Do you see the Poincaré section? It’s the little speck near the middle of the picture, around θ =
0.1. Of course, a Poincaré section is useless for a single-cycle periodic motion, because it shows
the same point over and over again. The case β = 0.9 would show something similar, because
superharmonics are “filtered out” in a Poincaré section.
If you look closely at the values of the local extrema (it is a bit more visible in the local minima of θ), you see a two-cycle periodicity: the minima located just before even numbers of periods (t/Td ≈ 26, 28, ...) are slightly below those in-between.
In the Fourier plots, we can see a sub-harmonic appear at ωd/2. You can also notice that peaks start appearing between integer multiples of the driving frequency, which is how the non-linear term sin θ in the pendulum reacts to the appearance of a sub-harmonic motion. See below.
ddp.plot_spectrum(theta, omegad, omega0, beta, gamma, time, 24., ftsz)
As for the phase plot, see below.
ddp.plot_phase(theta, dottheta, omegad, omega0, beta, gamma, time, ftsz, nconts=32)
Focus on the dark traces, which are the stationary regime (the lighter traces correspond to the early transient). The full cycle indeed appears to be split into two, but it is still closed and therefore
periodic. We can thus characterize this motion with 2 periods:
1. the time it takes to cover an angle of 2π around the graph and
2. the time it takes to get back to exactly the same spot as before, which is 2Td .
In the Poincaré section, the first period is the time it takes to go from one dot to the other dot (this is one driving period). The second period is the time it takes to go from one dot to the other dot, then back to the first dot. In this example that is twice the driving period (or equivalently, half the driving frequency). This attractor is known as a “two-period cycle” because you bounce between two points.
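A simple numerical check of this: count the distinct points in the Poincaré section. Below is a sketch (the helper name and the rounding tolerance are my own choices); a two-period cycle gives two distinct points.

import numpy as np

def cycle_order(Pc_th, decimals=6):
    """ number of distinct points in a Poincare section; for a periodic
    attractor this is the n of the n-cycle """
    return len(np.unique(np.round(Pc_th, decimals)))

# a two-cycle bounces between two values of theta:
print(cycle_order(np.array([0.11, -0.42, 0.11, -0.42, 0.11])))  # prints 2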
ddp.Poincare_section(num_cycles, ntTd, 24*ntTd, theta, dottheta,
                     time, omegad, omega0, gamma, beta, ftsz)

ddp.plot_spectrum(theta, omegad, omega0, beta, gamma, time, 24., ftsz)

ddp.plot_phase(theta, dottheta, omegad, omega0, beta, gamma, time, ftsz, nconts=32)
You’d be excused for thinking that not much has happened, except for the amplitude and the phase. But change the initial condition and something happens:
theta, dottheta = ddp.generate_time_series(
    0., dottheta0, omegad, omega0, beta, gamma, time)  # replace theta0 with 0.
ddp.plot_TS(theta, dottheta, omegad, omega0, beta, gamma, time/Td, ftsz, tmin=24.)

ddp.plot_spectrum(theta, omegad, omega0, beta, gamma, time, 24., ftsz)
ddp.plot_phase(theta, dottheta, omegad, omega0, beta, gamma, time, ftsz, nconts=32)

theta_wrapped = ddp.wrap_theta(theta)  # I prefer to use the one from the ddp module
ddp.plot_phase(theta_wrapped, dottheta, omegad, omega0, beta, gamma, time/Td, ftsz, nconts=32)
The Poincaré section is now more compact.
ddp.Poincare_section(num_cycles, ntTd, 4*ntTd, theta, dottheta,
                     time, omegad, omega0, gamma, beta, ftsz)
So, we now have a three-period cycle. What happened?
In 3D, with high enough non-linearity, the phase-space is separated into “basins of attraction”:
if the dynamics starts in one of these “basins”, and if the driving isn’t so high that it forces the
system to cross basin boundaries, it will remain in it. By changing the initial condition, we change
which basin of attraction we start in.
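One way to visualize this is to scan over initial angles and record where each trajectory ends up on the Poincaré section; initial conditions whose final points cluster together share a basin. Below is a sketch reusing the ddp calls from the cells above (the transient skip of 4 driving cycles and the scan resolution are my choices).

import numpy as np

theta0_scan = np.linspace(-np.pi, np.pi, 41)
final_points = []
for th0 in theta0_scan:
    # same call signature as in the cells above
    th, dth = ddp.generate_time_series(th0, dottheta0, omegad, omega0,
                                       beta, gamma, time)
    # keep one point per driving period after the transient, take the last
    final_points.append(ddp.wrap_theta(th[4*ntTd::ntTd])[-1])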
theta_wrapped = ddp.wrap_theta(theta)  # I prefer to use the one from the ddp module
ddp.plot_phase(theta_wrapped, dottheta, omegad, omega0, beta, gamma, time/Td, ftsz, nconts=32)
Periodicity is now hard to see. What is the Fourier spectrum saying?
ddp.plot_spectrum(theta, omegad, omega0, beta, gamma, time, 5., ftsz)
So, the periodicity due to the driving is still there. But besides that, it is a mess of frequencies: there are plenty of superharmonics, and they are not multiples of ωd. But there are also subharmonics: frequencies that are less than ωd. In fact, the lower-frequency signals appear more important, indicating that long-term motions dominate (the drift from potential well to potential well each time the pendulum does a barrel roll is one such example).
We just witnessed the appearance of chaos: very unpredictable dynamics, where periodicity gave way to all sorts of frequencies.
The Poincaré section at this point is not very informative because there are not many points on it (see below). I will therefore run a longer integration to collect more data.
ddp.Poincare_section(num_cycles, ntTd, 4*ntTd, theta, dottheta,
                     time, omegad, omega0, gamma, beta, ftsz)
beta = 1.2  # have to re-run it
many_cycles = 1200
long_duration = many_cycles*Td
long_time = np.arange(0., long_duration, dt)  # initialize the time array
theta, dottheta = ddp.generate_time_series(
    theta0, dottheta0, omegad, omega0, beta, gamma, long_time)
from ipywidgets import interact, fixed, IntSlider

# For some reason, the interactive slider doesn't work all the time
interact(ddp.Poincare_section, n_cyc=fixed(many_cycles), n_per_Td=fixed(ntTd),
         th=fixed(theta), dth=fixed(dottheta), t=fixed(long_time), wd=fixed(omegad),
         w0=fixed(omega0), g=fixed(gamma), b=fixed(beta), ftsz=fixed(ftsz),
         it_min=IntSlider(min=int(4.*ntTd), max=int(5.*ntTd), step=int(0.1*ntTd), value=4*ntTd))
You can display a few Poincaré sections by changing the initial iteration via the parameter it_min,
which should look like 4*ntTd. If you change 4 into any number between 4 and 5 (don’t forget to
convert the number into an integer, e.g., int(4.5*ntTd)), you will get a different Poincaré section.
Or you can use the interactive slider in Jupyter.
The above is a lot more readable than the “raw” phase plot below.
th_wrapped = ddp.wrap_theta(theta)
ddp.plot_phase(th_wrapped, dottheta, omegad, omega0, beta, gamma, long_time, ftsz, nconts=32)
If we were able to zoom (no, not that Zoom...), we would notice a complicated layered structure that occupies a compact region of phase space in the Poincaré section (believe me, or see figures 12.29 to 12.31 of Taylor). This is no longer a single-period cycle, or a two-period cycle, or a 10-period cycle. The attractor in this case is “stranger”, and hence it is known as a strange attractor.
The attractor has a self-similar structure: certain patterns reappear at smaller and smaller scales. The structures you see in figures 12.29 to 12.31 of Taylor look the same no matter what resolution you choose. This is a standard characteristic of a strange attractor. A shape like this is said to have a “fractal dimension”.
When you have a strange attractor, your system is chaotic. The attractor is so complicated that you could be at any of an infinite number of points, which makes it hard to predict the location of your system at a particular time (as we will see later). The problem, then, is that if I start my system with two initial conditions that are slightly different, they can end up far apart at a later time. Since it is hard (actually impossible) for me to measure my exact initial condition, I do not know which scenario I actually started with.
Or you can take it the other way around: take any point (θ, θ̇) in 2D phase space, and try to go
back in time and calculate what the initial condition was. To do that, you will pick the closest
trajectory and crank up time in the negative direction. But what does “closest trajectory” mean
when your dimension is fractal? Whatever you pick, there will always be one that is closer, and
that originated from a very different initial condition!
This sensitivity to initial conditions is the defining feature of chaotic systems.
4 Bifurcation diagrams
So right now you have a picture in your head that as we increased β we went from a single-period attractor, to a two- or three-period attractor, to a strange attractor. What happens if we increase β further? The answer may surprise you: you can actually return to another single-period-cycle attractor! It becomes very confusing, and bifurcation diagrams help shed some light on it.
But first, let’s illustrate the return of the single-period attractor with β = 1.35.
beta = 1.35  # have to re-run it
theta, dottheta = ddp.generate_time_series(
    theta0, dottheta0, omegad, omega0, beta, gamma, time)

th_wrapped = ddp.wrap_theta(theta)
ddp.plot_phase(th_wrapped, dottheta, omegad, omega0, beta, gamma, time, ftsz, nconts=32)
Go ahead, let a lol out. We increased the non-linearity from β = 1.2 to 1.35 and we are back to a
single-period attractor, although the type of motion is quite different. The pendulum is continu-
ously spinning around the top in one direction.
As we continue to increase β again (I will let you do it because I am getting sick of copy-pasting lines of code; see the attached Python scripts, which you can modify and run in a more nimble way), we start getting period doubling just like we did before. First we get a two-period cycle at β = 1.45, then at β = 1.47 we get a four-period cycle (note: the actual β values might be slightly different on your computer), and the period keeps doubling as β is increased until we get to another chaotic regime. This period doubling is known as a “subharmonic cascade” to chaos.
The behaviour is summarized in the “bifurcation diagram”. What you do is plot the θ value from
your Poincaré section as a function of β. So for β values where you have a one period cycle, you
have one dot on the curve, for β values where you have a 2 period cycle, you have 2 values on the
curve at the same β value and so on.
Bifurcation plots take a while to produce: you need to scan through a lot of values of β, one by one; for each of them, you need to integrate over a long time to have enough points on a Poincaré section; then you need to extract the Poincaré section from the whole time series, and then plot. For this reason, I have not included cells that create a bifurcation diagram, because it would disrupt the flow of the Jupyter notebook. If you want to reproduce them, you need to run the routine ddp.plot_bifurcation outside of this notebook (I provide a script called ddp_bifurcation_plots.py that does just that). The routine from ddp is reproduced below.
def plot_bifurcation(bmin, bmax, nbs, wd, w0, g, th0, dth0, n_per_Td, ftsz):
    """ Bifurcation plot for the DDP; produces two: one for velocity,
    one for angle """
    fig, (ax1, ax2) = plt.subplots(2, 1)
    for beta in np.linspace(bmin, bmax, nbs):
        # sketch of the missing middle (see ddp_routines.py for the real thing):
        # integrate, then keep one point per driving period, skipping the transient
        th, dth = generate_time_series(th0, dth0, wd, w0, beta, g, time)
        Pc_thvals = wrap_theta(th[4*n_per_Td::n_per_Td])
        Pc_dthvals = dth[4*n_per_Td::n_per_Td]
        # We now have computed the values for this beta, we add to the plot
        # Below is an array of same length as Pc_thvals full of beta
        x_values = np.full(len(Pc_thvals), beta)
        ax1.plot(x_values, Pc_thvals, 'b.', markersize=1.)
        ax2.plot(x_values, Pc_dthvals, 'g.', markersize=1.)
    return
# ddp.plot_bifurcation(0.9, 1.5, 200, omegad, omega0, gamma, theta0, dottheta0, ntTd, ftsz, qtty='')
# The above cell would have taken too long to run; I will just include the pictures.
Note: we can make our bifurcation diagram by plotting either θ or θ̇ from our Poincaré section. The shape of the figure would look more or less the same, but the numbers would change.
Notice the intervals in β where the chaotic regions become un-chaotic again, e.g., around β = 1.35, which we investigated before. And watch them go through a subharmonic cascade again!
The bifurcation diagram also exhibits a self-similar structure, which we can check by plotting a new bifurcation diagram over the more restricted range 1.06 ≤ β ≤ 1.086.
# plot_bifurcation(1.06, 1.0865, 200, omegad, omega0, gamma, theta0, dottheta0, ntTd, ftsz, qtty='')
And below, I am zooming in even more.
5 Maps as chaotic systems
5.1 Presentation of the logistic map
Even though the damped driven pendulum (DDP) is a physically “simple” system (i.e., the equations are not that complicated), we can see chaotic behaviour in even simpler mathematical systems. To understand chaos at a more basic level, we turn to difference equations. Difference equations are not continuous equations: they tell you the value of a variable in a sequence based on the values of the variable earlier in the sequence.
A famous difference equation is the logistic map,
$$x_{n+1} = \mu x_n (1 - x_n),$$
which is used in population dynamics. If we know the initial value x0, this equation can give x1, which we can then use to get x2, and so on. For example, with µ = 2 and x0 = 0.8, we get x1 = 2 × 0.8 × 0.2 = 0.32, then x2 = 2 × 0.32 × 0.68 = 0.4352, etc. So this equation gives you a sequence of values for x. I should note that for the logistic map, you restrict starting values of x between 0 and 1. It is called a map because you essentially “map” a point based on the value of the previous point.
You’ve actually used difference equations already:
• Numerical integration in Python, x[i+1] = x[i] + v[i]*dt, has exactly the form of a difference equation. Any differential equation can be represented by a difference equation, which is why they are relevant for physics.
• Poincaré sections can be represented as a 2D map, as in
$$\theta_{n+1} = G_1(\theta_n, \dot{\theta}_n), \qquad \dot{\theta}_{n+1} = G_2(\theta_n, \dot{\theta}_n),$$
where G1 and G2 are functions that give you the new point’s location from the old point’s location.
Back to the logistic map. This is one of the most standard maps you will see in any chaos book; it is relatively easy to understand and is even found in some “general audience” books about chaos.
The graph of µx (1 − x ) is parabolic (see below), but think about what the axes represent. The horizontal axis is the value of xn and the vertical axis is the value of xn+1.
Just like in our ODEs for the damped driven pendulum, we are interested in the evolution of this system: what are the values of xn+1 for large n? Are there any “stable fixed points” (i.e., point attractors)? Are any of them “strange attractors”? To find the attractors, we can apply a nice geometric method.
The logistic map (and other maps) can be considered as a set of instructions, a.k.a. an algorithm:
1. For value x, calculate y = µx (1 − x ),
2. set x = y,
3. Repeat.
This process can be envisioned graphically if we plot both the logistic curve and the line y = x on the same graph. I am going to use dashed red lines to represent the steps above. Starting from xn = x0 on the horizontal axis, I will move vertically up to the logistic curve to find my xn+1 value (step 1, dashed red lines). Then I will set this y value to my x value by moving horizontally from my point to the line y = x (step 2, dash-dotted green lines). Then I will repeat the process until we have done 400 steps.
def iterate_logistic_map(mu, x0):
    xns = [x0]  # one element for now
    x = x0
    counter = 0  # counting how many steps
    while counter < 400:
        counter += 1
        y = mu*x*(1-x)  # do step 1
        x = y  # do step 2
        xns.append(y)  # add the next value to the list
    return xns
def plot_logistic_map(mu, x0, skip_n_ites=0):
    """ cobweb plot of the iteration (left) and x_n versus n (right) """
    list_of_xn = iterate_logistic_map(mu, x0)
    xs = np.linspace(0., 1., 201)
    plt.figure(figsize=(10., 4.))
    plt.subplot(1, 2, 1)
    plt.plot(xs, mu*xs*(1.-xs), 'k')  # the logistic curve
    plt.plot(xs, xs, 'k:')  # the line y = x
    for ite, (xn, yn) in enumerate(zip(list_of_xn[:-1], list_of_xn[1:])):
        if ite == 0:  # the first move starts from the horizontal axis
            plt.plot([xn, xn], [0, yn], 'r--')  # plot step 1
        else:
            plt.plot([xn, xn], [xn, yn], 'r--')  # plot step 1
        plt.plot([xn, yn], [yn, yn], 'g-.')  # plot step 2
    plt.xlim([0, 1])
    plt.ylim([0, 1])
    plt.xlabel('$x_n$')
    plt.ylabel('$x_{n+1}$')
    plt.subplot(1, 2, 2)
    plt.plot(list_of_xn)
    plt.xlim(skip_n_ites, len(list_of_xn)-1)
    plt.xlabel('$n$')
    plt.ylabel('$x_n$')
    plt.grid()
    plt.tight_layout()
    plt.show()
    return
5.2 µ<1
I will start by considering a µ value less than one, say µ = 0.9. Let’s start with any initial x0 value
between 0 and 1 (say x0 = 0.8), and run the program.
plot_logistic_map(0.9, 0.8)
We find that x always heads to the origin. The origin is a “stable fixed point”, or “point attractor” (like the valleys in potential plots), in this case.
5.3 µ>1
5.3.1 µ = 2.5
Let’s increase µ to 2.5. Again, start with any x0 value between 0 and 1 (you can try a few) and run
the program.
plot_logistic_map(2.5, 0.2)
Now, x heads to the intersection of the y = x and y = µx (1 − x ) curves. This is true for all initial x values except x0 = 0, in which case y = 0 and xn stays at 0. In this case x = 0 is an “unstable fixed point” (like the hills in potential plots) and the other x value is a “stable fixed point”. We can solve for the location of the intersection by setting x = µx (1 − x ): dividing by µx gives 1/µ = 1 − x, i.e., the stable fixed point occurs at x = 1 − 1/µ.
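We can also check the stability of that fixed point: a fixed point x* of x_{n+1} = f(x_n) is stable when |f′(x*)| < 1. For f(x) = µx(1 − x), f′(x) = µ(1 − 2x), so at x* = 1 − 1/µ we get f′(x*) = 2 − µ: the nonzero fixed point is stable for 1 < µ < 3 and loses stability at µ = 3, which is where the period doubling of the next subsection comes from. A quick numerical check (a sketch of mine, not from the course scripts):

def fixed_point_slope(mu):
    """ f'(x*) at the nonzero fixed point x* = 1 - 1/mu of the logistic map;
    algebraically this equals 2 - mu """
    xstar = 1. - 1./mu
    return mu*(1. - 2.*xstar)

for mu in [2.5, 2.9, 3.1, 3.3]:
    slope = fixed_point_slope(mu)
    print(mu, slope, "stable" if abs(slope) < 1. else "unstable")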
At this point we have found that for µ < 1 we have a single stable fixed point (i.e. a 1-cycle
attractor). At µ = 1 the stable fixed point changes to x = 1 − 1/µ, but it is still a 1-cycle attractor.
We are calling them “cycle attractors” because they are maps, and we are treating them like we
would the Poincaré sections from the last lecture. What happens as we keep increasing µ?
5.3.2 µ = 3.3
As we start increasing µ values further, we can find some interesting things happen to our fixed
point. Eventually this fixed point becomes unstable and we get a “period doubling” (same as with
the DDP).
plot_logistic_map(3.3, 0.8, 380)
At µ = 3.3, xn eventually bounces between 2 points on the logistic curve. We now have a 2-cycle
attractor.
As we increase µ further, we go through a sequence of period doublings, i.e., a subharmonic
cascade, and eventually, we get chaos. E.g. at µ = 3.5: 4 cycle attractor, µ = 3.55: 8 cycle attractor,
µ = 3.9: chaotic.
plot_logistic_map(3.5, 0.8, 380)

plot_logistic_map(3.9, 0.8, 300)
Just like with the DDP, we can plot a bifurcation diagram for the logistic map (i.e. plot the stable
fixed points as a function of µ).
def bifurcation_logistic(mumin, mumax, skip_n_ites, ymin=0, ymax=1):
    """ Bifurcation plot """
    x0 = 0.8  # no need to vary it
    mus = np.linspace(mumin, mumax, 400)
    for mu in mus:
        list_of_xn = iterate_logistic_map(mu, x0)
        reduced_list_of_xn = list_of_xn[skip_n_ites:]
        x_values = np.full(len(reduced_list_of_xn), mu)  # array full of mu
        plt.plot(x_values, reduced_list_of_xn, 'b.', markersize=1.)
    plt.xlabel(r'$\mu$')
    plt.ylabel('Stationary values of $x_n$')
    plt.title('Bifurcation diagram for the logistic map')
    plt.xlim(mumin, mumax)
    plt.ylim(ymin, ymax)
    plt.grid()
    plt.tight_layout()
    plt.show()
    return
bifurcation_logistic(0, 4, 40)
bifurcation_logistic(3.53, 3.59, 40, ymin=0.87, ymax=0.898)
The 2nd and 3rd plots are just magnifications (actually, we ran new sets of simulations each time) of the first plot. Their resemblance is striking, but pay attention to the axis ranges, and try to map each one onto the preceding plot: you should see that each plot is really a small piece of the preceding one. Notice the period doublings and eventual chaotic regions. Also notice the “windows” where the chaotic regions become un-chaotic again and then go through a subharmonic cascade again!
The bifurcation diagram has a structure similar to that of the DDP. Both the DDP and the logistic map exhibit the “period doubling route to chaos”. This route consists of a sequence of pitchfork bifurcations that get closer and closer to each other. Your cycles go from 2 to 4 to 8 to 16 to 32, and eventually to infinity. An “infinite” period cycle is a chaotic cycle.
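We can verify the cascade numerically by measuring the attractor’s period: iterate the map, discard the transient, and count the distinct values that remain. This is a sketch; the transient length and the rounding tolerance are my choices, and near the accumulation point the convergence gets slow, so the counts can be off there.

def attractor_period(mu, x0=0.8, n_transient=10000, n_keep=256, decimals=4):
    """ count the distinct values visited by the logistic map once
    the transient has died out """
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = mu*x*(1-x)
    orbit = []
    for _ in range(n_keep):  # record the stationary orbit
        x = mu*x*(1-x)
        orbit.append(round(x, decimals))
    return len(set(orbit))

for mu in [2.5, 3.3, 3.5, 3.55]:
    print(mu, attractor_period(mu))  # should print the 1-, 2-, 4- and 8-cycles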
6 The Lyapunov exponent

The exponent is named after Aleksandr Lyapunov, sometimes transliterated “Liapunov” (in Cyrillic, Ляпунов, with the flipped R). Potatoes, potatoz. The one with a “y” is much more frequent though.
To quantify the sensitivity to initial conditions, take two trajectories of the map, x_n^{(1)} and x_n^{(2)}, whose initial conditions differ by a tiny amount ε, and assume that they diverge exponentially at some rate λ:
$$\left| x_n^{(2)} - x_n^{(1)} \right| = |\varepsilon|\, e^{n\lambda}. \tag{1}$$
If we take the log of both sides, we get
$$\ln \left| x_n^{(2)} - x_n^{(1)} \right| = n\lambda + \ln |\varepsilon|. \tag{2}$$
So if we plot ln|x_n^{(2)} − x_n^{(1)}| vs n, the slope of our graph will give us λ, the Lyapunov exponent. If the slope is positive, the trajectories diverge, and if the graph actually looks linear, they diverge exponentially. We will only diagnose λ; our exponential assumption in equation (1) never enters the procedure, except at the very end, when we measure the slope, and only if the plots are indeed linear. And if they are, our assumption will be validated.
Let’s iterate the logistic map for a non-chaotic µ value, say µ = 2.0 (top plot below), and a chaotic value, say µ = 3.8 (bottom plot below).
def lyapunov_logistic(mu, x01, epsilon):
    """ iterate the two maps in parallel and compute their difference at each step. """
    def map_it(x):
        return mu*x*(1-x)
    npts = 60
    difference = np.empty(npts)
    x1 = x01
    x2 = x01 + epsilon
    difference[0] = x2 - x1
    for n in range(1, npts):  # advance both trajectories, record the separation
        x1 = map_it(x1)
        x2 = map_it(x2)
        difference[n] = x2 - x1
    return difference
def plot_lyapunov(mu1, mu2, x01=0.4, epsilon=1.e-7):
    """ separation between two trajectories, for a non-chaotic mu1 (top)
    and a chaotic mu2 (bottom); the routine head and the default values
    of x01 and epsilon are illustrative choices """
    diff1 = lyapunov_logistic(mu1, x01, epsilon)
    diff2 = lyapunov_logistic(mu2, x01, epsilon)
    plt.subplot(2, 1, 1)
    plt.semilogy(abs(diff1))
    plt.xlabel('$n$')
    plt.ylabel('$|x_n^{(2)} - x_n^{(1)}|$')
    plt.title(r"$\mu = {}$".format(mu1))
    plt.grid()
    plt.subplot(2, 1, 2)
    plt.semilogy(abs(diff2))
    plt.xlabel('$n$')
    plt.ylabel('$|x_n^{(2)} - x_n^{(1)}|$')
    plt.title(r"$\mu = {}$".format(mu2))
    plt.grid()
    plt.tight_layout()
    return

plot_lyapunov(2.0, 3.8)
Notice that the curve in the top plot drops to zero: the difference between the two trajectories actually went to zero, and the logarithm of zero blew up, hence the curve falling off the plot. The Lyapunov exponent is therefore not positive, and our solutions do not diverge. The slope in the bottom plot is positive and the graph is linear, at least until n ≈ 35. After that, the difference saturates around 1 because the output can only be between 0 and 1: it can be less than one, but it hovers at or below it, meaning the solutions have diverged as far apart as they could. The concept of the Lyapunov exponent is therefore only valid in the 1 ≤ n ≤ 35 phase, where it is > 0 and our solutions diverge exponentially. You could calculate the actual Lyapunov exponent by finding the slope of this graph.
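To extract λ explicitly, fit a straight line to ln|x_n^{(2)} − x_n^{(1)}| over the linear phase; below is a minimal sketch using np.polyfit (the fit range comes from eyeballing the plot, and the starting point and offset match the illustrative choices above).

diff = lyapunov_logistic(3.8, 0.4, 1.e-7)  # the chaotic case from above
n = np.arange(len(diff))
sel = (n >= 1) & (n <= 35)  # the visually linear phase
lam, log_eps = np.polyfit(n[sel], np.log(np.abs(diff[sel])), 1)
print("Lyapunov exponent estimate:", lam)  # positive: chaotic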
Positive Lyapunov exponents occur in many systems, e.g., turbulence, weather, nonlinear circuits, etc. These are all systems that exhibit chaos.
It is this divergence of trajectories that defines the sensitive dependence on initial conditions.
This is why we can’t predict systems that are chaotic. If we don’t know the initial condition
perfectly (which we can’t) then we don’t know what trajectory we are on, and hence we don’t
know where we will be on the strange attractor at some later time.
7 Summary
The hallmarks of chaos are
• For the system
$$\dot{x}_n = F_n(x_1, \ldots, x_N), \qquad n = 1, \ldots, N,$$
we need N ≥ 3 and at least one of the Fn to be nonlinear for chaos to be possible.
• Subharmonic cascade via period doubling is a route to chaos.
• Poincaré maps showing strange attractors with fractal (non-integer) dimension and self-
similar structure.
• Sensitive dependence on initial conditions: Exponential divergence of solutions.