Chapter 1

Introduction

"There are three simple rules for creating a model - unfortunately nobody knows what they are."

W. Somerset Maugham

1.1 Preliminary Remarks: Objective and Motivation
Non-linear science emerged in its present form following a series of decisive analytical, numerical and experimental developments that took place in close interaction over the last three decades. In contrast to many fields of mathematical science where linear equations (such as the wave equation, the heat-conduction equation and the Schrödinger equation) have been used extensively and effectively, distinctly non-linear effects (such as hysteresis, structural instability, dissipative structures and dynamical chaos) exist in the model equations of mathematical biology [Nic95, NN07, May74, Mur01, Smi78, SB06, NP77]. The non-linearity of a system is intimately related to its complexity. Even an apparently innocent-looking discrete logistic equation may exhibit remarkably complex behaviour [May74]. The study of complex systems requires the tools of non-linear dynamics [Smi78, SB06]. During the last two decades non-linear dynamics has played a significant role in modelling different biological and life processes [Smi78, SB06, NP89, NP77, FE11, FE89, SL83].
Among the different fields of non-linear science, the study of ecosystems, consisting of a number of interacting organisms in relation to their environment, is an important branch of mathematical biology. We consider an ecosystem as a physical entity consisting of the regional biota and its environment. As a physical entity, the ecosystem can be regarded as a complex dynamical system subject to the relevant physical laws [HB91]. The remarkable variety of dynamical behaviours exhibited by many species of plants, insects and animals has stimulated great interest in the development of mathematical models, particularly dynamical models of different ecological systems. Non-linear dynamical models have been found successful in explaining various properties of complex ecosystems, for example stability, bifurcation, catastrophic change of state, pattern formation and self-organization [May74, Mur01, Smi78, SB06, SL83, HB91, Lot56, NG82, Yod89]. Like the study of complex ecosystems, the study of chemical reaction systems, especially biochemical reactions, is an important field of mathematical biology [Mur01]. The dynamical models of different ecological systems have similarities with different chemical reaction models. The auto-catalytic reaction models of chemical systems, such as the tri-molecular reaction, the Brusselator, the Lotka-Volterra model system, the Schnakenberg model and the Sel'kov model of glycolytic oscillation, have great biological and ecological significance [Mur01, Lot56, Str94]. In view of the importance of non-linear dynamical models of complex ecosystems and chemical reaction systems, in the present research work we have developed some non-linear dynamical models (both deterministic and stochastic) of complex ecological and chemical reaction systems and have studied their characteristic behaviours of structure, function, stability, periodicity, bifurcation, stochasticity, fluctuation and pattern formation [NG82].

1.2 Methodology of the Work


The work concerns non-linear dynamical models and the analysis of some complex biological processes. Dynamical modelling is a systematic mathematical methodology that has proved successful in discovering and understanding the various underlying processes and dynamical behaviours exhibited by different physical, chemical and biological systems. A dynamical model may be deterministic or stochastic, and it may be formulated in continuous or discrete time. The present work consists of both deterministic and stochastic models and their analysis.
Deterministic dynamical models based on differential or difference equations are used to explain system properties, for example stability, instability, periodicity, bifurcation and catastrophic change of state. The concept of stability plays a significant role in the study of the structure and functions of complex biological and ecological systems. Here the objective is to study some problems of dynamical complexity associated with stability, instability, periodicity, bifurcation, diffusion-driven instability and pattern formation in complex ecological and chemical reaction systems.
The concepts of probability and stochasticity play a significant role in the emergence of complex behaviour in a system. In many physico-chemical, biological, social and technological systems the future behaviour of the system cannot be predicted accurately, as it can be in deterministic dynamical models based on differential or difference equations. In such cases of unpredictable, fluctuating phenomena, a probabilistic or stochastic description becomes the natural mode of approach to the study of the system. The unpredictability of the system behaviour may arise, first, from the many-body aspect of the system (i.e. from the innumerable number of elements, such as molecules, cells, organisms, etc.) and, secondly, from the effect of a randomly fluctuating environment on the system, for example an ecosystem that is buffeted by seasonal or other climatic fluctuations. We have elucidated these two approaches to stochastic modelling in the study of some ecological and chemical reaction systems.

1.3 Relevant Mathematical Background

1.3.1 Dynamical System


The notion of a dynamical system is the mathematical formalization of the general scientific concept of a deterministic process. The future states of many physical, chemical, biological, ecological, economic and even social systems can be predicted to a certain extent by knowing their present state and the laws governing their evolution. Provided these laws do not change in time, the behaviour of such a system can be considered completely determined by its initial state. Thus, the notion of a dynamical system includes a set of possible states and a law of evolution of the state in time. After defining these ingredients separately we give a definition of a dynamical system.
State Space: All possible states of a system are characterized by the points of some set X. This set is called the state space of the system. Actually, the specification of a point x ∈ X must be sufficient not only to describe the current "position" of the system but also to determine its evolution.
Time: The evolution of a dynamical system means a change in the state of the system with time t ∈ T, where T is a number set. There are two types of systems: those with continuous (real) time T = ℝ, and those with discrete (integer) time T = ℤ. Systems of the first type are called continuous-time systems, while those of the second are termed discrete-time systems.
Evolution Operator: The main component of a dynamical system is an
evolution law that determines the state xt of the system at time t, provided
the initial state x0 is known. The most general way to specify the evolution
is to assume that for given t ∈ T a map φt is defined in the state space X,

φt : X → X

which transforms an initial state x0 ∈ X to some state xt ∈ X at time t:

xt = φt x0

The map φt is often called the evolution operator of the system.


Definition: A dynamical system is a triplet (T, X, φt ), where T is a time
set, X is a state space, and φt : X → X is a family of evolution operators
parametrized by t ∈ T .
The most common way to define a continuous-time dynamical system is by differential equations. Suppose that the state space of a system is X = ℝⁿ with coordinates (x1, x2, ....., xn). Very often the law of evolution of the system is given implicitly, in terms of the velocities ẋi as functions of the coordinates (x1, x2, ....., xn),

ẋi = fi(x1, x2, ....., xn) ,    i = 1, 2, ....., n

or in the equivalent vector form

ẋ = f(x)

where the vector-valued function f : ℝⁿ → ℝⁿ is supposed to be sufficiently differentiable (smooth).
[Y. A. Kuznetsov(1997): Elements of Applied Bifurcation Theory]
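
As a concrete illustration of the evolution-operator viewpoint, the following minimal Python sketch (assuming NumPy and SciPy are available; the planar vector field f is purely illustrative, not one of the models studied later in the thesis) integrates ẋ = f(x) numerically and checks the composition property of the evolution operator.

# Minimal numerical sketch of a continuous-time dynamical system x' = f(x)
# and its evolution operator phi_t (illustrative example only).
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # a simple planar vector field: rotation with weak damping
    return [-0.1 * x[0] - x[1], x[0] - 0.1 * x[1]]

def phi(t, x0):
    """Evolution operator: maps the initial state x0 to the state at time t."""
    sol = solve_ivp(f, (0.0, t), x0, rtol=1e-9, atol=1e-9)
    return sol.y[:, -1]

x0 = np.array([1.0, 0.0])
print(phi(5.0, x0))                                    # state at t = 5
# composition property (approximately): phi(s+t, x0) == phi(s, phi(t, x0))
print(np.allclose(phi(2.0, phi(3.0, x0)), phi(5.0, x0), atol=1e-6))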

1.3.2 Linearisation and Characteristic Equation


Consider an autonomous predator-prey system of the form
dN/dt = F(N, P) ,    dP/dt = G(N, P)

where N is the number of prey and P is the number of predators. The


equations for the equilibria (N ∗ , P ∗ ) are found by setting the right hand
sides equal to zero,

F (N ∗ , P ∗ ) = 0 = G(N ∗ , P ∗ )

To determine the stability of an equilibrium, we introduce new variables


that measure the deviation about the equilibrium,

x(t) = N (t) − N ∗ , y(t) = P (t) − P ∗

We then linearise about the equilibrium point,


dx/dt = (∂F/∂N)|(N*,P*) x + (∂F/∂P)|(N*,P*) y
dy/dt = (∂G/∂N)|(N*,P*) x + (∂G/∂P)|(N*,P*) y

This last set of equations can be written in matrix form as

( ẋ )   ( a11  a12 ) ( x )
( ẏ ) = ( a21  a22 ) ( y )  =  JX
where the aij are the various partial derivatives. The Jacobian matrix J
is called the community matrix in ecology. It captures the strength of the
interactions in a community at equilibrium. We now look for solutions of the form

x(t) = x0 e^{λt} ,    y(t) = y0 e^{λt}

With this substitution, the linearised system of equations reduces to

λx0 = a11 x0 + a12 y0 ,    λy0 = a21 x0 + a22 y0

or

( a11 − λ     a12    ) ( x0 )   ( 0 )
( a21      a22 − λ   ) ( y0 ) = ( 0 )

The simplest systematic way of solving the above equations for x0 and y0 is to use Cramer's rule

x0 = | 0        a12     |  /  | a11 − λ    a12     |
     | 0      a22 − λ   |     | a21      a22 − λ   |

y0 = | a11 − λ    0 |  /  | a11 − λ    a12     |
     | a21        0 |     | a21      a22 − λ   |
However, we have a problem. The determinant in each numerator is zero. Unless the denominator also equals zero, we are forced to accept the trivial solution. To avoid this, we require that

| a11 − λ    a12     |
| a21      a22 − λ   |  =  0

By expanding the determinant, we obtain the characteristic equation

λ² − (a11 + a22)λ + (a11 a22 − a12 a21) = 0

[M. Kot(2001): Elements of Mathematical Ecology]
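
As a small numerical illustration, the sketch below (Python with NumPy assumed; the Lotka-Volterra parameter values are purely illustrative) evaluates the community matrix at the coexistence equilibrium of the classical prey-predator system introduced later in Chapter 4, prints the coefficients of the characteristic equation (trace and determinant), and computes its eigenvalues directly.

# Sketch: community matrix and characteristic equation for the classical
# Lotka-Volterra prey-predator system dN/dt = N(a - bP), dP/dt = P(-c + dN).
import numpy as np

a, b, c, d = 1.0, 0.5, 0.8, 0.4
N_star, P_star = c / d, a / b                # coexistence equilibrium

# entries a_ij = partial derivatives of the right-hand sides at (N*, P*)
J = np.array([[a - b * P_star, -b * N_star],
              [d * P_star,     -c + d * N_star]])

# characteristic equation: lambda^2 - tr(J)*lambda + det(J) = 0
print("trace =", np.trace(J), " det =", np.linalg.det(J))
print("eigenvalues =", np.linalg.eigvals(J))  # purely imaginary: +-i*sqrt(a*c)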

1.3.3 Routh-Hurwitz Stability Criteria


For any m × m matrix A, the characteristic equation for the square matrix
A is an mth order polynomial equation

| A − λI | = λ^m + a1 λ^(m−1) + a2 λ^(m−2) + ..... + am = 0

The Routh-Hurwitz criteria are formal and general conditions, giving constraints on the coefficients a1, a2, ...., am which are necessary and sufficient to ensure that all eigenvalues lie in the left half of the complex plane. Explicitly, the Routh-Hurwitz stability conditions for m = 2, 3, 4 and 5 are as follows

m = 2,  a1 > 0, a2 > 0
m = 3,  a1 > 0, a3 > 0, a1 a2 > a3
m = 4,  a1 > 0, a3 > 0, a4 > 0, a1 a2 a3 > a3² + a1² a4
m = 5,  a1 > 0, a2 > 0, a3 > 0, a4 > 0, a5 > 0, a1 a2 a3 > a3² + a1² a4,
        (a1 a4 − a5)(a1 a2 a3 − a3² − a1² a4) > a5 (a1 a2 − a3)² + a1 a5²

[R. M. May(2001): Stability and Complexity in Model Ecosystems]
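
The conditions above translate directly into code. The following sketch (Python, NumPy assumed; only m = 2 and m = 3 are implemented, and the test polynomial is illustrative) checks the Routh-Hurwitz conditions against a direct computation of the roots.

# Sketch: Routh-Hurwitz conditions for m = 2, 3 versus direct root finding.
import numpy as np

def routh_hurwitz_stable(coeffs):
    """coeffs = [a1, ..., am] of lambda^m + a1 lambda^(m-1) + ... + am = 0."""
    a = [None] + list(coeffs)                 # 1-based indexing a[1]..a[m]
    m = len(coeffs)
    if m == 2:
        return a[1] > 0 and a[2] > 0
    if m == 3:
        return a[1] > 0 and a[3] > 0 and a[1] * a[2] > a[3]
    raise NotImplementedError("only m = 2, 3 sketched here")

coeffs = [3.0, 3.0, 1.0]                      # lambda^3 + 3 lambda^2 + 3 lambda + 1
roots = np.roots([1.0] + coeffs)
print(routh_hurwitz_stable(coeffs))           # True
print(all(r.real < 0 for r in roots))         # True: all roots in the left half-plane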

1.3.4 Bifurcation Theory


It is often assumed that a small change in input results in a small change
in output, that is, the output is a continuous function of the input. This is
not always true. Consider the process of heating a kettle of water. Near the
boiling point, a small increase of heat could result in a change of state, from
liquid to vapour, and this is a qualitative change. Bifurcation theory is the
study of the point at which the qualitative behaviour of a system changes.
Consider a system that depends on a set of parameters which are denoted
by (µ1 , µ2 , ......, µn ). We assume the system to be autonomous and the set
of equations describing the system can be written as
dx/dt = f(x, µ̄)

where x and µ̄ are column vectors with components xi and µi respectively.


Our objective is to determine equilibrium states and their stability as the
value of µ̄ is changed. If at some values of µ̄(= µ̄0 ), there is a qualitative
change in the solution, then µ̄0 is the bifurcation point. The equilibrium
states are determined by solving f(x, µ̄) = 0. The solution of the equation
f(x, µ̄) = 0 describes a surface in the (x, µ̄)-space . If the surface is smooth,
a small change in µ̄ leads to a small change in x and nothing dramatic
happens. However, if the surface is folded, then at the fold, a small change
in µ̄ can result in a jump in the value of x and this is exactly the bifurcation
point.
Hopf Bifurcation
Suppose that the system
dx1/dt = f1(x1, x2, µ) ,    dx2/dt = f2(x1, x2, µ)

has an equilibrium state at (0, 0, µ0 ). The Jacobian matrix evaluated at the


equilibrium point,
     ( a11   a12 )
J =  ( a21   a22 ) ,       aij = (∂fi/∂xj)|(0,0,µ0)

has eigenvalues α(µ) ± iβ(µ) with α(µ0) = 0, β(µ0) ≠ 0. If, further,


[dα(µ)/dµ]|(µ=µ0) ≠ 0

then, in the neighbourhood of (0, 0, µ0), there is a non-trivial periodic solution. This theorem is known as the Hopf bifurcation theorem and the solution as the Hopf bifurcating periodic solution.


[C. F. Chan Man Fong and D. De. Kee (1999): Perturbation
Methods, Instability, Catastrophe and Chaos]
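
A simple way to locate a Hopf bifurcation numerically is to follow the real part α(µ) of the leading eigenvalue as the parameter varies. The sketch below (Python, NumPy assumed) does this for a hypothetical Jacobian J(µ) with eigenvalues µ ± i, for which the crossing conditions of the theorem hold at µ0 = 0.

# Sketch: detecting alpha(mu0) = 0, beta(mu0) != 0, d alpha/d mu != 0
# for the illustrative Jacobian J(mu) = [[mu, -1], [1, mu]].
import numpy as np

def leading_eig(mu):
    J = np.array([[mu, -1.0], [1.0, mu]])
    lam = np.linalg.eigvals(J)
    return lam[np.argmax(lam.real)]           # eigenvalue with largest real part

mus = np.linspace(-0.5, 0.5, 1001)
alphas = np.array([leading_eig(mu).real for mu in mus])

# locate the sign change of alpha(mu): the candidate Hopf point mu0
k = np.where(np.diff(np.sign(alphas)) > 0)[0][0]
mu0 = mus[k]
d_alpha = (alphas[k + 1] - alphas[k]) / (mus[k + 1] - mus[k])
print("mu0 ~", mu0, " beta(mu0) ~", leading_eig(mu0).imag, " d alpha/d mu ~", d_alpha)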

1.3.5 Lyapunov Functions and Stability Criteria


Consider the equation

ẋ = f(t, x) ,    t > t0 ,    x ∈ D ⊆ ℝⁿ

and assume that the trivial solution satisfies the equation, that is, f(t, 0) = 0 for t > t0, 0 ∈ D. Let us consider a scalar function V(t, x) which is defined and continuously differentiable on [t0, ∞) × D. Moreover, at the interior point x = 0 ∈ D, V(t, 0) = 0. In some cases V(t, x) does not depend explicitly on t and we write V(x) for short.
Definition: The function V(x) (with V(t, 0) = 0) is called positive (negative) definite in D if V(x) > 0 (< 0) for all x ∈ D with x ≠ 0.
Definition: The function V(x) (with V(t, 0) = 0) is called positive (negative) semi-definite in D if V(x) ≥ 0 (≤ 0) for all x ∈ D with x ≠ 0.
If a function V(t, x) depends explicitly on t, these definitions are adjusted as follows:
Definition: The function V(t, x) is called positive (negative) definite in D if there exists a function W(x) with the properties: W(x) is defined and continuous in D, W(0) = 0, and 0 < W(x) ≤ V(t, x) (respectively V(t, x) ≤ W(x) < 0) for x ≠ 0, t ≥ t0.
To define semi-definite functions V (t, x) we replace < (>) by ≤ (≥).
Definition: The orbital derivative Lt of the function V(t, x) in the direction of the vector field f(t, x) is given by

Lt V = ∂V/∂t + Σ_{i=1}^{n} (∂V/∂xi) fi(t, x)

where x = (x1 , x2 , ....., xn ) and f = (f1 , f2 , ....., fn ) and x is a solution of the


equation

ẋ = f(t, x) ,    t > t0 ,    x ∈ D ⊆ ℝⁿ

Theorem: Consider the equation ẋ = f(t, x). If it is possible to find a function V(t, x), defined in a neighbourhood of x = 0 and positive definite for t ≥ t0, whose orbital derivative is negative semi-definite, then the solution x = 0 is stable in the sense of Lyapunov.
Theorem: Consider the equation ẋ = f(t, x). If it is possible to find a function V(t, x), defined in a neighbourhood of x = 0 and positive definite for t ≥ t0, whose orbital derivative is negative definite, then the solution x = 0 is asymptotically stable.
[F. Verhulst(2000): Non-linear Differential Equations and Dynam-
ical Systems]
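
As an illustration of the orbital derivative, the following sketch (Python with SymPy assumed; the planar system and the Lyapunov function are illustrative, not taken from the thesis) computes Lt V symbolically and shows that it is negative semi-definite, so the origin is stable in the sense of Lyapunov.

# Sketch: orbital derivative L_t V for the illustrative autonomous system
# x' = y, y' = -x - y with V(x, y) = (x^2 + y^2)/2.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = [y, -x - y]                               # vector field (no explicit t-dependence)
V = (x**2 + y**2) / 2

# L_t V = sum_i (dV/dx_i) f_i  (the dV/dt term vanishes here)
LtV = sp.simplify(sum(sp.diff(V, s) * fs for s, fs in zip((x, y), f)))
print(LtV)                                    # -y**2 : negative semi-definite,
                                              # hence the origin is Lyapunov stable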

1.3.6 Diffusive Instability


Turing's work on morphogenesis began in 1951. Morphogenesis is the development of the form or structure of an organism during the life history of the individual. His revolutionary idea was that passive diffusion could interact with chemical reaction in such a way that, even if the reaction by itself has no symmetry-breaking capabilities, diffusion can destabilise the symmetric solutions, so that the system with diffusion added does have them. The obvious question that arises is: can diffusion destabilise a spatially homogeneous steady state? If so, this is known as diffusion-driven instability or Turing instability.
We consider the system of m reaction-diffusion equations given by

∂ū/∂τ = f(ū) + D∇²ū

in a domain Ω̄ ⊆ ℝᴺ, where ∇² involves differentiation with respect to the spatial variable x̄. It is often useful to rescale the space variables in order to work with a problem on a standard domain Ω, and it turns out that rescaling the time variable as well simplifies the result. So we define x = γx̄, t = γ²τ and u(x, t) = ū(x̄, t), to obtain

∂u/∂t = γ²f(u) + D∇²u = αf(u) + D∇²u,

say, on Ω ⊆ ℝᴺ. In this equation γ is a measure of the linear dimensions of the original domain, so that increasing γ is equivalent to increasing the domain size. Let us assume that this has a spatially uniform steady state solution u*, so that f(u*) = 0, and take homogeneous Neumann (zero-flux) boundary conditions on ∂Ω. Let û be the perturbation from the steady state, û = u − u*. The linearisation of the above equation about u* is given by

∂v/∂t = αJ*v + D∇²v

in Ω with homogeneous Neumann boundary conditions on ∂Ω, where v is


the linearised approximation to û and J ∗ is the Jacobian matrix
J* = [∂fi/∂uj]|(u=u*)

There is a standard method for finding the solution of a linear system with constant coefficients, the method of separation of variables. First, let us assume that we know a function F(x) that satisfies −∇²F = λF in Ω and homogeneous Neumann boundary conditions on ∂Ω. (F is an eigenfunction of −∇² on Ω with these boundary conditions, and λ its eigenvalue.) We consider a function V of the form V(t, x) = cF(x) exp(σt). It satisfies the linearised equation and the boundary conditions if

σc = αJ*c − λDc = Ac

so that σ and c are an eigenvalue and the corresponding eigenvector of the matrix A = αJ* − λD. Let us define the spatial modes to be the eigenfunctions F_n(x) of −∇² on Ω with the appropriate boundary conditions, and the spatial eigenvalues λ_n to be the corresponding eigenvalues. We know from Fourier analysis that any function on Ω may be written as a linear combination of the spatial modes, so that v may be written as

v(x, t) = Σ_{n=0}^{∞} F_n(x) G_n(t)

It follows after separation of variables and some linear algebra that the general solution of the linearised equation may be put into the form

v(x, t) = Σ_{n=0}^{∞} Σ_{i=1}^{m} a_{ni} c_{ni} F_n(x) exp(σ_{ni} t)

where the a_{ni} are arbitrary constants and the σ_{ni} are the eigenvalues of the matrix A_n = αJ* − λ_n D, which we shall refer to as the temporal eigenvalues of the problem when we need to distinguish them from the spatial eigenvalues λ_n. The eigenvalues σ_{ni} of A_n satisfy

det(σ_n I − A_n) = det(σ_n I − αJ* + λ_n D) = 0

These are mth-order polynomial equations, so that there are m eigenvalues σ_{n1}, σ_{n2}, ...., σ_{nm} for each n. If σ_{0i} has negative real part for all i, but σ_{ni} has positive real part for some n ≠ 0 and some i, then we say that Turing instability occurs.
Let us look a bit more closely at the last equation, which tells us how the spatial and temporal eigenvalues are related. We shall first analyse the equation that would result if the spatial eigenvalues could take any non-negative value,

det(σI − A) = det(σI − αJ* + λD) = 0

For each λ this is a polynomial equation of degree m with solutions σ_i(λ). Let

ρ(λ) = max_{1≤i≤m} Re σ_i(λ)

be the real part of the eigenvalue with greatest real part. A relationship between the temporal and spatial eigenvalues, like this one between ρ and λ, is generally called a dispersion relation.
[N. F. Britton(2003): Essential Mathematical Biology]
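
The dispersion relation can be explored numerically by scanning the spatial eigenvalue λ. The sketch below (Python, NumPy assumed; the reaction Jacobian J* and diffusion matrix D are illustrative, chosen only so that the phenomenon appears) computes ρ(λ) = max_i Re σ_i(λ) for A = αJ* − λD and shows ρ(0) < 0 together with a band of λ where ρ(λ) > 0, i.e. Turing instability.

# Sketch of the dispersion relation rho(lambda) for A = alpha*J* - lambda*D.
import numpy as np

alpha = 1.0
Jstar = np.array([[0.9,  1.0],
                  [-1.0, -1.1]])              # stable without diffusion (tr < 0, det > 0)
D = np.diag([1.0, 40.0])                      # large diffusivity ratio

def rho(lam):
    A = alpha * Jstar - lam * D
    return np.max(np.linalg.eigvals(A).real)

lams = np.linspace(0.0, 1.0, 201)             # spatial eigenvalues treated as continuous
rhos = np.array([rho(l) for l in lams])
print("rho(0) =", rho(0.0))                   # negative: homogeneous state stable
print("max rho =", rhos.max(), "at lambda ~", lams[rhos.argmax()])  # positive: Turing band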

1.3.7 Stochastic process and random function


A random function is a family of random variables Xt = Xt(ω) taking values in some space SP and depending on a parameter t running over some set T. If the parameter set T is a part of the real line, T ⊆ ℝ, if we interpret the parameter t as time, and if we interpret Xt as the motion of some random point in the space SP, we call the random function Xt a stochastic process.

Stochastic differential equation

A stochastic differential equation is a differential equation in which one or


more of the terms is a stochastic process, resulting in a solution which is
itself a stochastic process.

Langevin equation

The Langevin equation is a linear stochastic differential equation with an additive white noise. It was introduced by Langevin in 1908 to describe Brownian motion. The description comes in terms of a noisy differential equation wherein one splits the motion into two parts, a slowly varying systematic part and a rapidly varying random part. For the damped motion of a randomly forced particle,

mv̇ = −mγv + ξ(t) ,    ẋ = v

with ⟨ξ(t)ξ(t′)⟩ = (2γkT/m) δ(t − t′)
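
A sample path of a Langevin equation can be generated with the Euler-Maruyama scheme. The sketch below (Python, NumPy assumed) integrates the commonly used reduced form v̇ = −γv + √(2γkT/m) η(t) with ⟨η(t)η(t′)⟩ = δ(t − t′); the parameters and this particular noise normalisation (chosen so that the stationary variance is kT/m) are illustrative rather than the thesis's convention.

# Sketch: Euler-Maruyama integration of a reduced Langevin equation.
import numpy as np

rng = np.random.default_rng(0)
gamma, kT, m = 1.0, 1.0, 1.0
dt, nsteps = 1e-3, 200_000

v = np.empty(nsteps)
v[0] = 0.0
noise_amp = np.sqrt(2.0 * gamma * kT / m)
for n in range(nsteps - 1):
    # dv = -gamma*v*dt + noise_amp*dW,  dW ~ Normal(0, dt)
    v[n + 1] = v[n] - gamma * v[n] * dt + noise_amp * np.sqrt(dt) * rng.standard_normal()

print("sample <v^2> =", np.mean(v[nsteps // 2:] ** 2), " expected kT/m =", kT / m)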

Spectral density and Auto-covariance function

The auto-covariance of the random function X(t) is defined by

C_X(τ) = ⟨X(t) X(t + τ)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} X(t) X(t + τ) dt

The spectral density of the random function X(t) is defined as the Fourier transform of the auto-covariance function C_X(τ):

S_X(ω) = (1/2π) ∫_{−∞}^{∞} C_X(τ) e^{iωτ} dτ

Conversely, the auto-covariance is recovered from the spectral density by the inverse transform

C_X(τ) = ∫_{−∞}^{∞} S_X(ω) e^{iωτ} dω
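
The auto-covariance/spectral-density pair has a direct discrete analogue that is easy to check numerically. The sketch below (Python, NumPy assumed; the sample signal is illustrative) estimates the auto-covariance of a sample path, takes its discrete Fourier transform as a spectral density, and verifies that the inverse transform returns the auto-covariance.

# Sketch: discrete analogue of the auto-covariance / spectral-density pair.
import numpy as np

rng = np.random.default_rng(1)
n = 4096
x = np.cos(0.2 * np.arange(n)) + 0.5 * rng.standard_normal(n)   # periodic signal + noise
x -= x.mean()

# sample auto-covariance C(tau) (biased estimator, lags 0..n-1)
C = np.correlate(x, x, mode='full')[n - 1:] / n

# spectral density as the discrete Fourier transform of C, and back again
S = np.fft.rfft(C)
C_back = np.fft.irfft(S, n)
print(np.allclose(C, C_back, atol=1e-8))      # True: transform pair
print(np.argmax(np.abs(S[1:])) + 1)           # dominant frequency bin ~ 0.2*n/(2*pi)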

1.3.8 Entropy

(i) Boltzmann entropy

In statistical mechanics, we are interested in the disorder in the distribution of the system over the permissible microstates. The measure of disorder first provided by Boltzmann's principle (known as the Boltzmann entropy) is given by

S = K ln W

where K is the thermodynamic unit of measurement of entropy, known as the Boltzmann constant, K = 1.38 × 10⁻¹⁶ erg/K. W, called the thermodynamic probability or statistical weight, is the total number of microscopic complexions compatible with the macroscopic state of the system.

(ii) Shannon entropy

Let us consider a system consisting of N elements (molecules, organisms, etc.) classified into n classes (energy-states, species, etc.). Let Ni be the occupation number of the ith class. The macroscopic state of the system is given by the set of occupation numbers An = (N1, N2, ..., Nn). The statistical weight or degree of disorder of the macrostate An is given by

W(An) = N! / (Π_{i=1}^{n} Ni!)

representing the total number of microscopic states or complexions compatible with the constraints to which the system is subjected. For large Ni, the Boltzmann entropy with this W(An) reduces to the form of the Shannon entropy

S = −KN Σ_{i=1}^{n} Pi ln Pi

where Pi = Ni /N is the relative frequency of the ith class or energy state.


For large N , it is the probability that a molecule lies in the ith energy-state.
A formal definition of Shannon entropy is given below.
Def I: Let (p1, p2, ...., pn) be the probabilities of occurrence of the events (E1, E2, ...., En) associated with a random experiment α. The Shannon entropy of the random experiment or system α is defined by

H(α) = H(p1, p2, ...., pn) = −k Σ_{i=1}^{n} pi ln pi

where 0 ln 0 = 0 and k is the unit of measurement of entropy.


Def II: Let X ∈ ℝ denote a discrete random variable which takes on the values (x1, x2, ....., xn) with probabilities (p1, p2, ...., pn). The entropy H(X) of X is then defined by the expression

H(X) = −k Σ_{i=1}^{n} pi ln pi ,    0 ln 0 = 0

where k denotes a positive constant which determines the unit of measurement.
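
A minimal implementation of the Shannon entropy with the 0 ln 0 = 0 convention is sketched below (Python, NumPy assumed; k = 1 and the distributions are illustrative).

# Sketch: Shannon entropy H = -k * sum p_i ln p_i with 0 ln 0 = 0.
import numpy as np

def shannon_entropy(p, k=1.0):
    p = np.asarray(p, dtype=float)
    nz = p > 0                                # implements the 0 ln 0 = 0 convention
    return -k * np.sum(p[nz] * np.log(p[nz]))

print(shannon_entropy([0.5, 0.5]))            # ln 2 ~ 0.693 (maximal for two events)
print(shannon_entropy([1.0, 0.0]))            # 0.0 (no uncertainty)
print(shannon_entropy([0.7, 0.2, 0.1]))       # between 0 and ln 3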

(iii) Weighted Entropy

Consider a probabilistic experiment whose corresponding probability space


has a finite number of elementary events (E1, E2, ......, En) with probabilities of occurrence (p1, p2, ......., pn):

pi ≥ 0 (i = 1, 2, ......, n) ,    Σ_{i=1}^{n} pi = 1

The different elementary events {Ei } have different (objective or subjective)


weights according to their importance with respect to a given qualitative
characteristic of the system. We shall ascribe to each event Ek a positive
number wk (≥ 0) directly proportional to its importance or significance. We
call wk the weight of the elementary event Ek .
Def: We define the weighted entropy of the experiment by the quantity

In(p1, p2, ....., pn; w1, w2, ....., wn) = −Σ_{k=1}^{n} wk pk ln pk

We have the properties:

(a) In(p1, p2, ...., pn; w1, w2, ...., wn) ≥ 0

(b) If w1 = w2 = ..... = wn = w, then
In(p1, p2, ....., pn; w1, w2, ....., wn) = −w Σ_{k=1}^{n} pk ln pk = Hn(p1, p2, ...., pn)

where Hn(p1, p2, ...., pn) is the Shannon entropy, which is determined uniquely up to an arbitrary multiplicative constant.

(iv) Cross entropy or Relative Information

In the framework of mathematical statistics developed by Kullback and


Leibler[KL51, KL59] we are concerned mainly with problems related to prior
and posterior probabilities and we have the concept of relative information
or cross entropy defined as follows:
Def: Let P = (p1, p2, ...., pn) and Q = (q1, q2, ...., qn) denote two complete sets of probabilities with Σ_{i=1}^{n} pi = Σ_{i=1}^{n} qi = 1. The relative information or cross entropy H(P, Q) of P relative to Q is defined by the expression

H(P, Q) = Σ_{i=1}^{n} pi ln(pi/qi)

where 0 ln(0/0) = 0 by definition. H(P, Q) ≥ 0, and it vanishes if and only if P = Q.
[G. Jumarie (1990): Relative Information: Theory and Applica-
tion]
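
Both the weighted entropy and the relative information are one-line computations. The sketch below (Python, NumPy assumed; the probability vectors and weights are illustrative) shows that equal weights reduce the weighted entropy to the Shannon entropy, and that H(P, Q) ≥ 0 with equality when P = Q.

# Sketch: weighted entropy I_n(p; w) = -sum w_k p_k ln p_k and
# relative information H(P, Q) = sum p_i ln(p_i / q_i).
import numpy as np

def weighted_entropy(p, w):
    p, w = np.asarray(p, float), np.asarray(w, float)
    nz = p > 0
    return -np.sum(w[nz] * p[nz] * np.log(p[nz]))

def relative_information(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / q[nz]))

p = [0.5, 0.3, 0.2]
q = [1/3, 1/3, 1/3]
print(weighted_entropy(p, w=[1.0, 1.0, 1.0]))  # equal weights: the Shannon entropy
print(relative_information(p, q))              # >= 0
print(relative_information(p, p))              # 0.0 (vanishes iff P = Q)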

1.4 Organization of the Thesis: Summary of the Work
Chapter-2: Dynamics of Multi-Species Ecosystems: Measure of
Dynamical Complexity with Applications
A multispecies ecosystem consisting of many varied interacting species or
components connected in a more or less complicated fashion is a complex
dynamical system[Smi78, HB91]. We have considered a multi-species pop-
ulation ecosystem consisting of n components or species governed by the
system of non-linear differential equations[JS04]

dNi/dt = fi(N1, N2, ......, Nn, α) ,    (i = 1, 2, ...., n)

where the vector N(t) = (N1(t), N2(t), ...., Nn(t)) lies in the positive quadrant of the Euclidean space E^n. A small deviation from the stationary state
of the system is taken and the system is linearized. The linearized system
is expressed in matrix form

dx(t)/dt = Ax(t)

To find the solution of the matrix equation we have assumed that A is diagonalizable, as almost always turns out to be the case in ecology[JS04]. The solution of the matrix equation can be written as [Ros70, Yod89]

δN(t) = x(t) = Σ_i ci e^{λi t} vi

or, more explicitly, δNi(t) = δNi(0) e^{λi t} (i = 1, 2, ..., n)

where ci are constants of integration and vi are constant column vectors.


Then on the basis of this solution the stability of the stationary state is
investigated. The effect of the variation of the environmental parameter α
is also studied.
Then we have used the concept of a generalized Lyapunov function to provide a measure of dynamical complexity. A generalized Lyapunov function is given by [JS04]

V(N | N*) = Σ_{i=1}^{n} Ni* φ(Ni/Ni*)

where φ(Ni/Ni*) is a continuously twice-differentiable convex function of its argument satisfying the conditions[Yod89]:

φ(1) = φ′(1) = 0 ,    d²φ/dξi² > 0 ,    for ξi = Ni/Ni* > 0

The dynamical complexity is the rate of change of the entropy, that is, of the second-order variation δ²V(N):

H(λ1, λ2, ...., λn; t) = (d/dt){δ²V(N)} = Σ_{i=1}^{n} ki λi e^{2λi t}

A positive value of the dynamical complexity implies instability of the system, and the vanishing of the time average of the dynamical complexity implies the existence of closed orbits of the system. Then, as applications, we have studied the role of the measure of dynamical complexity in the characterization of different dynamical behaviours, such as stability, instability, periodicity, bifurcation and limit cycles, of some model ecosystems.

Chapter-3: Study of Stability, Bifurcation and Pattern Formation in a Non-Linear Diffusive Prey-Predator System
Reaction-diffusion equations provide perhaps one of the most widely studied models of biological pattern formation and have been successfully applied to a wide range of developmental and ecological systems. We have considered a Holling type-II functional response prey-predator system described by the system of equations
 
dx/dt = x(1 − x/γ) − xy/(1 + x)

dy/dt = β( x/(1 + x) − α ) y

where x and y are respectively the concentrations of the prey and predator species at any time t. The parameters involved in the equations, namely α, β and γ, are all positive. The steady states are calculated and we have studied the dynamical behaviours of the system about the steady state where both species exist. For that the system is linearised and finally it has been shown that a Hopf bifurcation takes place when the parameter γ passes through the critical value

γc = (1 + α)/(1 − α) ,    α ∈ (0, 1)
Next we have investigated the nature of the limit cycle arising from the
Hopf-bifurcation by calculating the Lyapunov index at the steady state and
at the bifurcation value γ = γc [EK88, BC03]. Next the diffusion driven
instability or Turing instability is studied and we have shown that the influ-
ence of diffusion is not only to destabilize the prey-predator system under
consideration, but also to shift the point of bifurcation of the original non-
diffusive system. The new bifurcation value is solely dependent on the ratio
of diffusion coefficients[BC99, BBB02].
Then we have searched for the explicit form of the steady-state solutions bifurcating beyond the critical value γ = γc. Calculations showed that beyond the critical point there is a two-fold multiplicity of the dissipative structure arising at the first bifurcation point. That is, beyond the transition the system has equal probability of evolving to either of two different solutions, depending on the initial conditions.

Chapter-4: Stochastic Analysis of Ecosystems with Fluctuating Environmental Parameters
An ecosystem in a static environment is characterized by parameters which are fixed for all time. We have considered a two-species ecosystem described by the system of differential equations

dN1/dt = F1(N1, N2, φ(t))

dN2/dt = F2(N1, N2, φ(t))

where N1 and N2 are the population sizes of the two-species system and φ(t) is a single time-dependent parameter whose mean value over a long period of time is φ*. We have assumed that a steady-state solution of the system (with φ(t) held constant at φ*) is asymptotically stable. We have also assumed that a small deviation in φ about the mean value causes a small fluctuation in the state of the system. We have denoted the deviation of φ(t) from the mean value by f(t), and x1 and x2 are taken as the deviations of (N1, N2) from the steady-state values. The system is then linearized and the Fourier transform of the linearized system is taken to convert it into algebraic equations. The algebraic equations are then solved.
Now the system under consideration reveals a stochastic or random nature under the effect of the fluctuating environmental parameter φ(t). Driven by the parametric fluctuation fi(t), the population xi(t) becomes a random variable. For random parametric fluctuations fi(t) the linearized system of equations reduces to a system of stochastic equations[Arn74]. For the stochastic analysis of the system we need two fundamental concepts of stochastic processes, namely the auto-covariance and the spectral density of the fluctuations of the random variables xi(t) (i = 1, 2).
As applications we have investigated the dynamical behaviour and stability of two models of prey-predator systems under the effect of fluctuating environmental parameters. First we have considered the classical Lotka-Volterra prey-predator system[EK88]

dN1 /dt = N1 (a − bN2 )


dN2 /dt = N2 (−c + dN1 )

where N1 and N2 are the population sizes of prey and predator species
respectively. The parameters a, b, c, d are all positive. We have assumed the
growth rate of prey a to be time-dependent. Next we have considered a
non-Lotka-Volterra prey-predator system[Ode80, CG13]

dN1/dt = N1²(1 − N1) − N1 N2

dN2/dt = N2(N1 − µ)

Here the parameter µ is assumed to be time-dependent. It has been found that the classical Lotka-Volterra system becomes unstable under the effect of a small environmental parametric fluctuation. For the non-Lotka-Volterra system, under the effect of the randomly fluctuating parameter µ characterized by a white noise, the stability of the system is not disturbed.
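
For orientation only, the following rough sketch (Python, NumPy assumed; the parameter values, noise amplitude and Euler-Maruyama discretisation are illustrative and are not the analysis carried out in the chapter) simulates the classical Lotka-Volterra system with a randomly fluctuating prey growth rate, the kind of numerical experiment that accompanies the spectral analysis described above.

# Rough illustration: Lotka-Volterra with a white-noise-perturbed growth rate a.
import numpy as np

rng = np.random.default_rng(2)
a0, b, c, d = 1.0, 0.5, 0.8, 0.4
sigma = 0.05                                  # amplitude of the fluctuation in a
dt, nsteps = 1e-3, 100_000

N1, N2 = c / d, a0 / b                        # start at the deterministic equilibrium
traj = np.empty((nsteps, 2))
for n in range(nsteps):
    a = a0 + sigma * rng.standard_normal() / np.sqrt(dt)   # white-noise-like forcing
    N1 += N1 * (a - b * N2) * dt
    N2 += N2 * (-c + d * N1) * dt
    traj[n] = N1, N2

# the trajectory drifts away from the neutral equilibrium over time
print("final state:", traj[-1], " equilibrium:", (c / d, a0 / b))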

Chapter-5: Deterministic and Stochastic Models of Two Mutualistic Systems with and without delay
An interesting and most beneficial association between species is that of co-
operation or mutualism. First we have considered the classical mutualistic
model system due to May[May74]
 
dN1/dt = r N1 (1 − N1/(k1 + αN2))

dN2/dt = r N2 (1 − N2/(k2 + βN1))

where N1 and N2 are the population sizes of the two species, r is the common growth rate and k1, k2 are parameters. As the second model we have considered the well-known moth and Yucca plant mutualistic model, which has a similarity with May's model and is given by[Pel03]

dN1/dt = r1 N1 (1 − N1/(k0 + k1 N2))

dN2/dt = N2 (r2 N1 − m N2)

where N1 and N2 are the population sizes of the Yucca plant and the moths respectively, r1 is the growth rate of the Yucca plant and r2 is that of the moth. In both models a discrete time delay is introduced and the systems are linearized about the non-trivial steady state. Then, by the Routh-Hurwitz conditions, it has been shown that in the absence of delay the systems are stable. In the presence of delay both systems show periodic behaviour. Next we have considered stochastic extensions of the two models, without and with delay. For this study we have required some basic properties of the Fourier transform, spectral density, covariance functions of random fluctuations, etc.
The stochastic extension of the first model is given by
 
dN1(t)/dt = r N1(t) (1 − N1(t − τ)/(k1 + αN2(t))) + α1 ξ1(t)

dN2(t)/dt = r N2(t) (1 − N2(t − τ)/(k2 + βN1(t))) + α2 ξ2(t)

and that of the second model is given by


 
dN1(t)/dt = r1 N1(t) (1 − N1(t − τ)/(k0 + k1 N2(t))) + α1 ξ1(t)

dN2(t)/dt = N2(t) (r2 N1(t) − m N2(t − τ)) + α2 ξ2(t)

where in both cases ξ1 (t) and ξ2 (t) are independent Gaussian white noises
satisfying the conditions[McQ67]

⟨ξi(t)⟩ = 0   and   ⟨ξi(t) ξj(t′)⟩ = δij δ(t − t′)

where ⟨ ⟩ denotes the ensemble average of the stochastic process, and α1 and α2 are the amplitudes of the white noises. Calculations showed that in the absence of delay the models are stochastically stable (in the sense of the second-order moment). When delay is present the integrations become intractable, and hence we have taken the help of numerical computation.

Chapter-6: Activator-Inhibitor Model of a Dynamical System: Application to an Oscillating Chemical Reaction System
A complex system is composed of many parts, elements or components which
are connected in a more or less complicated fashion[Ros70, Ros85, Ros79].
We have considered a general dynamical system described by the system of differential equations

dxi/dt = fi(x1, x2, ....., xn) ,    (i = 1, 2, ....., n)

The functions fi are assumed to be continuous and to have continuous partial derivatives in some open set Ω = {xi : xi ≥ 0}. Following Higgins' activation and inhibition model of a dynamical system we have considered the following observational quantities[Ros85]

aij(x1, x2, ....., xn) = ∂/∂xj (dxi/dt) = ∂Vxi/∂xj ,    (i, j = 1, 2, ...., n)

where Vxi is the net rate at which the substance xi (or reactant) is being produced as a result of the interactions occurring in the system. The quantities aij, (i, j = 1, 2, ...., n), as Higgins noted, have an informational correlation. For instance, the quantities aij defined above admit three possibilities[Ros70]

(a) aij > 0    (b) aij < 0    (c) aij = 0

Depending on these it is decided whether xj is an activator or an inhibitor of xi. On the basis of activation and inhibition any dynamical system can be converted into an informational network of activators and inhibitors, which seems more natural than the dynamical description[Ros85]. The quantities aijk defined by[Ros85]

aijk = ∂aij/∂xk = ∂/∂xk [ ∂/∂xj (dxi/dt) ] ,    i, j, k = 1, 2, ....., n

also have the character of informational correlations. We can continue the iterative process in this way to get successive networks aij, aijk, ...., giving an informational description of the dynamical system. The system of networks {aij} plays a significant role in the study of stability, instability and periodicity of the dynamical system. Next we have studied the role of the network {aij} in finding criteria for the stability, instability and periodicity of a chemical

reaction system described by the kinetic equations[Mur01]

dx1/dt = a − b x1 + x1²/x2 = f1(x1, x2)

dx2/dt = x1² − x2 = f2(x1, x2)

where x1 and x2 are the time-dependent concentrations of the two reactants, and a and b are externally given constants, fixed in time.

Chapter-7: Non-linear Dynamics of a Two-Species Chemical Oscillator: Two-Time Scales Perturbation Analysis
Oscillatory chemical reactions occur in many periodic phenomena in both living and non-living systems. Schnakenberg considered a class of simple but chemically possible two-species tri-molecular reactions which admit periodic solutions[Sch79, Gol96]. The model equations of the reaction scheme are

dx/dt = f(x, y) = x²y − x + b

dy/dt = g(x, y) = −x²y + a

Calculations showed that there is a bifurcation when[EK88]

(a − b)/(a + b) = (a + b)²

The Hopf bifurcation theorem then predicts closed periodic trajectories[EK88]. To find a solution of such a non-linear system in closed form is very difficult. Several perturbation methods exist for obtaining approximate solutions of such non-linear systems for a small strength of the parameter (ε << 1). We have here adopted the two-time-scale perturbation approach to find an approximate solution of the system[Mur74, Str94].
We have rescaled the concentrations as

x′ = (x − x0)/x0

y′ = (y − y0)/y0
and have confined our analysis to the case such that

(B − Bc)/Bc = ε² << 1

where

A = a + b ,    B = a/(a + b)

and the critical value of B is

Bc = (A² + 1)/2

Then the solution is expanded as

x′ = x′(τ, t′) = ε x′1(τ, t′) + ε² x′2(τ, t′) + · · ·

y′ = y′(τ, t′) = ε y′1(τ, t′) + ε² y′2(τ, t′) + · · ·

where the slow time τ and the fast time t′ are defined by

τ = ε² t

t′ = (1 + ε ω1 + ε² ω2 + · · ·) t

d/dt = (1 + ε ω1 + ε² ω2 + · · ·) ∂/∂t′ + ε² ∂/∂τ

Substituting the above expressions in the system of equations, we have obtained a pair of equations for each order of ε. Solving the first three sets of equations and suppressing the secular terms, we have obtained the solution to the first order of approximation.
Next we have tested the effectiveness of the approximate solution through the orbital stability of the limit cycle[Vor85, NP77]. This is done by the method of the Poincaré map.

Chapter-8: Stochastic Models of Two Species Chemical Oscillators: Fluctuation and Stability
The chemical oscillator is one of the fascinating subjects in the study of biological oscillators[FB85]. We have considered a system described by a set of n variables N(t) = (N1(t), N2(t), .........., Nn(t)), where Ni(t) (i = 1, 2, ....., n) is the concentration of the ith component at any time t, and the system is governed by the set of non-linear differential equations

dNi/dt = fi(N1, N2, ........, Nn) ,    i = 1, 2, ......, n

The system of equations is then linearized about some stationary state and the dynamical behaviours are studied about that state. Next we have taken the stochastic extension of the linearized system of equations, where the randomness is incorporated in the form of a Langevin-type stochastic differential equation

dx(t)/dt = Ax(t) + f(t)

where x(t) is now a random state variable, A is the time-independent drift


coefficient and f (t) is the random noise (or perturbation) which is assumed
to be a Gaussian white noise satisfying the conditions[McQ67]

⟨f(t)⟩ = 0   and   ⟨f(t1) f(t2)⟩ = 2D δ(t2 − t1)

where ⟨ ⟩ represents the average over the ensemble of the stochastic process and D is the diffusion coefficient, or intensity, of the random perturbation[McQ67]. The solution is then

x(t) = ∫_0^t ζ(t, τ) f(τ) dτ

where ζ(t, τ ) is the transfer function of the system satisfying the conditions

ζ(t, τ) = 0 ,   for τ > t
ζ(τ, τ) = 1

which are the physical causality conditions[GM62]. After the transfer function ζ(t, τ) is calculated, the mean and variance of x(t) at any time t are given by

mean x(t) = ⟨x(t)⟩ = ∫_0^t ζ(t, τ) ⟨f(τ)⟩ dτ

var(x(t)) = σ²(t) = ∫_0^t ∫_0^t ζ(t, τ) ζ(t, τ′) ⟨f(τ) f(τ′)⟩ dτ dτ′        (1.4.1)

And finally we have

mean x(t) = 0

σ²(t) = ∫_0^t ζ²(t, τ) 2D dτ

Then we have

dσ²(t)/dt = 2A σ²(t) + 2D
which is the basic equation for the time-evolution of the variance (or mean-square fluctuation) of the stochastic process N(t). Then the stochasticity and stability of the Lotka-Volterra model system and the Schnakenberg model system are studied, and in both cases, in the sense of the second-order moment, the system is unstable under the influence of a random perturbation, however small it may be.
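
The variance equation admits a simple closed-form solution, which the following sketch (Python, NumPy assumed; A and D are illustrative scalars) compares with a direct Euler integration: for A < 0 the mean-square fluctuation saturates at D/|A|, while for A > 0 it grows without bound, reflecting instability in the sense of the second-order moment.

# Sketch: d(sigma^2)/dt = 2*A*sigma^2 + 2*D versus its closed form.
import numpy as np

def variance(t, A, D):
    # closed form with sigma^2(0) = 0
    return -(D / A) * (1.0 - np.exp(2.0 * A * t))

def variance_numeric(t_end, A, D, dt=1e-4):
    s, t = 0.0, 0.0
    while t < t_end:
        s += (2.0 * A * s + 2.0 * D) * dt
        t += dt
    return s

A, D = -0.5, 0.1
print(variance_numeric(5.0, A, D), variance(5.0, A, D), -D / A)   # saturates near D/|A|
A = 0.5
print(variance_numeric(5.0, A, D), variance(5.0, A, D))           # grows ~ e^{2At}: instability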

Chapter-9: Statistical Model of Multi-Species Chemical Reaction System: Entropic Analysis of Complexity, Stability and Periodicity
Biochemical reactions are continuously taking place in all living organisms. We have considered a multi-species chemical reaction system consisting of n components. The concentration Ni of the ith component (i = 1, 2, ...., n) is a random variable; the randomness arises from the many-body aspect of the system. Pt(Ni) is the probability distribution of Ni at time t. The entropy of the ith component or species is given by the generalized Boltzmann-Gibbs entropy[CD00]

S(Pt(Ni | m)) = − Σ_{Ni} Pt(Ni) ln[ Pt(Ni) / m(Ni) ]

where the summation is over all possible values of Ni and m(Ni) is some measure function of Ni defined on the Ni-space, constituting the prior information about (or weight of) Ni. The main problem of the statistical model is to estimate the probability distribution Pt(Ni) on the basis of the information (or data) available. It has been estimated by Jaynes' maximum-entropy principle[Jay57]. Then, using a multi-Poisson distribution, the expressions for entropy production and entropy reduction are computed.

Next we have considered the system of equations

dxi/dt = fi(x1, x2, ....., xn, α) ,    (i = 1, 2, ....., n)

which governs the multi-species chemical reaction system under consideration. Here xi = N̄i > 0 are the concentrations of the species and are the state variables (or phenomenological variables) for the dynamical (or thermodynamical) description of the system, and α is a control parameter (or a set of control parameters). This non-linear system is linearised about its steady state and the solution is obtained as

δxi(t) = δxi(0) e^{λi t} ,    (i = 1, 2, ....., n)

To study the stability of the stationary state, entropy reduction is used as a Lyapunov function. The rate of change of the entropy reduction is then given by

H(λ1, ...., λn; t) = (d/dt) δ²(∆Sr) = Σ_{i=1}^{n} Ci λi e^{2λi t}

which is known as the measure of thermodynamic complexity. The criteria of stability and instability in terms of the thermodynamic complexity reduce to the form

H(λ1, λ2, ......, λn; t) = Σ_{i=1}^{n} Ci λi e^{2λi t}   < 0   for asymptotic stability,
                                                         > 0   for instability

The entropic criterion for the periodicity of a system is the vanishing of the time average (over a period (0, 2π)) of the thermodynamic complexity [IM75, CG13]. Next we have made a critical analysis of the stability, instability, periodicity and limit cycles of three oscillating chemical reactions, the Lotka-Volterra model system, the Prigogine-Lefever model (Brusselator) and the Sel'kov model of glycolytic oscillation, on the basis of the thermodynamic complexity. In all three cases the results predict a closed orbit about the stationary state.

Chapter-10: General Conclusion


The thesis ends with a general summary of the work. This chapter also includes some future directions of work in this subject area.
Part I

Ecological Systems
