
The Dynamics of Money and A New Class of Self-Organized Critical Systems


The Dynamics of Money

and
A New Class of Self-Organized Critical
Systems
Simon Flyvbjerg Nørrelykke
The Niels Bohr Institute
University of Copenhagen
Blegdamsvej 17
DK-2100 Copenhagen Ø
Denmark
August 1999

Thesis submitted for the Master of Science degree (Cand. Scient.) in Physics at
the Faculty of Science, University of Copenhagen.
Supervisor: Per Bak
Acknowledgements

I thank Iva Tolic for her company and comments, Henrik Flyvbjerg for
numerous helpful discussions, and Jacob Sparre Andersen for scanning in
Fig. 5.9(b). Last, but not least, I thank my supervisor Per Bak for inspiration
and for hours spent on discussion of the early model. I am grateful to the
Lørup Foundation for granting me their scholarship.
Abstract
We introduce a dynamical many-body theory of money in which the value
of money is a time-dependent "strategic variable", which is chosen by the
individual agent. The theory is illustrated by a simple network model of
monopolistic vendors and buyers. The indeterminacy of the value of money
in classical economic equilibrium theory implies a soft "Goldstone mode,"
leading to large fluctuations in prices in the presence of noise.
A simplified version of the model, driven by extremal dynamics, is examined
in Monte Carlo simulations. The model has no steady-state attractor for
its dynamics but evolves instead to an asymptotic behaviour which is
self-organized critical. Avalanches are defined and their size distribution is
found to follow a power law with exponent τ = 1.48 ± 0.02. A simple
explanation of this exponent in terms of first-return-times of a random walker
(hence τ = 3/2) is discussed. Exponents for the power laws describing the
spatio-temporal distribution of activity are determined and discussed. The
self-organized criticality (SOC) found here, in a system that does not evolve
to a steady state, demonstrates that SOC is a useful concept also in the
analysis of driven dissipative systems without a stationary attractor state.
Contents
1 Introduction 3
1.1 Thesis outline . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Introductory Remarks about Economy 5
2.1 Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 A different approach . . . . . . . . . . . . . . . . . . . 6
3 The Dynamics of Money 7
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2.1 Solving the dynamics . . . . . . . . . . . . . . . . . . . 13
3.3 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.3.1 Spatial inhomogeneity . . . . . . . . . . . . . . . . . . 17
3.3.2 Smart agents use Lagrange multipliers . . . . . . . . . 18
3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4 Self-Organized Criticality 20
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.2 Sandpile models . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.3 Evolution models . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.4 Definition of SOC . . . . . . . . . . . . . . . . . . . . . . . 24
4.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5 SOC on an Asymptote 26
5.1 Geometry and Game . . . . . . . . . . . . . . . . . . . . . . . 27
5.2 Goal and Strategy . . . . . . . . . . . . . . . . . . . . . . . . 27
5.2.1 How to increase the profit . . . . . . . . . . . . . . . . 30


5.2.2 Random new price . . . . . . . . . . . . . . . . . . . . 31


5.2.3 The Loser . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.3 The simulation . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.4 Temporal correlations . . . . . . . . . . . . . . . . . . . . . . . 34
5.5 Distribution of profits . . . . . . . . . . . . . . . . . . . . . 35
5.5.1 Comments on the shape of p_1(s) near the threshold . . 36
5.6 Stationarity of threshold . . . . . . . . . . . . . . . . . . . . 37
5.6.1 An analytical estimate of expected fluctuations in the
value of the ratio of quartiles s_uq/s_lq . . . . . . . . . . 38
5.6.2 Analytical determination of the time dependence of the
scaling factor . . . . . . . . . . . . . . . . . . . . . . . 41
5.7 Avalanches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.7.1 Rejection of simple random-walk explanation . . . . . . 44
5.7.2 The Standard & Poor's 500-Stock Index . . . . . . . . 45
5.8 Spatial correlations . . . . . . . . . . . . . . . . . . . . . . . . 48
5.9 Two words about prices . . . . . . . . . . . . . . . . . . . . . 53
5.10 On two-dimensional models . . . . . . . . . . . . . . . . . . . 54
5.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6 Conclusion 56
A How to increase the utility 57
B First-return-times for a random walker 62
C A brief introduction to Lévy Flights 64
D Finite size effects 66
E Data fitting 68
F Coarse graining of data 71
Chapter 1
Introduction
We want to examine the collective behaviour of many economic agents who
act according to specific rules. The agents are chosen to resemble, as closely
as possible, the typical rational economic agents known from the economic
literature. That is, the agents are given utility functions which they wish to
maximise while satisfying some constraint. Agents do not have complete
insight into the detailed strategies of other agents in the economy, as this
would be an unrealistic assumption. This approach breaks with the tradition
of classical equilibrium theory, but we should bear in mind that the latter
theory was developed before extended simulations of many interacting and
heterogeneous agents were possible. Agents do, however, have insight into
their neighbours' strategies, and within this limited horizon they behave in a
completely rational manner.
We consider two different models of a one-dimensional economy. The
basic ingredients of both are the same, but the dynamic rules are very
different. In the first model agents have rather complex updating rules
and strategies. In the simplest cases, the system dynamics is exactly solvable
analytically. In the second model agents have very simple updating rules and
act according to very simple strategies. In this case we find that the economy
evolves asymptotically to a non-steady, but nevertheless self-organized
critical, state. No exact analytical solution is found in this case, nor is it to
be expected.


1.1 Thesis outline


This thesis is organised as follows.
Chapter 2 contains a brief introduction to classical economic equilibrium
theory.
Chapter 3 introduces a simple one-dimensional model of an economy of
interacting agents. From this model the "value of money" is determined
in a dynamical manner. In the steady state the behaviour of the model
agrees with classical economic equilibrium theory.
Chapter 4 gives an introduction to the concept of Self-Organized Criticality
(SOC). Two examples are given: the "sand-pile" model by Bak,
Tang and Wiesenfeld, and the evolution model by Bak and Sneppen.
Avalanches are discussed and a possible definition of SOC is presented.
Chapter 5 introduces a simplified version of the model solved in Chap. 3, but
with update rules that follow extremal dynamics. The average price
level in this model decays exponentially to zero, i.e., the model dynamics
leads to deflation with an asymptotically constant rate. In spite of
this asymptotic behaviour the model turns out to exhibit self-organized
criticality. Power laws are found and exponents are determined. A
comparison to the Standard & Poor's 500-Stock Index is made.
Chapter 6 contains conclusions.
Appendix A contains an extended discussion of various aspects of the model
treated in Chap. 5, e.g., how to avoid deflation.
Appendix B contains derivations of the distribution of first-return-times for
an unbiased random walk.
Appendix C contains a brief introduction to Lévy flights.
Appendix D contains some considerations on finite size effects.
Appendix E contains comments on the fitting procedures we used.
Appendix F contains a description of how the coarse-graining of data applied
in Chap. 5 was carried out.
Chapter 2
Introductory Remarks about
Economy
In this chapter we introduce some of the fundamental notions and terminolo-
gies from classical economic equilibrium theory.

2.1 Fundamentals
Consider a collection of agents and a single central agent or auctioneer. The
auctioneer calls out a price for a specific good, and the agents indicate how
much they want to buy or sell at that price. If the demand is not equal to
the supply, the auctioneer changes the price. No trade takes place until a
price is found at which demand equals supply.
The price at which the trade takes place (the market is cleared) is the
market equilibrium price. When all agents trade (all sellers are matched by
buyers) the market is said to be in general equilibrium.
Assume each agent has a system of preferences, described by a utility
function. Each agent now wants to maximise his utility as described by the
utility function. The competitive equilibrium price is the price at which the
agent has maximised his utility. The general competitive equilibrium, briefly
"general equilibrium," is the situation in which all agents in the economy
have simultaneously maximised their utility functions. In other words, general
equilibrium is realised when no single agent can improve his situation without
some other agent being worse off.


This is the basis of classical economic equilibrium theory. The next step
is to introduce constraints on the utility function and maximise it using
Lagrange multipliers. Depending on the assumed properties of the utility
function and the constraints on it, the analysis can be more or less
complicated; for an introduction see [28]. Much work has been done on proving
the existence of these equilibria, but usually this existence is merely assumed,
and the work then concentrates on determining the properties of the equilibrium.

2.1.1 A different approach


In all fairness it should be mentioned that some economists are doing work
much in the spirit of ordinary complex-system studies. A well-known and
much-cited example is Brian Arthur's El Farol bar problem [1, 2], a model
inspired by the attendance, on certain entertainment nights, at a bar near
the Santa Fe Institute, New Mexico. This model has been further developed
by Challet and Zhang into The Minority Game [7]. While these models are
both interesting and amusing, they have little to do with what we intend to
do here, and hence will not be discussed any further. The American economist
Paul Krugman, who has served on the President's Council of Economic
Advisors, has written a small book, The Self-Organizing Economy [19], in which
he seeks to incorporate some of the ideas and concepts of complexity studies
into economics.
Chapter 3
The Dynamics of Money
The content of this chapter is essentially an extended version of the material
published in [4]. We present a dynamical many-body theory of money in
which the value of money is a time-dependent "strategic variable", which is
chosen by the individual agent. The idea is illustrated by a simple network
model of monopolistic vendors and buyers. The indeterminacy of the value
of money in classical economic equilibrium theory implies a soft "Goldstone
mode," leading to large fluctuations in prices in the presence of noise.

3.1 Introduction
In classical equilibrium theory in economics [9], agents submit their demand-
vs-price functions to a "central agent" who then determines the relative prices
of goods and their allocation to individual agents. The absolute prices are
not fixed, so the process does not determine the value of money, which merely
enters as a fictitious quantity that facilitates the calculation of equilibrium.
Thus, traditional equilibrium theory does not offer a fundamental explanation
of money, perhaps the most essential quantity in a modern economy.
Indeed, a "search-theoretic" approach to monetary economics has been
proposed [18, 17, 31]. Agents may be either money traders, producers, or
commodity traders. They randomly interact with each other, and they decide
whether or not to trade based on "rational expectations" about the value of a
transaction. After a transaction the agent changes into one of the two other
types of agents. This theory has a steady state where money circulates. Like


other equilibrium theories, this one does not describe the dynamics leading
to the steady state in sufficient detail for one to simulate it.
In equilibrium theory, all agents act simultaneously and globally. In real-
ity, agents usually make decisions locally and sequentially. Suppose an agent
has apples and wants oranges. He might have to sell his apples to another
agent before he buys oranges from a third agent: hence money is needed for
the transaction, supplying liquidity. It stores value between transactions.
Money is essentially a dynamical phenomenon, since it is intimately re-
lated to the temporal sequence of events. Our goal is to describe the dynamics
of money utilising ideas and concepts from theoretical physics and economics,
and to show how the dynamics may fix the value of money.
We study a network of vendors and buyers, each of whom has a simple
optimisation strategy. Whenever a transaction is considered, the agent must
decide the value of the goods and services in question, or, equivalently, the
value of money relative to that of the goods and services he intends to
buy or sell. He will ascribe to his money the value that he believes will
maximise his utility. Thus, the value of money is a "strategic variable" that
the agent in principle is free to choose as he pleases. However, if he makes a
poor choice he will lose utility.
For simplicity, we assume that agents are rather myopic: they have short
memories, and they take into account only the properties of their
"neighbours", i.e., the agents with which they interact directly. They have no
idea about what happens elsewhere in the economy.
Despite the bounded rationality of these agents, the economy self-organises
to an equilibrium state where there is a spatially homogeneous flow of money.
Since we define the dynamics explicitly, we are, however, also able to treat
the nature of this relaxation to the equilibrium state, as well as the response
of the system to perturbations and to noise-induced fluctuations around the
equilibrium. These phenomena are intimately related to the dynamics of the
system, and cannot be discussed within any theory concerned only with the
equilibrium situation.
Our model is a simple extension of Jevons' [15] example of a three-agent,
three-commodity economy with the failure of the double coincidence of wants,
i.e., when only one member of a trading pair wants a good owned by the other.
A way out of the paradox of no trade, even though all would gain from
trading, is to utilise a money desired by and held by all. Originally this was
gold, but here we show that the system dynamics can attach value to "worthless"
paper money.
We find that the value of money is fixed by a "bootstrap" process: agents
are forced to accept a specific value of money, despite this value's global
indeterminacy. The value of money is defined by local constraints in the
network, not by trust. By "local," we simply mean that each agent interacts
only with a very small fraction of other agents in his neighbourhood.
This situation is very similar to problems with continuous symmetry in
physics. Consider, for instance, a lattice of interacting atoms forming a
crystal. The crystal's physical properties, including its energy, are not affected
by a uniform translation X of all atoms; this translational symmetry is
continuous. Nevertheless, the position x(n) of the nth atom is restricted by
the position of its neighbours. This broken continuous symmetry results in
slow, large-wavelength fluctuations, called Goldstone modes [14, 24] or "soft
modes." These modes are easily excited thermally, or by noise, and thus
give rise to large positional fluctuations.

3.2 The Model


In our model, we consider N agents, n = 1, 2, ..., N, placed on a one-
dimensional lattice with periodic boundary conditions. This geometry is
chosen in order to have a simple and specific way of defining who is
interacting with whom. The geometry is not important for our general
conclusions concerning the principles behind the fixation of prices.
We assume that agents cannot consume their own output, so in order to
consume they have to trade, and in order to trade they need to produce.
Each agent produces a quantity q_n of one good, which is sold at a unit
price p_n to his left neighbour n - 1. He next buys and consumes one good
from his neighbour to the right, who subsequently buys the good of his right
neighbour, etc., until all agents have made two transactions. This process is
repeated indefinitely, say, once per day.
For simplicity, all agents are given utility functions of the same form
u_n = -c(q_n) + d(q_{n+1}) + I_n (p_n q_n - p_{n+1} q_{n+1}) .   (3.1)
The first term, c, represents the agent's cost, or displeasure, associated
with producing q_n units of the good he produces. The displeasure is an
increasing function of q, and c is convex, say, because the agent gets tired.
The second term d, is his utility of the good he can obtain from his neighbour.
Its marginal utility is decreasing with q, so d is concave. This choice of c and
d is common in economics; see, e.g., [31].
An explicit example is chosen for illustration and analysis,
c(q_n) = a q_n^β ,   d(q_{n+1}) = b q_{n+1}^α .   (3.2)
The specific values of a, b, β, and α are not important for the general results,
as long as c remains convex and d concave. For our analysis we choose
a = 1/2, b = 2, β = 2, and α = 1/2.
The last term in Eq. (3.1) represents the change in utility associated with
the gain or loss of money after the two trades. Notice that the dimension
of In is [utility per unit of currency], i.e., the physical interpretation is the
value of money.
Each agent has knowledge only about the utility functions of his two
neighbours, as they appeared the day before. The agents are monopolistic,
i.e., agent n sets the price of his good, and agent n - 1 then decides how much,
q_n, he will buy at that price. This amount is then produced and sold; there
is no excess production. The goal of each agent is to maximise his utility
by adjusting p_n and q_{n+1}, while maintaining a constant (small) amount of
money. Money has value only as liquidity. There is no point in keeping
money; all that is needed is what it takes to complete the transactions of the
day.
Thus, the agents aim to achieve a situation where the expenditures are
balanced by the income:
p_n q_n - p_{n+1} q_{n+1} = 0 .   (3.3)
When the value of money is fixed, I_n = I, the agents optimise their utility
by charging a price
p = 2^{1/3} I^{-1}   (3.4)
and selling an amount
q = 2^{-2/3}   (3.5)
at that price. This is the monopolistic equilibrium.
Note that the resulting quantities q are independent of the value of
money, which thus represents a continuous symmetry. There is nothing in the
equations that fixes the value of money and the prices. Mathematically, the
continuous symmetry expresses the fact that the equations for the quantities
are "homogeneous of order one." The number of equations is one less than
the number of unknowns, leaving the value of money undetermined. We shall
see how this continuous symmetry eventually is broken by the dynamics.
Agent n tries to achieve his goal by estimating the amount of goods q_n
that his neighbour will order at a given price, and the price p_{n+1} that his
other neighbour will charge at the subsequent transaction.
Knowing that his neighbours are rational beings like himself, he is able to
deduce the functional relationship between the price p_n that he demands and
the amount of goods q_n that will be ordered in response to it. Furthermore,
he is able to estimate the size of p_{n+1}, based on the previous transaction with
his right neighbour. This enables him to decide what the perceived value of
money should be, and hence how much he should buy and what his price
should be. This process is then continued indefinitely, at times τ = 1, 2, 3, ....
This defines the game. The strategy we investigate contains the assumption
that agents do not change their valuation of money, I, between their two
daily transactions, and they maximise their utility accordingly.
The process is initiated by choosing some initial values for the I's. They
could, e.g., be related to some former gold standard.
In fixing his price at his first transaction of day τ, agent n exploits the
knowledge he has of his neighbours' utility functions, i.e., he knows that the
agent to the left will maximise his utility function with respect to q_{n,τ},
∂u_{n-1,τ}/∂q_{n,τ} = 0 ,   (3.6)
hence the left neighbour will order the amount
q_{n,τ} = (I_{n-1,τ} p_{n,τ})^{-2} .   (3.7)
This functional relationship between the amount of goods q_{n,τ} ordered
by agent n - 1 at time τ and the price p_{n,τ} set by agent n allows agent n
to gauge the effect of his price policy. Lacking knowledge about the value of
I_{n-1,τ}, agent n instead estimates it to equal the value it had in the previous
transaction, I_{n-1,τ-1}, which he knows. Eliminating q_{n,τ} from Eq. (3.1) we
obtain
u_{n,τ} = -(1/2) I_{n-1,τ-1}^{-4} p_{n,τ}^{-4} + 2 √(q_{n+1,τ}) + I_{n,τ} ( p_{n,τ}^{-1} I_{n-1,τ-1}^{-2} - p_{n+1,τ} q_{n+1,τ} ) .   (3.8)
Maximising this utility u_{n,τ} with respect to p_{n,τ} and q_{n+1,τ} yields
p_{n,τ} = 2^{1/3} I_{n,τ}^{-1/3} I_{n-1,τ-1}^{-2/3} ,   (3.9)
and
q_{n+1,τ} = (I_{n,τ} p_{n+1,τ})^{-2} .   (3.10)
By arguments of symmetry,
p_{n+1,τ} = 2^{1/3} I_{n+1,τ}^{-1/3} I_{n,τ-1}^{-2/3}   (3.11)
is the price agent n + 1 will demand of agent n in the second transaction.
Since agent n does not yet know the value of I_{n+1,τ}, he instead uses the
known value of I_{n+1,τ-1} when estimating p_{n+1,τ}.
In the constraint Eq. (3.3), the following expressions are used:
q_n = q_{n,τ}^{(guess)} = (I_{n-1,τ-1} p_{n,τ})^{-2} ,   (3.12)
p_{n+1} = p_{n+1,τ}^{(guess)} = 2^{1/3} I_{n+1,τ-1}^{-1/3} I_{n,τ-1}^{-2/3} ,   (3.13)
q_{n+1} = q_{n+1,τ}^{(guess)} = ( I_{n,τ} p_{n+1,τ}^{(guess)} )^{-2} ,   (3.14)
and p_n is given by Eq. (3.9). Solving for I_{n,τ}, and evaluating at time τ + 1,
we find¹
I_{n,τ+1} = ( I_{n-1,τ}^4 I_{n,τ}^2 I_{n+1,τ} )^{1/7} ,   (3.15)
which sets agent n's value of money on day τ + 1 equal to a weighted geometric
average of the values that agent n and his two neighbours ascribed to their
money the previous day.
Using this value of I_n, agent n can fix his price p_n and decide which
quantity q_{n+1} he should optimally buy. This simple equation completely
¹This is mathematically equivalent to differentiating Eq. (3.8) with respect to I_{n,τ}
(with the expressions in Eqs. (3.9)-(3.11) substituted for p_{n,τ}, q_{n+1,τ}, and p_{n+1,τ})
and demanding that it be zero.
specifies the dynamics of our model. The entire strategy can be reduced to
an update scheme involving only the value of money; everything else follows
from this. Thus, the value of money can be considered the basic strategic
variable.
Although Eq. (3.15) has been derived for a specific simple example, we
submit that the structure is much more general. In order to optimise his
utility function, the agent is forced to accept a value of money, and hence
prices, which pertain to his economic neighbourhood. Referring again to
a situation from physics, the position of an atom on a general lattice is
restricted by the positions of its neighbours, despite the fact that the entire
lattice can be shifted with no physical consequences.
Even though there is no utility in the possession of money, as explicitly
expressed by Eq. (3.3), the strategies and dynamics of the model nevertheless
lead to a value being ascribed to the money. The dynamics in this model
is driven by the need of the agents to make estimates about the coming
transactions. In a sense, this models the real world, where agents are forced
to make plans about the future based on knowledge about the past, and,
in practice, only a very limited part of the past. In short: the dynamics is
generated by the bounded rationality of the agents.
In the steady state, where the homogeneity of the utility functions gives
I_n = I_{n+1}, we retrieve the monopolistic equilibrium equations (3.4) and (3.5).
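As a one-line check (added here for completeness), setting all I's equal to a
common value I in Eqs. (3.9) and (3.10) indeed gives back Eqs. (3.4) and (3.5):

p_{n,τ} = 2^{1/3} I^{-1/3} I^{-2/3} = 2^{1/3} I^{-1} ,
q_{n+1,τ} = ( I p_{n+1,τ} )^{-2} = ( I · 2^{1/3} I^{-1} )^{-2} = 2^{-2/3} .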

3.2.1 Solving the dynamics


Taking the logarithm in Eq. (3.15) and introducing h_{n,τ} = ln(I_{n,τ}) yields the
linear equation
h_{n,τ+1} = (4/7) h_{n-1,τ} + (2/7) h_{n,τ} + (1/7) h_{n+1,τ} ,   (3.16)
describing a Markov process.
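Before passing to the general case and the continuum limit, here is a minimal
numerical sketch of this update rule. It is an illustrative reconstruction (not
the code actually used for the figures below), using synchronous updates,
periodic boundary conditions, and initial h drawn uniformly from [0, 2] as in
the simulations described later in this section.

import numpy as np

# Minimal sketch of Eq. (3.16): h_{n,tau+1} = (4 h_{n-1,tau} + 2 h_{n,tau} + h_{n+1,tau}) / 7,
# iterated synchronously on a ring of N agents.
N, T = 1000, 3000                      # illustrative system size and number of days
rng = np.random.default_rng(0)
h = rng.uniform(0.0, 2.0, size=N)      # h = ln(I), initial values uniform on [0, 2]

for tau in range(T):
    h = (4.0 * np.roll(h, 1) + 2.0 * h + np.roll(h, -1)) / 7.0

p = 2.0 ** (1.0 / 3.0) * np.exp(-h)    # steady-state price relation p = 2^{1/3} I^{-1}, Eq. (3.4)
print(p.min(), p.mean(), p.max())      # price variations comparable to those in Fig. 3.1
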
The general expression, obtained from Eq. (3.2), is
h_{n,τ+1} = [ β(1-α) h_{n-1,τ} + α(β-1) h_{n,τ} + α(1-α) h_{n+1,τ} ] / (β - α²) .   (3.17)
We see that in the extreme cases where the demands on the shape of the
functions c and d are relaxed, so that linear dependence on q_n and q_{n+1} is
allowed, we get strange or undefined behaviour. For α = 1 and β > 1 we
have h_{n,τ+1} = h_{n,τ}, i.e., no time development takes place. For β = 1 and
α < 1 the agent will not take his own previous value of money into account.
Finally, for α = β = 1 the system becomes ill-defined. Hence there is good
reason for demanding that c and d are convex and concave, respectively.
Now assume that h_{n,τ} is a slowly varying function of (n, τ) and that we
may think of it as the value of a differentiable function h(x, t) in (x, t) =
(nΔx, τΔt). Then, expanding to first order in Δt and second order in Δx, we
find the diffusion equation
∂h(x,t)/∂t = D ∂²h(x,t)/∂x² - v ∂h(x,t)/∂x ,   (3.18)
with diffusion coefficient D = (5/14) (Δx)²/Δt and convection velocity
v = (3/7) Δx/Δt. The generator T of infinitesimal time translations is defined by
∂h(x,t)/∂t = T h(x,t) .   (3.19)
Taking the lattice Fourier transform, the eigenvalues of T are found
to be λ_k = -k²D - ikv, where the periodic boundary condition yields
k = 2πl/N, l = 0, 1, ..., N - 1. The damping time for each mode k is given by
t_k = (k²D)^{-1}, i.e., it increases as the square of the system size N. The
only mode that is not dampened has k = 0, and is the soft "Goldstone
mode" [14, 24] associated with the broken continuous symmetry with respect
to a uniform shift of the logarithm of prices in the equilibrium:
all prices can be changed by a common factor, but the amount of goods
traded will remain the same, as already noted by Marx [23]. The rest of
the modes are all dampened (for a finite-size system), and hence the system
eventually relaxes to the steady state.
Figures 3.1 and 3.2 show results from a numerical solution² of Eq. (3.16)
for 1000 agents with random initial values for the variables h (sampled from
a uniform distribution on the interval [0, 2]). Figure 3.1 shows the spatial
variation of prices at two different times; convection with velocity v = (3/7) Δx/Δt
is clearly seen, while the effect of diffusion is not visible on this time scale.
The relatively weak effect of diffusion means that spatial price variations,
²The solution was found using periodic boundary conditions, although the model
dynamics strictly speaking requires the use of helical boundary conditions to properly
reproduce the serial behaviour of the system. The problem with helical boundary conditions
in a computer solution is the introduction of a "seam" that breaks the symmetry of the model.
[Figure 3.1: Variation of prices for all agents at two different times, τ = 3000
(full line) and τ = 3200 (broken line). Ordinate: price p; abscissa: agent number.]

such as those shown in Fig. 3.1, can travel around the entire lattice many
times before diffusion has evened them out. Consequently, the individual
agent experiences price oscillations with slowly decreasing amplitude, as seen
in Fig. 3.2.
Thus, despite the myopic behaviour of agents, the system evolves towards
an equilibrium. But in contrast to equilibrium theory, we obtain the temporal
relaxation rates towards the equilibrium, as well as specific absolute values for
individual prices. The value of money is fixed by the history of the dynamical
process, i.e., by the initial condition combined with the actual strategies of
the boundedly rational agents.

3.3 Noise
If an agent is suddenly supplied with some extra amount of money, he will
lower his value of money, hence increase his price and consequently work less
and buy more goods. This effect can be seen directly by simply adding a
constant term k to the money part of the utility function. The demand that
the agents wish to spend all their money by the end of the day gives
p_n q_n - p_{n+1} q_{n+1} + k = 0 ,   (3.20)
[Figure 3.2: Price variation for a single agent. The oscillations are an artifact
of the periodic boundary condition, setting h_{N+1,τ} = h_{1,τ}. Ordinate: price p;
abscissa: time.]

and with the expressions for p_n, p_{n+1}, q_n, and q_{n+1} substituted, this gives a
new value when solving for I_n:
I(k)_{n,τ+1}^{7/3} = I(k=0)_{n,τ+1}^{7/3} - k ( 2 I_{n-1,τ}^4 I_{n,τ}^6 )^{1/3} .   (3.21)
This value is lower than the value of money I(k=0)_n when no extra money
is supplied, just as our intuition told us. The effect is inflation propagating
through the system, as described by the solution to Eq. (3.18) for a delta-
function initial condition,
h(x,t) = ( C / √(4πDt) ) Σ_{j=-∞}^{∞} exp( -(x - jL - vt)² / (4Dt) ) ,   (3.22)
where L is the system size, x ∈ [0, L], and C is the amplitude of the original
disturbance. The sum is necessary as we are operating with periodic boundary
conditions, and it is intuitively clear by applying the method of images.
Likewise, the destruction or loss of some amount of money by a single
agent will affect the whole system. These are both transient effects, and in
the steady state the same amount of goods will be produced and consumed
as before the change.
In general, there might be some noise in the system, due to imperfections
in the agents' abilities to properly optimise their utility functions, or due to
external sources affecting the utility functions. A random multiplicative error
in estimating the value of money transforms into additive noise in Eq. (3.18).


We assume that the noise η(x,t) has the characteristics ⟨η(x,t)⟩ = 0 and
⟨η(x,t) η(x',t')⟩ = A δ(x - x') δ(t - t'). Adding it to Eq. (3.18) and taking
the Fourier transform (with periodic boundary conditions in a system of size
L) one finds, after a reasonably trivial calculation, the equal-time correlation
function
⟨[h(x) - h(0)][h(x') - h(0)]⟩ = ( A / (2DL) ) Σ_q q^{-2} (e^{iqx} - 1)(e^{-iqx'} - 1) ,   (3.23)
where q = 2πn/L, n = ±1, ±2, .... For x = x' and L → ∞ this becomes
⟨[h(x) - h(0)]²⟩ = ( A / (2D) ) x ,   (3.24)
viz., the dispersion of a biased random walker in one dimension with position
h, time x, and diffusion coefficient A/(4D). In the presence of noise, the agents
no longer agree about the value of money, and there will be large price
fluctuations. The fluctuations reflect the lack of a global restoring force, due to
the continuous global symmetry.
How much money is needed to run an economy? In this model economy
the total amount of money is reflected in the agents' I's, and is always
conserved, as seen from
Σ_n (p_n q_n - p_{n+1} q_{n+1}) = 0 ,   (3.25)
since we have periodic boundary conditions. No matter what the initial
amount of money in the system is, the system will go to the equilibrium
where precisely that amount is needed; the final I's are fixed by the initial
money supply. The total amount of money in the economy is irrelevant, since
the utility and the amount of goods exchanged in the final equilibrium do not
depend on it. However, as previously described, changes in the amount of
money have interesting transient effects.

3.3.1 Spatial inhomogeneity


One possible extension of the model includes the introduction of spatially
inhomogeneous utility functions for the agents, i.e., α and β are allowed to
depend on spatial position. The result of going through this calculation is
h_{n,τ+1} = A_n h_{n-1,τ} + B_n h_{n,τ} + C_n h_{n+1,τ} + D_n ,   (3.26)
where the coefficients are combinations of the α's and β's of agent n and his
neighbours, and obey the relation A_n + B_n + C_n = 1. So now we are looking
at diffusion with a source term.
It is possible to gain some insight into the behaviour by writing
α_n = α + δα_n and β_n = β + δβ_n and then expanding in the perturbations
δα_n and δβ_n. This we will not reproduce here. A simpler approach is to solve
the problem numerically; the result is, as expected, a spatially inhomogeneous
steady state that is reached after a system-size-dependent transient. It is not
terribly interesting and is not illustrated here.

3.3.2 Smart agents use Lagrange multipliers


In an alternative formulation of the initial optimisation problem, the agents
are given utility functions of the form
v_n = -c(q_n) + d(q_{n+1}) ,   (3.27)
and are left to maximise this under fulfilment of the constraint
p_n q_n = p_{n+1} q_{n+1} .   (3.28)
Since we assume perfectly rational agents, we may as well assume that
they know how to use Lagrange multipliers; this is a much-used tool among
economists, so let us just say that our agents are economists. The procedure
is much the same as before, and the result is of course exactly the same. Our
agents now try to optimise the function
ṽ_n = -c(q_n) + d(q_{n+1}) + I_n (p_n q_n - p_{n+1} q_{n+1}) ,   (3.29)
where I_n is a Lagrange multiplier. What we find in this formulation is that
the Lagrange multiplier I_n can be given a physical interpretation, namely
the value of money. This situation is well known from classical mechanics
where, e.g., for a hoop rolling, without slipping, down an inclined plane, the
Lagrange multiplier plays the role of the friction force of constraint [13]. In
classical economics the Lagrange multipliers are used only as a means of
calculating the equilibrium, but here we have just shown that they can attain
a real, almost "physical" meaning to the agents using them.

3.4 Conclusion
In general, an economy consists of complicated heterogeneous networks of
agents, with complicated links to one another representing the particular
"games" they play with one another. Here, we considered a simple toy model
with simple monopolistic agents. We submit that the general picture remains
the same. At each trade, the agents evaluate the value of money by analysing
their particular local situation, and act accordingly. The prices charged by
the agents will be constrained by those of the interacting agents. It would
be interesting to study the formation and stability of markets where very
many distributed players are interested in the same goods, but do not
generally interact directly with one another.
Modifications of this network model may also provide a toy laboratory
for the study of the effects of introducing the key financial features
of credit and bankruptcy, as well as the control problems posed by the
governmental role in varying the money supply.
Chapter 4
Self-Organized Criticality
We give a brief introduction to the concept of self-organized criticality via a
review of two of the basic and seminal models: Bak, Tang and Wiesenfeld's
sandpile model and Bak and Sneppen's evolution model. Avalanches are
defined and discussed. A possible definition of self-organized criticality is
presented.

4.1 Introduction
Power laws are ubiquitous in nature. We find them in the size distribution
of superconducting vortex avalanches [11], 1/f noise or flicker noise,
the temporal distribution of mass-extinction events [27], solar flares, and
earthquakes [3]. Self-organized criticality is an attempt at explaining these
phenomena¹.

4.2 Sandpile models


In 1987 Bak, Tang and Wiesenfeld published a seminal paper, coining the
phrase 'Self-Organized Criticality' [6]. The work in this paper was an attempt
to explain 1/f noise and the omnipresent self-similar or fractal structures in
nature. The concept of self-organisation to criticality was illustrated by some
¹In Ref. [16] it is argued that some systems that display SOC, even if critical only for
a single value of a coupling parameter, possess a wide range over which they will appear
critical.


simple computer simulations. A very useful mental picture of these models is
that of sand piles. Later experimental work showed that real sand piles do not
behave like the model, but the model was never intended as an explanation
of granular flow phenomena, so this is of small consequence.
The two-dimensional version of the model is a cellular automaton with
simple update rules for the integer z:
z(x, y) → z(x, y) - 4 ,
z(x ± 1, y) → z(x ± 1, y) + 1 ,   (4.1)
z(x, y ± 1) → z(x, y ± 1) + 1 ,
whenever z exceeds some critical value z_c. The boundary conditions are given
by z = 0 on the edges. The mental picture is that of grains of sand falling
on a chess-board. Whenever the number of sand grains on a square exceeds
z_c, the "pile" on that square topples and distributes one grain to each of its
four neighbours.
The system is either started with z = 0 for all sites, and then slowly built
up by dropping single grains of sand at random positions until it reaches
a (statistically) steady state, or it is started with z ≫ z_c and then left to
evolve according to (4.1) until a steady state is reached. Either way, the
number of topplings that can be triggered² by dropping just one grain of
sand, i.e., the number of activations, is now distributed according to a
power law; there is no typical length scale in the system, hence it is critical.
An avalanche is defined in an intuitively obvious manner as the number of
activations triggered by dropping a single grain of sand, i.e., as the number
of sites affected weighted with the number of times each site is affected. The
system is self-organised in the sense that the critical state is an attractor
for the dynamics; no external parameter is fine-tuned in order to reach this
state, as opposed to the critical point at phase transitions in equilibrium
statistical mechanics where, e.g., the temperature has to be fine-tuned to
obtain criticality.
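For concreteness, a minimal Python sketch of these rules (an illustrative toy
implementation, not code from the references) drops grains at random
positions and records the number of topplings per grain:

import numpy as np

def relax(z, x, y, zc=3):
    """Drop one grain at (x, y) and relax the pile with the toppling rule of Eq. (4.1).
    Returns the number of topplings (activations) triggered by this grain."""
    L = z.shape[0]
    z[x, y] += 1
    size = 0
    stack = [(x, y)]
    while stack:
        i, j = stack.pop()
        while z[i, j] > zc:                       # topple while the site is over threshold
            z[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:   # open edges: grains falling off are lost
                    z[ni, nj] += 1
                    stack.append((ni, nj))
    return size

L, rng = 50, np.random.default_rng(1)
z = np.zeros((L, L), dtype=int)
sizes = [relax(z, rng.integers(L), rng.integers(L)) for _ in range(20000)]
# after a transient, the nonzero avalanche sizes are power-law distributed
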
It has often been argued that, in order to obtain criticality, the driving
rate is a parameter that must be tuned to zero in order to get the proper
separation of time scales. In recent work [32] it was shown that criticality
(in a generalised sandpile model) arises from tuning to zero one or more
parameters controlling the rates of dissipation and driving. However, the
papers on the subject are many (>2000) and so are the opinions; see Ref.
[32] and references therein.
²To obtain a power law an avalanche is not merely the area affected, as initially thought,
but rather the number of activations. See, for example, Ref. [10] for the relation between
the number of activations and the area covered.
The simple structure of the update rules, Eqs. (4.1), allows easy
generalisation to d dimensions. However, the computer time needed becomes
prohibitively large as the dimensionality increases. An upper critical dimension,
probably at d_c = 4, above which mean-field theory provides exact values for
the critical exponents, prevents unnecessary use of CPU time.

4.3 Evolution models


Moving on to biological evolution, Bak and Sneppen [5] in 1993 introduced a
simple model simulating the behaviour of the large-scale evolution of species.
In this model N numbers, drawn from a uniform distribution on the interval
[0, 1], are arranged on a one-dimensional grid with periodic boundary
conditions. The number on site n represents the overall fitness (or barrier against
mutation) of the species living on site n. The idea is that the species with
the lowest fitness is more likely to undergo mutation than the others, and
in doing so it affects its immediate neighbourhood. The interaction between
neighbouring species is not specified, but when a species mutates its two
nearest neighbours do as well.
One can, of course, question the degree of reality of the model. The,
biologically speaking, weakest point, as originally formulated, seems to be the
selection of the globally worst-fit species, as this introduces a global measure
of relative fitness into the model. This neglects the effect of the local fitness
landscape, the very idea that was invoked to explain why the neighbours of
the worst-fit species are also selected for mutation. If the fitness of a species
depends on its neighbours (via the fitness landscape), then the globally worst-
fit species should be the one with the "worst" neighbourhood, i.e., a gradient
or local average seems a better criterion for global selection. These are minor
details, however, and the model is interesting in itself.
The update rules for the model are (a minimal simulation sketch follows the list):
• Find the site with the (globally) lowest barrier against mutation and
mutate it by assigning a new random number from the uniform distribution.
• Assign new random numbers to the two nearest neighbours as well.
• Repeat the process.
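The sketch below implements these rules in a few lines of Python (an
illustrative reconstruction, not code from Refs. [5, 26]); it also records, for
later use, avalanche durations with respect to an auxiliary threshold f0, as
defined further down in this section.

import numpy as np

rng = np.random.default_rng(2)
N = 200
f = rng.random(N)                    # barriers against mutation, uniform on [0, 1]
f0 = 0.6                             # auxiliary threshold used to define avalanches
durations, running = [], 0

for t in range(10**6):
    n = int(np.argmin(f))            # site with the globally lowest barrier ...
    for m in (n - 1, n, n + 1):      # ... is mutated together with its two nearest neighbours
        f[m % N] = rng.random()
    if f.min() < f0:                 # smallest barrier below f0: an avalanche is running
        running += 1
    elif running:                    # all barriers above f0: the avalanche ends
        durations.append(running)
        running = 0
# after a transient, 'durations' is power-law distributed (tau close to 1.07 for f0 near f_c)
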
After an extensive transient period, the barrier distribution becomes
(statistically) stationary. When measuring the distribution of distances between
subsequent mutations (of the species with the lowest barrier) one finds a
power law with exponent π = 3.23(2)³ [26], i.e., the system is critical.
Although the value of the exponent is greater than 3, so that a mean jump
length and even the second moment of the distribution exist, the system
does not reduce to an ordinary diffusion process. This is because there are
correlations in the system, and the medium is constantly changed by the
mutation of the species. An ordinary random walker does not change the
medium or landscape he is moving in.
In the stationary state, the distribution of barriers is such that all but a
few are above some critical threshold (f_c = 0.66702(3) [26]). By monitoring
the value of the smallest barrier, it is now possible to define an avalanche
as the time (number of system updates) spent below some threshold f0 by
the smallest barrier value. An avalanche starts when the threshold f0 is
penetrated from above, and ends when the value of the smallest barrier is
bigger than f0, i.e., when all barrier values are above the threshold.
The size distribution of avalanches is given by a power law with exponent
τ = 1.07(1)⁴. One can think of the duration of an avalanche as the first-
return-time of some strange kind of "random walker." The first-return-times
of an unbiased random walker are power-law distributed with exponent 3/2.
So the 'walker' in the Bak-Sneppen model is more likely to return after long
times than an ordinary random walker. Notice that in the BS model the
walker considers the whole system before deciding where to step or fly to,
and is not one single agent, but the site with the lowest barrier value.
This way of thinking about avalanches also allows us immediately to
discard exponents, for the avalanche size distribution, with values less than
one. The total return probability for a power-law distributed walker,
P_tot ∝ ∫_1^∞ S^{-τ} dS ,   (4.2)

³( ) indicates the uncertainty in the last digit.
⁴In the original paper [5] τ was found to be 0.9(1) for avalanches through a threshold
at 0.65.
where S is the duration of the walk or the size of the avalanche, has to be
finite (one), i.e., we must have τ > 1.
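Spelled out, with the lower cutoff taken at S = 1,

∫_1^∞ S^{-τ} dS = [ S^{1-τ} / (1-τ) ]_1^∞ = 1 / (τ - 1)   for τ > 1 ,

while the integral diverges for τ ≤ 1, so only exponents larger than one allow
a normalisable avalanche-size distribution.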
While the concept of an avalanche is easy to picture in the sand pile
model, the interpretation in the case of models of evolution is much less
obvious. When a time scale that goes like exp(barrier value) is introduced for
the separation in time between successive mutations, the mutation happens
faster below than above the threshold. But the transition in waiting times from
just below to just above the threshold is smooth, so the mutation activity
cannot really be thought of as stopping once the lowest barrier is above
the threshold. However, as there is no locality in the system (it is critical)
the duration of an avalanche gives a measure of how many species will be
affected. In one dimension we have [26]
S ∼ n_cov^D ,   (4.3)
where S is the size of an avalanche, n_cov is the number of sites affected by
the avalanche, and D is the avalanche mass dimension (this expression is
actually used to determine D). In this way the size of an avalanche tells us
how many sites or species will be affected by the one initial mutation. When
an avalanche is over, the activity jumps to the site with the lowest barrier
value above the threshold f_c. Since the barrier values above f_c are uniformly
distributed on the interval [f_c, 1], the "avalanche starters" will be uniformly
distributed in space.

4.4 Definition of SOC


There is no generally accepted definition of what traits a system should show
in order to be classified as self-organized critical. However, the definition
given in Ref. [12] and reproduced below seems a good starting point.
A self-organized critical system is a driven, dissipative system
consisting of
(1) a medium which has
(2) disturbances propagating through it, causing
(3) a modification of the medium, such that eventually
(4) the medium is in a critical state, and
(5) the medium is modified no more.
In the next chapter we shall see that point (5) can be relaxed, while
maintaining a system as critical as any.

4.5 Summary
We have introduced two basic models and one possible definition of self-
organized criticality. The concept of avalanches has been introduced and
discussed for both types of models.
Chapter 5
The Self-Organized Critical
Economy on an Asymptote
In this chapter we introduce a model resembling the economic model
introduced in Chap. 3 and Bak and Sneppen's model of evolution. However, in
contrast to the Bak-Sneppen model we have specific, given interactions
between agents. Furthermore, not three but only one new random number is
introduced at each system update. The strategies and plans of the agents
are less complicated than in Chap. 3, but the resulting system dynamics is
much more complicated.
In Secs. 5.1 and 5.2 we describe the model and the strategies of the agents
in the economy. In Secs. 5.3 to 5.8 we present the results of Monte Carlo
simulations of the economy. We find that the economy enters an asymptote
and organises itself into a critical state. It is possible to rescale variables in a
time-dependent manner that leads to a dynamics with a stationary attractive
state for the rescaled variables. Thus a definition of, and measurements on,
avalanches become possible in rescaled variables. We find an avalanche size
distribution following a power law with exponent τ = 1.48 ± 0.02. Since τ is
indistinguishable from 3/2, and this is the exponent for the first-return-time
of a random walker, we consider the possibility that the relevant variable
performs a simple random walk, but must reject it. Temporal and spatial
correlations are examined, and non-trivial exponents for the resulting power
laws are found. In Sec. 5.9 we comment on differences of the current model
from the Bak-Sneppen model and from our first economic model. In Sec. 5.10
two-dimensional models are briefly discussed. Section 5.11 contains the summary.


5.1 Geometry and Game


We consider N agents, placed on a one-dimensional lattice with periodic
boundary conditions and positions numbered n = 1, 2, ..., N. This geometry
is chosen in order to have a simple and specific way of defining who is
interacting with whom. We assume that agents cannot consume their own
output, so in order to consume they must trade, and in order to trade they
must produce. Each agent produces a quantity q_n^(prod.) of one good, which is
sold at a price p_n per unit to his left neighbour at position n - 1. He next buys
and consumes the good, in quantity q_{n+1}^(trad.), produced by his neighbour to the
right, who subsequently buys the good of his right neighbour, etc., until all
agents have made two transactions. This process is repeated indefinitely, say,
once per day. Initially, all prices in the system are fixed.
The actions of the agent on site n are:
• produce a good in quantity q_n^(prod.),
• offer it for sale at price p_n per unit,
• buy the good from the right neighbour (at position n + 1) at price p_{n+1} per
unit, and
• consume the acquired good of quantity q_{n+1}^(trad.).

5.2 Goal and Strategy


The goal of each agent is to maximise his utility function
u_n = -c(q_n^(prod.)) + d(q_{n+1}^(trad.))   (5.1)
while satisfying the constraint
p_n q_n^(trad.) = p_{n+1} q_{n+1}^(trad.) ,   (5.2)
where
q_n^(trad.) = min[ q_n^(prod.), q_n^(cons.) ]   (5.3)
is the traded amount of any good. It is always the minimum of the amount
offered for sale and the amount demanded by the consumer. The first term,
c, in the utility function represents the agent's cost, or displeasure, associated
with producing q_n units of the good he produces. The displeasure is an
increasing function of q, and c is convex, because, say, the agent grows
tired. The second term, d, is the utility of the good he can obtain from his
neighbour. Its marginal utility is decreasing with q, so d is concave. This
choice of c and d is common in economics; see, e.g., [31].
The choice of constraint is again typical in economics. It is the simplest
possible choice, as it tells us that the agents have no trust in money; they do
not want to gain money and they do not want to lose money. The money thus
introduced therefore has minimal properties; there is no utility ascribed to
its possession.
An explicit example of the utility function is chosen for illustration and
analysis,
u_n = -(1/2) (q_n^(prod.))² + 2 √(q_{n+1}^(trad.)) .   (5.4)
An agent has knowledge of the prices of his two neighbours at all times. The
amount of goods produced by the two neighbours is not known since, as we
shall see, this amount depends on the next-nearest neighbours' prices, which
again depend on their neighbours' prices, etc., by arguments of symmetry. The
demand for goods is not known either, for the same reasons.
Each agent makes a plan for how much to produce and how much to
purchase, given the prices, and assuming that everything he produces will be
sold and all he wants to purchase will be available. That is, each agent will
estimate how much he should optimally produce and sell at the unit price he
has announced to his customer, and how much he should optimally buy and
consume, given the unit price he has been informed of by his right neighbour.
The task is a simple optimisation problem. First q_n is isolated in Eq. (5.2),
dropping all superscripts for the moment¹, and substituted into Eq. (5.4). The
resulting expression,
u_n = -(1/2) ( q_{n+1} p_{n+1} / p_n )² + 2 √(q_{n+1}) ,   (5.5)
¹The agents generally assume that they will sell their entire production,
q_n^(prod.) = q_n^(trad.), and be able to buy all they want to consume,
q_{n+1}^(cons.) = q_{n+1}^(trad.). This is the simplest possible set of assumptions if we do
not want to give the agents knowledge of the entire economy or equip them with memories
and adjustable strategies; see Ref. [25] for a recent example of agents with adjustable
strategies and memories.

is extremised with respect to q_{n+1}, with the result
q_n = ( p_n / p_{n+1} )^{1/3}   (5.6)
and
q_{n+1} = ( p_n / p_{n+1} )^{4/3} .   (5.7)
In the more general situation of c(q_n^(prod.)) = a (q_n^(prod.))^β and
d(q_{n+1}^(cons.)) = b (q_{n+1}^(cons.))^α, we find
q_n = ( αb / (βa) )^{1/(β-α)} ( p_n / p_{n+1} )^{α/(β-α)}   (5.8)
and
q_{n+1} = ( αb / (βa) )^{1/(β-α)} ( p_n / p_{n+1} )^{β/(β-α)} .   (5.9)

This tells us how much agent n will produce, i.e., the value of q_n^(prod.), and
how much he wants to consume, the value of q_{n+1}^(cons.). We note here that the
level of production and consumption is independent of the absolute prices, as
seen in Eqs. (5.8) and (5.9). We can multiply all prices by a common factor
without changing the production of goods in the system. This tells us that
in case of inflation or deflation the production will not be affected.
The second derivative of Eq. (5.5) is
d²u_n / dq_{n+1}² = -( p_{n+1} / p_n )² - (1/2) q_{n+1}^{-3/2} < 0 ,   (5.10)
i.e., the extremum is a maximum of the utility function, and this maximum is
what the agents will strive to achieve. However, unless the prices are the same
in the whole system, the agents will either not be able to sell their complete
production, q_n^(prod.) ≠ q_n^(trad.), or they will be unable to buy as much as they
want, q_{n+1}^(cons.) ≠ q_{n+1}^(trad.). If an agent finds himself in the fortunate situation of
being able to sell his production and buy all he wants, then his two neighbours
will be in the more frustrating situation of wanting to purchase more than is
supplied (left neighbour) or sell more than is demanded (right neighbour)².
This situation comes about since the agents do not have knowledge about
the entire economy of which they are a part.

5.2.1 How to increase the profit


To improve his situation, an agent will try to analyse it and take steps to
adapt to his environment. An agent that is losing money, for example, is
obviously not able to sell all that he produces, and the right thing to do is
to decrease production, as seen by the following arguments. The profit made
by agent n is given by
s_n = p_n q_n^(trad.) - p_{n+1} q_{n+1}^(trad.) ,   (5.11)
where
q_n^(trad.) = min[ q_n^(prod.), q_n^(cons.) ] .   (5.12)
Since the total amount of money in the economy is conserved, as ensured by
the periodic boundary conditions, the total profit is zero,
Σ_{n=1}^{N} s_n = 0 ,   (5.13)
and the agent with the lowest profit, the 'loser', will have a negative profit (or
all agents will have zero, in the spatially homogeneous situation). An agent
will have a negative profit if he does not sell what he planned to, i.e., if
q_n^(prod.) ≠ q_n^(trad.) = q_n^(cons.) .   (5.14)
In that situation his profit will be
s_n = p_n q_n^(cons.) - p_{n+1} q_{n+1}^(cons.)
    = p_n ( p_{n-1} / p_n )^{4/3} - p_{n+1} ( p_n / p_{n+1} )^{4/3} ,   (5.15)
and it will respond to a change in price as
∂s_n / ∂p_n = -(1/3) ( p_{n-1} / p_n )^{4/3} - (4/3) ( p_n / p_{n+1} )^{1/3} < 0 ,   (5.16)
²The situation of q_n^(prod.) = q_n^(cons.) has (probability) measure zero since q ∈ R.
i.e., to increase his profit the agent should lower his price.
Furthermore, if agent n is unable to buy all he wants, so that
q_{n+1}^(cons.) ≠ q_{n+1}^(trad.) = q_{n+1}^(prod.) ,   (5.17)
we again find
∂s_n / ∂p_n = -(1/3) ( p_{n-1} / p_n )^{4/3} < 0 ,   (5.18)
so we arrive again at the conclusion that a lowering of the price will increase
the profit.
In this model there is only one control parameter, the price p_n. Given
its value, the strategy is fixed, and the amount produced by the agent is
then determined by Eq. (5.6). Hence, a lowering of the price will, on the one
hand, affect the estimate of how much should be produced, and, on the other
hand, increase the demand for that good (as mirrored in agent n's response,
Eq. (5.7), to a lowering of his right neighbour's price p_{n+1}).
Actually, the interactions reach a bit further than just the two nearest
neighbours. When an agent n is appointed loser and changes his production
plan, here represented by a change in the parameter p_n, the profit, Eq. (5.11),
of four agents in total may be affected: the loser, his right neighbour, and
the two agents to the left of the loser.

5.2.2 Random new price


We can choose not to let the agents perform the above analysis, so that they
do not know in which direction it is profitable to change the price. The
relevant agent will then change his price in an arbitrary direction and with
arbitrary amplitude. But since the right answer is a lowering of the price
by a certain amount, we can think of his price as performing a random walk
terminating at the correct and lower price. Since this resembles the first-
passage-time of a random walker in one dimension, all we will (and indeed
did) observe is a power-law distribution of the time spent as a loser by the
individual agent, with exponent 3/2; see Eq. (B.5).

5.2.3 The Loser


An agent who makes money is, like the agent who loses money, not fulfilling
the constraint, Eq. (5.2), and, furthermore, his utility function has not
attained its maximal value. The interpretation is that he works too much and
consumes too little.
Which of the agents is most frustrated? It is not possible to compare the
utilities of different agents, since one agent may have maximised his utility
function and yet have a lower utility than some frustrated agent elsewhere
in the system. The only global measure in this model is money, since the
amount of money made or lost can be compared across all agents. For these
reasons, and with an eye to real life, the agent who loses the most money per
trade is declared the overall loser in the economy.

5.3 The simulation


In a simulation of the model, N agents are given random prices drawn from
a uniform distribution on the interval [1, 2]. The amount of goods produced
by agent n at every time step, q_n^(prod.), is calculated from Eq. (5.6), while
the amount of goods that he wants to consume, q_{n+1}^(cons.), is calculated from
Eq. (5.7).
The traded quantity of any specific good is found as the minimum of the
supplied and demanded amounts,
q_n^(trad.) = min[ q_n^(prod.), q_n^(cons.) ] .   (5.19)
This transaction yields a profit, s_n, for each agent, given by
s_n = p_n q_n^(trad.) - p_{n+1} q_{n+1}^(trad.) .   (5.20)
After every update of the economy, the agent with the lowest (most negative)
profit is found and is given a new price, between 0 and 1 percent less
than the original³. The fractional price changes ε are drawn from a uniform
distribution on the interval [0, 0.01]. The whole process is then repeated.
Summing up, the update scheme is:
 Find the agent with the lowest pro t.
 Assign a new price to this agent, p ! (1 )p.
 Calculate the new pro ts for the agents in the system.
3 The overall behaviour is insensitive to the exact size of the lowering.
CHAPTER 5. SOC ON AN ASYMPTOTE 33

 Repeat the process.
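The sketch below (Python; not the thesis' original code, which is not reproduced here) implements this loop. It assumes the production and consumption rules q_n^(prod.) = (p_n/p_{n+1})^{1/3} and q_{n+1}^(cons.) = (p_n/p_{n+1})^{4/3}, read off from Eqs. (5.15) and (A.2); everything else follows the update scheme above.

import numpy as np

rng = np.random.default_rng(0)

def run(N=200, steps=10_000, max_cut=0.01):
    """Extremal-dynamics update of the ring economy (sketch).

    Assumes q_prod_n = (p_n/p_{n+1})**(1/3) and q_cons_{n+1} = (p_n/p_{n+1})**(4/3),
    exponents inferred from Eqs. (5.15) and (A.2); periodic boundary conditions."""
    p = rng.uniform(1.0, 2.0, N)                      # initial prices on [1, 2]
    losers = np.empty(steps, dtype=int)
    for t in range(steps):
        ratio = p / np.roll(p, -1)                    # p_n / p_{n+1}
        q_prod = ratio ** (1.0 / 3.0)                 # Eq. (5.6): supply of good n
        q_cons = ratio ** (4.0 / 3.0)                 # Eq. (5.7): agent n's demand for good n+1
        q_trad = np.minimum(q_prod, np.roll(q_cons, 1))        # Eq. (5.19): good n is demanded by agent n-1
        s = p * q_trad - np.roll(p, -1) * np.roll(q_trad, -1)  # Eq. (5.20): profit of agent n
        loser = int(np.argmin(s))                     # the extremal (worst-off) agent
        losers[t] = loser
        p[loser] *= 1.0 - rng.uniform(0.0, max_cut)   # lower his price by 0-1 percent
    return p, losers

if __name__ == "__main__":
    prices, losers = run()
    print("mean price after the run:", prices.mean())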


We studied systems of various sizes, ranging from 200 to 20000 agents, and on time-scales from some hundreds to 10^8 updates. There are, obviously, many parameters one can choose to monitor in this system, but we focus on the spatial position of the worst performing agent (the loser) and on various aspects of the profit.
The first thing we look at is the spatio-temporal distribution of the agents who update their prices, i.e., the time dependent position of the loser. Figure 5.1 shows a spatio-temporal plot of this activity.

Figure 5.1: Spatio-temporal distribution of the losing agents in an economy consisting of N = 200 agents. Abscissa: loser's coordinate n. Ordinate: time.
One obvious feature is the rightward drift in activity. This is caused by the asymmetry of the model, which makes the losses propagate to the right. When an agent lowers the unit price demanded for his good, he also decreases the production of that good, and the consumption of his right neighbour's good. Hence, he spends less money, and as a result the right neighbour obtains a lower income. Quite often this effect is strong enough to make the right neighbour the next loser, resulting in (inclined) lines in the plot of the loser's coordinate as a function of time.

5.4 Temporal correlations


Following de Boer, Jackson, and Wettig [8], we examine the temporal correlations in our system. We examine the probability distribution for the first-return-time of the activity to a given site.
A reasonable demand on a (one-dimensional) critical system is that no site will ever "die out," i.e., once active, the site should have a non-vanishing probability of being activated again later. The simplest way to measure whether this is the case is to look at the probability distribution for the first return time of the activity to a specific site.
Figure 5.2 shows the first return time of the activity to an arbitrary but fixed site.

Figure 5.2: Distribution of first return times for the activity to a given site in a system of size 100 (dashed line) and size 1000 (solid line); log10(P(frt)) versus log10(frt). The systems ran for 10^7 (100 agents) and 10^8 (1000 agents) time steps after an initial transient phase.

The hump in the distribution around t = 300 (dashed line) is an artifact of the periodic boundary condition, and its position gives the average drift velocity in the system. The drift caused by the asymmetry of the model makes the activity circle the system and return to a given site by doing a full circle around the system. What is interesting is the slope of the distribution before the "hump." This slope was measured to be -1.2 ± 0.1. This means that ∫ P(t) dt ∼ ∫ t^{-1.2} dt < ∞ (in the thermodynamic limit of N → ∞ and continuous time), i.e., a site that has been active will become active again with probability one; hence no site ever "dies." This is the first indication that we may be dealing with a critical system.

5.5 Distribution of profits


We now take a closer look at the profits, profit being the single variable characterising the individual agent and giving his position in the monetary hierarchy. Since the total amount of money is conserved in the system (see Eq. (5.13)) and not all agents achieve "break-even" (zero profit, s_n = 0), there will always be some agents having positive profit and some having negative profit. Figure 5.3 shows a typical snapshot of the spatial distribution of profits. A lower threshold is clearly seen in the distribution, below which only a few, spatially clustered, agents are found. Also, a delta-function in the distribution of profits is visible at s_n = 0; it appears as a horizontal line at zero profit. It is expected from the formulation of the model; see Appendix A for a separate discussion of these agents without profit.

Figure 5.3: Spatial distribution of profits (profit versus agent number) after 2×10^7 time steps, rescaled by exp(2.5092×10^-6 t); notice the delta function at zero profit. Dashed line: s_n = -0.0057.

We now focus on
the activity below threshold. One problem that has to be dealt with in this connection is the effect of the agents constantly lowering their prices. As a result of this lowering, the price-level of the economy, and with it the amplitude of the fluctuations in profit, will go to zero. Besides, we do not know how the threshold behaves in time; for all we know it could perform strange fluctuations in position or even disappear.
In Fig. 5.4 the distribution of profits for 2000 agents is shown, and again the threshold is seen very clearly, as is the delta-function at s_n = 0. Also shown is the distribution of the lowest profit in the system.
Figure 5.4: Distribution of profits, p(s), in the critical state (rightmost curve); P(profit) versus profit. System of 2000 agents. The s_n distribution is sampled at times 10^6 to 10^7, once every 10^6 time steps, then rescaled and plotted. The delta-function at zero profit is not shown in full; its apparent finite width is due to the binning of the data. Also shown is the distribution of the lowest profit, p_1(s) (leftmost curve).

5.5.1 Comments on the shape of p_1(s) near the threshold

We denote the distribution of the lowest profit in the economy by p_1(s). In Fig. 5.4 we see that this distribution seems to follow a straight line, for large values of s, until it vanishes above some threshold f_c. We now try, by analytical means, to obtain an understanding of what dynamical processes take place that can give the distribution p_1(s) this shape near the threshold f_c.
Neglecting correlations between profits s, we can write down the distribution of the lowest profit in the system,

    p_1(s) = N p(s) [ ∫_s^{s_max} p(s') ds' ]^{N-1} ,                             (5.21)

where p(s) is the distribution of profits, N is the number of agents in the system, and s_max is the (system size dependent) maximal profit. This can be rewritten as

    p_1(s) = N p(s) [ ∫_s^{f_c} p(s') ds' + ∫_{f_c}^{s_max} p(s') ds' ]^{N-1}
           = N p(s) [ ∫_s^{f_c} p(s') ds' + k_1 ]^{N-1} ,                          (5.22)

where k_1 = ∫_{f_c}^{s_max} p(s') ds' is constant. The observed affine dependence of p_1 on s can be explained if we set N = 2 to get

    p_1(s) = 2 p(s) [ ∫_s^{f_c} p(s') ds' + k_1 ] ∝ k_2 (f_c - s) + k_1 ,          (5.23)

where in the last step we assume that p(s) is constant (uniformly distributed) on the interval [s, f_c], with density k_2.
The interpretation is that near the threshold f_c, only two agents are involved in the activity: the loser and, e.g., his right neighbour.
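A quick numerical illustration of Eqs. (5.21) and (5.23): drawing independent profits from a uniform density below a threshold and histogramming the minimum reproduces the affine form for N = 2. This is a sketch under the independence assumption only, not a simulation of the model; the threshold and width used here are illustrative.

import numpy as np

rng = np.random.default_rng(1)

# N independent "profits", uniform on [f_c - w, f_c]; histogram their minimum.
f_c, w, N, samples = -0.0057, 0.01, 2, 100_000
lowest = np.min(rng.uniform(f_c - w, f_c, size=(samples, N)), axis=1)
hist, edges = np.histogram(lowest, bins=50, density=True)
s = 0.5 * (edges[:-1] + edges[1:])

# Eq. (5.21) with p(s) = 1/w gives p_1(s) = (N/w) * ((f_c - s)/w)**(N-1),
# i.e. the affine (straight-line) form of Eq. (5.23) for N = 2.
predicted = (N / w) * ((f_c - s) / w) ** (N - 1)
for k in (5, 25, 45):
    print(f"s = {s[k]:.5f}   measured {hist[k]:7.1f}   predicted {predicted[k]:7.1f}")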

5.6 Stationarity of threshold


To test for stationarity of the threshold, it is necessary to scale out the overall time trend, i.e., the drift to zero. An obvious and straightforward measure of this trend would be the root-mean-square deviation of the distributions. Unfortunately, closer examination of the distribution of profits reveals that the second (central) moment, ∫_{-∞}^{+∞} s² f(s) ds, does not exist, nor does the first. For this reason we take recourse to quartiles, using as rescaling factor the difference between the upper and lower quartile of the profits. Quartiles also facilitate a good test of scaling, since the ratio between the upper and lower quartile must remain constant if the distribution scales.

Table 5.1 shows the rescaling factor and the ratio {upper quartile}/{lower quartile} for a system of 2000 agents at times 0 to 11×10^6, measured every 10^6 time steps. The first entry shows that the initial profit distribution is symmetric. The average over the last 10 entries is ⟨s_uq/s_lq⟩ = -0.41, and the adjusted root-mean-square deviation is σ̃_n = sqrt(n/(n-1)) σ_n = 0.056. We see that three of the ten values differ by more than one, but less than two, standard deviations from the mean value, as expected for Gaussian fluctuations.

    time/10^6   rescale factor   s_uq/s_lq   Δ
    --------------------------------------------
     0          0.20             -1.00       -
     1          2.5×10^-4        -0.44       -0.02
     2          2.2×10^-5        -0.43       -0.01
     3          1.6×10^-6        -0.39        0.05
     4          1.3×10^-7        -0.41       -0.002
     5          1.1×10^-8        -0.47       -0.06
     6          8.3×10^-10       -0.36        0.05
     7          7.9×10^-11       -0.52       -0.11
     8          6.1×10^-12       -0.42       -0.005
     9          4.5×10^-13       -0.38        0.03
    10          3.6×10^-14       -0.33        0.08

Table 5.1: Column I: number of time steps before measurement. Column II: rescaling factor s_uq - s_lq, the difference between the upper and lower quartile. Column III: ratio between the upper and lower quartile. Column IV: deviations, Δ = s_uq/s_lq - ⟨s_uq/s_lq⟩.

5.6.1 An analytical estimate of the expected fluctuations in the ratio of quartiles s_uq/s_lq
Generally, if we have too few measurements of the ratio {upper quartile}/{lower quartile} to find a meaningful estimate of its variance by the above method, we can proceed as follows. First we write down the distribution of the upper quartile, p_q(s_uq) (the form of the expression is the same for the upper and the lower quartile),

    p_q(s_uq) = N \binom{N-1}{3N/4} P^{3N/4}(s_uq) Q^{N/4-1}(s_uq) p(s_uq) ,       (5.24)

where

    P(s) = ∫_{-∞}^{s} p(s') ds' ,   Q(s) = ∫_{s}^{∞} p(s') ds' ,   P + Q = 1 ,      (5.25)

and p(s) is the distribution of profits s_n. The distribution p_q is properly normalised, as seen by direct calculation:

    ∫_{-∞}^{∞} p_q(s) ds = N \binom{N-1}{3N/4} ∫_0^1 P^{3N/4} (1 - P)^{N/4-1} dP    (5.26)
                         = [ N! / ((3N/4)! (N/4 - 1)!) ] Γ(3N/4 + 1) Γ(N/4) / Γ(N + 1)   (5.27)
                         = 1 ,

where we first used P'(s) = p(s), then the relationship between the Beta function and the Γ-function, B(m, n) = ∫_0^1 t^{m-1} (1 - t)^{n-1} dt = Γ(m)Γ(n)/Γ(m + n), and finally that Γ(n + 1) = n!.


We notice that p_q(s) is stationary for

    (3N/4) (p/P) - (N/4 - 1) (p/Q) + p'/p = 0 ,                                   (5.28)

which to leading order in N gives

    [ 3/(4P) - 1/(4Q) ] p N = 0 ,                                                 (5.29)

i.e., P = 3/4 and Q = 1/4, as expected. We next write

    p_q(s_uq) = N \binom{N-1}{3N/4} exp[ (3N/4) ln P(s_uq) + (N/4 - 1) ln Q(s_uq) + ln p(s_uq) ] .   (5.30)

Keeping only terms of leading order in N in the exponential, we now expand p_q around the stationary point (the quartile), to second order in Δs:

    p_q(s_uq + Δs) ∝ exp[ (3N/4) ln( 3/4 + p(s_uq)Δs + (1/2!) p'(s_uq)(Δs)² )
                        + (N/4) ln( 1/4 - p(s_uq)Δs - (1/2!) p'(s_uq)(Δs)² ) ] .   (5.31)

Finally, expanding the logarithm to second order in Δs we find

    p_q(s_uq + Δs) ∝ exp[ -(8/3) N p²(s_uq) (Δs)² ]                                (5.32)
                   = exp[ -(Δs)² / (2σ²) ] ,                                       (5.33)

where σ = sqrt( 3 / (16 N p²(s_uq)) ), i.e., we have a Gaussian distribution for the measured position of the quartile, with a standard deviation σ that varies with the size of the system as N^{-1/2}.
We can then use σ as our best estimate of the variation in the position of a quartile, and use this to find the expected variation of the ratio z = s_uq/s_lq as

    σ̃(z) = sqrt[ (∂z/∂s_lq)² σ²(s_lq) + (∂z/∂s_uq)² σ²(s_uq) ]
          = (1/|s_lq|) sqrt[ (s_uq/s_lq)² σ²(s_lq) + σ²(s_uq) ] .                  (5.34)

This yields a value of σ̃(z) = 0.06(1), in good agreement with the value 0.056 found directly from the data above.
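As a small sketch of how Eqs. (5.33)-(5.34) can be evaluated, the function below takes the quartile positions and the values of the profit density at the quartiles as inputs; the numbers in the example call are purely illustrative assumptions, not the measured ones.

import numpy as np

def quartile_ratio_error(N, s_lq, s_uq, p_lq, p_uq):
    """Expected fluctuation of z = s_uq/s_lq from Eqs. (5.33)-(5.34).

    p_lq and p_uq are the values of the profit density p(s) at the two quartiles."""
    sigma_lq = np.sqrt(3.0 / (16.0 * N * p_lq ** 2))   # Eq. (5.33) at the lower quartile
    sigma_uq = np.sqrt(3.0 / (16.0 * N * p_uq ** 2))   # Eq. (5.33) at the upper quartile
    return (1.0 / abs(s_lq)) * np.sqrt((s_uq / s_lq) ** 2 * sigma_lq ** 2 + sigma_uq ** 2)

# Illustrative numbers only (N = 2000 agents, quartiles and densities of a plausible order):
print(quartile_ratio_error(N=2000, s_lq=-0.005, s_uq=0.002, p_lq=150.0, p_uq=250.0))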
The stationarity we have found in the ratio {upper quartile}/{lower quartile} shows that it is possible to obtain a stationary distribution of profits by measuring profits at a given time in units of s_uq - s_lq at that time, i.e., by a time dependent rescaling of the profits. However, the study we have just made, of fluctuations in a finite size system, shows that we should be careful when later we set a threshold f_0 and study the activity below it, because this activity is very sensitive to the precise value of the threshold.
The time dependent scaling factor can now be determined by fitting the time development of the value of s_uq - s_lq. We find an exponential decay with time constant 2.515×10^-6 ± 0.006×10^-6, i.e., a standard error of 0.2% (no figure this time; it is just a straight line in a log-linear plot and not terribly interesting).

5.6.2 Analytical determination of the time dependence of the scaling factor
It is possible to find a good estimate for the time dependence of the scaling factor by analytical means, i.e., to determine the time constant of the exponential decay of the fluctuations in profit. The time development of the amplitude of the fluctuations in profit follows that of the average price level in the system. We now determine the time development of the average price level in the system.
Every time step the loser's price p_loser is lowered by between 0 and 1%, on average by 0.5%. Furthermore, the loser usually has a price that lies above the average price [4]. As a good first approximation, assume that the loser only has to change his price once to equilibrate with his surroundings, i.e., to obtain a price of the same size as the average price level of the system, and in that way increase his profit enough not to be the loser any more:

    ⟨p⟩ = (1 - 0.005) p_loser ,                                                   (5.35)

where ⟨·⟩ denotes the system average. The time development of the mean price then reads

    d⟨p⟩/dt = -(0.005/N) p_loser = -0.005 ⟨p⟩ / [ N (1 - 0.005) ] ,                (5.36)

where N is the system size, hence

    ⟨p(t)⟩ ∝ exp(-kt) ,                                                           (5.37)

[4] The role of "loser" circles the system, and as the loser always lowers his price, the average price of the system goes to zero. When a new loser is found, his price has not been changed for a certain period of time, in which the average price has decreased, i.e., the new loser has a price that lies above the average. If all the prices in the system are the same, we have the spatially homogeneous solution where all profits are zero, s_n = 0 for n = 1, ..., N.

where

    k = 0.005 / [ N (1 - 0.005) ] = 2.513×10^-6   (for N = 2000) .                (5.38)

This is in good agreement with the value 2.515(6)×10^-6 obtained by fitting the time development of the quartile scaling factor.
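The leading-order estimate Eq. (5.38) is a one-line computation; a minimal check:

# Eq. (5.38): decay constant of the average price level for N = 2000 agents.
N = 2000
k = 0.005 / (N * (1 - 0.005))
print(f"k = {k:.4e}")   # prints k = 2.5126e-06, cf. the fitted 2.515(6)e-06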
Figure 5.5: Probability that an agent is the loser m consecutive times; log10(P(m)) versus m. 2000 agents, 10^7 time steps sampled after the initial transient. The dashed line is only to guide the eye. The decay is faster than exponential.

A better approximation is obtained by first examining how many times the loser actually has to change his price to obtain a profit that is not the smallest in the economy. This is shown in Fig. 5.5, where the probability of being the loser m consecutive times is plotted on a log-linear scale as a function of m. With the data from this plot taken into account, we get the expression
    ⟨p⟩ = ( 1 - 0.005 × [ 5.5 + (1 - 0.005)·2·1.6 + ... ] / [ 5.5 + 1.6 + ... ] ) p_loser ,   (5.39)

yielding a value for the time constant k = 2.5092×10^-6.
We examined the stationarity of the rescaled distribution by reading off the thresholds in plots of the profit distribution like the one shown in Fig. 5.4. By using the value k = 2.5092×10^-6 in the rescaling of the distributions we obtained stationarity of the threshold.

5.7 Avalanches
Having defined a rescaled threshold that remains constant during the time development of the economy, we next consider the activity below this threshold. Setting a threshold f_0, slightly below the true threshold f_c, we measure the duration of activity below this threshold f_0, and refer to this duration as the avalanche size, S. Since it is always the agent with the lowest profit that changes his price and causes activity, it is sufficient to monitor the value of this profit and check whether it is above or below f_0. When s_loser > f_0, there is no active avalanche by our definition of avalanches. Figure 5.6 shows a log-log plot of the avalanche size distribution for a system of 2000 agents. Measurements were made during 10^7 time steps after discarding the first 10^6 time steps, taking the system through an initial transient to the state in which measurements were made. We see a clear power law, P(S) ∝ S^{-1.48±0.02}. This is the second indication of criticality.
The value of the exponent τ = 1.48 of the power law for the avalanche size distribution is so close to 3/2 that it begs for an explanation, 3/2 being the exponent of the distribution of first-return-times for an unbiased random walker; see App. B.
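Given the time series of the (rescaled) lowest profit, the avalanche sizes defined above reduce to run lengths below the threshold; a sketch of that bookkeeping (the input array here is hypothetical, standing in for the rescaled lowest-profit series):

import numpy as np

def avalanche_sizes(s_loser, f0=-0.0057):
    """Durations of consecutive updates with the (rescaled) lowest profit below f0."""
    below = np.asarray(s_loser) < f0
    sizes, run = [], 0
    for b in below:
        if b:
            run += 1                 # the avalanche continues
        elif run:
            sizes.append(run)        # an avalanche just ended
            run = 0
    if run:                          # avalanche still open at the end of the series
        sizes.append(run)
    return np.array(sizes)

# Example with synthetic data only:
rng = np.random.default_rng(2)
demo = -0.006 + 0.001 * rng.standard_normal(100_000)
S = avalanche_sizes(demo)
print("avalanches:", len(S), " largest:", S.max())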

Figure 5.6: Distribution of avalanche sizes in the critical state; log10(P(S)) versus log10(S). The size of an avalanche is the number of subsequent system updates with (rescaled) profits less than f_0 = -0.0057. 2000 agents, 10^7 time steps. Data have been "coarse grained"; see App. F.

5.7.1 Rejection of simple random-walk explanation


To find out whether the avalanche distribution can be explained in terms of a simple random walk in one dimension, we examine the distribution of avalanche sizes as a function of the threshold value. For a random walker we expect a clean power law with decreasing amplitude as we move away (down) from the threshold f_c, since the first-return time can be defined for any level of the threshold. Figure 5.7 shows the avalanche size distribution for avalanches defined relative to different thresholds.
In Fig. 5.7(a) the critical threshold f_c is approached from above. The further above f_c we are, the higher is the probability of encountering an infinite-size avalanche. In a system of finite size, however, we do not expect this to happen, as the critical threshold of the system will undergo fluctuations, eventually terminating any avalanche. In the thermodynamic limit of N → ∞, we do not expect to see the three lower graphs in Fig. 5.7(a).
In Fig. 5.7(b) we show the distribution of avalanches when the critical threshold f_c is approached from below. When f_0 < f_c there are fewer large avalanches, suggesting a distribution function of a form well known from percolation theory [30], where the Fisher exponent τ now plays the role of the avalanche size distribution exponent,

    P(S) = S^{-τ} g( S (f_c - f_0)^{1/σ} ) ,                                      (5.40)

where g(x) is a scaling function with the properties g(x) → 0 for x → ∞ and g(x) → g(0) for x → 0, and σ is the avalanche cutoff exponent [26]. Figure 5.7(b) shows that we are not studying a simple unbiased random walker, since the avalanche size distribution does not maintain its shape when we move f_0 away from the critical threshold f_c. The shape would remain unchanged if we were studying the distribution of first-return-times for an unbiased random walker [5].
[5] We have not ruled out (or examined) the possibility that the system is adequately described by a random walker in a potential. If this is the case, information about the potential can be obtained from the observed dependence of the avalanche size distribution on the position of the threshold f_0.


Figure 5.7: Distribution of avalanche sizes in the self-organized critical state; log10(P(S)) versus log10(S). Avalanches were defined relative to the thresholds: (a) (from above) f_0 = -0.0057, -0.0053, -0.0052, and -0.0050, and (b) (from right to left) f_0 = -0.0057, -0.0061, -0.0074, and -0.0098. System of 2000 agents, measured over 10^7 time steps. Data have been coarse grained.
5.7.2 The Standard & Poor's 500-Stock Index
We now examine the distribution of jump sizes, P(d), for the value of the lowest profit, d = s_{t+1} - s_t. In Fig. 5.8 this distribution, for a system of size 2000, is shown on a double logarithmic scale.



Figure 5.8: Double logarithmic plot of the distribution of jumps in the minimum value of the profit, d = s_{loser,t+1} - s_{loser,t}. The distribution with the largest probability for small jumps represents jumps to higher profits; the other distribution is that of jumps to smaller profits. System of size 2000, 10^6 updates.
The distribution is clearly asymmetrical and dominated by small jumps to higher profits. From this it appears that the jump lengths are distributed according to a power law for small jumps, and have a sharp cut-off for long jumps. Thus it seems we are looking at truncated Levy flights [29]. The mean value of the jumps is zero, however.
Figure 5.9(a) is a log-linear plot of the data also shown in Fig. 5.8, made symmetric by mirroring all points in d = 0. As an interesting, or maybe just amusing, detail we show the distribution of jumps performed by the Standard & Poor's 500-Stock Index (a price index of the New York Stock Exchange) in Fig. 5.9(b). The jump size is taken as the difference between successive values y of the S&P 500 index, Z(t) = y(t) - y(t + Δt), where Δt = 1 min. The solid line is a fit of the Levy stable symmetrical distribution (see App. C),

    L_α(Z; Δt) = (1/π) ∫_0^∞ exp(-γ Δt k^α) cos(kZ) dk ,                          (5.41)

to the data, where α = 1.40 ± 0.025 and γ = 0.00375.


Figure 5.9: (a) Log-linear plot of the distribution of temporal jumps in the minimum value of the profit, d = s_{loser,t+1} - s_{loser,t}. All data points are plotted in d and -d. The upper (at d = 0) curve represents jumps to higher profits, the lower to smaller profits. System of size 2000, 10^6 updates. (b) Probability distribution for temporal variations in the Standard & Poor's 500-Stock Index. The full line is a fit to the Levy stable symmetric distribution. The dotted line is a Gaussian distribution with its standard deviation set equal to the experimental value of 0.0508. Reproduced from [22].

While (a) and (b) in Fig. 5.9 have some obvious common traits, including "shoulders" and cut-offs, we shall not push the analogy, as we have made no further data-comparison.

5.8 Spatial correlations


To get a measure of the state of the system that is independent of any theory about thresholds, avalanches, and rescaling factors, we now study the spatial correlations in the system [6]. To find out if there is any correlation in the system on length scales larger than the explicit, model dependent neighbour interaction, viz. 1-2 sites to the right and left, we plot the distribution of jump lengths for the activity in the system. Spatial correlations can build up even though the interactions are purely local, as is well known from, e.g., the Ising model having finite correlation lengths away from the critical point, T ≠ T_c.
Since our system has an obvious spatial bias, the distribution of jumps in activity is examined separately for jumps to the left and to the right. What is seen in the double logarithmic plots, Fig. 5.10, is that the distributions asymptotically follow straight lines, indicating power-law distributions P(x) ∝ x^{-1.844±0.002} for jumps to the right and P(x) ∝ x^{-2.021±0.002} for jumps to the left [7].
The flattening out of the curves can be explained partly by the effect of using a finite system and running it a finite time, and partly by the presence of "avalanche starters." When an avalanche ends and all agents are above the threshold, the update rule still tells us to pick the agent with the lowest profit and give him a new price. This agent will then obtain a higher profit, but in the process he is likely to push down one of his neighbours, starting a new avalanche. However, the spatial distribution of these "avalanche starters" is approximately uniform, causing a constant contribution to the distribution of jump lengths; see Fig. 5.11. This background is small and is only visible where P(x) is small, i.e., for large values of x.
[6] Finite size effects can of course influence the spatial correlations; see App. D for a separate discussion of this subject.
[7] The same exponents were retrieved when the agents decreased their price by 0-1% and by 0-10%, as well as for different system sizes, so the result seems robust. For larger systems the power law is followed up to larger values of the jump length and the flattening sets in later. The reason why we have not included results from larger systems in this plot is the difficulty of obtaining really good data: the approach to the "steady state" shows critical slowing down, so if the transient is over after T_N time steps for a system of size N, a larger system of size N' will only be correlated on length scales up to order N after T_N time steps.


Figure 5.10: Distribution of spatial separations between successive losers; log10(P(x)) versus log10(x). Economy with 2000 agents, lowering of the loser's price by up to 1% (high plateau) and 0.1% (low plateau), 10^8 updates. (a): Jumps to the right, exponent π_right = 1.844(2). (b): Jumps to the left, exponent π_left = 2.021(2).

Figure 5.11: Scatter plot. Ordinate: the profit that made the agent a loser. Abscissa: the spatial distance to the previous loser. The dashed line is at the level of the critical threshold value of the profit; points above this line represent "avalanche starters." Economy of 1000 agents, 5×10^4 time steps (after the initial transient is over).

Another effect stems from the overall drift of the profits to zero, following the price level as discussed in Sec. 5.6.2. The agents involved in an avalanche are constantly lowering their prices and hence lowering the amplitude of their profits. If the duration of the avalanche is long, the agents involved in it will end up having smaller absolute profits than agents some other place in the
system that has not been involved in activity for a long time. Thus a loser can appear far away from the avalanche, without all the agents from the avalanche being above threshold, i.e., without the avalanche having terminated yet. This will cause the activity to jump from the active avalanche to some other site in the system. The effect is extremely small, however, as the drift is very slow. In the thermodynamic limit of N → ∞ this effect disappears, and even for N = 2000 it can be ignored. The effect should be visible as an increased probability of observing big avalanches, since new avalanches are started before the old one is over. No such effect is visible in Fig. 5.6, so this is indeed a negligible contribution to the flattening.
The values of the exponents, π_right = 1.844(2) and π_left = 2.021(2), were obtained by fitting the total distribution to two power laws (jumps left and right) and a constant (avalanche starters),

    P(x) = A x^{-π_right} + B (N - x)^{-π_left} + C ,                             (5.42)

i.e., we have only two and a half fitting parameters for jumps to each side. The uncertainties on the exponents π_right and π_left are the values of the variance returned by the fitting procedure. However, the precision implied by the value of these variances should not be taken too literally, and an exponent of π_left = 2 cannot be ruled out; see Appendix E. The obtained fit is shown (in the range over which the fit was performed) in Fig. 5.12 where, for clarity of presentation, the data have been "coarse grained" and plotted with error-bars; see Appendix F.
In popular terms, the system seems to have lost its locality and instead has developed correlations on all length scales, even though the interactions between agents are purely local in nature. Everybody is connected to everybody and nobody can hide.
The connection between agents implies that the economic well-being of an agent does not just depend on himself, but to a high degree on how his neighbours are behaving. An agent has to position himself carefully with regard to his surroundings, and even then his neighbours may change their production or consumption plans in a way that forces the agent to adapt to this new economic landscape.
We take the power law distribution of spatial separations between successive active sites as the decisive indicator that the system is critical. The system is also self-organised, as we have performed no fine tuning of parameters. Comparing with the definition proposed in [12] and quoted in the discussion of SOC, we see that all the points are satisfied except
(5) the medium is modified no more,
since our system never reaches a steady state. But as we have just shown, the system is self-organized critical nevertheless, so the definition of SOC can now be reduced to 4 points, i.e., the class of systems expected to exhibit SOC can be extended to include asymptotic systems.
The asymptotic behaviour can of course be scaled away so as to regain (5) and facilitate the definition and measurement of avalanches. Avalanche size distributions are strange things, though, as they are basically artifacts of a finite system size.


Figure 5.12: Distribution of the spatial separation between successive losers; log10(P(x)) versus log10(x). Economy with 2000 agents, lowering of the loser's price by up to 0.1%, 10^8 updates. Error-bars are SEM (standard error on the mean). The full line is a single fit of Eq. (5.42) to the data shown in (a) and (b), with backing P_{n'}(> χ²) = 70%. (a): Jumps to the right, exponent π_right = 1.844 ± 0.002. (b): Jumps to the left, exponent π_left = 2.021 ± 0.002.

In an infinite system, the probability of encountering an avalanche of a size (duration) bigger than the duration of our total measurement is finite, since we have a power law distribution for the sizes that yields an infinite value for the average avalanche size. For this reason any measured avalanche size distribution depends not only on where the threshold is placed, but also on the finiteness of the system. The same problem carries over to the determination of first return times, which likewise become meaningless in the infinite system. Free of these troubles, as a measure of criticality, is the distribution of jump lengths.

5.9 Two words about prices


For completeness, we include in Fig. 5.13 a snapshot of the prices and profits in a system of 2000 agents.

Figure 5.13: Snapshot of the spatial distribution of profits s_n (dots), and at the same instant the spatial distribution of prices p_n (dotted line), for 2000 agents, 2×10^5 time steps after the start.

In Fig. 5.13 we see that the agents with the
lowest profits are not the agents with the highest or even the lowest prices. It is clear from this that the profit of an agent depends only on his price level compared to his neighbours', not on the level of the price measured on some global scale. Comparing to the Bak-Sneppen model, we use, not one,
but two variables: the profit and the price. The profits are used to find the overall loser in the system, but we do not adjust the profit directly. Rather, the system is driven by the losing agent's adjustment of his price, though he is defined by his (lack of) profit. Thus it would seem that it does not matter how a system is driven: as long as an extremal site is chosen and adjusted in some way, the system will eventually exhibit a threshold in the variable used to rank the agents.
Comparing to the first part of this thesis, we see that the steady state is not a spatially homogeneous one with equal prices and production. On the contrary, there are constantly some agents that are frustrated, and in their attempts to improve their situation they will influence their neighbours and eventually "pass on" the frustration.

5.10 On two-dimensional models


We briefly considered a two-dimensional version of this model, where agents are arranged on a square lattice with four neighbours each. An agent produces some good that he tries to sell to his two neighbours to his east and west (north and south), and he consumes some good that he will try to buy from his two other neighbours to his north and south (east and west). The system is driven in the same manner as the one-dimensional version. Preliminary results suggest the possibility of the establishment of domains which the activity never enters. The interpretation is that such a domain is big enough to sustain itself, in the sense that the agents on the border of the domain do not have enough interaction with their neighbours outside the domain for these to affect them sufficiently to make them losers, and hence active. Analogies to real economies, ranging from small societies to international economic unions, are easy to come up with.

5.11 Summary
We have shown that our model economy evolves to a critical state when driven by extremal dynamics. This happened without fine-tuning of parameters, i.e., the system is self-organised. The system is asymptotic, but we rescaled it to a (statistically) stationary state in which the definition of avalanches is possible. After rescaling we found a power law for the distribution of avalanche sizes with exponent τ = 1.48 ± 0.02, and the simple random walk was ruled out as a possible explanation. The distribution of first return times of activity to a given site was measured and found, asymptotically, to follow a power law with exponent 1.2 ± 0.1; hence no site ever "dies". Finally, we measured the distribution of spatial separations of consecutive activity in the system and found two power laws (left and right) with exponents π_right = 1.844 ± 0.002 and π_left = 2.021 ± 0.002, with a 70% backing of the fit. However, π_left = 2 cannot be ruled out, as explained in App. E. Although initially inspired by models of economic systems, the discussion in this chapter eventually turned out to be of a more general, physics oriented, character.
In short, we have found that our system, though not stationary, is self-organized critical. Thus we have demonstrated by example that SOC can be a useful concept also in the analysis of systems with no stationary attractor state for their dynamics.
Chapter 6
Conclusion
Starting from the same basic ingredient as used in classical economic equilibrium theory, the utility function, we have found two fundamentally differently behaving systems. In the first system we allow single agents, each with their own utility function, to perform rather elaborate guesses about the future. These guesses are necessary as the agents have only local knowledge, i.e., all they know is the utility function of their neighbour from the previously performed transaction. In this situation the system dynamics can be described by the diffusion equation, hence all fluctuations will eventually die out and a steady state will be reached. However, we are able to treat the nature of this relaxation to the equilibrium state, as well as the response of the system to perturbations and to noise-induced fluctuations around the equilibrium.
In the second system, although starting from exactly the same utility function, the different strategy employed by the agents leads to an entirely new scenario. By letting the agent with the lowest (most negative) profit change his production and consumption plans, the entire economy enters an asymptote and self-organises into a critical state. In this situation no equilibrium is reached, although rescaling to a statistically stationary state is possible. Thus we have shown that Self-Organized Criticality can be observed in asymptotic systems, whereas SOC so far has only been observed (and looked for) in stationary systems.
This kind of work would presumably have been carried out at the time of the formulation of the equilibrium theory of economics, if computers had been available.

Appendix A
How to increase the utility
There is always a large group of agents in the economy (some 20%) who satisfy the constraint of zero profit, Eq. (5.2). For these agents it holds true that

    ∂s_n/∂p_n = 0 ,                                                               (A.1)

since s_n for these agents is given by

    s_n = p_n q_n^(prod.) - p_{n+1} q_{n+1}^(cons.)
        = p_n (p_n/p_{n+1})^{1/3} - p_{n+1} (p_n/p_{n+1})^{4/3}                   (A.2)
        = 0 .                                                                     (A.3)
Furthermore,

    ∂u_n/∂p_n = (p_n/p_{n+1})^{1/3} > 0 .                                         (A.4)
So such agents can enlarge their utility, without violating the constraint of zero profit, by simply increasing their prices. Of course this conclusion is derived for infinitesimal changes in prices, and the agents will always perform finite changes. Furthermore, the agents do not know over how wide a range these relations really hold, since this depends on the prices of next-nearest neighbours. So, how much they increase their price, and hence production and consumption, depends on how "bold" they are.


This aspect of the model (agents increasing their prices) has not been thoroughly examined, but preliminary results show that the time development of the mean price of the system depends on the rate of increase of these 'zero-profiters.' The number of these agents will also be affected and settle at some lower level than in the basic model. While these parameters are fun to play with, they also complicate the model.
Figures A.1, A.2, and A.3 show the result of simulating an economy with 200 agents, where agents satisfying the monetary constraint s_n = 0 increase their price by 0-1% (uniformly distributed). Figure A.1 shows that the system-averaged price seems to approach a constant value. Figure A.2 shows that the number of agents achieving s_n = 0 declines rapidly and settles at a mean value of 1.0023(1) (average over 2 million time steps); this explains why the average price is approximately constant. There is essentially only one agent lowering his (higher than average) price by approximately 0.5% per time step, and on average a little more than one agent raising his (lower than average) price by approximately 0.5% per time step. Figure A.3 shows that the distribution of profits has a threshold which is constant in time.
To demonstrate that this modification of the model (letting 'zero-profiters' increase their prices) is sensitive to how much the agents increase their prices, Figs. A.4 and A.5 show some features of a system where the increase in price by the 'zero-profiters' is now between 0 and 10%. Figure A.4 shows another apparently steady state, while Fig. A.5 reveals that the activity of the economy is localised to approximately 40 agents. The average number of agents with s_n = 0 is down to 0.104(1) in this example. Threshold behaviour is still observable, as in Fig. A.3 (not shown), only now the losers are confined to a small region.


Figure A.1: Average price ⟨p⟩ versus time, for an economy of 200 agents. Agents having s_n = 0 increase their price by 0-1%, the increase drawn from a uniform distribution on [0, 1]%.


Figure A.2: Number of agents having s_n = 0 in an economy of 200 agents, versus time. Full line: agents having s_n = 0 increase their price; the asymptotic mean number of 'zero-profiters' is 1.0023. Broken line: agents having s_n = 0 do not increase their price.


Figure A.3: Ten snapshots of the spatial dependence of the profits s_n for an economy of 200 agents. The snapshots are separated in time by 2×10^5 time steps.


Figure A.4: Average price ⟨p⟩ versus time, for an economy of 200 agents. Agents having s_n = 0 increase their price by 0-10%, drawn from a uniform distribution on [0, 10]%.


Figure A.5: The position of the losing agent plotted against time, for an economy of 200 agents. Agents having s_n = 0 increase their price by 0-10%, drawn from a uniform distribution on [0, 10]%.
Appendix B
First-return-times for a random
walker
In this appendix, for completeness, we derive the exponent of the distribution of first-return-times for an unbiased random walker. Consider the continuous version of a one-dimensional random walker. The "density of walkers" ρ(x, t) then obeys the diffusion equation

    ∂_t ρ(x, t) = D ∂_x² ρ(x, t) ,                                                (B.1)

with normalised solution

    ρ(x, t) = (4πDt)^{-1/2} exp( -x²/(4Dt) )                                      (B.2)

for the delta-function initial condition ρ(x, t=0) = δ(x), where D is the diffusion constant and ∂_i denotes differentiation with respect to the variable i = x, t. To find the probability distribution of the first-return-time, we now consider a set-up with an absorbing wall at x = 0. This boundary condition is imposed by the method of mirrors: loosely speaking, we start another group of random walkers with "opposite charge" just to the left of the first group, i.e., the initial condition is ρ(x, t=0) = δ(x - ε) - δ(x + ε). The solution to the diffusion equation with this initial condition is

    ρ(x, t) ∝ t^{-1/2} [ exp( -(x - ε)²/(4Dt) ) - exp( -(x + ε)²/(4Dt) ) ] .       (B.3)

Letting ε → 0 we obtain

    ρ(x, t) ∝ t^{-1/2} ∂_x exp( -x²/(4Dt) )                                       (B.4)

for the solution, and ρ(x, t=0) = ∂_x δ(x) for the initial condition. The number of random walkers is of course no longer conserved, as we have introduced an absorbing wall at x = 0.
A fast and precise way to arrive at this result is to realise that if ρ(x, t) is a solution to Eq. (B.1) with initial condition ρ(x, t=0) = ρ_0(x), then an equally valid solution is ∂_x ρ(x, t), with initial condition ∂_x ρ_0(x). This follows from the differential equation Eq. (B.1) by applying ∂_x on both sides and using that ∂_x and ∂_t commute.
The flux across x = 0 measures the number of random walkers arriving at x = 0 from x > 0, and is hence proportional to the probability distribution of first-return-times. Since the flux at x = 0 is proportional to the gradient at x = 0 (Fick's law, j = -D ∂_x ρ), we find for the first-return times t_fr

    P(t_fr) ∝ ∂_x ρ(x, t)|_{x=0} ∝ t^{-1/2} ∂_x² exp( -x²/(4Dt) )|_{x=0} ∝ t^{-3/2} .   (B.5)

Any bias or drift will of course change the exponent of the first-return-time probability distribution.
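The t^{-3/2} law of Eq. (B.5) is easy to confirm in a direct simulation of an unbiased ±1 walker; a minimal sketch (censoring walks that have not returned after t_max steps):

import numpy as np

rng = np.random.default_rng(3)

def first_return_times(n_walkers=20_000, t_max=10_000):
    """First-return times to the origin of an unbiased +/-1 random walk."""
    times = []
    for _ in range(n_walkers):
        x, t = 0, 0
        while True:
            x += 1 if rng.random() < 0.5 else -1
            t += 1
            if x == 0 or t >= t_max:
                break
        if x == 0:
            times.append(t)
    return np.array(times)

t_fr = first_return_times()
# The tail P(t_fr >= T) should fall off roughly as T**(-1/2),
# consistent with P(t_fr) ~ t**(-3/2) in Eq. (B.5).
for T in (10, 100, 1000):
    print(T, np.mean(t_fr >= T))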
It seems fitting here to remark that random walks (Brownian motion) were first described in 1900 by the economist Louis Bachelier, who wanted to understand the fluctuations of the financial market. It turns out that these fluctuations cannot be described adequately by a random walk; for an amusing non-physicist account see [20], for a more scientific exposition see [21]. Later, in 1905, Einstein of course got the mathematical description of Brownian motion right.
Appendix C
A brief introduction to Levy
Flights
Consider the identically distributed random variables X_i, and think of the value of each variable X_i as a step in a random walk. Levy asked when the distribution g_n(x) of the sum, x, of n steps is the same as the distribution, g(x), of any term in the sum. One well known answer is that the sum of Gaussians is a Gaussian. The Gaussian distribution is given by

    g(x) = (2π)^{-1/2} exp( -x²/2 )                                               (C.1)

and the distribution of the sum ∑_i X_i is then given by

    g_n(x) = (2πn)^{-1/2} exp( -x²/(2n) ) .                                       (C.2)

Its Fourier transform is

    ĝ_n(k) = ∫_{-∞}^{∞} g_n(x) exp(ikx) dx = exp( -n k²/2 ) .                     (C.3)

If we generally choose

    p̂_n(k) = exp( -γ n |k|^α ) ,   0 < α ≤ 2 ,                                    (C.4)

where γ is some constant, then p_n(x) and p(x) have the same distribution (when properly rescaled). For α = 2 we have the Gaussian distribution.

The second moment of p_n(x) is easily found as

    ⟨x²⟩ = - ∂²p̂_n(k)/∂k² |_{k=0} ,                                               (C.5)

which for the Gaussian yields n, but for α < 2 is infinite.
Denoting the inverse Fourier transform of Eq. (C.4) by L_α(x; n), we find

    L_α(x; n) = (1/π) ∫_0^∞ exp( -γ n k^α ) cos(kx) dk  ∼  γ n / x^{1+α} ;         (C.6)

the power-law tail of the distribution signals the lack of a characteristic scale in the system.
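The inverse transform in Eq. (C.6), and hence the fit function of Eq. (5.41), can be evaluated by direct numerical quadrature; a sketch using the parameter values quoted from [22] (α = 1.40, γ = 0.00375):

import numpy as np

def levy_symmetric(x, alpha=1.40, gamma=0.00375, n=1.0, k_max=2000.0, dk=0.01):
    """L_alpha(x; n) = (1/pi) * integral_0^inf exp(-gamma*n*k**alpha) * cos(k*x) dk,
    evaluated with a midpoint rule (the integrand is negligible beyond k_max here)."""
    k = np.arange(dk / 2.0, k_max, dk)
    envelope = np.exp(-gamma * n * k ** alpha)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.array([np.sum(envelope * np.cos(k * xi)) * dk / np.pi for xi in x])

# Central values of the distribution for a few arguments:
print(levy_symmetric([0.0, 0.01, 0.05, 0.2]))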
Appendix D
Finite size effects
We looked for, but did not find, any indications of finite size effects in the distribution of the spatial separation between successive active sites in our system. To find out if we have any finite size effects in the system, we proceed as follows. Take two systems of different sizes N and N'. Make the scaling hypothesis for the distribution of jump lengths,

    P(x; N) ≈ x^{-π} g(x; N) ,                                                    (D.1)

where

    g(x; N) = g(x/N) ,                                                            (D.2)

i.e., we assume that the scaling function depends only on the relative length of a jump. Looking at the ratio between the distributions for two different system sizes N and N', we find

    P(x; N) / P(x'; N') = (x/x')^{-π} g(x/N) / g(x'/N') .                         (D.3)

Studying this ratio for spatial separations of equal absolute size, x = x', we find

    P(x; N) / P(x; N') = g(x/N) / g(x/N') .                                       (D.4)

If finite size effects are present we will find P/P' ≠ 1. We found P/P' = 1 within error-bars, i.e., no finite size effects.


Equation (D.3) provides us with an alternative method for determining the value of the exponent π. If we confine our study to those data points satisfying x/N = x'/N', we have

    P(x; N) / P(x'; N') = (N/N')^{-π} .                                           (D.5)

Knowing the ratio N/N', the value of π can then be determined directly. This method holds even if we have finite size effects in the system, provided our scaling hypothesis is correct. We determined one of the exponents, π_right = 1.86, in this manner for N = 1000 and N' = 2000. This value should be compared with π_right = 1.844, obtained from direct curve fitting of the distribution of the spatial separation between successive losers; see App. E.
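In code, the ratio method of Eq. (D.5) is a one-liner once the two measured distributions are available; the distributions passed in below are hypothetical stand-ins (a pure power law with no finite-size cut-off), for which the method must return the input exponent exactly.

import numpy as np

def exponent_from_ratio(P_N, P_Nprime, N, Nprime, x_over_N):
    """Estimate the jump-length exponent from Eq. (D.5),
    comparing points with the same relative separation x/N = x'/N'."""
    ratio = P_N(x_over_N * N) / P_Nprime(x_over_N * Nprime)
    return -np.log(ratio) / np.log(N / Nprime)

# Stand-in distributions: a pure power law x**(-1.86) for both system sizes.
pi_true = 1.86
estimate = exponent_from_ratio(lambda x: x ** -pi_true, lambda x: x ** -pi_true,
                               N=1000, Nprime=2000, x_over_N=0.1)
print(estimate)   # recovers 1.86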
Appendix E
Data fitting
We used the NAG routine nag_nlin_lsq to fit the data for the distribution of the spatial separation between successive losers. The routine performs a nonlinear least-squares fit, i.e., it minimises the 'objective function'

    F = ∑_{i=1}^{m} [f(x_i)]² ,                                                   (E.1)

with 'residuals' f(x_i) given by

    f(x_i) = [ p_i - P(x_i) ] / σ_i ,                                             (E.2)

where P_i is the expectation value of p_i and (σ_i)^{-1} is the weight of term i. Here p_i is the experimentally determined probability of jumping the distance x_i (x_i is a discrete variable, as we are on a lattice), and P_i is the value of the function P(x) we try to fit, evaluated at the point x = x_i. We have

    P(x) = A x^{-π_right} + B (N - x)^{-π_left} + C ,                             (E.3)

where N is the system size and there are five fitting parameters, two and a half for each side of the distribution (left and right). For σ_i we use sqrt(P(x_i)) (assuming a Gaussian distribution for the measured values of p_i for each i, by invocation of the central limit theorem). That is, the weights are determined by an iterative process: some initial estimate of the values of the five fitting parameters is entered, the fit is performed, the new set of fitting parameters is plugged into the expression for σ_i, and so on.
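The same fit can be reproduced outside the NAG library; below is a sketch using scipy.optimize.least_squares with the iterative weights σ_i = sqrt(P(x_i)) described above. The data are synthetic (Poisson counts drawn from Eq. (E.3) with parameters of the same order as the fitted ones), since the thesis data are not reproduced here.

import numpy as np
from scipy.optimize import least_squares

N = 2000
x = np.arange(31, N - 30, dtype=float)      # ignore the 30 shortest jumps on each side

def model(q, x):
    A, pi_r, B, pi_l, C = q
    return A * x ** -pi_r + B * (N - x) ** -pi_l + C      # Eq. (E.3)

# Synthetic "measured" counts for illustration only:
rng = np.random.default_rng(4)
true = (8.6e6, 1.844, 2.8e7, 2.021, 3.3)
counts = rng.poisson(model(true, x)).astype(float)

params = np.array([1.0e6, 1.8, 1.0e7, 2.0, 1.0])          # initial estimate
for _ in range(5):                                        # iterate the weights
    sigma = np.sqrt(np.maximum(model(params, x), 1e-12))  # sigma_i = sqrt(P(x_i))
    fit = least_squares(lambda q: (counts - model(q, x)) / sigma, params,
                        x_scale=[1e6, 1.0, 1e7, 1.0, 1.0])
    params = fit.x
print(params)   # A, pi_right, B, pi_left, C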

The best fit for a system of 2000 agents and δ ∈ [0, 0.001] (p → (1 - δ)p for the loser) was obtained by ignoring the first 30 data points from each side (the shortest jump lengths), and returned the following result (non-normalised data):
Parameters
----------

number of residuals (m) 1940 number of variables (n) 5

list............... .true. print_level........ 1


lin_deriv.......... .true. linesearch_tol..... 9.00E-01
step_max........... 1.00E+05 optim_tol.......... 1.49E-08

deriv.............. .true. verify............. .true.


max_iter........... 100 unit............... 6

Verification of the Jacobian matrix


-----------------------------------

The Jacobian matrix seems to be ok.

Final Result
------------

x g Singular values
8.58585E+06 -4.9E-10 3.9E+03
1.84358E+00 2.1E-08 3.3E+03
2.76124E+07 -5.5E-09 3.7E+00
2.02071E+00 8.4E-08 1.3E-05
3.34153E+00 6.6E-11 4.5E-06

exit from nag_nlin_lsq after 99 iterations.

final objective value = 1902.345



final residual norm = 4.4E+01

final gradient norm = 8.7E-08

Variance-Covariance Matrix C:

6.5863E+09 1.7984E+02 3.8066E+09 3.5388E+01 2.0375E+04


1.7984E+02 5.0231E-06 1.1405E+02 1.0600E-06 6.0103E-04
3.8066E+09 1.1405E+02 4.8835E+10 4.1774E+02 5.0836E+04
3.5388E+01 1.0600E-06 4.1774E+02 3.6492E-06 4.6705E-04
2.0375E+04 6.0103E-04 5.0836E+04 4.6705E-04 1.6681E-01

The backing of this fit is evaluated as

    P_{n'}(> χ²) = [ 2^{-(n'-2)/2} / (n'/2 - 1)! ] ∫_χ^∞ λ^{n'-1} e^{-λ²/2} dλ
                 = [ 1 / Γ(n'/2) ] ∫_{χ²/2}^∞ λ^{n'/2-1} e^{-λ} dλ ,               (E.4)

where n' = n - l is the number of degrees of freedom of the fit (the number of data points minus the number of constraints/fitting parameters) and χ² is the sum of the squares of the residuals. This is just the incomplete gamma function, and we evaluate it by calling another NAG routine (nag_incompl_gamma) to find P_1935(> 1902) = 70%. If more data points are kept, the backing is smaller, e.g., P_1955(> 2013) = 18%.
This fit tells us that we have a distribution of jump lengths with the two exponents π_right = 1.844 ± 0.002 and π_left = 2.021 ± 0.002. The uncertainty is the square root of the variance of the relevant fitting parameter. We note that the value of π_left is close to 2, but at the same time more than 9 standard deviations away from it. However, these values, and especially their precision, should not be taken too literally, as they are connected to each other as expressed by the variance-covariance matrix and as illustrated by the following. If we set the value of π_left to 2, i.e., fix it more than 9 standard deviations below the fitted level, the backing drops, but only to P_1936(> 2023) = 8.4%, i.e., not enough to falsify the hypothesis that π_left = 2.
Appendix F
Coarse graining of data
The coarse graining procedure for the distributions of avalanche sizes and jump lengths is explained below. Two procedures were used. The more general one finds the arithmetic mean of the ordinates (probability or count number) in a block of data, and plots it against the arithmetic mean of their abscissae (avalanche sizes or jump lengths). This approximation is exact when the underlying probability distribution is a first-degree polynomial across the block, and is correct to second order if the underlying distribution is a power law. But if the underlying distribution is expected to follow a power law we can do better than this, simply by using the geometric mean instead of the arithmetic mean, because a power law is a first-degree polynomial in log-log variables. Thus, when the underlying distribution is a power law, this approximation is exact. In the coarse grained plots shown, the first 10 data points are left unchanged, the next 90 are grouped in blocks of 5, the next 900 are grouped in blocks of 25, etc.
Error-bars are calculated under the assumption that the count number c_i for a given event i follows a binomial distribution. For large values of c_i (typically it is sufficient that c_i > 10) the central limit theorem then gives that c_i is Gaussian distributed with variance equal to its mean. This means that the standard deviation of each count number can be estimated by σ(c_i) = sqrt(c_i), and this is what we use as error-bars.
When data are coarse grained, the error-bars decrease in size. The standard deviation can be calculated according to standard rules, i.e., for the arithmetic mean z = (c_1 + ... + c_n)/n we have

    σ²(z) = [ σ²(c_1) + ... + σ²(c_n) ] / n²  ≈  (c_1 + ... + c_n)/n²  =  z/n ,    (F.1)

and for the geometric mean z = (c_1 c_2 ... c_n)^{1/n} we have

    σ²(z) = (Z²/n²) [ σ²(c_1)/C_1² + ... + σ²(c_n)/C_n² ]
          ≈ [ (c_1 c_2 ... c_n)^{1/n} / n ]² ( 1/c_1 + ... + 1/c_n )
          = (z/n)² ( 1/c_1 + ... + 1/c_n ) ,                                      (F.2)

where Z = (C_1 C_2 ... C_n)^{1/n}, C_i is the mean value of c_i, and the best estimate of C_i is just the observed quantity c_i.
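A sketch of the geometric-mean coarse graining, with the error propagation of Eq. (F.2):

import numpy as np

def coarse_grain(x, counts, blocks=((10, 1), (90, 5), (900, 25))):
    """Geometric-mean coarse graining: the first 10 points are kept as they are,
    the next 90 are grouped in blocks of 5, the next 900 in blocks of 25, etc."""
    x = np.asarray(x, dtype=float)
    c = np.asarray(counts, dtype=float)
    xs, cs, errs = [], [], []
    start = 0
    for span, n in blocks:
        stop = min(start + span, len(c))
        for i in range(start, stop, n):
            xb, cb = x[i:i + n], c[i:i + n]
            if len(cb) == 0 or np.any(cb <= 0):
                continue                                  # skip empty bins
            z = np.exp(np.mean(np.log(cb)))               # geometric mean of the counts
            xs.append(np.exp(np.mean(np.log(xb))))        # geometric mean of the abscissae
            cs.append(z)
            errs.append((z / len(cb)) * np.sqrt(np.sum(1.0 / cb)))   # Eq. (F.2)
        start = stop
    return np.array(xs), np.array(cs), np.array(errs)

# Example on a fabricated power-law histogram:
x = np.arange(1, 1001, dtype=float)
counts = np.round(1.0e6 * x ** -1.5)
xg, cg, eg = coarse_grain(x, counts)
print(len(xg), "coarse-grained points")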
Bibliography
[1] W. Brian Arthur. Bounded rationality and inductive behavior (the El Farol problem). American Economic Review, 84:406-411, 1994.
[2] W. Brian Arthur. Complexity and the economy. Science, 284:107-109, 1999.
[3] Per Bak. How Nature Works. Oxford University Press, 1997.
[4] Per Bak, Simon F. Nørrelykke, and Martin Shubik. Dynamics of money. Physical Review E, 60(3):2528-2532, 1999.
[5] Per Bak and Kim Sneppen. Punctuated equilibrium and criticality in a simple model of evolution. Physical Review Letters, 71(24):4083-4086, 1993.
[6] Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-organized criticality: An explanation of 1/f noise. Physical Review Letters, pages 381-384, 1987.
[7] D. Challet and Y.-C. Zhang. Emergence of cooperation and organization in an evolutionary game. Physica A, 246:407-418, 1997.
[8] Jan de Boer, Bernard Derrida, Henrik Flyvbjerg, Andrew D. Jackson, and Tilo Wettig. Simple model of self-organized biological evolution. Physical Review Letters, 73:906-909, 1994.
[9] G. Debreu. Theory of Value. Wiley, New York, 1959.
[10] S. D. Edney, P. A. Robinson, and D. Chisholm. Scaling exponents of sandpile-type models of self-organized criticality. Physical Review E, 58:5395-5420, 1998.

[11] Stuart Field, Jeff Witt, and Franco Nori. Superconducting vortex avalanches. Physical Review Letters, 74:1206-1209, 1995.
[12] Henrik Flyvbjerg. Simplest possible self-organized critical system. Physical Review Letters, 76(6):940-943, 1996.
[13] Herbert Goldstein. Classical Mechanics, chapter 1, pages 45-49. Addison-Wesley, second edition, 1980.
[14] P. C. Hohenberg. Existence of long-range order in one and two dimensions. Physical Review, 158:383-386, 1967.
[15] W. S. Jevons. Money and the Mechanism of Exchange. 1875.
[16] Osame Kinouchi and Carmen P. C. Prado. Robustness of scale invariance in models with self-organized criticality. Physical Review E, 59:4964-4969, 1999.
[17] Nobuhiro Kiyotaki and Randall Wright. A search-theoretic approach to monetary economics. The American Economic Review, 83:63-77, 1995.
[18] Nobuhiro Kiyotaki and Randall Wright. A contribution to the pure theory of money. Journal of Economic Theory, 53:215-235, 1991.
[19] Paul Krugman. The Self-Organizing Economy. Blackwell, 1996.
[20] Burton G. Malkiel. A Random Walk Down Wall Street. Norton, 6th edition, 1996.
[21] B. B. Mandelbrot. Fractals and Scaling in Finance: Discontinuity, Concentration, Risk: Selecta Volume E. Springer, New York, 1997.
[22] Rosario N. Mantegna and H. Eugene Stanley. Scaling behaviour in the dynamics of an economic index. Nature, 376:46-49, 1995.
[23] Karl Marx. Kapitalen, volume 1, chapter 3, page 202. Rhodos, 1970. Danish translation.
[24] N. D. Mermin and H. Wagner. Absence of ferromagnetism or antiferromagnetism in one- or two-dimensional isotropic Heisenberg models. Physical Review Letters, 17:1133-1136, 1966.

[25] Maya Paczuski and Kevin E. Bassler. Self-organized network of competing Boolean agents. cond-mat/9905082, pages 1-4, 1999.
[26] Maya Paczuski, Sergei Maslov, and Per Bak. Avalanche dynamics in evolution, growth, and depinning models. Physical Review E, 53(1):414-443, 1996.
[27] David M. Raup. Biological extinction in earth history. Science, pages 1528-1533, 1986.
[28] Rudolf Richter. Money, chapter 1. Springer, 1989.
[29] Michael F. Shlesinger, George M. Zaslavsky, and Joseph Klafter. Strange kinetics. Nature, 363:31-37, 1993.
[30] Dietrich Stauffer and Amnon Aharony. Introduction to Percolation Theory, chapter 2. Taylor & Francis, revised second edition, 1994.
[31] Alberto Trejos and Randall Wright. Search, bargaining, money, and prices. Journal of Political Economy, 103:118-141, 1993.
[32] Alessandro Vespignani and Stefano Zapperi. How self-organized criticality works: A unified mean-field picture. Physical Review E, 57(6):6345-6362, 1998.
