BioSystems 60 (2001) 5–21
www.elsevier.com/locate/biosystems
The physics of symbols: bridging the epistemic cut
Howard H. Pattee
Systems Science and Industrial Engineering Department, T.J. Watson School of Engineering and Applied Science,
State University at Binghamton, Binghamton, NY 13902-6000, USA
Abstract
Evolution requires the genotype–phenotype distinction, a primeval epistemic cut that separates energy-degenerate,
rate-independent genetic symbols from the rate-dependent dynamics of construction that they control. This symbol–matter or subject–object distinction occurs at all higher levels where symbols are related to a referent by an arbitrary
code. The converse of control is measurement in which a rate-dependent dynamical state is coded into quiescent
code. The converse of control is measurement in which a rate-dependent dynamical state is coded into quiescent
symbols. Non-integrable constraints are one necessary condition for bridging the epistemic cut by measurement,
control, and coding. Additional properties of heteropolymer constraints are necessary for biological evolution.
© 2001 Elsevier Science Ireland Ltd. All rights reserved.
Keywords: Measurement and control; Non-integrable constraints; Subject–object; Symbol reference
How, therefore, we must ask, is it possible for
us to distinguish the living from the lifeless if
we can describe both conceptually by the motion of inorganic corpuscles?
Karl Pearson, The Grammar of Science
1. A brief history of the problem
At the end of the 19th century, there was
little interest among scientists in dualistic and
vitalistic views of life. It was because Karl Pearson (1937) believed that life was entirely a physical process that he was led to ask (in 1892)
what physically distinguishes the living from the
lifeless. He suggested that organic molecules
have ‘‘secondary characteristics,’’ but whether
these characteristics could be derived from fundamental inorganic laws, ‘‘we have not at
present the means of determining.’’ Early in the
20th century, largely as a result of quantum theory, there arose a feeling among many physicists
that there was still some essential mystery to life
that might require serious reinterpretation of
physics. For example, in 1949, Max Delbrück
(1949), influenced by Bohr’s (1933) earlier
speculations1, wrote:
It may turn out that certain features of the living cell, including perhaps replication, stand in a mutually exclusive relationship to the strict application of quantum mechanics, and that a new conceptual language has to be developed to embrace this situation.

1 Bohr’s views are difficult to abbreviate without misrepresentation. In Light and Life, Bohr (1933) begins:

On the one hand, the wonderful features which are constantly revealed in physiological investigations and which differ so markedly from what is known of inorganic matter have led biologists to the belief that no proper understanding of the essential aspects of life is possible in purely physical terms. On the other hand, the view known as vitalism can hardly be given an unambiguous expression by the assumption that a peculiar vital force, unknown to physics, governs all organic life. Indeed, I think we all agree with Newton that the ultimate basis of science is the expectation that nature will exhibit the same effects under the same conditions. If, therefore, we were able to push the analysis of the mechanism of living organisms as far as that of atomic phenomena, we should not expect to find any features foreign to inorganic matter.

He then goes on:

…the idea suggests itself that the minimum freedom we must allow the organism will be just large enough to permit it, so to say, to hide its secrets from us. On this view, the very existence of life must in biology be…[like the quantum of action] taken as a basic fact that cannot be derived from ordinary mechanical physics.

Gunther Stent (1966) described the situation in the 1940s in the following way:

Thus it was the romantic idea that ‘other laws of physics’ (Schrödinger) might be discovered by studying the gene that really fascinated physicists. This search for the physical paradox, this quixotic hope that genetics would prove incomprehensible within the framework of conventional physical knowledge, remained an important element in the psychological infrastructure of the creators of molecular biology.

Then, in the early 1950s began an explosive growth of molecular biology and origin of life experiments, the latter beginning in 1953 with the abiogenic synthesis of amino acids by Miller and Urey. The same year, Watson and Crick announced the double helix of DNA. Some of the
other major advances were the use of X-ray diffraction to derive the structure of myoglobin and
hemoglobin by Perutz and Kendrew, and the
isolation of a DNA polymerase by Kornberg.
This was followed in the 1960s with the breaking
of the genetic code by Nirenberg and Khorana,
the discovery of gene regulation by Jacob and
Monod, the discovery of messenger RNA by
Brenner, Jacob and Meselson, the sequencing of
transfer RNA by Holley, and the discovery of
plasmids by Lederberg.
By 1970, there was no longer much interest in
possible paradoxes or revisions of physical theories to accommodate living systems. Nothing new
appeared to be needed. Kendrew (1967) summarized the molecular biologists’ position in Scientific American: ‘‘… up to the present time
conventional, normal laws of physics and chemistry have been sufficient.’’ This is now the generally accepted view among biologists2. But this
reductionist view is really only a response to
dualism and vitalism. This view does not even
address Pearson’s question. If it were stated as an
‘‘answer’’, it would be a total non sequitur: Life is
distinguished from the lifeless because it follows
the conventional, normal laws of physics and
chemistry of lifeless matter.
In contrast to this dominant reductionist view
of molecular biology, there continued to be a
minority of more skeptical and holistically
minded thinkers who believed that physical laws
are incomplete or inapplicable in their present
form (e.g. Wigner, 1961; Burgers, 1965; Elsasser, 1975; Rosen, 1991)3. There have also continued to be many speculations about whether life can be adequately explained by classical models without incorporating quantum dynamics.

2 The consensus among biologists in the 1970s can be found in Biology and the Future of Man, edited by Philip Handler, a report of a committee of the National Academy of Sciences charged with presenting ‘‘a complete overview of the highlights of current understanding of the Life Sciences.’’ Handler’s preface states:

The theme of this presentation is that life can be understood in terms of the laws that govern and the phenomena that characterize the inanimate, physical universe and, indeed, that at its essence life can be understood only in the language of chemistry.
2. Physics-free models — artificial life
In the last decade, there has arisen, in addition
to these opposing schools of physical reductionists
and physical skeptics, a third school that models
life and evolution disregarding elementary physical laws altogether. Some well-known examples
are Langton’s (1989) replicating cellular automata, Ray’s (1992) Tierra program, Holland’s
(1995) Echo model using genetic algorithms, random Boolean nets of Kauffman (1993), Fontana’s
(1992) algorithmic chemistry, and many artificial
life computer simulations. Von Neumann (1966)
is often cited as the founder of artificial life studies because of his logical theory of self-replication,
but it is important to emphasize that he did not
believe that such physics-free models would answer, ‘‘the most intriguing, exciting, and important question of why the molecules…are the sort
of things they are’’4.
3 One must consult their extensive writings for an adequate
perspective of these ideas. Eugene Wigner (1961) could not
accommodate self-replication using the linear laws of quantum
theory. J. M. Burgers (1965), following the philosopher A. N.
Whitehead, believed:
There are essential features of life which do not have an
unambiguous relationship to states of matter or fields that
can be objectively characterized [by normal physics and
mathematics]. Their explanation requires reference to subjective features.
Burgers viewed these features as a form of memory inherent in
the prebiotic universe but emerging and strengthening gradually in the course of evolution. Walter Elsasser (1975) accepted
quantum theory as correctly applying to life in principle, but
believed that there were also ‘‘biotonic’’ laws governing the
‘‘unfathomable complexity’’ that made life irreducibly nonlinear. Elsasser and Burgers both felt that life stores some forms
of information in other than ‘‘mechanical means’’. Robert
Rosen (1991) argues that it is ‘‘this very segregation into
independent categories of causation’’ that prevents the Newtonian picture from describing the‘‘entailments’’ and ‘‘linkage’’
relations characteristic of life. He focused on the class of
formal structures that model these relations.
Many other abstract descriptions of life now
fall under the title of complexity theory. This field
is dominated by mathematical approaches, nonlinear dynamics, ergodic theory, random manifolds, self-organized criticality, and information
and game theory (e.g. Cowan et al., 1994). Complexity theorists are looking for universal principles of complex systems that apply at all levels,
from spin glasses and sandpiles to cells and societies. The relation of these models to biology, and
even to physics, is often a controversial issue. The
power of computers to simulate models of self-replication, development, evolution, and ecology
has resulted in many interesting behaviors. Computation also allows the study of non-linear dynamics that generate endless formal complexity.
However, because of the high degree of abstraction, these simulations are often difficult to interpret, and their applicability to biology is
uncertain. Direct empirical justification is hard to
find for such abstract models. In any case, since
these models do not directly involve any microscopic physical laws and apply to both living and
lifeless systems, they do not address Pearson’s
question. If asked Pearson’s question, the physics-free modeler would answer that the essential
properties of life are distinguished by abstract
relations that do not depend on any particular
physical realization.
4 By axiomatizing automata in this manner one has thrown
half the problem out the window, and it may be the more
important half. One has resigned oneself not to explain how
these parts are made up of real things, specifically, how these
parts are made up of actual elementary particles, or even of
higher chemical molecules. One does not ask the most intriguing, exciting, and important question of why the molecules or
aggregates which in nature really occur in these parts are the
sort of things they are, why they are essentially very large
molecules in some cases but large aggregates in other cases,
why they always lie in a range beginning at a few microns and
ending at a few decimeter. This is a very peculiar range for an
elementary object, since it is, even on a linear scale, at least
five powers of ten away from the sizes of really elementary
entities.
(von Neumann, 1966, p. 77).
3. Model-free physics — autonomous agents and
situated robotics
Most recently, there has been great interest in
computer-controlled robots situated in a real
physical environment. The computer control is
often an artificial neural network, and adaptive
learning may involve genetic algorithms (e.g.
Varela et al., 1991; Brooks, 1992; Brooks and
Maes, 1994; Clark, 1997). These models generally
favor a coherent dynamic interpretation of control rather than the symbolic, rule-based ‘‘representations’’ of older artificial intelligence models.
Since the environment is real, there is no need to
model any physics. The concept of symbol is
usually regarded as an artifact generated by an
underlying dynamics. The physical world consists
of only those aspects of the environment that the
robot can actually detect.
This type of dynamical control for sensorimotor behavior appears to be a plausible model that
would help to account for the speed and complexity of responses in organisms with relatively complex sensorimotor behavior and small brains
(although it has not yet done so). Any form of
symbolic representation or rule-based computation at neuronal speeds is simply too slow and
would require much larger brains. Insect behaviors, such as flying around obstacles, landing on a
twig, or mating in flight in a gusty wind, do not
allow any solution except by some form of coherent real-time dynamics. If Pearson’s question were
asked, a roboticist would probably claim that
with respect to sensorimotor control, there is no
fundamental physical distinction between living
organisms and adaptive dynamically controlled
robots, even though they would agree that there is
at present an enormous gap between the most
complex robots and the simplest insects.
The problem with this view is that the dynamics
of sensorimotor control and learning is only one
aspect of life. The problem of reliable self-replication, the origin of novel sense organs and motor
structures, i.e. open-ended evolution, as yet has
no model based solely on a temporal dynamics.
Even if cognition and brain function should turn
out to be described as a temporally coded dynamics with no static symbol structures, this would
not adequately describe the quiescent molecular
structures that form the genome and the coding
constraints that have been controlling protein synthesis for billions of years.
4. Biologists’ views of the relation of biology to
physics
Many biologists consider physical laws, artificial life, robotics, and even theoretical biology as
largely irrelevant for their research. In the 1970s,
a prominent molecular geneticist asked me, ‘‘Why
do we need theory when we have all the facts?’’.
At the time, I dismissed the question as silly, as
most physicists would. However, it is not as silly
as the converse question, ‘‘Why do we need facts
when we have all the theories?’’. These are actually interesting philosophical questions that show
why trying to relate biology to physics is seldom
of interest to biologists, even though it is of great
interest to physicists. Questioning the importance
of theory sounds eccentric to physicists for whom
general theories are what physics is all about.
Consequently, physicists, like the skeptics I mentioned above, are concerned when they learn facts
of life that their theories do not appear capable of
addressing. However, biologists, when they have
the facts, need not worry about physical theories
that neither address nor alter their facts. Ernst
Mayr (1997) believes this difference is severe
enough to separate physical and biological
models:
Yes, biology is, like physics and chemistry, a
science. But biology is not a science like physics
and chemistry; it is rather an autonomous science on a par with the equally autonomous
physical sciences.
There are fundamental reasons why physics and
biology require different levels of models, the
most obvious reason being that physical theory is
described by rate-dependent dynamical laws that
have no memory, while evolution depends, at
least to some degree, on the control of dynamics
by rate-independent memory structures. A less
obvious reason is that Pearson’s ‘‘corpuscles’’ are
now described by quantum theory, while biological subjects require classical description in so far
as they function as observers. This fact remains a
fundamental problem for interpreting quantum
measurement, and, as I mention below, this may
still turn out to be essential in distinguishing real
life from macroscopic classical simulacra. I agree
with Mayr that physics and biology require different models, but I do not agree that they are
autonomous models. Physical systems require
many levels of models, some formally irreducible
to one another, but we must still understand how
the levels are related. Evolution also produces
hierarchies of organization from cells to societies,
each level requiring different models, but the
higher levels of the hierarchy must have emerged
from lower levels. Life must have emerged from
the physical world. This emergence must be understood if our knowledge is not to degenerate
(more than it has already) into a collection of
disjointed specialized disciplines.
5. Personal history of the problem
I first became aware of Karl Pearson’s question
about 1939 when I was in the 8th grade. My
Headmaster and science teacher, Dr. P.L.K.
Gross, had given me the 1937 Everyman edition
of The Grammar of Science (the first edition was
published in 1892). Most of the book was beyond
my comprehension, but I thought I understood
the chapter on Life, subtitled, ‘‘The Relation of
Biology to Physics’’. To make the long story of
my education short, a decade later, working on
my Ph.D. in physics, I graduated to Hermann
Weyl’s (1949) equally profound but more up-to-date, Philosophy of Mathematics and Natural Science, which concludes with appendices on
‘‘Physics and Biology’’, and ‘‘Morphe and Evolution’’. This became my philosophy of science
source book. At the time, physics and mathematics were inseparable in my mind. I did not clearly
distinguish formal symbolic models from reality.
As Weyl notes,
Perhaps the philosophically most relevant feature of modern science is the emergence of
abstract symbolic structures as the hard core of
objectivity behind — as Eddington puts it —
the colorful tale of the subjective storyteller
mind.
In the 1960s, my first serious thinking about the
relation of abstract symbol structures to the physical laws they represent was revealed to me by the
physicist Max Born (1969) in a paper entitled
‘‘Symbol and Reality’’, in which he recalls as a
young student his own shock when it dawned on
him that all our perception and mental imagery,
‘‘everything without exception’’, is entirely subjective, and that only by the use of symbols can we
communicate any objective components of our
subjective, private experiences. Born’s condition
for the objective use of symbols is ‘‘decidability,’’
a term he coined to express the function of experiment. If a symbolic expression lacks empirical
decidability potentially available to all observers,
it has no necessary relation to any objective reality. I was also intrigued by Eugene Wigner’s
(1960) paper ‘‘On the Unreasonable Effectiveness
of Mathematics in the Natural Sciences’’, which
asks many fundamental questions about the nature of mathematical symbols5. Thus, following
Born, my effort toward answering the question
‘‘How is physics related to biology?’’ was augmented by the question ‘‘How are physical laws
related to the mathematical symbols on which
their representation depends?’’ I now often state
the same questions more generally: ‘‘How are
universal, inexorable, natural laws related to local, arbitrary symbols?’’.
5 Wigner (1960) set the tone of his paper with a quote from
C. S. Peirce: ‘‘…and it is probable that there is some secret
here which remains to be discovered.’’ His point is that the
most effective symbolic formalism, such as matrices, complex
numbers and infinite dimensional Hilbert space, cannot be
derived from physical observations or physical laws, or even
common sense. Yet they appear to be perfectly suited, if not
essential, for quantum theory. Wigner concluded with a speculation about life that there may be a conflict between laws of
heredity and quantum theory (see also Wigner, 1961).
My first motivation for understanding the relation of symbols to living organisms arose earlier
from the origin of life problem. In 1954, I had
completed my doctoral research on X-ray optics
used to study biological structures. Discovering
the physical structure of nucleic acids and
proteins was then a major problem of the new
molecular biology. I realized, however, that self-replication of such complex structures was an
entirely different and more difficult type of
problem. I thought it was obvious that reliable self-replication would require objective communication of whatever structure is replicated. In other words, for evolution to be possible, any description of a ‘‘self’’ must be
communicated objectively to all descendent cells,
no matter what particular ‘‘self’’ is being replicated. Here, objective simply means that the same
instructions will produce the same results in all
descendants.
I tried several self-organizing schemes using
automata models for generating and replicating
simulated copolymer sequences (Pattee, 1961,
1965), but it became clear that the evolutionary
potential of all these models was very limited. I
eventually recognized a fundamental problem in
all such rule-based self-organizing schemes,
namely, that in so far as the organizing depends
on internal fixed rules, the generated structures
will have limited potential complexity, and in so
far as any novel organizing arises from the outside environment, the novel structures have no
possibility of reliable replication without a symbolic memory that could reconstruct the novel
organization. The first computer simulation I felt
had some interesting evolutionary potential was
developed by Michael Conrad (1969), in which
genetic, cellular, population, and ecological levels
were all represented. However, other than abstract conservation principles, this was a physics-free model that did not address Pearson’s
question or the nature of symbols in measurement
and control processes (Conrad and Pattee, 1970).
By the 1970s, I believed I had some insight on
Pearson’s question. These ideas, which I will summarize below, were presented in the four volumes
of Waddington’s Bellagio conferences (Waddington, 1968–1972) on theoretical biology. My first
question then was: ‘‘How can we describe in
physical language the most elementary heritable
symbols?’’. It has turned out that for even the
simplest known case, the gene, an adequate description requires the two irreducibly complementary concepts of dynamical laws and
non-integrable constraints that are not derivable
from the laws. This primeval distinction between
the individual’s local symbolic constraints that
first appear at the origin of life and the objective
universal laws reappears in many forms at higher
levels6. From von Neumann (1955), I learned that
this same epistemic cut occurs in physics in the
measurement process, i.e. the fact that dynamical
laws cannot describe the measurement function of
determining initial conditions.
Later, I saw these as special cases of the general
epistemic problem: how to bridge the separation
between the observer and the observed, the controller and the controlled, the knower and the
known, and even the mind and the brain. This
notorious epistemic cut has motivated philosophical disputes for millennia, especially the problem
of consciousness that only recently has begun to
be treated as possibly an empirically decidable
problem (e.g. Shear, 1997; Taylor, 1999). My
second question was whether bridging the epistemic cut could even be addressed in terms of
physical laws.
Of course, my answers to Pearson’s question
were neither complete, nor of much interest to
biologists. I had only stated some necessary, but
by no means sufficient, conditions for the physical
description of symbolic control. This was the easy part of the question.

6 I define a symbol in terms of its structure and function. First, a symbol can only exist in the context of a living organism or its artifacts. Life originated with symbolic memory, and symbols originated with life. I find it gratuitous to use the concept of symbol, even metaphorically, in physical systems where no function exists. Symbols do not exist in isolation but are part of a semiotic or linguistic system (Pattee, 1969a). Semiotic systems consist of (1) a discrete set of symbol structures (symbol vehicles) that can exist in quiescent, rate-independent (non-dynamic) states, as in most memory storage, (2) a set of interpreting structures (non-integrable constraints, codes), and (3) an organism or system in which the symbols have a function (Pattee, 1986). There are innumerable symbol functions at many hierarchical levels, but control of construction came first.

My concept of what symbolic behavior must also entail was greatly enlarged by Ernst Cassirer’s (1957) Philosophy of
Symbolic Forms. Later, at the Bellagio meetings,
the philosopher Marjorie Grene (1974) introduced
me to Michael Polanyi’s insights on the failure of
all our symbolic expressions, especially formal
mathematical expressions, to achieve the ideal of
objectivity. Polanyi’s (1964) anti-reductionist arguments show how all of our explicit symbolic
descriptions must be grounded in a reservoir of
ineffable structures and subsidiary knowledge.
But more important to me at the time was his
paper, ‘‘Life’s Irreducible Structure’’ (Polanyi,
1968) because it made the same essential point I
had made that the structural complexity we associate with life can only be described in the language of physics as special constraints or
machine-like ‘‘boundary conditions’’ that ‘‘harness’’ the laws, but that are not formally derivable
from physical laws7. Polanyi also recognized the
irreducibility of all higher evolved functional hierarchical levels to the lower levels from which
they evolved (Pattee, 1969a).
The most profound historical influence was
John von Neumann. His 1966 discussion of self-reproducing automata suggested that efficient
control of dynamical construction requires a non-dynamic ‘‘quiescent description’’, and this I interpreted as equivalent to an epistemic cut between
objective dynamical laws and subjective non-dynamic symbolic constraints describing the ‘‘self’’.
Von Neumann also asked a question that I found
to be closely related to Pearson’s question: ‘‘Why
are the basic macromolecules of organisms so
much larger than the fundamental particles of
physical theory?’’4.
7 Polanyi (1968) could be included with other non-reductionists (see note 3). However, I believe the logic of his
argument against reductionism can be made from within normal physical theory and does not require postulating some
rejection, modification, or extension of physical language or
concepts. He claims only an emergent hierarchy of physical
‘‘boundary condition’’ structures obeying all the normal laws,
but acquiring complexity that needs new observables and new
models. Emergence, a concept once shunned by philosophers
and physicists, is now accepted in artificial life, complexity
theory, neuroscience, philosophy, and physics (e.g. Anderson,
1972; Cariani, 1992; Ray, 1992; Anderson, 1997).
Equally influential was von Neumann’s (1955)
discussion of the necessity of an epistemic cut in
any measurement process (see Section 9) showing
that the function of measurement is necessarily
irreducible to the dynamics of the measuring
device. This logic is closely related to the necessary separation of symbols and dynamics for control of self-replication since measurement and
control are inverse processes, i.e. measurement
transforms physical states to symbols in memory,
while memory-stored controls transform symbols
to physical states.
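This inverse relation can be sketched as a toy model (the sketch, its function names, and its threshold coding are my illustration, not Pattee's): measurement maps a continuous, rate-dependent state onto a discrete, quiescent symbol, and control maps a memory-stored symbol back onto a target physical state.

```python
# Toy sketch (illustrative only) of measurement and control as inverse
# processes: measurement codes a rate-dependent physical state into a
# quiescent symbol; control decodes a stored symbol into a physical state.
# The symbol set and threshold are invented for this sketch.

def measure(state: float, threshold: float = 0.0) -> str:
    """Code a continuous dynamical state into a discrete, rate-independent symbol."""
    return "HIGH" if state > threshold else "LOW"

def control(symbol: str) -> float:
    """Decode a quiescent symbol into the physical state it stands for."""
    return {"HIGH": 1.0, "LOW": -1.0}[symbol]

# Measurement followed by control crosses the cut in both directions,
# though the fine dynamical detail is lost in the coding.
assert control(measure(0.7)) == 1.0
assert control(measure(-2.3)) == -1.0
```

The loss of detail in the round trip is the point: the symbol is energy-degenerate and rate-independent, retaining only what the code preserves.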
6. Dynamical laws — the problem of determinism
Again, if all movement is interconnected, the
new arising from the old in a determinate order,
if the atoms never swerve so as to originate
some new movement that will snap the bonds
of fate, the everlasting sequence of cause and effect: what is the source of the free will possessed by living things throughout the earth?
(Lucretius, On the Nature of Things).
If life is distinguished by memory-stored controls, and if memory and control imply alternative
movements, then in order to answer Pearson’s
question, we must first answer Lucretius’ question. How do we snap the bonds of the inexorable
dynamical laws that do not allow new alternative
movements? To understand the problem, it is
essential to focus on how physical laws are actually described in their mathematical form. The
‘‘unreasonable effectiveness’’ of this type of formal description is largely the result of the precision of its execution (which I will not duplicate
here). This formulation has a developmental history that is also important. Newtonian dynamics
began with point masses moving under laws of
gravitational force or generally under laws derived
from the potential and kinetic energy of the system. These laws are expressed as differential equations in time (equations of motion) that define an
infinite family of possible orbits in the state space
(phase space).
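For concreteness, these equations of motion can be written in the canonical (Hamiltonian) form, a standard textbook statement added here as illustration (the notation is mine, not the paper's):

```latex
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i}, \qquad i = 1, \dots, n,
```

where H(q, p) is the total kinetic plus potential energy. The equations by themselves define the infinite family of possible orbits; each choice of initial state (q(0), p(0)) selects one actual orbit from that family.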
Three closely related epistemic conditions are
fundamental for this type of dynamical description and need to be emphasized:
1. To begin this type of description, the world
must be separated into the states and the laws
that change the states. The detailed (microscopic) laws are expressed as rate-dependent
differential equations that define families of
orbits in a state space. This means that the
paths from states to states are unambiguously
deterministic and reversible.
2. Only when a particular system is located in
this space by the act of measurement (determining its initial conditions, i.e. the positions
and velocities of all particles at a particular
time) do the equations lead to any observable
consequences and allow an actual orbit to be
calculated by integrating the equations of
motion.
3. Finally, and most importantly, this form of
description can claim to be objective only if
the act of measurement does not influence the
form of the laws, and if the laws do not
influence the act of measurement.
The first extension of this model was to solid
bodies that can be pictured as many point masses
held together by fixed (non-dynamic) forces. The
nature of these forces was a mystery to Newton.
We now attribute them to electromagnetic, quantum, or fundamental particle forces that in principle may also be described in more detail by
dynamical laws. Usually, these internal forces do
no work and therefore play no role in the dynamical laws of the solid body. They may be interpreted as fixed, thereby greatly reducing the
number of variables (degrees of freedom) that
enter into the equations of motion. These internal
(reactive or geometric) forces are called forces of
constraint. What we call more or less rigid structures, from natural molecules, crystals, and rocks,
to artificial tables, buildings, and bridges are held
together by forces of constraint.
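The reduction in variables is easy to quantify in the rigid case (a standard mechanics count, not from the original text): N point masses have 3N configuration coordinates, but rigid constraints fixing the pairwise distances,

```latex
|\mathbf{r}_i - \mathbf{r}_j| = c_{ij} \quad (c_{ij}\ \text{constant}),
```

leave only the six degrees of freedom of overall translation and rotation (for N of three or more), so the equations of motion track six variables instead of 3N.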
However, there are also flexible forces of constraint that hold together the innumerable articulated assemblies of rigid structures we call
machines, as well as labile assemblies of not-so-rigid structures like the biopolymers that execute
measurement and control processes in organisms.
It is the physical descriptions of these flexible and
articulated constraints that need to be explained
in more detail in order to begin to answer both
Lucretius’ and Pearson’s questions.
7. An epistemic cut is required for all dynamical
laws
Before defining these flexible constraints, I need
to emphasize the generality of the dynamics that
they can control. The Newtonian or classical picture is often believed to have been replaced by
relativity and quantum theory, but this is not a
fair assessment of Newtonian dynamics. First,
classical laws are still valid for gravitational forces
and velocities that are small compared to the
velocity of light. Second, the results of all measurements of both relativistic and quantum mechanical systems must be expressed in this
classical language. It is true that the forms of
these modern dynamical laws are different from
Newton’s, and that the concept of state is defined
entirely differently, but the three fundamental
epistemic conditions must still hold for all dynamical laws to make objective sense. It is still required that (1) the laws and the initial conditions
be crisply separated, (2) initial conditions be determined by measurements, and (3) measurement
and laws not influence each other.8 It is for this reason that Eugene Wigner (1982) considered Newton's greatest discovery to be not his laws but rather ''his sharp separation of initial conditions and laws of nature''.

8 The explicit recognition that these conditions are an essential requirement for objectivity arose only gradually, culminating in this century with what are now called invariance or symmetry principles (e.g. Wigner, 1964). All candidate theories must first conform to these principles. Ideally, objectivity is the belief that exactly the same events would occur whether or not they were actually observed. Except for the very rare ''quantum non-demolition'' case, this ideal cannot be reached, since measurement in quantum theory alters the state of the system (but not the laws). Postmodern philosophers argue that this ideal of objectivity is also unattainable because of cultural influences. Even the objectivity of fundamental particles has been criticized on culture-based grounds (e.g. Pickering, 1984). Of course, many aspects of our physics theories are conventional social constructs, but other aspects appear inexorably objective and have withstood far more rigorous and severe challenges from within the skeptical and competitive physics community than have been offered by the social constructivists.
This highly developed intellectual distinction
between initial conditions and laws is a form of
epistemic cut, but I want to make the point that this
cut has a primitive origin and is found in all living
organisms. It is simply an extreme case of the
distinction made, even by the first cells, between
stimuli that cannot be correlated and stimuli that
can be correlated or that follow a recognizable
pattern. In terms of information storage, we say
that some records of events can have a compressed
description (like laws) because of intrinsic correlations, while other records (like initial conditions)
have no shorter description than the records themselves. Dynamical laws express a maximal compression of all events of a particular type, namely, those
on orbits determined by one set of initial conditions. Given the initial conditions, all past and
future events are determined by the laws (except for
singularities). Historically, this behavior led to the
ideal of Laplacean determinism. Today, it is more
modestly called state-determined behavior. Nevertheless, viewed as a formal description, all dynamical laws remain syntactically deterministic, even if
they are interpreted as probabilities. Furthermore,
our perceptions as well as our natural languages
support a deterministic, either-or logical syntax and
causal semantics that conform to a classical dynamics. It is for this reason that the interpretation of
quantum theory is largely ineffable.
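This distinction between maximally compressible laws and incompressible initial conditions can be given a toy illustration (my own, not the paper's; zlib compression serves only as a crude proxy for algorithmic compressibility):

```python
import os
import zlib

# A "law-generated" record: 4096 successive states of a simple
# state-determined map, x_{n+1} = (3*x_n + 7) mod 256, from one
# initial condition. Its intrinsic correlations make it compressible.
x = 5
orbit = bytearray()
for _ in range(4096):
    x = (3 * x + 7) % 256
    orbit.append(x)

# An "initial-condition-like" record: uncorrelated values with no
# shorter description than the record itself.
random_record = os.urandom(4096)

law_like = len(zlib.compress(bytes(orbit)))
incompressible = len(zlib.compress(random_record))

# The law-governed orbit compresses far below its raw length;
# the uncorrelated record does not.
print(law_like, incompressible)
```

The compressed orbit is a rough analogue of a dynamical law: a short description that regenerates all the events on one trajectory, while the random record, like a list of measured initial conditions, admits no such compression.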
This maximal compression of events by dynamical laws has indeed proven unreasonably effective
in describing all the fundamental microscopic laws.
But we know from everyday decision-making that
all our experiences are not completely compressible
and that our life is not state-determined. As anyone
can directly observe, there exist between these
extremes of universal and maximally compressible
laws and local incompressible initial conditions
many intermediate levels of natural and artificial
local constraint structures that have measurement
and control functions and that exhibit various
degrees of local and partially compressible behaviors. As Gell-Mann (1994) has observed: ''the effective complexity [of the universe] receives only a small contribution from the fundamental laws. The rest comes from the numerous regularities resulting from 'frozen accidents'.'' I would add that
to be effective in evolution, these regularities from
frozen accidental constraints must be heritable.
That means they must be reconstructible from a
memory. Both machines and organisms are characterized by such constraints. The question remains:
How can such special-purpose constraints be described in precise physical terms?
8. The non-integrable condition for bridging the
epistemic cut
Since we know that a heritable genetic memory
is an essential condition for life, my approach to the
problem of determinism began by expressing the
precise requirements for a constraint that satisfies
the conditions for heritability. I can do no better
than to restate my early argument (Pattee, 1969b):
A physical system is defined in terms of a
number of degrees of freedom which are represented as variables in the equations of motion.
Once the initial conditions are specified for a
given time, the equations of motion give a
deterministic procedure for finding the state of
the systems at any other time. Since there is no
room for alternatives in this description, there
is apparently no room for hereditary processes… The only useful description of memory
or heredity in a physical system requires introducing the possibility of alternative pathways
or trajectories for the system, along with a
‘genetic’ mechanism for causing the system to
follow one or another of these possible alternatives depending on the state of the genetic
mechanism. This implies that the genetic mechanism must be capable of describing or representing all of the alternative pathways even
though only one pathway is actually followed in
time. In other words, there must be more degrees of freedom available for the description of
the total system than for following its actual
motion… Such constraints are called nonholonomic.
In more common terminology, this type of
constraint is a structure that we say controls a
dynamics. To control a dynamical system implies that there are control variables that are
separate from the dynamical system variables,
yet they must be described in conjunction with
the dynamical variables. These control variables
must provide additional degrees of freedom or
flexibility for the system dynamics. At the same
time, typical control systems do not remove degrees of freedom from the dynamical system, although they alter the rates or ranges of system
variables. Many artificial machines depend on
such control constraints in the form of linkages,
escapements, switches and governors. In living
systems, the enzymes and other allosteric macromolecules perform such control functions. The
characteristic property of all these non-holonomic structures is that they cannot be usefully
separated from the dynamical system they control. They are essentially non-linear in the sense
that neither the dynamics nor the control constraints can be treated separately.
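The separation of control variables from dynamical variables can be caricatured in a few lines of code (an illustration of mine, not the author's): a two-state ''memory'' variable selects between alternative rate laws for one dynamical variable. Nothing in the dynamics below determines the memory state, yet it harnesses the same law-based motion onto different trajectories.

```python
def simulate(memory_state, x0=1.0, dt=0.01, steps=1000):
    # Two alternative pathways for the same law, dx/dt = -k*x.
    # The control constraint only selects the rate; it neither adds
    # nor removes the dynamical law itself.
    rates = {"A": 0.5, "B": 2.0}
    k = rates[memory_state]
    x = x0
    for _ in range(steps):
        x += -k * x * dt   # Euler step of the controlled dynamics
    return x

# Identical laws and identical initial condition; the discrete symbol
# alone decides which of the possible trajectories is followed.
print(simulate("A"), simulate("B"))
```

The point of the sketch is only that the control variable is on a different footing from the dynamical variable: it is rate-independent and energy-degenerate with respect to the equation it steers.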
This type of constraint, which I prefer to call
non-integrable, solves two problems. First, it answers Lucretius’ question. These flexible constraints literally cause ‘‘atoms to swerve and
originate new movement’’ within the descriptive
framework of an otherwise deterministic dynamics (this is still a long way from free will). They
also account for the reading of a quiescent, rate-independent memory so as to control a rate-dependent dynamics, thereby bridging the
epistemic cut between the controller and the
controlled. Since law-based dynamics are based
on energy, in addition to non-integrable memory
reading, memory storage requires alternative
states of the same energy (energy degeneracy).
These flexible, allosteric, or configuration-changing structures are not integrable because their
motions are not fully determined until they couple an explicit memory structure with rate-dependent laws (removal of degeneracy).
The crucial condition here is that the constraint acts on the dynamic trajectories without
removing alternative configurations. Thus, the
number of coordinates necessary to specify the
configuration of the constrained system is always greater than the number of dynamic degrees of freedom, leaving some configurational
alternatives available to ‘‘read’’ memory structures. This in turn requires that the forces of
constraint are not all rigid, i.e. there must be
some degeneracy to allow flexibility. Thus, the
internal forces and shapes of non-integrable
structures must change in time partly because of
the memory structures and partly as a result of
the dynamics they control. In other words, the
equations of the constraint cannot be solved
separately because they are on the same formal
footing as the laws themselves, and the orbits of
the system depend irreducibly on both (Whittaker, 1944; Goldstein, 1953; Sommerfeld, 1956;
Neimark and Fufaev, 1972).
What is historically amazing is that this common type of constraint was not formally recognized by physicists until the end of the last
century (Hertz, 1894). Such structures occur at
many levels. They bridge all epistemic cuts between the controller and the controlled, the
classifier and the classified, the observer and the
observed. There are innumerable types of non-integrable constraints found in all mechanical
devices in the form of latches and escapements, in electrical devices in the form of gates
and switches, and in many biological allosteric
macromolecules like enzymes, membrane channel proteins, and ciliary and muscle proteins.
They function as the coding and decoding structures in all symbol manipulating systems.
There is a significant difference between the
way in which non-integrable constraints are
treated in classical and quantum mechanics. Describing non-integrable constraints in quantum
theory restricts the wave function as if some
measurement is being made (Eden, 1951). This
led me to speculate that the non-integrable constraints of the enzyme molecule execute a quantum measurement and rate control function with
a specificity and speed that no classical measurement and control device could match (Pattee,
1967).
9. The irreducibility of the epistemic cut
The concept of constraint is not considered
fundamental in physics because the (internal, geometric reactive) forces of constraint can, in principle, be reduced to active impressed forces
governed by energy-based microscopic dynamical
laws. The so-called fixed geometric forces are just
stationary states of a faster, more detailed dynamics. This reducibility to microscopic dynamics is
possible in principle for structures, even if it is
computationally completely impractical. However, describing any bridge across an epistemic cut
by a single dynamical description is not possible,
even in principle.
The most convincing general argument for this
irreducible complementarity of dynamical laws
and measurement function comes again from von
Neumann (1955) (p. 352). He calls the system
being measured, S, and the measuring device, M,
that must provide the initial conditions for the
dynamic laws of S. Since the non-integrable constraint, M, is also a physical system obeying the
same laws as S, we may try a unified description
by considering the combined physical system (S+
M). But then we will need a new measuring
device, M′, to provide the initial conditions for
the larger system (S+M). This leads to an infinite
regress, but the main point is that even though
any constraint like a measuring device, M, can in
principle be described by more detailed universal
laws, the fact is that if you choose to do so, you
will lose the function of M as a measuring device.
This demonstrates that laws cannot describe the
pragmatic function of measurement, even if they
can describe the detailed dynamics of the measuring constraints correctly and completely.
This same argument also holds for control
functions, including the genetic control of
protein construction. If we call the controlled
system, S, and the control constraints, C, then we
can also look at the combined system (S+ C), in
which case, the control function simply disappears
into the dynamics. This epistemic irreducibility
does not imply any ontological dualism. It arises
whenever a distinction must be made between a
subject and an object or, in semiotic terms, when
a distinction must be made between a symbol and
its referent or between syntax and pragmatics.
Without this epistemic cut, any use of the concepts of measurement of initial conditions and
symbolic control of construction would be
gratuitous.
That is, we must always divide the world into
two parts, one being the observed system, the
other the observer. In the former, we can follow
up all physical processes (in principle at least)
arbitrarily precisely. In the latter, this is meaningless. The boundary between the two is arbitrary to a very large extent… but this does not
change the fact that in each method of description, the boundary must be placed somewhere,
if the method is not to proceed vacuously, i.e. if
a comparison with experiment is to be possible.
(von Neumann, 1955, p. 419)
10. The primeval epistemic cut
The epistemic cut or the distinction between
subject and object is normally associated with
highly evolved subjects with brains and their models of the outside world, as in the case of measurement. As von Neumann states, where we place the
cut appears to be arbitrary to a large extent. The
cut itself is an epistemic necessity, not an ontological condition. That is, we must make a sharp cut,
a disjunction, just in order to speak of knowledge
as being ‘‘about’’ something or ‘‘standing for’’
whatever it refers to. What is going on ontologically at the cut (or what we see if we choose to
look at the most detailed physics) is a very complex process. The apparent arbitrariness of the
placement of the epistemic cut arises in part because the process cannot be completely or unambiguously described by the objective dynamical
laws, since in order to perform a measurement,
the subject must have control of the construction
of the measuring device. Only the subject side of
the cut can measure or control.
The concept of an epistemic cut must first arise
at the genotype–phenotype control interface.
Imagining such a subject– object distinction before life existed would be entirely gratuitous, and
to limit control only to higher organisms would
be arbitrary. The origin problem is still a mystery.
What is the simplest epistemic event? One necessary condition is that a distinction is made by a
subject that is not a distinction derivable from the
object. In physical language, this means that a
subject must create some form of distinction or
classification between physical states that is not
made by the laws themselves (i.e. measuring a
particular initial condition, removing a degeneracy or breaking a symmetry). In the case of the
cell, the sequences of the gene are not distinguished by physical laws since they are energetically degenerate. Where does a new distinction
first occur? It is where this memory degeneracy is
partially removed, and that does not occur until
the protein-folding process. Transcription, translation, and copying processes treat all sequences
the same and therefore make no new distinctions,
but of course, they are essential for constructing
the linear constraints of the protein that partially
account for the way it folds. The folded protein
removes symbol vehicle degeneracy, but it still has
many degenerate states (many conformations)
that are necessary for it to function as a non-integrable constraint.
It is important to recognize that the details of
construction and folding at this primeval epistemic cut make no sense except in the context of
an entire self-replicating cell. A single folded
protein has no function unless it is a component
of a larger unit that maintains its individuality by
means of a genetic memory. We speak of the
genes controlling protein synthesis, but to accomplish this, they must rely on previously synthesized and organized enzymes and RNAs. This
additional self-referent condition for being the
subject-part of an epistemic cut I have called
semantic (or semiotic) closure (Pattee, 1982, 1995).
This is the molecular chicken–egg closure that
makes the origin-of-life problem so difficult.
11. Answering Pearson's question

We can now give a direct answer to Pearson's question: It is not possible to distinguish the living from the lifeless by the most detailed ''motion of inorganic corpuscles'' alone. The logic of this answer is that life entails an epistemic cut that is not distinguishable by microscopic (corpuscular) laws. As von Neumann's argument shows, any distinction between subject and object requires a description of the constraints that execute measurement and control processes, and such a functional description is not reducible to the dynamics that is being measured or controlled.9

This is still far from a complete answer. There is more to evolvability than heritable variation and natural selection and control of dynamical laws by memory and non-integrable constraints. It is not at all obvious why such local control structures should persist in a real world full of uncorrelated irregular events. Controlling predictable physical laws is only part of the problem of survival. There are additional physical conditions for evolvability. These are not abstract principles but specific requirements on how efficaciously the epistemic cut is actually bridged. I will only mention three of these conditions. Evolution depends critically on (1) how easily gene sequences corresponding to functional proteins can be found, (2) how reliably these sequences can control construction of proteins, and on (3) how smoothly or gradually variations in the sequences can produce adaptation of function. In other words, evolvability depends on the many physical and statistical details of how the actual epistemic bridge from symbols to dynamics is executed.

9 Some dynamic modelers believe that natural selection can occur as a purely dynamic process without the need of symbolic memory constraints. For example, Goodwin (1994) states: ''What this makes clear is that there is nothing particularly biological about natural selection: it is simply a term used by biologists to describe the way in which one form replaces another as a result of their different dynamic properties… We could, if we wished, simply replace the term natural selection with dynamic stabilization, the emergence of the stable states in a dynamical system.'' Similarly, Kelso (1995) states: ''Thus, selection and self-organization go together like bread and butter. Indeed, the language of selection is in precisely the same terms as the underlying pattern dynamics.'' For the many reasons given above, I do not agree that dynamics alone is a rich enough language to describe life. Natural selection in particular implies much more than a population distribution with a statistical dynamics. The units forming the population must replicate with control of their own individual variations.
12. The illusion of autonomous symbol systems
There is a real conceptual roadblock here. In
our normal everyday use of languages, the very
concept of a ‘‘physics of symbols’’ is completely
foreign. We have come to think of symbol systems as having no relation to physical laws. This
apparent independence of symbols and physical
laws is a characteristic of all highly evolved languages, whether natural or formal. They have
evolved so far from the origin of life and the
genetic symbol systems that the practice and
study of semiotics do not appear to have any
necessary relation whatsoever to physical laws.
As Hoffmeyer and Emmeche (1991) emphasize,
it is generally accepted that, ‘‘No natural law
restricts the possibility-space of a written (or spoken) text'', or, in Kull's (1998) words, ''Semiotic
interactions do not take place of physical necessity.’’ Adding to this illusion of strict autonomy
of symbolic expression is the modern acceptance
of abstract symbols in science as the ‘‘hard core
of objectivity’’ mentioned by Weyl. This isolation of symbols is what Rosen (1987) has called
a ‘‘syntacticalization’’ of our models of the
world, and also an example of what Emmeche
(1994) has described as a cultural trend of
‘‘postmodern science’’ in which material forms
have undergone a ‘‘derealization’’.
Another excellent example is our most popular artificial assembly of non-integrable constraints, the programmable computer. A
memory-stored programmable computer is an
extreme case of total symbolic control by explicit non-integrable hardware (reading, writing,
and switching constraints) such that its computational trajectory determined by the program is
unambiguous, and at the same time independent
of physical laws (except laws maintaining the
forces of normal structural constraints that do
not enter the dynamics, a non-specific energy
potential to drive the computer from one constrained state to another, and a thermal sink).
For the user, the computer function can be operationally described as a physics-free machine,
or alternatively as a symbolically controlled,
rule-based (syntactic) machine. Its behavior is
usually interpreted as manipulating meaningful
symbols, but that is another issue. The computer
is a prime example of how the apparently
physics-free function or manipulation of memory-based discrete symbol systems can easily give
the illusion of strict isolation from physical dynamics.
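As a sketch of this point (the program and names here are mine, purely illustrative), a few lines suffice to show a computational trajectory fixed entirely by reading, writing, and switching rules, with no reference to any rate-dependent physical law:

```python
# A minimal stored-program symbol machine. Its behavior is determined
# by the program's syntax alone; whether the hardware that realizes it
# is fast or slow, electronic or mechanical, the symbol sequence it
# produces is identical.

def run(program, state=0, tape=None, steps=100):
    """program maps (state, read_symbol) -> (new_state, write, move)."""
    tape = dict(tape or {})
    head = 0
    for _ in range(steps):
        key = (state, tape.get(head, 0))
        if key not in program:
            break                      # halt: no rule applies
        state, write, move = program[key]
        tape[head] = write             # writing constraint
        head += move                   # reading/switching constraint
    return tape

# A one-rule program that writes 1s to the right of the head.
program = {(0, 0): (0, 1, +1)}
result = run(program, steps=5)
print(sorted(result.items()))
```

The hardware constraints (here hidden inside the Python interpreter) matter only insofar as they reliably hold the discrete states apart; the trajectory itself is, in the paper's sense, operationally physics-free.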
This illusion of isolation of symbols from
matter can also arise from the apparent arbitrariness of the epistemic cut. It is the essential
function of a symbol to ‘‘stand for’’ something
— its referent — that is, by definition, on the
other side of the cut. This necessary distinction
that appears to isolate symbol systems from the
physical laws governing matter and energy allows us to imagine geometric and mathematical
structures, as well as physical structures and
even life itself, as abstract relations and Platonic
forms. I believe this is the conceptual basis of
Cartesian mind– matter dualism. This apparent
isolation of symbolic expression from physics is
born of an epistemic necessity, but ontologically,
it is still an illusion. In other words, making a
clear distinction is not the same as isolation
from all relations. We clearly separate the genotype from the phenotype, but we certainly do
not think of them as isolated or independent of
each other. These necessary non-integrable equations of constraint that bridge the epistemic cut
and thereby allow for memory, measurement,
and control are on the same formal footing as
the physical equations of motion. They are
called non-integrable precisely because they cannot be solved or integrated independently of the
law-based dynamics. Consequently, the idea that
we could usefully study life without regard to
the natural physical requirements that allow effective symbolic control is to miss the essential
problem of life: how symbolic structures control
dynamics.
13. Real-life conditions for bridging the epistemic
cut
Finally, I will summarize some of the physical
requirements for successfully bridging the epistemic cut. In effect, we are answering von Neumann’s ‘‘most intriguing, exciting, and important
question of why the molecules… are the sort of
things they are.’’ First is the search problem. It
was a problem for Darwin, and with the discovery
of the DNA helix and the code that precisely
maps base sequences to protein sequences, the
search problem appeared worse. By assuming that
molecular details are significant, one sees a base
sequence space that is hopelessly large for any
detailed search. But while this assumption is correct for the symbolic side of the cut, we now know
that the assumption is wrong for the function on
the other side of the cut. Bridging the epistemic
cuts implies executing classifications of physical
details, and the quality of the classifications determines the quality of function. We know that
protein sequences are functionally highly redundant and that many amino acid replacements do
not significantly alter the function. We also know
that many base sequence aliases can construct
proteins with essentially the same shape. Also,
simplified models of RNA secondary folding suggest that the search is not like looking for a
specific needle in an infinite haystack, but looking
for any needle in a haystack full of needles that
are uniformly distributed (e.g. Schuster, 1994).
There is also evidence that the search is far more
efficient than classical blind variation. Artificial
genetic algorithms have shown unexpected success
in finding acceptable solutions for many types of
search problems that appear logically or algorithmically intractable.
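A minimal genetic-algorithm sketch (all names and parameters are illustrative, not taken from the paper) shows such a memory-guided search in miniature, including a redundant genotype-to-function map in the spirit of the amino acid replacements mentioned above:

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
L = len(TARGET)

def fitness(genome):
    # A redundant map: many distinct genomes score identically,
    # like sequence aliases that fold to essentially the same shape.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Heritable variation: each "base" flips with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population of gene sequences.
population = [[random.randint(0, 1) for _ in range(L)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == L:
        break
    parents = population[:10]              # truncation selection
    # Elitism: the current best sequence is inherited unchanged.
    population = [population[0]] + [mutate(random.choice(parents))
                                    for _ in range(29)]

best = max(population, key=fitness)
print(generation, fitness(best))
```

Even this crude variation-plus-selection loop locates high-fitness sequences in a space of 2^16 genomes after sampling only a tiny fraction of it, which is the sense in which such searches outperform classical blind variation.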
The second requirement is for reliable self-replication. This is a complex adaptive balancing act
between conflicting requirements at many levels.
On the one hand, complete reliability would not
allow any search, variation, or evolution at all.
On the other hand, too little reliability will produce extinction by an error catastrophe. At the
folding level where the degeneracy of base sequences is partially removed, there must be a
balance between a stable energy landscape to
allow rapid folding and permanence, and the
complex conformational degeneracies necessary
for flexible specific binding and rapid catalysis.
The folding process is uniquely complex in many
ways. It is a transformation across all three spatial
dimensions, over temporal scales covering many
orders of magnitude, and involving strong bonds
and many weaker forces in coherent highly nonlinear interactions. The complexity of any detailed
quantum mechanical description of such non-integrable constraints means that such folding problems can only be treated statistically. Even
formulating a microscopic description appears intractable. It is not even obvious that a linear
sequence of several hundred amino acids, or any
such heteropolymer, should fold reliably into a
specific globular shape. That such flexible globules
should be able to perform high-speed, highly specific catalysis is even less obvious. Yet we know
this is the case, and we usually take these incredible functions for granted (e.g. Frauenfelder and
Wolynes, 1994).
The last requirement I mentioned was how
smoothly variations in the genetic sequences can
produce adaptation in functions. Here, again,
there must be a balance between conflicting requirements. Rapid folding and stability of a
protein require steep energy landscapes, while optimization of function requires fine tuning of the
folded shape of the protein by small changes in
genetic sequences. This requires a relatively
smooth energy landscape. Balancing these requirements is eased by sufficiently large molecules
so that major folding conditions are buffered
from local fine-tuning changes in sequences (e.g.
Conrad, 1990). The degree to which these and
other requirements are met by natural selection
on the one hand and by non-selective ordering
principles on the other hand will only be decided
by an empirical study of the molecular details.
14. Conclusion
Pearson’s question is about understanding the
nature of life. Understanding will depend in part
on objective criteria like finding agreement with
facts. But there are also subjective criteria, such as
the level of abstraction one finds interesting, and
the degree of generality one can tolerate in analogies and metaphors. For example, Newton’s laws
abstract the earth to a mere mass point, an abstraction much too extreme to interest most biologists. Nevertheless, life could not evade these
laws, and the course of evolution is profoundly
affected by them, as Haldane (1927) pointed out
in his essay ‘‘On Being the Right Size’’. At the
other extreme, quantum theory is much too detailed to interest most biologists, but life could
not evade these laws either, and may in fact
depend on them in essential ways that we do not
yet appreciate. Evolution by heritable variations
and selection has resulted in many strange structures and behaviors, but all of them, without
exception, obey gravitational, electrodynamic,
and quantum and statistical dynamical laws. Artificial life models, complexity theory, and studies
of complex adaptive systems are motivated by the
hope of discovering more of these abstract and
general laws that life must follow.
But there is another type of subjective feeling
about understanding life that motivated Pearson’s
question, the same, I think, that motivated Lucretius’ and von Neumann’s questions. It is a
feeling of paradox, the same feeling that motivated Bohr, Wigner, Polanyi, the skeptics, and,
somewhat ironically, the founders of what is now
reductionist molecular biology, like Delbrück
(1949). They all believed that life follows laws, but
from their concept of law, they could not understand why life was so strikingly different from
non-life. So I find another way of asking this type
of question: What exactly does our view of universal dynamical laws abstract away from life, so that the striking distinctions between the living and the lifeless become obscure and apparently paradoxical?
My first answer is that dynamical language
abstracts away the subject side of the epistemic
cut. The necessary separation of laws and initial
conditions is an explicit principle in physics and
has become the basis (and bias) of objectivity in
all the sciences. The ideal of physics is to eliminate the subjective observer completely. It turned
out that at the quantum level, this is a fundamental impossibility, but that has not changed the
ideal. Physics largely ignores the exceptional effects of individual (subjective) constraints and
boundary conditions and focusses on the general
dynamics of laws. This is because constraints are
assumed to be reducible to laws (although we
know they are not reducible across epistemic cuts)
and also because the mathematics of complex
constraints is often unmanageable. Philosophers
have presented innumerable undecidable metaphysical models about the mind–brain cut, and
physicists have presented more precise but still
undecidable mathematical models about quantum
measurement. But at the primeval level, where it
all began, the genotype– phenotype cut is now
taken for granted as ordinary chemistry.
My second answer is that if you abstract away
the details of how subject and object interact, the
‘‘very peculiar range’’ of sizes and behaviors of
the allosteric polymers that connect subject and
object, the memory-controlled construction of
polypeptides, the folding into highly specific enzymes and other functional macromolecules, the
many-to-many map of sequences to structures, the
self-assembly, and the many conformation-dependent controls — in other words, if you ignore the
actual physics involved in these molecules that
bridge the epistemic cut, then it seems unlikely
that you will ever be able to distinguish living
organisms by the dynamic laws of ‘‘inorganic
corpuscles’’ or from any number of coarsegrained artificial simulations and simulacra of life.
Is it not plausible that life was first distinguished
from non-living matter, not by some modification
of physics, some intricate non-linear dynamics, or
some universal laws of complexity, but by local
and unique heteropolymer constraints that exhibit
detailed behavior unlike the behavior of any other
known forms of matter in the universe?
References
Anderson, P.W., 1972. More is different. Science 177, 393 –
396.
Anderson, P.W., 1997. Is measurement itself an emergent
property? Complexity 3 (1), 14 – 16.
Bohr, N., 1933. Light and life. Nature 131, 421. Reprinted in:
Bohr, N., Atomic Physics and Human Knowledge, John
Wiley, New York, p. 3.
Born, M., 1969. Symbol and reality. In: Physics in My Generation. Springer-Verlag, New York, pp. 132–146.
Brooks, R.A., 1992. Artificial life and real robots. In: Varela, F.J., Bourgine, P. (Eds.), Toward a Practice of Autonomous Systems. MIT Press, Cambridge, MA, pp. 3–10.
Brooks, R.A., Maes, P. (Eds.), 1994. Artificial Life 4. MIT Press, Cambridge, MA.
Burgers, J.M., 1965. Experience and Conceptual Activity. MIT Press, Cambridge, MA.
Cariani, P., 1992. Emergence and artificial life. In: Langton, C., Taylor, C., Farmer, J., Rasmussen, S. (Eds.), Artificial Life II. Addison-Wesley, Redwood City, CA, pp. 775–797.
Cassirer, E., 1957. The Philosophy of Symbolic Forms. In: The Phenomenology of Knowledge, vol. 3. Yale University Press, New Haven, CT.
Clark, A., 1997. Being There. MIT Press, Cambridge, MA.
Conrad, M., 1969. Computer experiments of the evolution of co-adaptation in a primitive ecosystem. Ph.D. dissertation, Biophysics Program, Stanford University, Stanford, CA.
Conrad, M., 1990. The geometry of evolution. BioSystems 24, 61–81.
Conrad, M., Pattee, H.H., 1970. Evolution experiments with an artificial ecosystem. J. Theor. Biol. 28, 393–409.
Cowan, G.A., Pines, D., Meltzer, D. (Eds.), 1994. Complexity, Metaphors, Models, and Reality. Addison-Wesley, Reading, MA.
Delbrück, M., 1949. The Connecticut Academy of Arts and Sciences 38, 173. Cited from: Cairns, J., Stent, G.S., Watson, J.D. (Eds.), Phage and the Origins of Molecular Biology. Cold Spring Harbor Laboratory of Quantitative Biology, 1966, p. 20.
Eden, R.J., 1951. The quantum mechanics of non-holonomic systems. Proc. Roy. Soc. (Lond.) 205A, 583–595.
Elsasser, W.M., 1975. The Chief Abstraction of Biology. North-Holland, Amsterdam.
Emmeche, C., 1994. The Garden in the Machine. Princeton University Press, Princeton, NJ, p. 158.
Fontana, W., 1992. Algorithmic chemistry. In: Langton, C., Taylor, C., Farmer, J., Rasmussen, S. (Eds.), Artificial Life II. Addison-Wesley, Redwood City, CA, pp. 159–209.
Frauenfelder, H., Wolynes, P.G., 1994. Biomolecules: where the physics of complexity and simplicity meet. Phys. Today 47, 58–64.
Goldstein, H., 1953. Classical Mechanics. Addison-Wesley, Cambridge, MA.
Goodwin, B., 1994. How the Leopard Changed Its Spots. Simon & Schuster, New York, p. 53.
Grene, M., 1974. The Knower and the Known. University of California Press, Berkeley, CA.
Haldane, J.B.S., 1927. On being the right size. In: Possible Worlds and Other Essays. Chatto & Windus, London. Reprinted in: J.M. Smith (Ed.), On Being the Right Size, Oxford University Press, Oxford, 1985.
Hertz, H., 1894. Die Principien der Mechanik in neuem Zusammenhange dargestellt. Leipzig. English translation: The Principles of Mechanics, Dover, New York, 1956.
Hoffmeyer, J., Emmeche, C., 1991. Code duality and the semiotics of nature. In: Anderson, M., Merrell, F. (Eds.), On Semiotic Modeling. Mouton de Gruyter, Berlin, pp. 117–166.
Holland, J., 1995. Hidden Order. Addison-Wesley, Reading, MA.
Kauffman, S., 1993. The Origins of Order. Oxford University Press, Oxford, pp. 25, 36.
Kelso, J.A.S., 1995. Dynamic Patterns. MIT Press, Cambridge, MA, p. 183.
Kendrew, J.C., 1967. Review of Phage and the Origins of Molecular Biology. Sci. Am. 216, 142.
Kull, K., 1998. On semiosis, Umwelt, and semiosphere. Semiotica 120 (3–4), 299–310.
Langton, C., 1989. Artificial life. In: Langton, C. (Ed.), Artificial Life. Addison-Wesley, Redwood City, CA, p. 15.
Mayr, E., 1997. This Is Biology. Harvard University Press, Cambridge, MA, p. 32.
Neimark, I.I., Fufaev, N.A., 1972. The Dynamics of Nonholonomic Systems. Translations of Mathematical Monographs, vol. 33. American Mathematical Society.
Pattee, H.H., 1961. On the origin of macromolecular sequences. Biophys. J. 1, 683–710.
Pattee, H.H., 1965. Recognition of heritable order in primitive chemical systems. In: Fox, S. (Ed.), The Origins of Prebiological Systems. Academic Press, New York.
Pattee, H.H., 1967. Quantum mechanics and the origin of life. J. Theor. Biol. 17, 410–420.
Pattee, H.H., 1969a. How does a molecule become a message? In: Lang, A. (Ed.), 28th Symposium of the Society of Developmental Biology. Academic Press, New York, pp. 1–16.
Pattee, H.H., 1969b. Physical problems of heredity and evolution. In: Waddington, C.H. (Ed.), Towards a Theoretical Biology 2. Edinburgh University Press, Edinburgh, p. 274.
Pattee, H.H., 1982. Cell psychology: an evolutionary approach to the symbol–matter problem. Cogn. Brain Theory 5 (4), 325–341.
Pattee, H.H., 1986. Universal principles of measurement and language functions in evolving systems. In: Casti, J.L., Karlqvist, A. (Eds.), Complexity, Language, and Life. Springer-Verlag, Berlin, pp. 579–581.
Pattee, H.H., 1995. Evolving self-reference: matter, symbols, and semantic closure. Commun. Cogn.–Artif. Intell. 12 (1–2), 9–27.
Pearson, K., 1937. The Grammar of Science. J.M. Dent and Sons, London, p. 287. First edition published 1892.
Pickering, A., 1984. Constructing Quarks: A Sociological History of Particle Physics. Edinburgh University Press, Edinburgh.
Polanyi, M., 1964. Personal Knowledge. Harper, New York.
Polanyi, M., 1968. Life’s irreducible structure. Science 160, 1308–1312.
Ray, T., 1992. An approach to the synthesis of life. In: Langton, C., Taylor, C., Farmer, J., Rasmussen, S. (Eds.), Artificial Life II. Addison-Wesley, Redwood City, CA, pp. 371–408.
Rosen, R., 1987. On the scope of syntactics in mathematics and science: the machine metaphor. In: Casti, J.L., Karlqvist, A. (Eds.), Real Brains, Artificial Minds. North-Holland, New York, pp. 1–23.
Rosen, R., 1991. Life Itself. Columbia University Press, New York.
Schuster, P., 1994. How do DNA molecules and viruses explore their worlds? In: Cowan, G.A., Pines, D., Meltzer, D. (Eds.), Complexity, Metaphors, Models, and Reality. Addison-Wesley, Reading, MA, pp. 383–414.
Shear, J., 1997. Explaining Consciousness: The Hard Problem. MIT Press, Cambridge, MA.
Sommerfeld, A., 1956. Mechanics. Academic Press, New York, p. 80.
Stent, G.S., 1966. In: Cairns, J., Stent, G.S., Watson, J.D. (Eds.), Phage and the Origins of Molecular Biology. Cold Spring Harbor Laboratory of Quantitative Biology, Cold Spring Harbor, NY, p. 4.
Taylor, J.G., 1999. The Race for Consciousness. MIT Press, Cambridge, MA.
Varela, F.J., Thompson, E., Rosch, E., 1991. The Embodied Mind. MIT Press, Cambridge, MA.
von Neumann, J., 1955. The Mathematical Foundations of Quantum Mechanics. Princeton University Press, Princeton, NJ.
von Neumann, J., 1966. In: Burks, A.W. (Ed.), Theory of Self-reproducing Automata. University of Illinois Press, Chicago, IL.
Waddington, C.H. (Ed.), 1968–1972. Towards a Theoretical Biology. Edinburgh University Press, Edinburgh. Vol. 1, Prolegomena, 1968; Vol. 2, Sketches, 1969; Vol. 3, Drafts, 1970; Vol. 4, Essays, 1972.
Weyl, H., 1949. Philosophy of Mathematics and Natural Science. Princeton University Press, Princeton, NJ, p. 237.
Whittaker, E.T., 1944. A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, fourth ed. Dover, New York, p. 33 and Chapter VIII.
Wigner, E.P., 1960. The unreasonable effectiveness of mathematics in the natural sciences. Commun. Pure Appl. Math. 13, 1–14.
Wigner, E.P., 1961. On the impossibility of self-replication. In: The Logic of Personal Knowledge. Kegan Paul, London, p. 231.
Wigner, E.P., 1964. Events, laws, and invariance principles. Science 145, 995–999.
Wigner, E.P., 1982. The limitations of the validity of present-day physics. In: Mind in Nature. Nobel Conference XVII. Harper & Row, San Francisco, CA, p. 119.