John Collier
  • Philosophy, UKZN
    King George V Avenue
    Durban, 4041
    South Africa
  • +27720903176
Every Thing Must Go argues that the only kind of metaphysics that can contribute to objective knowledge is one based specifically on contemporary science as it really is, and not on philosophers' a priori intuitions, common sense, or simplifications of science. In addition to showing how recent metaphysics has drifted away from connection with all other serious scholarly inquiry as a result of not heeding this restriction, the authors demonstrate how to build a metaphysics compatible with current fundamental physics ('ontic structural realism'), which, when combined with their metaphysics of the special sciences ('rainforest realism'), can be used to unify physics with the other sciences without reducing these sciences to physics itself. Taking science metaphysically seriously, Ladyman and Ross argue, means that metaphysicians must abandon the picture of the world as composed of self-subsistent individual objects, and the paradigm of causation as the collision of such objects.

Every Thing Must Go also assesses the role of information theory and complex systems theory in attempts to explain the relationship between the special sciences and physics, treading a middle road between the grand synthesis of thermodynamics and information, and eliminativism about information. The consequences of the authors' metaphysical theory for central issues in the philosophy of science are explored, including the implications for the realism vs. empiricism debate, the role of causation in scientific explanations, the nature of causation and laws, the status of abstract and virtual objects, and the objective reality of natural kinds.
Evolutionary moral realism is the view that there are moral values with roots in evolution that are both specifically moral and exist independently of human belief systems. In beginning to sketch the outlines of such a view, we examine moral goods like fairness and empathetic caring as valuable and real aspects of the environments of species that are intelligent and social, or at least developing along an evolutionary trajectory that could lead to a level of intelligence that would enable individual members of the species to recognize and respond to such things as the moral goods they in fact are. We suggest that what is most morally interesting and important from a biological perspective is the existence and development of such trajectories, rather than the position of one particular species, such as our own, on one particular trajectory.
We argue that living systems process information such that functionality emerges in them on a continuous basis. We then provide a framework that can explain and model the normativity of biological functionality. In addition we offer an explanation of the anticipatory nature of functionality within our overall approach. We appeal to a Peircean approach to semiotics, and especially to Biosemiotics, as well as to a dynamical approach to Digital-Analog relations and the interplay between different levels of functionality in autonomous systems, taking an integrative approach. We then apply the underlying logic to a particular biological system, giving a model of the BCR signaling system, in order to demonstrate how biosemiotic concepts can be used to build an account of biological information and functionality. Next we show how this framework can be used to explain and model more complex aspects of biological normativity, for example, how cross-talk between different signaling pathways can be avoided. Overall, we describe a robust theoretical framework for the emergence of normative functions and, consequently, for the way information is transduced across several interconnected organizational levels in an autonomous system, and we demonstrate how this can be applied in real biological phenomena. Our aim is to open the way towards realistic tools for the modeling of information and normativity in autonomous biological agents.
It is generally agreed that organisms are Complex Adaptive Systems. Since the rise of Cybernetics in the middle of the last century, ideas from information theory and control theory have been applied to the adaptations of biological organisms in order to explain how they work. This does not, however, explain functionality, which is widely but not universally attributed to biological systems. There are two approaches to functionality, one based on etiology (what a trait was selected for), and the other based on autonomy. I argue that the etiological approach, as understood in terms of control theory, suffers from a problem of symmetry, by which function can equally well be placed in the environment as in the organism. Focusing on the autonomy view, I note that it can be understood to some degree in terms of control theory in its version called second-order cybernetics. I present an approach to second-order cybernetics, due to Hooker, Penfold and Evans, that seems plausible for organisms with limited computational power. They hold that this approach gives something like concepts, certainly abstractions from specific situations, a trait required for functionality in its system-adaptive form (i.e., control of the system by itself). Using this cue, I argue that biosemiotics provides the methodology to incorporate these quasi-concepts into an account of functionality.
The paradigm of Laplacean determinism combines three regulative principles: determinism, predictability, and the explanatory adequacy of universal laws together with purely local conditions. Historically, it applied to celestial mechanics, but it has been expanded into an ideal for scientific theories whose cogency is often not questioned. Laplace’s demon is an idealization of mechanistic scientific method. Its principles together imply reducibility, and rule out holism and emergence. I will argue that Laplacean determinism fails even in the realm of planetary dynamics, and that it does not give suitable criteria for explanatory success except within very well defined and rather exceptional domains. Ironically, the very successes of Laplacean method in the Solar System were made possible only by processes that are not themselves tractable to Laplacean methodology. The results of some of these processes were first observed in 1964, and violate the Laplacean requirements of locality and predictability, opening the door to holism and nonreducibility, i.e., emergence. Despite the falsification of Laplacean methodology, the explanatory resources of holism and emergence remain in scientific limbo, though emergence has been used somewhat indiscriminately in recent scientific literature. I make some remarks at the end about the proper use of emergence in its traditional sense going back to C.D. Broad.
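The failure of predictability that the abstract describes can be illustrated in miniature with a toy chaotic system. The logistic map is a standard pedagogical stand-in (it is not the planetary dynamics discussed in the paper): two initial conditions agreeing to ten decimal places become completely uncorrelated within a few dozen iterations, so in-principle determinism yields no practical predictability.

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0).
# Purely pedagogical; not the celestial-mechanics case in the text.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # differs only in the 10th decimal

# Early on the trajectories are practically indistinguishable...
print(abs(a[5] - b[5]))
# ...but after ~50 steps the error is of order 1: prediction has failed,
# even though each step is strictly deterministic.
print(abs(a[50] - b[50]))
```

The tiny initial error roughly doubles each step, so determinism at the level of the rule coexists with a rapid loss of predictability at the level of trajectories.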
There are many different mathematical definitions of information that have their various uses, but I will be concerned with notions of information used in applications in various branches of science that are distinguished by their topic, i.e., what they apply to. I describe the major uses of information, and show their relations to each other. I will argue that the various uses form a nested hierarchy, in which each is a restriction on the previous, inheriting the properties of its predecessor, but adding new features that make it a special case. The lowest level is physical information determined by distinctions and the highest is explicit representation in linguistic social communication. Is there anything common to information at all these levels? I will argue that there is, and that information in each case is what Donald MacKay (1969) called a distinction that makes a difference. What distinguishes the use of information at each level is what distinctions make a causal difference at that level. At each successive level distinctions that make a difference at a previous level make no difference at that level. In order to create this sort of filter, new levels have to be formed by cohesion peculiar to the identifying characteristics at that level. A consequence of this view is that information must have causal powers, and that there is a tight connection between information and causation.
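MacKay's slogan, information as a distinction that makes a difference, has a standard quantitative counterpart in Shannon's mutual information: a distinction (random variable X) "makes a difference" to Y exactly when I(X;Y) > 0. A minimal sketch (the joint distribution is invented for illustration and is not from the paper):

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * log2(x) for x in p if x > 0)

# Invented toy joint distribution P(X, Y) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distributions of X and Y.
px = [sum(v for (x, _), v in joint.items() if x == a) for a in (0, 1)]
py = [sum(v for (_, y), v in joint.items() if y == b) for b in (0, 1)]

# I(X;Y) = H(X) + H(Y) - H(X,Y): how much the distinction X
# "makes a difference" to Y. It is zero iff X and Y are independent.
mi = entropy(px) + entropy(py) - entropy(list(joint.values()))
print(round(mi, 3))  # positive: this distinction makes a difference
```

On this reading, a level-specific "filter" corresponds to which variables have nonzero mutual information with the goings-on at that level.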
Almost fifty years ago Wilfrid Sellars described two competing ways of imagining the world, the Manifest Image and the Scientific Image. The Manifest Image is an idealization of common sense aided by critical philosophy, whereas the Scientific Image is the product of our best science. The methodologies of the two images are very different: the Manifest Image deals with experience and looks only at relations among bits of experience and analysis of experience into the relations that must lie behind it, whereas the Scientific Image is grounded in explanations of experience, typically causal explanations. This need not be a problem if the two images are compatible. Sellars argued, however, that the Manifest Image implies continuity, but the best science of the time told us (or appeared to tell us) that the world is made up of discrete subatomic particles and discrete transitions between quantum states, making the two incompatible. Although Sellars noted that future science might show that the world is continuous, he did not follow this up. Science in the last fifty years has given much more evidence for continuity in the world from complexity studies and Quantum Mechanics, so perhaps the two images can be reconciled after all.
Both natural and engineered systems are fundamentally dynamical in nature: their defining properties are causal, and their functional capacities are causally grounded. Among dynamical systems, an interesting and important sub-class are those that are autonomous, anticipative and adaptive (AAA). Living systems, intelligent systems, sophisticated robots and social systems belong to this class, and the use of these terms has recently spread rapidly through the scientific literature. Central to understanding these dynamical systems is their complicated organisation and their consequent capacities for re- and self-organisation. But there is at present no general analysis of these capacities or of the requisite organisation involved. We define what distinguishes AAA systems from other kinds of systems by characterising their central properties in a dynamically interpreted information theory.
Consideration of an example of successful reference gives rise to two important insights. The first is that reference should be understood most fundamentally in terms of the pragmatic success of each individual utterance. The second is that linguistic conventions need to be understood as on a par with the non-linguistic regularities that competent language users rely upon to refer. Syntax and semantics are part of what Barwise and Perry (1983) call the context of the utterance, contributing to the pragmatics of the utterance.
We show why reference should be understood in pragmatic terms and point out that, since success is often achieved in non-standard, creative ways, any formalization of pragmatics can only be partial. We show that the need for such an inventive approach to referring traces back to the need for language to be highly efficient, with expressions underdetermining their interpretation. Our second step is to argue that the semantic and syntactic regularities, which might seem to be independent of the context of an utterance, should actually be understood as also being part of that context. In doing so, our account spells out some of the possible implications of Millikan’s (1998) account of conventions and how it makes the creative use of language possible.
It has been suggested that thermodynamic irreversibility is merely a subjective result of our lack of knowledge of the world. This view stems from two sources: first, that the fundamental (and hence presumably objective) laws of dynamics are time reversible, and, second, that probability is merely a measure of our lack of information. The exorcism of the naturalistic Maxwell's demon shows that irreversibility is more than "merely" a consequence of our lack of information, since there is nothing we can do about it. Any new information we can get won't change the fact of irreversibility. The initial state of our information is part of the state of the world we need to take into consideration. On the other hand, the possibility of fluctuation-driven sorting, like Loschmidt's Paradox, shows that the Second Law cannot be derived from dynamics. It is irreducibly statistical. The fluctuation argument goes further than the reversibility argument, however. It shows not only that entropy decrease is dynamically possible, but that prior entropy increases can be reversed, however unlikely or uncontrollable such reversals are. Reversible systems are possible, but they are highly unlikely. Taking these results together suggests the conclusion that there are both objectively reversible systems and objectively irreversible systems, as Prigogine and Stengers have argued. The problem of justifying the Second Law isn't to show why it is necessary, but to determine what types of systems obey it, and why they are so common.
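The claim that the Second Law is irreducibly statistical can be illustrated with the Ehrenfest urn model (a standard textbook toy model, not one discussed in the abstract): at each step a randomly chosen ball moves between two urns. The microdynamics is reversible in principle, yet an unbalanced start almost always relaxes toward an even split, while individual fluctuations against the drift remain possible.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def ehrenfest(n_balls=100, steps=2000, left=100):
    """Ehrenfest urn: each step, a uniformly chosen ball jumps to the
    other urn. Returns the trajectory of the left-urn count."""
    counts = [left]
    for _ in range(steps):
        # With probability counts[-1]/n_balls the chosen ball is in the
        # left urn, so the left count decreases; otherwise it increases.
        if random.random() < counts[-1] / n_balls:
            counts.append(counts[-1] - 1)
        else:
            counts.append(counts[-1] + 1)
    return counts

traj = ehrenfest()
# Starting with all 100 balls on the left, the count drifts toward 50...
print(traj[0], traj[-1])
# ...but steps against the drift (local "entropy decreases") still occur,
# which is why the Second Law cannot be a strict dynamical necessity.
upticks = sum(1 for a, b in zip(traj, traj[1:]) if b > a and a > 50)
print(upticks > 0)
```

The model makes the paper's point vivid: what needs explaining is not why entropy decrease is impossible (it is not), but why systems that reliably relax are so common.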
Emergence is a term used in many contexts in current science; it has become fashionable. It has a traditional usage in philosophy that started in 1875 and was expanded by J.S. Mill (earlier, under a different term) and C.D. Broad. It is this form of emergence that I will be concerned with here. I will distinguish it from uses like ‘computational emergence’, which can be reduced to combinations of program steps, or its application to merely surprising new features that appear in complex combinations of parts. I will be concerned specifically with ontological emergence that has the logical properties required by Mill and Broad (though there might be some quibbling about the details of their views). I restrict myself to dynamical systems that are embodied in processes. Everything that we can interact with through sensation or action is either dynamical or can be understood in dynamical terms, so this covers all comprehensible forms of emergence in the strong (nonreducible) sense I use. I will give general dynamical conditions that underlie the logical conditions traditionally assigned to emergence in nature. The advantage of this is that, though we cannot test logical conditions directly, we can test dynamical conditions. This gives us an empirical and realistic form of emergence, contrary to those who say it is a matter of perspective.
The subject of this chapter is the identity of individual dynamical objects and properties. Two problems have dominated the literature: transtemporal identity and the relation between composition and identity. Most traditional approaches to identity rely on some version of classification via essential or typical properties, whether nominal or real. Nominal properties have the disadvantage of producing unnatural classifications, and have several other problems. Real properties, however, are often inaccessible or hard to define (strict definition would make them nominal). I suggest that classification should be in terms of dynamical properties of systems, starting with individual systems rather than classes, and working up by abstractions that fit causal generalities. The advantage of this approach is that individuality is testable and revisable as we come to know more about systems. Another advantage is that if anything is real, then it is the dynamical. Once I have presented this approach in general, I will show that the central concept of dynamical cohesion (the "dividing glue") is amenable to giving a principled account of individuation as a process, at the same time explaining the origin of diversity. Some other advantages of this approach are presented, including how it can be used as a basis for testable classifications. This last has moral implications, since cohesion at the individual and the social levels, and their interactions, can impinge on proper moral decisions.
Emergence has traditionally been described as satisfying specific properties, notably nonreducibility of the emergent object or properties to their substrate, novelty, and unpredictability from the properties of the substrate. Sometimes more mysterious properties such as independence from the substrate, separate substances and teleological properties are invoked. I will argue that the latter are both unnecessary and unwarranted. The descriptive properties can be analyzed in more detail in logical terms, but the logical conditions alone do not tell us how to identify the conditions through interactions with the world. In order to do that we need dynamical properties – properties that do something. This paper, then, will be directed at identifying the dynamical conditions necessary and sufficient for emergence. Emergent properties and objects all result from, or are maintained by, dissipative and radically nonholonomic processes. Emergent properties are relatively common in physics, but have been ignored because of the predominant use of Hamiltonian methods assuming energy conservation. Emergent objects are all dissipative systems, which have been recognized as special only in the past fifty years or so. Of interest are autonomous systems, including living and thinking systems. They show functionality and are self-governed.
Anticipation allows a system to adapt to conditions that have not yet come to be, either externally to the system or internally. Autonomous systems actively control their own conditions so as to increase their functionality (they self-regulate). Living systems self-regulate in order to increase their own viability. These increasingly stronger conditions, anticipation, autonomy and viability, can give an insight into progressively stronger classes of models of autonomy. I will argue that stronger forms are the relevant ones for Artificial Life. This has consequences for the design and accurate simulation of living systems.
Keywords: autonomy, modelling, function, simulation, anticipation, emergence
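The difference between reacting and anticipating can be sketched computationally: a reactive controller responds to the current deviation, while an anticipatory one acts on a prediction from an internal model of how conditions are changing. The controller, its names, and all numbers below are invented purely for illustration.

```python
def anticipatory_heater(temp_history, target=20.0, horizon=3):
    """Toy anticipatory controller (hypothetical example): extrapolate
    the recent temperature trend — a crude internal model — and act on
    the *predicted* deviation rather than the current one."""
    current = temp_history[-1]
    trend = temp_history[-1] - temp_history[-2]  # simple linear model
    predicted = current + horizon * trend
    return predicted < target  # heat now if we expect to be cold later

# Temperature is currently at target (20.0) but falling fast:
# an anticipatory system heats before any deviation has occurred.
print(anticipatory_heater([21.0, 20.5, 20.0]))
# Currently below target but rising: it may correctly do nothing.
print(anticipatory_heater([18.5, 19.2, 19.9]))
```

The second case is where anticipation earns its keep: a purely reactive controller would heat (the current reading is below target), wasting effort on a deviation that is already correcting itself.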
Symmetry often indicates the deep structure of things; for example, the conservation laws of physics and the symmetries found in religious artifacts. However, symmetry implies invariance under transformation (redundancy), a reduction of information content. This presents a paradox: although many symmetries surprise us and surprise implies new information, far from creating the unexpected, symmetries ensure that the known can be extended through invariant transformations. Rhythmic entrainment is the formation of regular, predictable patterns in time or space through interactions within or between systems. It is the complement to symmetry breaking. Entrainment can be either forced or spontaneous, the latter resulting from spontaneous self-organization. Interestingly, spontaneous entrainment requires less power to form and maintain. Spontaneous organization is efficient but hard to control, whereas forced order is more predictable, but wastes energy. I make some suggestions on how a form of management called “facilitative” can help to reconcile these two extremes.
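The link the abstract draws between symmetry, redundancy, and reduced information content can be made concrete with a compressor: a translation-symmetric (periodic) string compresses far better than an irregular one of the same length. A rough sketch, using zlib-compressed size as a crude stand-in for information content:

```python
import random
import zlib

random.seed(0)  # reproducible "irregular" data

n = 10_000
symmetric = b"ABCD" * (n // 4)  # invariant under a shift of 4 bytes
irregular = bytes(random.randrange(256) for _ in range(n))  # no symmetry

# Compressed size approximates information content: invariance under
# transformation (symmetry) is redundancy the compressor can exploit.
size_sym = len(zlib.compress(symmetric))
size_irr = len(zlib.compress(irregular))
print(size_sym < size_irr)  # the symmetric string carries less information
```

This is the paradox in miniature: the periodic string may look "structured" and even striking, yet by any compression measure it contains far less information than the patternless one.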
The most casual observer notices that order, complexity and organization are found in biological organisms. The most striking evidence for evolution is the regular increase in these properties displayed in the fossil succession. Despite this, the core of conventional selectionist evolutionary theory avoids mentioning the order concepts altogether. Natural selection of fit traits merely requires that fitness promotes reproduction in a line of descent. The order concepts are required only to account for the promotion of reproduction, which is kept out of the core of the theory, and relegated to boundary conditions. This tactic is peculiar at best, since it deliberately relegates explanation of an obvious fact of biology to the periphery of evolutionary theory. Furthermore, as I will argue later, if the environment is the only cause of biological order, cyclic changes in environmental characteristics affecting fitness should result in cyclic changes in biological forms. This is a direct result of omitting order concepts from the core of evolutionary theory. A unified general theory of evolution must include order among its key concepts. Rigorous, and preferably quantitative, definitions of order, complexity and organization would be helpful for formulating and comparing both specific evolutionary hypotheses and general evolutionary theories. These concepts are vague in ordinary usage, so our common notions must be replaced with more precise concepts. The precise and quantitative concept of entropy is widely thought to underlie order through its relation to information. The connection of these concepts to complexity and organization is less clear. Whatever the connection is, it isn't straightforward: It isn't possible to have organization without order, but it is possible to have complex disorder. Despite this, the information required to specify any complex or any organized system is high. 
This suggests that entropy and information are not related in any simple way, contrary to the observations of both communications theorists (Shannon and Weaver, 1949) and measurement theorists (Brillouin, 1962). I believe that a theory of information coded in physical systems is required to resolve the interrelationships of entropy, information, order, complexity and organization (Collier, 1986). Before explaining this theory, though, I would like to describe a related problem concerning biological order. Most things in the world, if left alone, tend to disintegrate rather than organize themselves. Biological order, on the other hand, appears to originate spontaneously. This peculiarity led Schrödinger (1945) to describe life as negentropic. This leads to a paradox: why should life be negentropic if the physical and chemical processes that underlie it are entropic? The usual response is that life depends on the existence of an entropy gradient around it, and that it gets its order at the expense of the surrounding environment. This answer is not entirely satisfactory, since although it shows the possibility of negentropic systems, it does not show why they come to exist. This requires an understanding of the dynamics of biological order. Since the rise of the neo-Darwinian synthesis, and especially with the successes of molecular biology since 1953, the genetic code has been seen as the key to understanding the nature and evolution of biological order. Biological organisms themselves do not evolve; they live and die. Nor does the material substance of organisms evolve; it becomes a part of some organism, and then ceases to be a part of that organism. Not even the forms of organisms evolve; forms appear and disappear as the organisms that have them are born and die. What evolves is an historical sequence of forms, each causally related to its predecessor. The continuity between forms is provided by the information transmitted to successors through replication. The medium of transmission (largely, if not exclusively) is the genetic code. Fundamentally, biological evolution is the evolution of genetic encodings, the physical embodiment of the information responsible for biological organization.
Anticipation allows a system to adapt to conditions that have not yet come to be, either externally to the system or internally. Autonomous systems actively control the conditions of their own existence so as to increase their overall viability. This paper will first give minimal necessary and sufficient conditions for autonomous anticipation, followed by a taxonomy of autonomous anticipation. In more complex systems, there can be semi-autonomous subsystems that can anticipate and adapt on their own. Such subsystems can be integrated into a system’s overall autonomy, typically with greater efficiency due to modularity and specialization of function. However, it is also possible that semi-autonomous subsystems can act against the viability of the overall system, and have their own functions that conflict with overall system functions.
Keywords: anticipation, autonomy, models, representation, function.
Formal pragmatics plays an important, though secondary, role in modern analytical philosophy of language: its aim is to explain how context can affect the meaning of certain special kinds of utterances. During recent years, the adequacy of formal tools has come under attack, often leading to one or another form of relativism or antirealism. Our aim will be to extend the critique to formal pragmatics while showing that sceptical conclusions can be avoided by developing a different approach to the issues. In particular, we will show that formal pragmatics cannot provide a complete account of how context affects the meaning of utterances, both on its own terms and when faced with evidence of important aspects of natural languages. The focal issue is the relevant kind of context in which pragmatics should examine utterances. Our contention will be that the relevant context of an utterance is determined by the function of that utterance, this function being dependent upon the primary function of language – to convey information. We will argue that the functions of utterances and of language are too broad to be caught by the tools of formal pragmatics of the sort advocated by Montague (1968, 1974), which are an extension of the methods of traditional model-theoretic semantics. The particular formal approach we will use as the main example is David Kaplan’s position (1979, 1989), an extension of Montague’s program.
Economic logic impinges on contemporary political theory through both economic reductionism and economic methodology applied to political decision making (through game theory). We argue that the sorts of models used are based on mechanistic and linear methodologies that have now been found wanting in physics. We argue further that complexity-based self-organization methods are better suited to model the complexities of economy and polity and their interactions with the overall social system.
I review some concepts of information about the world (Dretske, Carnap and Bar-Hillel, Barwise and Perry) and argue that they have various problems. Dretske and Barwise and Perry use frankly intentional accounts of information, which I think is on the right track for an account of representation. This approach fails, though, when we need to look at the objects of representations in the world. I argue that for information accounts to work, the world must conform to the mathematical structure of these accounts. Fortunately, there is an existing account of physical information that fits this requirement, giving both mind and world a common currency. I make some fine distinctions within the idea of physical information, and sketch how this allows information to flow from the world to our minds.
Contents
1. Introduction
2. Autonomy, a key concept
3. Peirce’s categorisation of signs and the pragmatics of interpretation
4. Information and causation
5. The importance of dissipativity for the possibility of reference
6. Some simple signs: nominal and genuine signs
7. Representational autonomy
8. Conceptual autonomy
9. Conclusions
The main reason I post this is the continuing hold of the fallacy of the irreversibility of quantum measurements and of the idiotic idea of the "collapse of the wave packet".

"The chapters by physicists James Leggett and Phil Stamp deal with the distinction between quantum decoherence and dissipation. Although it has been widely remarked that quantum mechanics is formally reversible, many have thought that the 'collapse of the wave packet' implies that measurement imposes a direction on time. Leggett and Stamp thoroughly refute this position by distinguishing between decoherence and the usual statistical mechanical dissipation. Although they are not essential to the basic argument, 'macroscopic' quantum systems demonstrate that decoherence is reversible. The so-called collapse of the wave packet introduces nothing new to the problem of the direction of time."
Supervenience is a relationship which has been used recently to explain the physical determination of biological phenomena despite resistance to reduction (Rosenberg, 1978, 1985; Sober, 1984a). Supervenience, however, is plagued by ambiguities which weaken its explanatory value and obscure some interesting aspects of reduction in biology. Although I suspect that similar considerations affect the use of supervenience in ethics and the philosophy of mind, I don't intend anything I have to say here to apply outside of the physical and biological cases I consider. The main point of this paper is that there is a property of biological systems which makes it both misleading and inappropriate to reduce central biological phenomena to the properties of underlying components. Despite this, reductive explanation has been a major source of innovation in biological theory. The apparent tension can be resolved if underlying properties are explanatorily relevant to the higher level phenomena even though the latter are not strictly reducible to the former.
Supervenience, I will argue, is not robust enough to deny reduction while supporting explanatory relevance. The required property entails supervenience, but is not entailed by it. I call this property cohesion. Roughly, a system is cohesive if there are causal interactions among its parts which make it insensitive to fluctuations in the properties of its lower level components. When a system is cohesive, these fluctuations are irrelevant to its state description, and it is both pointless and misleading to describe the system in terms of the properties of its lower level components.
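The insensitivity at the heart of cohesion can be given a deliberately simplified numerical illustration (my example, not from the paper; plain averaging here stands in for the stabilizing causal interactions among parts): different micro-level realizations of a system can yield nearly identical higher-level state descriptions.

```python
import random

def macro_state(n: int = 10000, seed: int = 0) -> float:
    """Mean of many fluctuating components: a higher-level state
    description that is insensitive to individual fluctuations."""
    random.seed(seed)
    parts = [random.gauss(1.0, 0.5) for _ in range(n)]
    return sum(parts) / n

# Two different micro-level realizations, nearly the same macro description;
# describing the system via the 10,000 component values would be misleading.
a, b = macro_state(seed=1), macro_state(seed=2)
print(abs(a - b) < 0.05)  # True: fluctuations wash out at the higher level
```

In a genuinely cohesive system the interactions among parts, not mere statistics, do the stabilizing, but the explanatory point is the same: the lower-level fluctuations drop out of the higher-level state description.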
Putnam's writings on realism have stirred up a rash of responses which raise questions both about Putnam's argument against metaphysical realism, and about internal realism, his positive view. A number of papers question Putnam's "brain in a vat" argument by trying to show that Putnam equivocates. Others argue that Putnam relies too heavily on formal methods, or on an overly restricted formalism. Internal realism has been accused of being incapable of giving a complete account of truth, and of being at least as problematic as metaphysical realism. I will argue that all of these responses fail to come to terms with Putnam's position through a common failing. I believe that isolating this failing will help to focus the debate.
Daniel R. Brooks and E. O. Wiley have proposed a theory of evolution in which fitness is merely a rate determining factor. Evolution is driven by nonequilibrium processes which increase the entropy and information content of species together. Evolution can occur without environmental selection, since increased complexity and organization result from the likely "capture" at the species level of random variations produced at the chemical level. Speciation can occur as the result of variation within the species which decreases the probability of sharing genetic information. Critics of the Brooks-Wiley theory argue that they have abused terminology from information theory and thermodynamics. In this paper I review the essentials of the theory, and give an account of hierarchical physical information systems within which the theory can be interpreted. I then show how the major conceptual objections can be answered.
Thomas Kuhn (1962) proposed that there are theories which are not only incompatible but also semantically incommensurable in order to explain historical evidence that scientists who hold consecutive theories often fail to come to terms with each other, being unable to resolve differences by appeal to evidence, authority or convention. Despite Kuhn's objections, this thesis has generally been interpreted by friends and foes alike so as to preclude direct rational communication across revolutionary divides in science. In this paper, I will sketch a weaker form of incommensurability which allows eventual comparison of incommensurable theories, but is consistent with Kuhn's model of science.
I argue that Hanson, Kuhn, Feyerabend and Churchland are correct in thinking that observation is theory laden in a way which infects our choice of theories. Although I agree with critics of theory ladenness that theory independent observation is possible, I hold that evaluation of the evidential significance of observations for a theory must rely on the conceptual resources of the theory. Empirical evidence is either not interpreted, and is not rich enough to guide theory choice, or else it is rich enough to guide theory choice, and its significance depends on the theory for which it is evidence. Either we avoid theory ladenness and severely limit science, or else we accept theory ladenness and its consequences. Given a preference for a non-trivial science, we must ask how the difficulties theory ladenness presents for the objective evaluation of theories can be alleviated. I infer some lessons from the analogy of perception.
Although the complexity of biological systems and subsystems like DNA and various transcription and translation pathways is of interest in itself, organization is of fundamental importance to understanding biological systems. It would be convenient to have a general definition of organization applicable to biological systems. I propose that C.H. Bennett’s notion of logical depth is a suitable candidate. I discuss the problems with using complexity measures alone, and then the relations between logical depth and algorithmic complexity. Last, I give some examples in which depth gives a better measure of what might naively be taken by any biologist to be complexity in biological systems, and then argue that this must be augmented by consideration of dynamical processes.
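The contrast between algorithmic complexity and logical depth can be illustrated with a toy computation (my example, not Bennett's). Logical depth is roughly the running time of the shortest program producing an object; both it and algorithmic complexity are uncomputable, so the sketch below uses zlib compressed size as a crude upper-bound proxy for description length.

```python
import random
import zlib

def approx_complexity(data: bytes) -> int:
    """Compressed size as a crude upper bound on algorithmic complexity."""
    return len(zlib.compress(data, 9))

random.seed(0)
random_str = bytes(random.getrandbits(8) for _ in range(4096))  # incompressible
repetitive = b"AB" * 2048                                       # trivially compressible

# A random string has high complexity but is "shallow": its shortest
# description is essentially itself, and printing it takes no work.
# A repetitive string has low complexity and is also shallow.
# Deep objects -- the biologically interesting case -- have short
# descriptions whose unfolding requires long computation, so neither
# complexity extreme above captures them.
print(approx_complexity(random_str) > approx_complexity(repetitive))  # True
```

The point of the sketch is negative: a bare complexity measure ranks the random string highest, yet it is the organized-but-compressible object whose generation took real computational (or developmental) work.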
The issue of ecosystem individuation is of both theoretical and practical importance. Ecosystems are dynamical systems, so a dynamical account of ecosystem is more appropriate than a static definition. Dynamical definitions are also more useful if we want to study ecosystem change and the possible limits of that change. A dynamical account is especially useful for ecosystem management and intervention, since, aside from the issue of matching management scale with ecosystem scale, these are dynamical interactions themselves, and their dynamics must be incorporated into the existing ecosystem dynamics. Because ecosystems are typically complexly organized, and thus not subject to one grand model, it is useful to develop a number of working models that can be applied in specific cases as appropriate. In many cases more than one model or metamodel will apply, and different models can be used to constrain each other, especially in cases where ecosystems skirt the borders of specific metamodels.
The best management model for self-organized complex social systems encourages diversity with minimal top-down control. Any top-down control should work with the natural properties of the system. The outcome cannot be determined in advance. I call this facilitation.
Dynamical systems theory applies to anything that changes with time. In mathematics this is interpreted rather broadly, but in physics, and often in other sciences, it applies to systems with forces and flows, often in a network, that are typically open to exchanges with the outside. This makes it well suited to the study of ecosystems. Ecosystems are not only open to outside influences, but are often nested by scale in space and time. One of the first problems in discussing ecosystem function, then, is to give a definition of ecosystem individuation and its consequences. One of the consequences is that it is reasonable to define functionality within an ecosystem in terms of contributions to the maintenance of this individuation, as I have done elsewhere for organisms, using a dynamical notion of autonomy. I will briefly argue that common etiological accounts of function are not suitable for discussing ecosystem function. We don’t typically think of ecosystems as autonomous, but autonomy comes in degrees, so even if the word is not apt, the idea is. I will distinguish between ecosystem role in general and functionality in particular. Ecosystem role, which is sometimes identified with function, can actually undermine ecosystem functionality. I will also distinguish between ecosystem functions and ecosystem services. The latter serve some larger or separate systems (whence again the importance of individuation). They are important for understanding how nested ecosystems are related to each other through functional dependence.
This course will have two parts. The first part will deal generally with complex systems, specifically complexly organized systems of the kind that are found widely in biology. The issues dealt with will be complex systems and their properties, methods of dealing with complex systems, how to individuate systems in a way that allows testing, emergent properties and entities, hierarchy, autonomy and functionality. These topics will then be applied to ecological systems, respecting their peculiar nature. In particular, individuation in ecological systems is less sharp than for, say, organisms, and their stabilizing properties are less strict than the autonomy we find in organisms. This means that the notion of ecological functionality is less clear and it must be carefully distinguished from candidate properties like ecological role, ecological services and biodiversity. In many cases the ecological functions within an ecosystem are rather abstract, and individual entities (such as predator and prey roles) play only an indirect function. A proper understanding of ecosystem function will also help towards understanding ecosystem robustness, resilience to perturbations, and to some extent ecosystem management.
I have provided some background readings for each section. The required readings will be a much smaller set, given for each class.
Information is usually in strings, like sentences, programs or data for a computer. Information is a much more general concept, however. Here I look at information systems that can be three or more dimensional, and examine how such systems can be arranged hierarchically so that each level has an associated entropy due to unconstrained information at a lower level. This allows self-organization of such systems, which may be biological, psychological, or social.
Dimensional analysis is a technique used by scientists and engineers to check the rationality of their calculations, but it can also be used to determine the nature of the quantities used. Information is usually measured in bits, or binary digits, but it could be measured using any other base. I will be arguing that, given the possibility of an objective measure of information in terms of asymmetries, and the relation of information to order, Schrödinger’s suggestion that negentropy was an appropriate measure should be taken seriously. After clarifying this notion, I use dimensional analysis to show that negentropy has units of degrees of freedom, and that this is a sensible unit of information.
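The unit question can be made concrete with a small calculation (my illustration, not from the abstract): Shannon entropy changes only by a constant factor under a change of logarithm base, which is why "bits" are a convention rather than anything fundamental, and negentropy is the shortfall from the maximum entropy the system's degrees of freedom allow.

```python
import math

def entropy(probs, base=2.0):
    """Shannon entropy of a distribution, in units set by the log base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

p = [0.5, 0.25, 0.125, 0.125]        # a four-state system
h_bits = entropy(p, base=2)          # 1.75 bits
h_nats = entropy(p, base=math.e)     # the same quantity in nats (factor ln 2)
h_max = math.log(len(p), 2)          # 2 bits: all four states equiprobable

# Negentropy: how far the system falls short of maximal disorder,
# i.e. how much of its freedom is taken up by order/constraint.
negentropy = h_max - h_bits          # 0.25 bits
print(round(h_bits, 4), round(h_nats / math.log(2), 4), round(negentropy, 4))
```

Dividing the nat-valued entropy by ln 2 recovers exactly the bit-valued figure, showing that only the dimensionless count of (weighted) distinguishable alternatives, the degrees of freedom, is invariant.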
Dynamical systems theory applies to anything that changes with time. In mathematics this is interpreted rather broadly, but in physics, and often in other sciences, it applies to systems in which there are forces and flows, often in a network. In my work with Cliff Hooker on dynamical approaches to mind and cognition we adopt a general approach to the world that we call Dynamical Realism. We used this to develop ideas of unity and individuation, emergence, functionality and intentionality. Our basic working hypothesis is that anything that is real is dynamical, or can be understood dynamically. In some respects this is pretty much trivial: any system is a dynamical system. Nonetheless, dynamical realism imposes some discipline through its implication that things should be viewed dynamically in order to understand what they are really like. I will start with a description of some of the features of dynamical systems, including the intractability of complexly organized systems, and indicate how we might deal with such systems. I then sketch how these ideas can be applied to the issues of individuation, emergence, functionality and intentionality. The last two are explained in terms of a sense of autonomy that was first identified by Kant, who proposed that we need a different notion of causation. In particular, autonomous systems require 1) non-equilibrium conditions, 2) internal dynamical differentiation, 3) hierarchical and interactive process organization, 4) incomplete closure, 5) openness to the world, and 6) openness to infrastructural inputs; moreover, 7) the existence of autonomy is identical to the corresponding process closure, and is not something complementary or over and above this closure. I end with some implications for dealing with mind.
I start with a brief summary of kinds of information used in science, showing how they are nested (or hierarchical), with inner kinds inheriting properties of the outer kinds. I further argue that within each kind there is also hierarchical organization, and that the major kinds are distinguished by their dynamics, not just by their order in the hierarchy. Next I argue that rules similar to those of non-equilibrium thermodynamics apply also to information systems, and give some examples of resulting self-organization, or what we have called “rhythmic entrainment” [1]. I point out that entrainment that results from properties within a system is more efficient than entrainment imposed by outside forces. This also gives a sort of resilience to such systems, and in higher kinds of information allows for self-adaptation via accommodating both external forces and internally generated forces. I then apply these lessons to management and argue that the most efficient and creative form of management comes not from severe control from the top, or from imposed “efficiency”, but through self-organization allowed by a low degree of control and the encouragement of diversity. This form of management I call facilitation. There may be specific people assigned a facilitation role, but this is not required; any member of a group can act as a facilitator. What is required, however, is that members of the group are accustomed to being open-minded and flexible. This form of management is most compatible with anarchism as a political (and management) theory, but has benefits in pretty much any political system. I then go into some complications of this view and some of their consequences.
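Entrainment arising from coupling within a system can be illustrated with a standard Kuramoto-style phase-oscillator sketch (my example, not the model of [1]): oscillators with scattered natural frequencies pull each other into synchrony once their mutual coupling is strong enough, with no external driver at all.

```python
import math
import random

def kuramoto_sync(coupling: float, steps: int = 2000, dt: float = 0.01) -> float:
    """Final order parameter r (0 = incoherent, 1 = fully synchronized)
    for 50 mutually coupled phase oscillators (Kuramoto mean-field form)."""
    random.seed(1)  # identical initial conditions for every coupling value
    n = 50
    freqs = [random.gauss(0.0, 0.5) for _ in range(n)]      # natural frequencies
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        mean_x = sum(math.cos(p) for p in phases) / n
        mean_y = sum(math.sin(p) for p in phases) / n
        r = math.hypot(mean_x, mean_y)
        psi = math.atan2(mean_y, mean_x)
        # each oscillator is pulled toward the collective mean phase
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    mean_x = sum(math.cos(p) for p in phases) / n
    mean_y = sum(math.sin(p) for p in phases) / n
    return math.hypot(mean_x, mean_y)

print(kuramoto_sync(0.1) < kuramoto_sync(2.0))  # weak vs strong internal coupling
```

Below a critical coupling the population stays incoherent; above it, order emerges from the interactions themselves, which is the internally generated entrainment the abstract contrasts with externally imposed control.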
In everyday usage, information is knowledge or facts acquired or derived from study, instruction or observation. Information is presumed to be both meaningful and veridical, and to have some appropriate connection to its object. Information might be misleading, but it can never be false. Standard information theory, on the other hand, as developed for communications [1], measurement [2], induction [3; 4] and computation [5; 6], entirely ignores the semantic aspects of information. Thus it might seem to have little relevance to our common notion of information. This is especially true considering the range of applications of information theory found in the literature of a variety of fields. Assuming, however, that the mind works computationally and can get information about things via physical channels, then technical accounts of information strongly restrict any plausible account of the vulgar notion. Some more recent information-oriented approaches to epistemology [7] and semantics [8] go further, though my introduction to the ideas was through Michael Arbib, Michael Scriven and Kenneth Sayre in the profoundly inventive late 60s and early 70s.

In this talk I will look at how the world must be in order for us to have information about it. This will take three major sections: 1) intrinsic information -- there is a unique information in any structure that can be determined using group theory, 2) the physical world (including our minds) must have specific properties in order for us to have information about the world, and 3) the nature of information channels that can convey information to us for evaluation and testing. In the process I will outline theories of physical information and semantic information. Much of the talk will be an, I hope, simplified version of [9] and [10], other sources on my web page, and the book Every Thing Must Go [11].
1. Shannon, C.E. and Weaver, W. 1949. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
2. Brillouin, L. 1962. Science and Information Theory, 2nd edition. New York: Academic Press.
3. Solomonoff, R. 1964. A formal theory of inductive inference, Part I. Information and Control 7 (1): 1-22.
4. Solomonoff, R. 1964. A formal theory of inductive inference, Part II. Information and Control 7 (2): 224-254.
5. Kolmogorov, A.N. 1965. Three approaches to the quantitative definition of information. Problems of Information Transmission 1: 1-7.
6. Chaitin, G.J. 1975. A theory of program size formally identical to information theory. Journal of the ACM 22: 329-340.
7. Dretske, F. 1981. Knowledge and the Flow of Information. Cambridge, MA: MIT Press.
8. Barwise, J. and Perry, J. 1983. Situations and Attitudes. Cambridge, MA: MIT Press.
9. Collier, J. 1990. Intrinsic information. In Hanson, P. (ed.), Information, Language and Cognition: Vancouver Studies in Cognitive Science, Vol. 1. University of British Columbia Press (now Oxford University Press): 390-409.
10. Collier, J. 2012. Information, causation and computation. In Dodig-Crnkovic, G. and Burgin, M. (eds), Information and Computation: Essays on Scientific and Philosophical Understanding of Foundations of Information and Computation. Singapore: World Scientific: 89-106.
11. Ladyman, J. and Ross, D., with Collier, J. and Spurrett, D. 2007. Every Thing Must Go. Oxford: Oxford University Press.
I start with a brief summary of kinds of information used in science, showing how they are nested (or hierarchically arranged), with inner kinds inheriting properties of the outer kinds. I further argue that within each kind there is also hierarchical organization, and that the major kinds are distinguished by their dynamics, not just being ordered in a hierarchy, though similar principles apply at all levels. Next I argue that rules applying to non-equilibrium thermodynamics apply also to information systems, and I give some examples of resulting self-organization, or what we have called “rhythmic entrainment” [1]. I point out that entrainment that results from forces within a system is more efficient than entrainment imposed by outside forces. This gives a sort of resilience to such systems, and in higher kinds of information allows for self-adaptation via accommodating both external forces and internally generated forces. I then apply these lessons to management and argue that the most efficient and creative form of management comes not from severe control from the top, or from imposed “efficiency”, but through self-organization allowed by a low degree of control and the encouragement of diversity. This form of management I call facilitation. There may be specific people assigned a facilitation role, but this is not required; any member of a group can act as a facilitator. What is required, however, is that members of the group are accustomed to being open-minded and flexible. This form of management is most compatible with anarchism as a political (and management) theory, but has benefits in pretty much any political system.
Research Interests:
Causation can be understood as a computational process once we understand causation in informational terms. I argue that if we see processes as information channels, then causal processes are most readily interpreted as the transfer of information from one state to another. This directly implies that the later state is a computation from the earlier state, given causal laws, which can also be interpreted computationally. This approach unifies the ideas of causation and computation.
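A minimal sketch of the idea (my illustration, not the paper's formal apparatus): if a process is an information channel, the later state is literally computed from the earlier state by the dynamical law, and a token quantity carried by the earlier state is transferred intact.

```python
def law(state: tuple) -> tuple:
    """A deterministic dynamical law: one step of evolution.
    Here, uniform motion; the velocity v is a conserved quantity."""
    x, v = state
    return (x + v, v)

def causal_process(initial: tuple, steps: int) -> tuple:
    """The later state as a computation over the earlier state:
    iterating the law is the causal process, viewed computationally."""
    state = initial
    for _ in range(steps):
        state = law(state)
    return state

s0 = (0, 3)
s5 = causal_process(s0, 5)
print(s5)              # (15, 3)
print(s5[1] == s0[1])  # True: the conserved token is transferred to the later state
```

The toy makes the unification visible: the same iteration is at once a causal process (state-to-state transfer of a conserved quantity) and a computation (the later state derived from the earlier one under the law).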
The notion of information has developed in a number of different ways (as discussed in this volume), and many of them have been applied to biology, both usefully and gratuitously, and even misleadingly. These multiple notions of information have not surprisingly led to apparently contradictory claims by authors who have really been talking past each other, although there are also substantive issues at stake. The aim of this chapter is to review some of the ways that notions of information have been used in biology, to disentangle them, and to evaluate their implications and aptness, as well as to point out some of the more widespread confusions.
The aim of this book is to defend a radically naturalistic metaphysics. By this we mean a metaphysics that is motivated exclusively by attempts to unify hypotheses and theories that are taken seriously by contemporary science. For reasons to be explained, we take the view that no alternative kind of metaphysics can be regarded as a legitimate part of our collective attempt to model the structure of objective reality.

Book chapter written with Don Ross and James Ladyman in Ladyman & Ross's 'Every Thing Must Go'.
Four general approaches to the metaphysics of causation are current in Australasian philosophy. One is a development of the regularity theory (attributed to Hume) that uses counterfactuals (Lewis, 1973; 1994). A second is based in the relations of universals, which determine laws, which in turn determine causal interactions of particulars (with the possible exception of singular causation, Armstrong, 1983). This broad approach goes back to Plato, and was also held in this century by Russell, who like Plato, but unlike the more recent version of Armstrong (1983), held there were no particulars as such, only universals. A third view, originating with Reichenbach and revived by Salmon (1984), holds that a causal process is one that can be marked. This view relies heavily on ideas about the transfer of information and the relation of information to probability, but it also needs uneliminable counterfactuals. The fourth view was developed recently by Dowe (1992) and Salmon (1994). It holds that a causal process involves the transfer of a non-zero valued conserved quantity. A considerable advantage of this approach over the others is that it requires neither counterfactuals nor abstracta like universals to explain causation.

The theory of causation offered here is a development of the mark approach that entails Dowe’s conserved quantity approach. The basic idea is that causation is the transfer of a particular token of a quantity of information from one state of a system to another. Physical causation is a special case in which physical information instances are transferred from one state of a physical system to another. The approach can be interpreted as a universals approach (depending on one's approach to mathematical objects and qualities), and it sheds some light on the nature of the regularity approach.

After motivating and describing this approach, I will sketch how it can be used to ground natural laws and how it relates to the four leading approaches, in particular how each can be conceived as a special case of my approach. Finally, I will show how my approach satisfies the requirements of Humean supervenience. The approach relies on concrete particulars and computational logic alone, and is the second stage of constructing a minimal metaphysics, started in (Collier 1996, The necessity of natural kinds).
Complex systems are dynamic and may show high levels of variability in both space and time. It is often difficult to decide on what constitutes a given complex system, i.e., where system boundaries should be set, and what amounts to substantial change within the system. We discuss two central themes: the nature of system definitions and their ability to cope with change, and the importance of system definitions for the mental metamodels that we use to describe and order ideas about system change. Systems can only be considered as single study units if they retain their identity. Previous system definitions have largely ignored the need for both spatial and temporal continuity as essential attributes of identity. After considering the philosophical issues surrounding identity and system definitions, we examine their application to modeling studies. We outline a set of five alternative metamodels that capture a range of the basic dynamics of complex systems. Although Holling’s adaptive cycle is a compelling and widely applicable metamodel that fits many complex systems, there are systems that do not necessarily follow the adaptive cycle. We propose that more careful consideration of system definitions and alternative metamodels for complex systems will lead to greater conceptual clarity in the field and, ultimately, to more rigorous research.
Progress has become a suspect concept in evolutionary biology, not least because the core concepts of neo-Darwinism do not support the idea that evolution is progressive. There have been a number of attempts to account for directionality in evolution through additions to the core hypotheses of neo-Darwinism, but they do not establish progressiveness, and they form a somewhat ad hoc collection. The standard account of fitness and adaptation can be rephrased in terms of information theory. From this, an information of adaptation can be defined in terms of a fitness function. The information of adaptation is a measure of the mutual information between biota and their environment. If the actual state of adaptation lags behind the state of optimal adaptation, then it is possible for the information of adaptation to increase indefinitely. Since adaptations are functional, this suggests the possibility of progressive evolution in the sense of increasing adaptation.
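The mutual-information reading of adaptation mentioned above can be illustrated with a minimal sketch: estimating the mutual information between paired discrete observations of organismal traits and environmental states. The trait/environment labels and the function name are illustrative assumptions, not taken from the paper; this is the standard plug-in estimator of I(X;Y), not the paper's fitness-based definition.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples.

    I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ),
    with probabilities estimated by empirical frequencies.
    """
    n = len(xs)
    px = Counter(xs)          # marginal counts of X
    py = Counter(ys)          # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p(x) * p(y)) written with counts: c*n / (px[x]*py[y])
        mi += p_joint * log2(c * n / (px[x] * py[y]))
    return mi

# Perfectly correlated trait/environment pairs share one full bit:
traits = ["dark", "dark", "light", "light"]
envs = ["shade", "shade", "sun", "sun"]
print(mutual_information(traits, envs))  # → 1.0
```

On this picture, a population tracking its environment more closely shows higher mutual information between trait and environment distributions; independence gives zero bits.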
I argue that natural kinds are necessary for science, and that they are properties, not classes of objects. They depend on causal regularities, not on some transcendent essence. I also argue that their metaphysical necessity arises not from a transcendent necessity, but because they exist in all and only possible worlds in which they have the relevant causal regularities. Thus they can be both necessary and contingent -- they cannot be otherwise, but they might not exist.
I have previously explored autonomy as the foundation of functionality, intentionality and meaning, which are best explained coherently via information theory. Later I argued that autonomous systems accommodate the unexpected through self-organizing processes, together with some constraints that maintain autonomy. A system is autonomous if it uses its own information to modify itself and its environment to enhance its survival, responding to both environmental and internal stimuli to modify its basic functions to increase its viability. Autonomy did not play much of a role in biology and cognitive science until fairly recently. The first to bring the importance of autonomy to widespread attention were Maturana and Varela, who presented a theory of autopoietic systems based on cells as a paradigm. Autopoietic systems are dynamically closed to information. This gives the curious result that humans, who transfer information if anything does, are either not autonomous or else in some sense information is not really transferred between humans.
Similar problems arise when the autopoietic aspects are separated from the infrastructure of biological cells. This problem also holds for Robert Rosen's account of living systems. The real situation is not a choice between third-person openness and first-person closure. On our account, autonomy is a matter of degree depending on the relative organization of the system and system-environment interactions. Furthermore, autonomy can come in levels, and the aims of the levels can contradict each other.