Black Boxes
How Science Turns Ignorance into Knowledge
Marco J. Nathan
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.
DOI: 10.1093/oso/9780190095482.001.0001
Printed by Integrated Books International, United States of America
Dedicated, with love, to Jacob Aaron Lee Nathan.
Benvenuto al mondo.
Like all general statements, things are not as simple as I have written
them, but I am seeking to state a principle and refrain from listing
exceptions.
—Ernest Hemingway, Death in the Afternoon
Contents
Preface
References
Index
Preface
Every intellectual project has its “eureka!” moment. For this book, it happened
a few summers ago during a lonely walk on the beach of Marina di Campo,
on the beautiful island of Elba, off the Tuscan coast. I suddenly started seeing
a guiding thread, a leitmotiv, connecting many of my reflections on the nature of science since I started working on these issues back in graduate school. Simply put, it dawned on me that I viewed most scientific constructs as placeholders. Explanations, causal ascriptions, dispositions, counterfactuals, emergents, and much else could all be viewed as
boxes, more or less opaque, standing in for more detailed descriptions. That
got me thinking about how to provide a more unified account that puts all
the tiles of the mosaic together. At the same time, I realized that the very
concept of a black box, so frequently cited both in specialized and popular
literature, has been unduly neglected in philosophy and in the sciences alike.
This book is the result of my attempts to bring both insights together, in a
more or less systematic fashion.
The intellectual journey sparked by my preliminary reckoning on a sandy
beach has taken several years to complete. Along the way, I have been hon-
ored by the help and support of many friends and colleagues. Philip Kitcher
and Achille Varzi encouraged me to pursue this project from the get-go. Many
others provided constructive comments on various versions of the manu-
script. I am especially grateful to John Bickle, Giovanni Boniolo, Andrea
Borghini, Stefano Calboli, Guillermo Del Pinal, George DeMartino, Enzo
Fano, Tracy Mott, Emanuele Ratti, Sasha Reschechtko, Michael Strevens, and
Anubav Vasudevan for their insightful comments. A special thank you goes
to Mika Smith, Roscoe Hill, Mallory Hrehor, Naomi Reshotko, and, espe-
cially, Bill Anderson, all of whom struggled with me through several drafts
and minor tweaks, in my endless—futile, but no less noble—quest for clarity
and perspicuity.
Over the years, the University of Denver and, especially, the Department
of Philosophy have constantly provided a friendly, supportive, and stimulating environment. Various drafts of the manuscript were presented as part
of my advanced seminar Ignorance and Knowledge in Contemporary Scientific
At the outset of a booklet, aptly entitled The Art of the Soluble, the eminent
biologist Sir Peter Medawar characterizes scientific inquiry in these terms:
Good scientists study the most important problems they think they can
solve. It is, after all, their professional business to solve problems, not
merely to grapple with them. The spectacle of a scientist locked in combat
with the forces of ignorance is not an inspiring one if, in the outcome, the
scientist is routed. (Medawar 1969, p. 11)
Many readers will find this depiction captivating, intuitive, perhaps even
self-evident. What is there to dispute? Is modern science not a spectacularly
successful attempt at solving problems and securing knowledge?
Yes, it is. Still, one could ask, what makes the spectacle of a scientist
locked in combat with the forces of ignorance so uninspiring? Why is it that
we seldom celebrate ignorance in science, no matter how enthralling, and
glorify success instead, regardless of how it is achieved? To be fair, we may
excuse ignorance and failure, when they have a plausible explanation. But ig-
norance is rarely—arguably never—a goal in and of itself. Has a Nobel Prize
ever been awarded for something that was not accomplished?
The key to answering these questions, and for understanding Medawar’s
aphorism, I maintain, is to be sought in the context of a long-standing
image of science that has, more or less explicitly, dominated the scene well
into the twentieth century. The goal of scientific inquiry, from this hallowed
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0001
not know, what we cannot know, and what we got wrong. In a multifaceted
word, what is lacking from the old conception of science is the productive
role of ignorance. But what does it mean for ignorance to play a “productive”
role? Could ignorance, at least under some circumstances, be positive, per-
haps even preferable to the corresponding state of knowledge?
The inevitable presence of ignorance in scientific practice is neither novel
nor especially controversial. Generations of philosophers have extensively
discussed the nature and limits of human knowledge and their implications.
Nevertheless, ignorance was traditionally perceived as a hurdle to be over-
come. Over the last few decades it has turned into a springboard, and a more
constructive side of ignorance began to emerge. Allow me to elaborate.
In a recent booklet, Stuart Firestein (2012, p. 28), a prominent neurobiolo-
gist, remarks that “Science [ . . . ] produces ignorance, possibly at a faster rate
than it produces knowledge.” At first blush, this may sound like a pessimistic
reading of Medawar, where the spectacle of a scientist routed in combat by
forces of ignorance becomes uninspiring. Yet, as Firestein goes on to clarify,
this is not what he has in mind: “We now have an important insight. It is that
the problem of the unknowable, even the really unknowable, may not be a
serious obstacle. The unknowable may itself become a fact. It can serve as a
portal to deeper understanding. Most important, it certainly has not inter-
fered with the production of ignorance and therefore of the scientific pro-
gram. Rather, the very notions of incompleteness or uncertainty should be
taken as the herald of science” (2012, p. 44).
Analogous insights abound in psychology, where the study of cognitive
limitations has grown into a thriving research program.1 Philosophy, too, has
followed suit. Wimsatt (2007, p. 23) fuels his endeavor to “re-engineer phi-
losophy for limited beings” with the observation that “we can’t idealize devi-
ations and errors out of existence in our normative theories because they are
central to our methodology. We are error prone and error tolerant—errors are
unavoidable in the fabric of our lives, so we are well-adapted to living with
and learning from them. We learn more when things break down than when
they work right. Cognitively speaking, we metabolize mistakes!” In short, ig-
norance pervades our lives.
Time to connect a few dots. We began with Medawar’s insight that sci-
ence is in the puzzle-solving business. When one grapples with a problem
and ends up routed, it is a sign that something has gone south. All of this fits
negligible or even useful? To put all of this in general terms, how does science
turn ignorance and failure into knowledge and success?
Much will be said, in the chapters to follow, about the nature of productive
ignorance, what distinguishes it from “a stubborn devotion to uninformed
opinions,” and how it is incorporated into scientific practice. Before doing so,
in the remainder of this section, I want to draw attention to a related issue,
concerning pedagogy: how science is taught to bright young minds. If, as
Firestein notes, ignorance is so paramount in science, why is its role not ex-
plicitly incorporated into the standard curriculum?
As noted, most scholars no longer take seriously the cumulative ideal of
science as the slow, painstaking accumulation of truths. Now, surely, there
is heated debate and disagreement on how inaccurate this picture really is.
More sympathetic readings consider it a benign caricature, promoting a
simple but effective depiction of a complex, multifaceted enterprise. Crankier
commentators dismiss it as an inadequate distortion that has overstayed its
welcome. Still, few, if any, take it at face value, for good reason.
This being said, this superseded image is still very much alive in some
circles. It is especially popular among the general public. Journalists,
politicians, entrepreneurs, and many other non-specialists explicitly en-
dorse and actively promote the vision of science as a gradual accumula-
tion of facts and truths. While some tenants of the ivory tower of academia
might take this as an opportunity to mock and chastise the incompetence
of the masses, we should staunchly resist this temptation. First, note, we
are talking about highly educated and influential portions of society. Lack
of schooling can hardly be the main culprit. Second, and even more im-
portant, it is not hard to figure out where the source of the misconception
might lie.
Textbooks, at all levels and intended for all audiences, present science as a
bunch of laws, theories, facts, and notions to be internalized uncritically. This
pernicious stereotype trickles down from schools and colleges to television
shows, newspapers, and magazines, eventually reaching the general public.
Now, surely, some promising young scholars will go on to attend graduate
schools and pursue research-oriented careers: academic, clinical, govern-
mental, and the like. They will soon learn, often the hard way, that actual
scientific practice is very different—and way more interesting!—than what is
crystallized in journals, books, and articles. Yet, this is a tiny fraction of the
overall population. Most of us are only exposed to science in grade school or
college, where the old view still dominates. By the time one leaves the class-
room to walk into an office, the damage is typically done.
Where am I going with this? The bottom line is simple. The textbooks from which students learn science, which perpetuate the cumulative approach, are
written by those same specialists who, in their research, eschew that brick-
by-brick analogy. Why are experts mischaracterizing their own disciplines,
promoting an old image that they themselves no longer endorse?
This book addresses this old question, which has troubled philosophers of
science, at least since the debates of the 1970s in the wake of Kuhn’s Structure.
My answer can be broken down into a pars destruens and a pars construens.
Beginning with the negative claim, textbooks do not adopt the cumulative
model with the intention to deceive. The reason is that no viable alternative
is available. The current scene is dominated by two competing models of
science, neither of which supplants the “brick-by-brick” ideal. There must
be more effective ways of popularizing research, exposing new generations
to the fascinating world of science. The positive portion of my argument
involves a constructive proposal. The next two sections present these two
theses in embryonic form. Details will emerge in the ensuing chapters.
[Figure: a hierarchy of explanatory levels, from physics up through biology and neuropsychology to economics]
2 The expression “special sciences” refers to all branches of science, with the exception of funda-
mental (particle) physics. This moniker is somewhat misleading, as there is nothing inherently “spe-
cial” about these disciplines, aside from having been relatively underexplored in philosophy. Yet, this
label has become commonplace and I shall stick to it.
3 To be clear, any characterization of levels as “coarse-grained” vs. “fine-grained,” “higher-level” vs.
“lower-level,” or “micro” vs. “macro,” should be understood as relativized to a specific choice of expla-
nandum. From this standpoint, the same explanatory level Ln can be “micro” or “lower-level” relative
to a coarser description Ln+1, and “macro” or “higher” relative to a finer-grained depiction Ln−1.
Bricks and Boxes 9
4 For a discussion of the origins and foundations of Laplacian determinism, see van Strien (2014).
5 “Philosophical” discussions of Laplacian Demons and the future of physics have been under-
taken by prominent philosophers and scientists such as Dennett (1987); Putnam (1990); Weinberg
(1992); Mayr (2004); Chalmers (2012); Nagel (2012); and Burge (2013).
of some branch of science? Would we still need the special sciences? Would
physics replace them and truly become a scientific theory of everything?
There are two broad families of responses to these questions. The first,
which may be dubbed reductionism, answers our quandaries in the affirmative. Reductionism comes in various degrees of strength. Any reductionist
worth their salt is perfectly aware that current physics is still eons away from
achieving the status of a bona fide accurate and exhaustive description of the
universe. And even if our best available candidates for fundamental laws of
nature happened to be exact, the computing power required to rephrase rel-
atively simple higher-level events in physical terms remains out of reach, at
least for now. This is to say that physics, as we presently have it, is not yet an
overarching theory of everything. Still, the more radical reductionists claim,
physics will eventually explain all events in the universe, and contemporary
theories have already put us on the right track. More modest reductionists
make further concessions. Perhaps physics will never actually develop to the
point of becoming the ultimate theory of reality. Even if it did, gaining the
computing power to completely dispose of all non-physical explanations
may remain a chimera. Hence, the special sciences will always be required
in practice. Nevertheless, in principle, physics could do without them. In this
sense, the special sciences are nothing but convenient, and potentially disposable, scaffoldings.
The second family of responses, antireductionism, provides diametrically
opposite answers to our questions concerning the future of physics. Like its
reductionist counterpart, antireductionism comes in degrees. More radical
versions contend that, because of the fundamental disunity or heterogeneity
of the universe, a physical theory of everything is a metaphysical impossi-
bility. Even the most grandiose, all-encompassing, and successful physical
theories could not explain all that goes on in the material universe because
many—or, some would say, most—events covered by the special sciences fall
outside of the domain of physics. Less uncompromising antireductionists
may make some concessions toward modest forms of reductionism. Perhaps
physics could, in principle, explain every scientific event. Still, the special
sciences are not disposable. This is because they provide objectively "better" explanations of higher-level happenings. In short, the antireductionist says, the success of physics is no threat to higher-level theories.
Special sciences are not going anywhere, now or ever.
In sum, the debate between reductionism and antireductionism in general
philosophy of science boils down to the prospects of developing a physical
6 For an excellent discussion of the rise of scientific philosophy, see Friedman (2001).
7 Similar conclusions have been reached, via a different route, by Wimsatt (2007), and developed
by the “new wave of mechanistic philosophy,” presented and examined in Chapter 7. Relatedly, Gillett
(2016) has noted some discrepancy between the models of reduction and emergence developed in
philosophy versus the ones adopted in scientific practice.
the tip of the iceberg. References to black boxes can be found in the work of
many prominent philosophers, such as Hanson (1963), Quine (1970), and
Rorty (1979), just to pick a few notable examples.
And, of course, black boxes are hardly limited to the philosophy of mind,
or even the field of philosophy tout court. As a contemporary biologist puts it,
“the current state of scientific practice [ . . . ] more and more involves relying
upon ‘black box’ methods in order to provide numerically based solutions
to complex inference problems that cannot be solved analytically” (Orzack
2008, p. 102). And here is an evolutionary psychologist: “The optimality
modeler’s gambit is that evolved rules of thumb can mimic optimal behavior
well enough not to disrupt the fit by much, so that they can be left as a black
box” (Gigerenzer 2008, p. 55). These are just a few among many representa-
tive samples, which can be found across the board.
The use (and abuse) of black boxes is criticized as often as it is praised. Some
neuroscientists scornfully dub the authors of functional models containing
boxes, question marks, or other filler terms, “boxologists.” In epidemiology—
the branch of medicine dealing with the incidence, distribution, and possible
control of diseases and other health factors—there is a recent effort to over-
come the “black box methodology,” that is, “the methodologic approach that
ignores biology and thus treats all levels of the structure below that of the in-
dividual as one large opaque box not to be opened” (Weed 1998, p. 13). Many
reductionists view black boxes as a necessary evil: something that does occur
in science, but that is an embarrassment, not something to celebrate.
In short, without—yet—getting bogged down in details, references to
black boxes, for better or for worse, are ubiquitous. Analogous remarks can
be found across every field, from engineering to immunology, from neuro-
science to machine learning, from analytic philosophy to ecology. What are
we to make of these boxes that everyone seems to be talking about?
Familiar as it may ring, talk of boxes here is evidently a figure of speech.
You may actually find a black box on an aircraft or a modern train. But you
will not find any such thing in a philosophy professor’s dusty office any more
than you will find it in a library or research lab. What exactly is a black box?
Simply put, it is a theoretical construct: a stand-in for a complex system whose structure is left mysterious to its users, or otherwise set aside. More precisely, the
process of black-boxing a specific phenomenon involves isolating some of its
core features, in such a way that they can be assumed without further micro-
explanation or detailed description of its structure.
This is our ambitious goal. Before getting down to business, we still have a
couple of chores. First, I need to map the terrain ahead of us and clarify my
perspective. This will be the task of section 1.4, which offers a synopsis of the
chapters to come. Finally, section 1.5 concludes this preliminary overview
with a couple of heads-ups and caveats about the aims and scope of the project.
This book is divided into ten chapters, including the introduction you are
currently reading. Chapter 2, “Between Scylla and Charybdis,” provides an
overview of the development of the reductionism vs. antireductionism de-
bate, which has set the stage for philosophical analyses of science since the
early decades of the twentieth century. Our point of departure is the rise and
fall of the classical model of reduction, epitomized by the work of Ernest
Nagel. Next is the subsequent forging of the “antireductionist consensus” and
the “reductionist anti-consensus.” Once the relevant background is set, the
chapter concludes by arguing that modest reductionism and sophisticated
antireductionism substantially overlap, making the dispute more termino-
logical than is often appreciated. Even more problematically, friends and foes
of reductionism tend to share an overly restrictive characterization of the in-
terface between levels of explanation. Thus, it is time for philosophy to move
away from these intertwining strands, which fail to capture the productive
interplay between knowledge and ignorance in science, and to develop new
categories for charting the nature and advancement of the scientific enter-
prise. Reductionism and antireductionism will return in the final chapter.
Before then, we shall explore a new path by focusing on an explanatory
strategy that, despite being well known and widely employed, currently lacks
a systematic analysis. This strategy is black-boxing.
Chapter 3, “Lessons from the History of Science,” starts cooking our alter-
native to “Scylla” and “Charybdis” by providing four historical illustrations
of black boxes. The first two case studies originate from two intellectual
giants in the field of biology. Darwin acknowledged the existence and signif-
icance of the mechanisms of inheritance. But he had no adequate proposal to
offer. How could his explanations work so well, given that a crucial piece of
the puzzle was missing? A similar shadow is cast on the work of Mendel and
his early-twentieth-century followers, the so-called classical geneticists, who
posited genes despite having little to no evidence of the nature, structure, or even
the physical reality of these theoretical constructs. How can the thriving field
of genetics be founded on such a fragile underpinning, a crackling layer of
thin ice? The answer to both conundrums lies in the construction of black
boxes, which effectively set to the side the unknown or mistaken details of
these explanations without impacting their accuracy and robustness. Then
came the Modern Synthesis, first, and the Developmental Synthesis later,
which began to fill in the blanks, opening Darwin’s and Mendel’s black boxes,
only to replace them with new black boxes. This phenomenon is by no means
unique to biology. Another illustration is found in the elimination of mental
states from the stimulus-response models advanced by psychological behav-
iorism. A final example comes from neoclassical economics, whose “as if ”
approach presupposes that the brain can be treated as a black box, setting neuropsychological realism aside. The history of science, I shall argue, is essentially a history of black boxes.
In addition to illustrating the prominence of black boxes across the sci-
ences, these episodes also show that, contrary to a common if tacit belief,
black-boxing is hardly a monolithic, one-size-fits-all strategy that allows
scientific research to proceed in the face of our ignorance. These theoret-
ical constructs can play various subtly different roles. Yet, despite substantial
methodological differences, there is a common thread. All four case histories
point to the same core phenomenon: the identification of mechanisms that,
for various reasons, are omitted from the relevant explanations. This is done
via the construction of a black box. But what is a black box and how does
black-boxing work? How are these entities constructed? What distinguishes
a “good” box from a “bad” one? To answer these questions, we need to pro-
vide a more systematic analysis of this explanatory strategy.
Chapter 4, "Placeholders," lays the foundations of this project. It should
be evident, even just from this cursory introduction, that black boxes func-
tion as placeholders. But what is a placeholder? What role does it play in
science? I set out to answer these questions by introducing two widespread
theses concerning the concept of biological fitness. First, fitness is commonly
defined as a dispositional property. It is the propensity of an organism or trait
to survive and reproduce in a particular environment. Second, since fitness
supervenes—that is, depends, in a sense to be clarified—on its underlying
physical properties, it is a placeholder for a deeper account that dispenses
with the concept of fitness altogether. Plausible as they both are, these two
theses are in tension. Qua placeholder, fitness is explanatory. Qua disposi-
tion, it explicates but cannot causally explain the associated behavior. In the
second part of the chapter, I suggest a way out of this impasse. My solution,
simply put, involves drawing a distinction between two kinds of placeholders.
On the one hand, a placeholder may stand in for the range of events to be
accounted for. In this case, the placeholder functions as a frame. It spells
out an explanandum: a behavior, or range of behaviors, in need of explana-
tion. On the other hand, a placeholder may stand in for the mechanisms,
broadly construed, which bring about the patterns of behavior specified by
the frame, regardless of how well their nature and structure are understood.
When this occurs, the placeholder becomes an explanans and I refer to it as a
difference-maker.
Both kinds of placeholders—frames and difference-makers—play a piv-
otal role in the construction of a black box. Chapter 5, “Black-Boxing 101,”
breaks down this process into three constitutive steps. First, in the framing
stage, the explanandum is sharpened by placing the object of explanation
in the appropriate context. This is typically accomplished by constructing a
frame, a placeholder that stands in for patterns of behavior in need of expla-
nation. Second, the difference-making stage provides a causal explanation of
the framed explanandum. This involves identifying the relevant difference-
makers, placeholders that stand in for the mechanisms producing these
patterns. The final representation stage determines how these difference-
makers should be portrayed, that is, which mechanistic components and
activities should be explicitly represented, and which can be idealized or ab-
stracted away. The outcome of this process is a model of the explanandum, a
depiction of the relevant portion of the world. This analysis provides and jus-
tifies the general definition we were looking for. A black box is a placeholder—
frame or difference-maker—in a causal explanation represented in a model.
By now, we will be ready to put this to work.
Is this three-step recipe adequate and accurate? Does the proposed defi-
nition capture the essence of black-boxing? What are its advantages? What
are its limitations? These questions are taken up in the following chapters.
Chapter 6, “History of Science, Black-Boxing Style,” revisits our case studies
from the perspective of the present analysis of black boxes. By breaking down
these episodes into our three main steps, we are able to see how it was pos-
sible for Darwin to provide a simple and elegant explanation of such a com-
plex, overarching explanandum: distributions of organisms and traits across
the globe. It also explains why Mendel is rightfully considered the founding
father of genetics, despite having virtually no understanding of what genes
are, how they work, or even whether they existed from a physiological perspective.
Furthermore, if Darwin and Mendel are praised for skillfully setting the
mechanisms of inheritance and variation aside and keeping them out of
their explanations, why is Skinner criticized for providing essentially the
same treatment of mental states? What distinguishes Darwin’s and Mendel’s
pioneering insights from Skinner’s influential, albeit outmoded, approach
to psychology? Finally, our analysis of black boxes sheds light on the con-
temporary dispute over the goals and methodology of economics, dividing
advocates of traditional neoclassical approaches from more or less revolu-
tionary forms of contemporary “psycho-neural” economics.
After providing a systematic definition of black boxes and testing its ad-
equacy against our case histories, it is time to address and enjoy the phil-
osophical payoff of all this hard work. This begins in Chapter 7, “Diet
Mechanistic Philosophy,” which compares and contrasts the black-boxing
approach with a movement that has gained much traction in the last couple
of decades within the philosophy of science: the “new wave of mechanistic
philosophy.” The new mechanistic philosophy was also born as a reaction to
the traditional reductionism vs. antireductionism divide. Unsurprisingly, it
pioneers many of the theses discussed here. This raises a concern. Is my treat-
ment of black boxes as novel and original as I claim? Or is it just a rehashing
of ideas that have been on the table since the turn of the new millennium? As
we shall see, the black-boxing recipe fits in quite well with the depiction of
science being in the business of discovering and modeling mechanisms. All
three steps underlying the construction of a black box have been stressed,
in some form or degree, in the extant literature. Nevertheless, the construc-
tion of black boxes, as I present it here, dampens many of the ontological
implications that characterize the contemporary landscape. This allows us
to respond to some objections raised against traditional mechanism. For this
reason, I provocatively refer to black-boxing as a “diet” mechanistic philos-
ophy, with all the epistemic flavor of your old-fashioned mechanistic view, but
hardly any metaphysical calories. Now, we can really begin to explore philos-
ophy of science, black-boxing style.
Reductionism contends that science invariably advances by descending to
lower levels. Antireductionism flatly rejects this tenet. Some explanations,
it claims, cannot be enhanced by breaking them down further. But why
should this be so? What makes explanations “autonomous”? A popular
way of cashing out the antireductionist thesis involves the concept of emer-
gence. The core intuition underlying emergence is simple. As systems be-
come increasingly complex, they begin to display properties which, in some
sense, transcend the properties of their parts. The main task of a philosoph-
ical analysis of emergence is to cash out this “in some sense” qualifier. In
what ways, if any, do emergents transcend aggregative properties of their
constituents? How should one understand the alleged unpredictability, non-
explainability, or irreducibility of the resulting behavior? Answering these
questions might seem simple at first glance. But it has challenged scientists
and philosophers alike for a hot minute. Chapter 8, “Emergence Reframed,”
presents, motivates, and defends a strategy for characterizing emergence
and its role in scientific research, grounded in our analysis of black boxes.
Emergents, I maintain, can be characterized as black boxes: placeholders
in causal explanations represented in models. My proposal has the welcome implications of bringing together various usages of emergence across domains and of reconciling emergence with reduction. Yet, this does come at
a cost. It requires abandoning a rigid perspective according to which emergence is an intrinsic or absolute feature of systems, in favor of a more contextual approach that relativizes the emergent status of a property or behavior to
a specific explanatory frame of reference.
Chapter 9, “The Fuel of Scientific Progress,” addresses a classic topic
that, over the last couple of decades, has been unduly neglected: the ques-
tion of the advancement of science. Setting up the discussion will require
us to retrace our steps back to the roots of modern philosophy of science.
Logical positivism provided an intuitive and prima facie compelling ac-
count of scientific knowledge. Science advances through a slow, constant,
painstaking accumulation of facts or, more modestly, increasingly precise
approximations thereof, in a “brick-by-brick” fashion (§1.1). These good old
days are gone. In the wake of Kuhn’s groundbreaking work, positivist philos-
ophy of science was replaced by a more realistic and historically informed
depiction of scientific theory and practice. However, over half a century has
now passed since the publication of Structure. Despite valiant attempts, we
still lack a fully developed, viable replacement for the cumulative model
presupposed by positivism. At the dawn of the new millennium, mainstream
philosophy eventually abandoned the project of developing a grand, over-
arching account of science. The quest for generality was traded in for a more
detailed analysis of particular disciplines and practices. I shall not attempt
here a systematic development of a post-Kuhnian alternative to logical posi-
tivism. More modestly, my goal is to show how the black-boxing strategy can
offer a revamped formulation of scientific progress, an important topic that
lies at the core of any general characterization of science, and to bring it back
on the philosophical main stage, where it legitimately belongs.
Chapter 10, “Sailing through the Strait,” takes us right back to where we
started. Chapter 2 characterizes contemporary philosophy of science as
metaphorically navigating between Scylla and Charybdis, that is, between
reductionism and antireductionism. There, I ask two related families of
questions. First, is it possible to steer clear of both hazards? Is there an al-
ternative model of the nature and advancement of science that avoids the
pitfalls of both stances and, in doing so, provides a fresh way of presenting
science to an educated readership in a more realistic fashion? Second, how
does science bring together the productive role of ignorance and the pro-
gressive growth of knowledge? The final chapter cashes out these two prom-
issory notes. These two sets of problems have a common answer: black
boxes. Specifically, the first four sections argue that the black-boxing strategy
outlined throughout the book captures the advantages of both reductionism
and antireductionism, while eschewing more troublesome implications. The
final section addresses the interplay of ignorance and knowledge.
I conclude this introductory overview by clarifying the aim and scope of this
work. At the most general level, I have three main targets in mind.
My first goal is a philosophical analysis of an important theoretical con-
struct and how it affects scientific practice. Talk about black boxes is ubiq-
uitous. This metaphor is widely employed by scientists, philosophers,
historians, sociologists, politicians, and many others. Yet, no one ever tells
us exactly what to make of this figure of speech. I offer to pick up this tab.
I should make it clear that my aim is hardly to dismiss and replace all the
excellent work in general history and philosophy of science that has been de-
veloped over the last few decades. The collective goal of the ensuing chapters
is twofold. On the one hand, they develop and refine the process of black-
boxing by appealing to some traditional debates in general philosophy of sci-
ence. On the other hand, I also want to suggest that black boxes provide a
clear and precise framework to systematize an array of traditional concepts,
whose nature has proven notoriously elusive, especially since the unifying force of logical positivism waned. More generally, my objective
is to advocate and justify a shift in perspective. If we move away from the
8 The notion of rational reconstruction is rooted in the groundbreaking work of Carnap (1956b);
Kuhn (1962); and, later, Lakatos (1976).
9 I try to adhere to two adequacy conditions borrowed from Kitcher (1993, pp. 12–13). First, if
something is attributed to a figure, that attribution is correct. Second, nothing is omitted which, if
introduced into the account, would undermine the point made.
2
Between Scylla and Charybdis
§2.1. Introduction
* “Even as waves that break above Charybdis, //each shattering the other when they meet, //so
must the spirits here dance the round dance.” Translation by A. Mandelbaum.
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0002
1 Some authors draw a fine-grained distinction between “logical positivism” and “logical empir-
icism,” separating the early stages of the movement, revolving around the Vienna Circle, from later
phases, following the post–World War II diaspora. Unless specified otherwise, for the sake of sim-
plicity, I shall use the two expressions interchangeably.
Our excursus into the modern history of reductionism sets out by going back
to the dawn of the contemporary debate. The story starts with the classical
model of derivational reduction, a legacy of logical positivism.2
In what came to be a locus classicus, Ernest Nagel (1961) characterized
reduction as the logical deduction of the laws of a reduced theory S from the
laws of a reducing theory P, a condition known as “derivability.”3 For such
2 The exposition in this section draws heavily from Nathan and Del Pinal (2016).
3 “S” and “P” stand in, respectively, for one of the special sciences and fundamental physics. Yet, the
same schema can be applied, mutatis mutandis, to any pair of sciences hierarchically arranged: neu-
roscience and psychology, biology and chemistry, economics and sociology, etc. For the sake of sim-
plicity, let us assume that the language of these theories does not substantially overlap, that is, most
predicates of S are not predicates of P and, vice versa, the majority of P-predicates do not belong to
the vocabulary of S.
4 As Fodor (1974) notes, the bridge laws required by Nagelian reduction express a stronger posi-
tion than token physicalism, the plausible view that all events which fall under the laws of a special
science supervene on physical events. Statements like R1 and R2 presuppose type physicalism, a more
controversial tenet according to which kinds figuring in the laws of special sciences must be type-
identical to more fundamental kinds.
event. Along the same lines, tasting sea salt and tasting sodium chloride
(NaCl) are identical types of events.
Intuitive and influential as it once was, Nagel’s framework has now fallen
on hard times, undermined by powerful methodological arguments. Before
presenting some objections, it is instructive to focus on a couple of desiderata
that the classical model satisfies, and satisfies quite well.
First, reductionism, in general, offers a clear and precise concept of the
unity of science. Two theories are said to be unified when one is reduced to
the other or both are subsumed under a broader, more general theory. To be
sure, many contemporary scholars reject the identification of unity with der-
ivational reduction. Some even question whether science should be viewed
as unified at all.5 Yet, this is typically a side effect of the shortcomings of re-
ductionism, specifically, of Nagel’s model. To the extent that reduction offers
a viable general model of science, it also captures its unity. Thus, unsurpris-
ingly, during the heyday of positivism, the unity of science was treated as a
matter of logical relations between the terms and laws of various fields, to be
achieved through a series of inter-theoretic reductions.6
A second accomplishment of classical reductionism is the provision of
a clear-cut account of how lower-level discoveries can, in principle, inform
higher-level theories. Let me illustrate the point with simple examples. Suppose
that we are testing a psychological hypothesis LPsy: Psy1x → Psy2x, which posits
a law-like connection between two psychological predicates: Psy1 and Psy2. If
we had a pair of reductive bridge laws that map Psy1 and Psy2 onto neural kinds
Neu1 and Neu2, then we could confirm and explain the nomological status of LPsy
directly by uncovering the neural-level connection LNeu: Neu1x → Neu2x. This is
because, as noted, the bridge laws presupposed by derivational reduction ex-
press contingent type-identities. If Psy1 and Psy2 are type-identical to Neu1 and
Neu2, and there is a law-like connection between Psy1 and Psy2, there will also be
a corresponding nomological connection between Neu1 and Neu2.
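The confirmation pattern just described can be displayed as a schematic derivation (my reconstruction of the Nagelian schema, using the notation introduced above, not a quotation from the text):

```latex
% Deriving a psychological law from a neural law via
% type-identity bridge laws (reconstruction)
\begin{array}{lll}
(R_{1}) & \forall x\,(\mathit{Psy}_{1}x \leftrightarrow \mathit{Neu}_{1}x) & \text{bridge law} \\
(R_{2}) & \forall x\,(\mathit{Psy}_{2}x \leftrightarrow \mathit{Neu}_{2}x) & \text{bridge law} \\
(L_{\mathit{Neu}}) & \forall x\,(\mathit{Neu}_{1}x \rightarrow \mathit{Neu}_{2}x) & \text{neural law} \\
\hline
(L_{\mathit{Psy}}) & \forall x\,(\mathit{Psy}_{1}x \rightarrow \mathit{Psy}_{2}x) & \text{derived psychological law}
\end{array}
```

Given the bridge laws, establishing LNeu thereby confirms LPsy.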
Readers unfamiliar with these debates might find the following analogy
more intuitive. Assume that water and sea salt are type-identical to H2O
and NaCl, respectively. If one provided a successful explanation of why
NaCl dissolves in H2O, under specific circumstances, then one has thereby
explained why sea salt dissolves in water, under those same conditions.
In short, the reductive approach suggests a general model for how micro-
theories can be used to advance their macro-counterparts. The goal is to look
for lower-level implementations of higher-level processes, which—on the
presupposition of reductionism—can then be used directly to test and ex-
plain macroscopic laws and generalizations. Reduction is unification, and
unification, the story goes, is the goal and benchmark of scientific progress.
The good news is over. Now, on to some objections. The best-known
problem with derivational reduction stems from the observation that natural
kinds seldom correspond neatly across levels in the way presupposed and
required by reductive bridge laws. One could arguably find a handful of suc-
cessful Nagelian reductions in the history of science. For instance, one could
make a strong case that sea salt and NaCl are type-identical, that the action-
potential of neurons is derivable from electric impulses, or that heat has been
effectively reduced to the mean kinetic energy of constituent molecules.
Assume, for the sake of the argument, that these reductions do, in fact, fit in
well with the classical model presented earlier.7 Still, contingent event iden-
tities are way too scarce to make classical reductionism a plausible, accurate,
and general inter-theoretic model of scientific practice. In most cases, there
are no physical, chemical, or macromolecular kinds that correspond—in
the sense of being type-identical—to biological, psychological, or economic
kinds, in the manner required by Nagel’s framework. This, in a nutshell, is the
multiple-realizability argument, first spelled out by Putnam and Fodor. The
basic idea is that most higher-level kinds are multiply-realizable and func-
tionally describable. Consequently, we rarely have bridge principles like R1
and R2 (“S1x ↔ P1x”; “S2x ↔ P2x”), which posit one-to-one mappings of kinds
across levels. Rather, what we typically find in science are linking laws such
as R3, which capture how higher-level kinds can be potentially realized by a
variety of lower-level states:
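A linking law of this disjunctive form can be rendered schematically (a standard reconstruction in the notation of R1 and R2, not a quotation from the text):

```latex
% R3: a higher-level kind realized by any of several lower-level states
(R_{3})\quad \forall x\,\bigl(S_{1}x \leftrightarrow (P_{1}x \vee P_{2}x \vee \dots \vee P_{n}x)\bigr)
```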
7 This is hardly uncontroversial. For some difficulties and qualifications affecting the “classical” re-
duction of thermodynamics to statistical mechanics, see Sklar (1993).
8 This view is spelled out in Suppes (1960); van Fraassen (1980); and Lloyd (1988).
establish this epistemic claim.12 Next, section 2.5 will consider some re-
ductionist comebacks.
One of the most influential arguments in support of explanatory autonomy
was originally developed by Putnam in his article “Philosophy and Our
Mental Life.” Putnam’s intended goal was to show that traditional philosoph-
ical discussions of the mind-body problem rest on a misleading assumption.
He begins by noting how all parties involved presuppose, more or less im-
plicitly, the following premise. If human beings are purely material entities,
then there must be a physical explanation of our behavior. Materialists em-
ploy this conditional claim in a modus ponens inference. Together with the
additional premise that we are material entities, it vindicates the in-principle
possibility of physically explaining our behavior:
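The displayed argument can be reconstructed as follows (with M abbreviating "we are purely material entities" and P "there is a physical explanation of our behavior"; the labels follow the text's references to premises (a)–(c)):

```latex
% Materialist modus ponens (reconstruction)
\begin{array}{ll}
(a) & M \rightarrow P \\
(b) & M \\
\hline
(c) & \therefore\; P
\end{array}
```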
Dualists accept premise (a) but turn the modus ponens into a modus tol-
lens by revising (b) and (c). Given that there can be no physical explana-
tion of human behavior, they argue, we cannot be purely material beings:
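The dualist inference can be reconstructed as follows (M abbreviating "we are purely material entities," P "there is a physical explanation of our behavior"):

```latex
% Dualist modus tollens (reconstruction)
\begin{array}{ll}
(a) & M \rightarrow P \\
(b') & \neg P \\
\hline
(c') & \therefore\; \neg M
\end{array}
```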
12 It is important to distinguish the epistemic form of antireductionism addressed here from var-
ious metaphysical variants. For instance, Galison, Hacking, Cartwright, Dupré, and other members
of the “Stanford School” have defended an ontological thesis, the heterogeneity of the natural world,
from which follows a methodological tenet fundamentally at odds with the positivist outlook: the
disunity of science. Reductionism, which Dupré (1993, p. 88) defines as “the view that the ultimate
scientific understanding of phenomena is to be gained exclusively from looking at the constituents of
those phenomena and their properties” becomes a derivative target, in virtue of its connection with
the unity of science. A different form of metaphysical antireductionism, associated with the possi-
bility of downward causation and strong emergence, has become popular in science in the context of
systems biology. We shall discuss some of these metaphysical stances—in particular, emergence—in
Chapter 8. For the moment, I shall only be concerned with antireductionism understood primarily as
an epistemic tenet of scientific explanation.
Both arguments, Putnam contends, miss the mark. Physicalists and dualists
make the same mistake by accepting the conditional premise (a). To establish
his point, he introduces an example that has since become famous.
Consider a simple physical system constituted by a rigid board with two
holes—a circle one inch in diameter and a square one inch high—and a rigid
cubical peg just under one inch high (Figure 2.1). Our task is to explain the
intuitive observation that the peg can pass through the square hole, but it will
not go through the round hole. Why is this the case?
Putnam sketches two kinds of explanations. The first begins by noting that
both the board and the peg are rigid lattices of atoms. Now, if we compute the
astronomical number of all possible trajectories of the peg, we will discover
that no trajectory passes through the round hole. There is, however, at least
one trajectory (and likely quite a few) that will pass through the square hole.
A second kind of explanation begins in exactly the same way, by noting that
the board and the peg are both rigid systems. But, rather than computing
trajectories, it points out that the square hole is slightly bigger than the cross-
section of the peg, whereas the round hole is smaller. Let us call the first kind
of explanation “lower-level,” “micro,” or “physical” and, correspondingly,
label the second one “higher-level,” “macro,” or “geometrical.” The predict-
able follow-up question is: are both explanations adequate? If not, why not?
And, if so, which one is better?
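The geometrical claim itself can be verified with a few lines of arithmetic (an illustration of the macro-level facts, not part of Putnam's discussion; the 0.95-inch side is an arbitrary stand-in for "just under one inch"):

```python
import math

side = 0.95          # cross-section of the peg: just under one inch
square_hole = 1.0    # side of the square hole, in inches
round_hole = 1.0     # diameter of the round hole, in inches

# The peg clears the square hole if its cross-section fits
# within the hole's side.
fits_square = side <= square_hole

# To pass the round hole, the square cross-section's diagonal
# (side * sqrt(2), roughly 1.34 inches) would have to fit the diameter.
fits_round = side * math.sqrt(2) <= round_hole

print(fits_square, fits_round)  # True False
```

The macro-explanation cites only these two comparisons; the micro-explanation would instead enumerate the trajectories of a lattice of atoms.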
Putnam maintains that the geometrical explanation is objectively superior,
from a methodological perspective, to its physical counterpart. (Actually, he
goes as far as saying that the physical description is not an explanation at all.
Yet, we can set this more controversial thesis to the side.) The reason is that
the physical description evidently applies only to the specific case at hand,
since no other pegs and boards will have exactly the same atomic structure.
In contrast, the geometrical story can be straightforwardly generalized to
similar systems. Whereas the macro-explanation brings out the relevant ge-
ometrical relations, the micro-explanation conceals these laws. As Putnam
puts it: “in terms of real life disciplines, real life ways of slicing up scientific
problems, the higher-level explanation is far more general, which is why it is
explanatory” (1975a, p. 297).
The moral drawn by Putnam from his example is the explanatory autonomy of the mental from the physical. Higher-level phenomena, regardless of whether they involve pegs or mental states, should not be explained at lower levels, in terms of biochemical or physical properties. Doing so does not produce a better explanation of the explanandum under scrutiny.
Putnam’s conclusion has fired up a long-standing debate that rages on
to the present day. We can all agree that, for creatures like us, the geomet-
rical explanation is simple and good enough for most intents and purposes.
Furthermore, the physical explanation is likely intractable and overkill. That
much virtually no one disputes. But is it really true that macro-explanations
are objectively superior to their lower-level counterparts, in the sense that, if
we were to advance them, they would explain more and better? As we shall
see in section 2.5, reductionists beg to disagree. Before getting there, in the
remainder of this section, let us focus on how Putnam’s insight has contrib-
uted to forging an antireductionist consensus.
The square-peg–round-hole analogy involves a simple toy system. Yet,
its general conclusion, the explanatory autonomy of the macro-level, can be
readily applied to scientific cases. A famous extension of Putnam’s example
in the philosophy of biology is Kitcher’s article “1953 and All That: A Tale
of Two Sciences.” Developing insights originally pioneered by David Hull,
Kitcher considers the prospects of reducing classical Mendelian genetics to
molecular biology. His conclusion is negative. He examines three arguments
for the in-principle impossibility of the reduction in question. First, genetics
does not contain the kind of general laws presupposed by Nagelian reduction.
Second, the vocabulary of Mendelian genetics cannot be translated into the
language of lower-level sciences, nor can these terms be connected in other
suitable ways for reasons pertaining to multiple-realizability. Third, even if
such reduction were possible, it would not be explanatory. Sure, looking at
the underlying mechanisms allows one to extend higher-level depictions in
13 As Strevens (2016, pp. 156–57) aptly summarizes it in a recent discussion, “Kitcher appears to
have the following view: moving down the levels of potential explanation from ecology to physi-
ology to cytology to chemistry to fundamental physics, there is some point beyond which fur-
ther unpacking of mechanisms becomes entirely irrelevant. [ . . . ] In ‘1953’ Kitcher accounts for
failures of transitivity by proposing that, when transitivity falls through it is because categories es-
sential to explaining high-level phenomena cannot be ascribed explanatory relevance by lower-level
explanations due to their not constituting natural kinds from the lower-level point of view.”
general, as the explanatory autonomy of the higher levels. The main thesis,
pioneered by Putnam and Fodor with an eye to the philosophy of mind,
made some form of non-reductive physicalism the default position. These
original insights were developed and applied to various other branches of
science, such as biology, psychology, and the social sciences.14
Still, some would not budge. For one, despite the clear shortcomings
outlined earlier, many continued to find the overall reductionist stance con-
vincing. Furthermore, as we shall shortly see, antireductionism is confronted
by controversial implications, just like its reductive counterpart. For these
and related reasons, the last few decades have witnessed the forging of a “re-
ductionist anti-consensus” across philosophical discussions of the sciences.
This new wave of reductionism proposes a refined framework. Moving away
from the original logico-positivist model, revamped epistemic reductionism
contends that explanations are always improved by shifting down to lower
levels. It is time to focus on this theoretical comeback.
14 See Hull (1974); Garfinkel (1981); Kitcher (1984, 1999); Fodor (1999); Kincaid (1990).
17 For a more detailed discussion of this point, see Nathan and Del Pinal (2016).
18 Fodor (1974) calls this thesis “token physicalism,” or “the generality of physics,” and accepts it
as unproblematic. Hütteman and Love (2016) dub it “metaphysical reductionism,” to distinguish it
from the “epistemological” reductionism presently under discussion.
one first raised in section 1.2 of Chapter 1: in principle, can physics explain
everything? Is it true that every fact explained by a special science can be
explained just as well—indeed, better, in greater depth—by physics?
Antireductionism offers a negative answer. As Putnam’s square-peg–
round-hole example purports to show, size, shape, rigidity, and other macro-
states supervene on the micro-structure: position, velocity, and other atomic
properties. And, yet, the macro-level is superior, from an epistemic stand-
point. The physical depiction is less perspicuous than the geometrical one
because it cites inessential facts and obscures relevant features. Other influential antireductionist arguments, such as Kitcher’s, build on this fundamental insight: the explanatory autonomy of higher-level descriptions.
Not everyone, however, was convinced. One response insists that detailed physical stories do, in fact, provide deeper illumination of macro-systems. We all acknowledge that biochemistry explains why taking
morphine alleviates pain. Why then would atomic structure not explain
the basic geometrical structure of the square-peg–round-hole system? The
problem with this line of reasoning is that it quickly turns into a slippery
slope. Once we apply it across the board, it turns out that the special sciences
seemingly have no explanatory problems of their own. In practice, our lim-
ited cognitive capacities may prevent human beings from being able to com-
pute the micro-explanation or from employing it. Yet, in principle, physics
always provides a deeper account of higher-level events. From this stand-
point, the autonomy of the special sciences is a mere pragmatic byproduct of
our ignorance or lack of computing power. Many reductionists suggest that
the theoretical disposability of higher levels does not undermine the intrinsic
value of biology, neuropsychology, or economics, vis-à-vis the omnipotence
of physics.19 Antireductionists typically beg to disagree.
A stronger objection turns Putnam’s argument on its head, dismissing
allegedly “autonomous” geometrical explanations as either incomplete or
straight up false.20 The effectiveness of geometrical depictions, the story goes,
presupposes a host of physical information regarding, say, the rigidity of
materials and their behavior in the conditions explicitly presented—or tac-
itly assumed—in the description of the system. As these depictions are pro-
gressively completed, filling in gaps, the crucial relevance of the fundamental
physical states and corresponding laws will become more evident.
20 This rejoinder to Putnam’s argument is found in Rosenberg (2006, p. 35, fn. 3).
21 This is Ken Waters’s (1990) response to Kitcher’s “explanatory incompleteness objection.” Waters
also responds to another class of antireductionist arguments, which attempt to establish an unbridge-
able conceptual gap between these areas of biology because of subtle differences in the meaning of
parallel terms (Hull 1974; Rosenberg 1985). The main idea underlying this “unconnectability objec-
tion” is that, while both classical geneticists and molecular geneticists talk about “genes,” the term is
used very differently across these theories. Mendelian genes are identified through their phenotypes.
And the relation between molecular genes and phenotypes is exceedingly complex, frustrating any
systematic attempt to connect the two concepts along Nagel’s lines. Waters responds by arguing that
this unconnectability objection presupposes an oversimplified conception of Mendelian genes. Once
the original theory is correctly understood, he claims, “The Mendelian gene can be specified in mo-
lecular biology as a relatively short segment of DNA that functions as a biochemical unit. [ . . . ] I con-
clude that the antireductionist thesis that there is some unbridgeable conceptual gap lurking between
[Classical Mendelian Genetics] and its molecular interpretation is wrong” (1990, p. 130).
22 Eliminative materialism was jointly developed by Patricia and Paul Churchland (1979, 1986). On this view, mental states are theoretical entities posited by “folk psychology.” Commonsensical as it seems, folk psychology, they argue, is a flawed theory of mind and is thus not a credible candidate for integration. Rather, it should be eliminated and replaced by a developed neuroscience, which will be more predictive, more explanatory, and better connected with extant scientific research. Eliminative materialism is, at the most general level of description, a detailed and provocative attempt to apply a revised account of classical reduction to the traditional mind-body problem. To be sure, eliminativists prefer to talk about “elimination,” rather than “reduction.” Yet, the former can be treated as a limiting case of the latter. Other notable reductionist strategies include Lewis’s (1972) and Kim’s (2005) sophisticated versions of identity theory and Bickle’s (1998, 2003) “ruthlessly reductive” account.
23 For example, Rosenberg (2006, p. 12) characterizes biological reductionism as the tenet that “there is a full and complete explanation of every biological fact, state, event, process, trend, or generalization, and [ . . . ] this explanation will cite only the interaction of macromolecules to provide this explanation. This is the reductionism that a variety of biologists and their sympathizers who identify themselves as antireductionists need to refute.” Or, again, “the impossibility of postpositivist reduction reveals the irrelevance of this account of reduction to contemporary biology, not the impossibility of its reduction to physical science. Thus, the debate between reductionists and antireductionists must be completely reconfigured” (Rosenberg 2006, p. 22). Similarly, in a recent article addressing Kitcher’s arguments, Strevens (2016) suggests that the in-practice autonomy of the special sciences can be made compatible with reductionism. This requires understanding the high-level sciences’ systematic explanatory disregard of lower-level details of implementation as practically, as opposed to intellectually, motivated. For similar stances, see Schaffner (1967, 1993, 2006) and Hooker (1981).

Time to take stock. This chapter began with a discussion of classical reductionism and its well-known shortcomings. Next, we presented the gradual emergence of an antireductionist consensus. While antireductionism remains an influential player in the game, it is no longer hegemonic, as its foundations have been challenged by a resurgence of the reductionist perspective. This revitalized reductionism acknowledges the limits of the classical model. It contends, however, that this was never the real issue at stake. Reductionism, its new-wave defenders claim, should not be abandoned. It should be reconfigured, from a logical relation between theories to an epistemic tenet regarding scientific explanation. This shift is evident in the writing of many influential contemporary authors.23 But the main questions remain largely unsettled. Do micro-explanations always deepen the explanatory power of macro-depictions? Or are the higher levels epistemically autonomous from the lower levels, as suggested by Putnam and his followers? Can this long-standing debate be settled, once and for all?

So, who wins? The reductionist or the antireductionist? Are we going to get crushed by the rock or drowned by the whirlpool? The answer hinges on whether we can—and should—describe every scientific event at more fundamental levels, and whether these micro-depictions invariably enhance explanatory power. Once again, clear-cut answers are still wanting, and not for lack of trying. How come? The reason, I maintain, has largely to do with the two parties talking past each other. Allow me to elaborate.
A popular litmus test to locate one’s position on the reductionism-
antireductionism spectrum is to make an informed prediction concerning
the future state of science. This is because no one really wants to argue that
current physics is in a position to replace biology, psychology, economics, or
any other branch of science. We lack the deep understanding of subatomic
systems and the computing power required to approximate the perfect vi-
sion of a Laplacian Demon. Still, reductionism claims, in principle, it would
be possible to provide a host of physical depictions that could not merely
replace extant higher-level accounts but enhance their explanatory power.
Antireductionists beg to disagree. Even if we were in a position to translate
biological, psychological, or economic generalizations into molecular terms,
we should not do so. If anything, this would muddy the waters.
This is how the debate was first introduced in section 1.2 of Chapter 1.
Intuitive and thought-provoking as they are, these questions have a shortcoming: they are untestable. The reason is simple. How do we assess, at our
present time, the fate of future science, that is, its development in the long
run? To adjudicate between these competing stances, we need to look at
whether current science better conforms to the standards of reductionism or
antireductionism. When we try to do so, the matter becomes less substantive
than most participants like to admit. To corroborate this claim, I focus, in
turn, on how the dispute has unfolded in two fields: biology and psychology.
Can all biological events be subsumed under a physical explanation?
Unfortunately, this question cannot yet be answered, and this is unlikely to
change anytime soon. Sure, we are starting to see some fruitful interdiscipli-
nary overlap between physics and biology. Still, the truth is that we are too far
from integrating these two disciplines to warrant an informed response, one
way or the other. For this reason, the fate of reductionism in biology hinges
on a more modest thesis, namely, molecular reductionism.24 According to
this tenet, there is an explanation of every biological fact that mentions only
biochemical properties of molecules and their interactions.
Whereas the general reductionist query—can physics explain every-
thing?—pertains to the realm of science fiction, molecular reductionism
may be assessable in practice, not merely in principle. As noted in section 2.5,
24 This thesis is spelled out in detail in Sarkar (1998) and Rosenberg (2006).
26 These divergent characterizations of a “fully molecular” language can be found in the work
of Rosenberg (2006) and Schaffner (2006), on the reductionist side, and Culp and Kitcher (1989),
Kincaid (1990), and Franklin-Hall (2008), from the opposite perspective.
affirmative answer looks more promising.27 Yet again, if the issue is whether
or not all psychological events could and should be explained more thor-
oughly by describing them in neuroscientific terms, the answer depends on
how we choose to characterize psychology and neuroscience. If the “language
of neuroscience” is restricted to talking about individual neurons and their
additive interactions, this is clearly insufficient to explain psychology, period.
Indeed, such a narrowly construed neuroscience would not even be powerful
enough to describe neural events, let alone mental ones. As we shall discuss,
in greater detail, in Chapter 8, neural systems, as currently understood, are
irreducible to individual causes and transcend the localization and decom-
position of their elements. In contrast, sufficiently rich characterizations of
the appropriate vocabulary—including functional, behavioral, and cognitive
concepts—will make much of psychology describable in neural terms, not
only in principle, but in practice as well. And such a reduction may well be
fruitful and insightful.
It is important to emphasize, once again, that the vagueness affecting
current debates on psycho-neural reductionism is not grounded in factual
ignorance. Sure, cognitive neuroscience is, relatively speaking, a young dis-
cipline and much still needs to be learned. Nevertheless, a look at more es-
tablished fields suggests that lack of knowledge cannot be the main issue at
stake. Consider, once again, the situation in biology. Biologists are often able
to successfully pinpoint the implementation of functional structures at the
molecular level. We already know quite a bit about, say, how phylogenetic
adaptations develop at the ontogenetic level. Many complex conditions can
be causally explained by identifying their genetic difference-makers and the
subsequent cascade of processes. Prima facie, this might suggest that the case
for or against molecular reductionism has been settled, or is getting close to
being settled, in the life sciences. Yet, as we saw earlier, this is not the case.
The debate over molecular reductionism is as open as ever. For this reason,
we should not expect advances and discoveries concerning where and
how cognitive functions are computed in the brain to solve the question of
psycho-neural reduction. Philosophers of neuropsychology should learn the
hard lesson from their colleagues in biology.
Before exploring alternatives, it is time to tie up some loose ends.
Let us return to the question that has fueled our discussion.
Who wins, the reductionist or the antireductionist? Scylla or Charybdis? The
bulk of this chapter laid out several ways of presenting the two stances, and
various arguments for and against them. Section 2.6 argued that the question
of whether macro-explanations can be enhanced by rephrasing them at the
micro-level has no clear, definitive answer. The issue depends more on termi-
nology or linguistic preference than substantive disagreement. Paraphrasing
one of Wittgenstein’s aphorisms, reductionism and antireductionism are
matters of expression, not facts of the world.28 Are there more substantive
disagreements to be found? I believe there are.
Both reductionism and antireductionism center on explanation. This
strikes me as correct. Still, the interface between higher and lower levels,
implicitly adopted by both parties, is overly restrictive. Reductionists
suggest that micro-explanations invariably advance macro-depictions.
Antireductionists respond that some higher levels exhibit an epistemic au-
tonomy of sorts, in the sense that they are perfectly explanatory without the
addition of further details that would only muddy the waters. Autonomy and
reduction are typically assumed to be mutually exclusive. Indeed, autonomy
is routinely defined as the rejection of reduction and, vice versa, reduction
is characterized as the rejection of autonomy. I want to argue that these two
tenets may actually be reconciled. What does this entail?
Compared to geometrical accounts, physical descriptions provide more
comprehensive understanding of why square pegs do not go through round
holes whose diameter is approximately the length of their cross section. Still,
the geometrical explanation delivers the goods in a more succinct fashion,
omitting unnecessary detail. How is this possible? Do these observations
not flatly contradict each other? Contrary to conventional philosophical
wisdom, I shall answer these questions in the negative. By focusing not on
what our models state, but on what they leave out, it will become clear that
autonomy and reduction are really two sides of the same coin.
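The geometrical explanation rests on a piece of elementary arithmetic: a square peg of side s has a diagonal of s·√2, the widest span of its cross section, and this exceeds the diameter of any round hole whose diameter merely matches the side. A minimal sketch of the calculation (the function name and numbers are illustrative, not from the text):

```python
import math

def fits_round_hole(side: float, diameter: float) -> bool:
    """A square peg passes through a round hole only if its diagonal,
    the widest span of its cross section, fits within the diameter."""
    diagonal = side * math.sqrt(2)
    return diagonal <= diameter

# A peg whose side equals the hole's diameter cannot pass:
print(fits_round_hole(1.0, 1.0))  # False: diagonal is about 1.414
# The hole must be wider than side * sqrt(2):
print(fits_round_hole(1.0, 1.5))  # True
```

The geometrical account needs nothing beyond this inequality; the physical account re-derives it from the rigidity and microstructure of the materials.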
28 Here is Wittgenstein’s original quote: “The fallacy we want to avoid is this: when we reject some
form of symbolism, we are inclined to look at it as though we had rejected a proposition as false. It is
wrong to treat the rejection of a unit of measure as though it were rejection of the proposition. “The
chair is three feet rather than two”. This confusion pervades all of philosophy. It is the same confusion
that considers a philosophical problem as though such a problem concerned a fact of the world in-
stead of a matter of expression” (1979, p. 69).
48 Black Boxes
§3.1. Introduction
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0003
1 Following conventional wisdom, I refer to “evolution by natural selection” in the singular, treating
it as an individual, monolithic, and cohesive theory. Yet, strictly speaking, the “theory” of evolution
encompasses a package of independent theses. These include the constant presence of reproductive
surplus, the continuous production of individual differences, the heritability of traits, sexual selec-
tion, and a few other tenets (Mayr 1991).
2 The following reconstruction of Darwin’s tenets is inspired by Kitcher (1985, 1993).
3 Darwin has a simpler conception of natural selection, compared to contemporary evolutionists
(Mayr 1991). For Darwin, selection is essentially a one-step process, which involves a steady produc-
tion of individuals, generation after generation. Some of these are bound to be “superior,” in virtue
of having some reproductive advantage over competitors. Modern mainstream evolutionary theory,
in contrast, depicts natural selection as a two-tiered process. The first step consists in the generation
of genetically distinct individuals, the production of variation. The second step is the actual selection
process, which determines the survival and reproductive success of these organisms themselves.
Lessons from the History of Science 51
4 Darwin considered inheritance and its laws as a less-immediate concern than variation and
its causes, which troubled him from the initial stages of his evolutionary thinking. Two aspects
of variation were especially problematic (Mayr 1982). First, while Darwin was clearly thinking
about the range of variation for domesticated species, he never draws a clear distinction between
interpopulational and intrapopulational variation, that is, between individual and geographical vari-
eties. Second, although Darwin acknowledges the existence of discontinuous variation, he stresses
the prevalence and biological significance of continuous variation. It was genetics which eventually
showed that there is no fundamental difference between continuous and discontinuous variation.
5 This label is imprecise because such a view typically included the modifiability of genetic material
by climatic and other environmental conditions (“Geoffrism”) or by nutrition directly, without the
intermediary role of phenotypic traits (Mayr 1982, p. 687).
The laws governing inheritance are quite unknown; no one can say why
a peculiarity in different individuals of the same species, or in individuals
of different species, is sometimes inherited and sometimes not so; why the
child often reverts in certain characters to its grandfather or grandmother
or other more remote ancestor; why a peculiarity is often transmitted from
one sex to both sexes, or to one sex alone, more commonly but not exclu-
sively to the like sex. (1859, p. 13)
6 As stressed by Mayr (1982), Darwin discusses three potential sources of soft variation: changes
in the environment that induce increased variability via the reproductive system, unmediated
influences of the environment, and the effect of use and disuse.
7 According to Mayr (1982, p. 694), he was especially wary about the transmission hypothesis,
which he referred to as a “mad dream” that nonetheless “contains a great truth.”
One also finds rudimentary, and yet remarkably plausible, hypotheses re-
garding the formation of complex organs that require myriad generations to
evolve but could not obviously do so in a piecemeal fashion. What use are a
quarter of a wing or a not fully functional eye? Origin’s answer turned out to
be on the right track.
In short, Darwin was aware of the role of variation and inheritance in his
theory of evolution and realized how much information he lacked. He also
later integrated his views with speculative and, for the most part, incorrect
assumptions. How is it possible for his explanations to be so successful?
This puzzling aspect of Darwin’s work has not gone unnoticed among
scientists, philosophers, historians, and other scholars interested in biology.
A clear statement of the main issue, together with a sketch of a solution, can
be found in Mayr’s seminal excursus into the growth of biological thought:
Mayr can be paraphrased as claiming that the two projects of identifying the
mechanisms of transmission and variation and of spelling out the processes
of evolution are not inextricably tied to each other. One can acknowledge the
existence of hereditary variation and its fundamental biological role while
effectively setting its nature and structure aside. This seems plausible. Yet,
Mayr’s observation raises a further philosophical question. How is it possible
to segregate the mechanisms of inheritance—a notion which lies at the very
heart of the phenomenon we are explaining: evolution by natural selection—
without compromising the integrity of the explanations, making them par-
tial, superficial, or otherwise inadequate?
Mayr’s insight already contains the key to answering this question. Darwin
treats inheritance and variation as black boxes. The variability and herita-
bility of certain traits is undeniable. Darwin was clear and explicit about this.
Nonetheless, he deliberately set aside and postponed puzzles concerning
their nature and structure until more auspicious times. And, when he did try
to answer these issues, in later publications, his wrongheaded speculations
left his evolutionary analyses unscathed.
Taking stock, Darwin stressed the role of variation and heritability in his
theory. Still, he “black-boxed” the underlying mechanisms. He acknowl-
edged their existence and significance but took them for granted. Now, surely
Darwin would have loved to know more about these processes. Nevertheless,
the concepts of inheritance and transmission, as they figure in Origin, did
not require any micro-explanation to be effectively employed within evolu-
tionary theory. All this is well known. But how exactly does it work?
The aim of this book is to spell out the foundations and implications of
this black-boxing strategy. What is a black box? How do we create one?
Under what conditions is this epistemic construct legitimate, successful, or
justified? Are there better or worse ways of isolating a phenomenon? Before
delving into the nuts and bolts of our explanatory strategy, I would like to in-
troduce a few more examples. These fascinating case studies emphasize how
pervasive black-boxing is across disciplines. In addition, they reveal a mul-
tifarious array of reasons underlying the deliberate decision to leave certain
phenomena unspecified, while unveiling others. As we shall see, these motiv-
ations are vastly richer, more complex, and more exciting than Mayr’s mere
need to “postpone and await for more auspicious times.”
The obvious follow-up is: why is Mendel unanimously considered the fa-
ther of modern genetics, given that he did not understand the fundamental
structure of genes any better than fellow naturalists of his time?
In response, Mayr stresses two crucial differences between the Bohemian
scientist and illustrious colleagues of his, such as Darwin, Galton, and
Weismann. First, Mendel discovered a consistent 3:1 ratio in inheritance
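Mendel's 3:1 ratio is exactly what one obtains by treating the hereditary factor as a black box with two variants: crossing two Aa hybrids yields the combinations AA, Aa, aA, and aa in equal proportion, so three offspring in four show the dominant trait. A small simulation reproduces the ratio (an illustrative sketch, not Mendel's own procedure):

```python
import random

def monohybrid_cross(n: int, seed: int = 0) -> float:
    """Cross two Aa hybrids n times; return the fraction of offspring
    showing the dominant phenotype (i.e., carrying at least one 'A')."""
    rng = random.Random(seed)
    dominant = sum(
        1 for _ in range(n)
        if 'A' in (rng.choice('Aa'), rng.choice('Aa'))  # one allele per parent
    )
    return dominant / n

print(monohybrid_cross(100_000))  # approaches 0.75, the 3:1 ratio
```

Nothing in the simulation specifies what an allele is made of; the ratio follows from the combinatorics alone, which is why Mendel could derive it without any knowledge of the underlying molecular structure.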
The Modern Synthesis established much of the foundation for how evo-
lutionary biology has been discussed and taught for the past sixty years.
However, despite the monikers of “Modern” and “Synthesis,” it was in-
complete. At the time of its formulation and until recently, we could say
that forms do change, and that natural selection is a force, but we could
say nothing about how forms change, about the visible drama of evolution
as depicted, for example, in the fossil record. The Synthesis treated embry-
ology as a “black box” that somehow transformed genetic information into
three-dimensional, functional animals. (Carroll 2005, p. 7)
that cannot be answered and hypotheses that cannot be verified. This igno-
rance does not stifle biological research; it enhances it. Borrowing a Kuhnian
expression, these glitches are anomalies, not falsifications. As Mayr put it,
Darwin wisely did not waste time and energy on problems then insoluble.
He effectively postponed these issues until more auspicious times. Mendel
followed suit and so did many other successful scientists. Unraveling the his-
tory of the life sciences requires keeping track of black boxes being set up and
broken down. Our goal is figuring out how this works.
In conclusion, two points should be stressed. First, while the first part of
this chapter focused on biology, black-boxing is by no means confined to it.
It permeates every area of science. Second, the tales of Darwin, Mendel, and
their successors illustrate how black boxes play various crucial and subtly dif-
ferent parts in the advancement of science. The next two sections establish
these claims by looking at two more cases.
9 The overview in this section draws on Flanagan (1991) and Hatfield (2002).
explaining and predicting human conduct and, more generally, the behavior
of organisms.
Behavioral psychology comes in various forms and degrees of rigidity.
Early proponents, such as William McDougall and Walter Pillsbury, advo-
cated an account of behavior that unabashedly helped itself to the mental-
istic vocabulary of traditional psychology. Thus, introspection was among
the conceptual tools that could be employed in the process of explaining the
mind. This proto-behavioristic science essentially introduced behavior as the
chief object of study without eschewing mentalism tout court. More radical
approaches rejected the validity of introspection for psychological analysis
but retained the use of mentalistic terms in the description of human con-
duct. The most extreme proposals purported to expunge entirely all mental-
istic talk from the vocabulary of psychology.
The most influential radical variant of early behaviorism was offered by
John B. Watson. Watson was a hardcore materialist, strongly committed to
the tenet that ultimate explanations of behavior would be grounded in the
principles and language of physics and chemistry. Watson and his followers
were well aware that psychology had a long way to go before developing into a full-
fledged behavioral science which eschewed all forms of intentional language.
For this reason, they allowed the use of provisional explanations which
charted stimulus-response patterns along the lines of Pavlov’s conditioning
theory and Thorndyke’s laws of effects. Yet, this is where the line was drawn.
Stimuli, responses, and other measurable bodily states were supposed to be
rigorously described by using only the—allegedly—objective vocabulary of
physical theories. In rejecting all inherently mentalistic notions, whether in-
trospective or descriptive of behavior, psychology could finally join the ranks
of legitimate natural sciences.
Psychology’s path toward scientific respectability has become the subject
of interesting debates in the history of science. According to mainstream
reconstructions, this maturation was brought to completion when neo-
behaviorists such as Tolman, Hull, and Skinner joined the methodological
positivism of Schlick, Carnap, Hempel, and other thinkers associated with
the “Vienna Circle.”10 On these traditional accounts, logical empiricism fur-
nished psychology with the conceptual tools to rid itself of untestable meta-
physical claims, thereby gaining rigor and objectivity. This was attempted by
ensuring that descriptions of behavior and its triggers be translatable—that
10 Classical accounts along these lines can be found in Boring (1950) and Leahey (1980).
is, reducible to—the language of physics along the lines of what eventually
became the classical model, discussed in section 2.3 of Chapter 2.
Over the last few decades, this standard narrative of American psychology’s
alliance with positivism has been challenged. One revisionist account is
founded upon the rejection of two assumptions presupposed, more or less ex-
plicitly, in the story rehearsed earlier.11 From this standpoint, a first mistake
is the belief that after psychology disenfranchised itself from philosophy, in
the 1890s, a state of hostility existed between these two disciplines, until the
rise of positivism reconciled them. A second misunderstanding lies in the
failure to appreciate that virtually all neo-behaviorist psychologists rejected
the main conceptual pillars of logical positivism, such as verificationism
and the analytic-synthetic distinction. This does not mean that philosophy
had no role to play in the development of behaviorism. Quite the contrary,
philosophers were involved from the very beginning, and such discussions
played a formative role in the maturation of Hull’s and, especially, Tolman’s
thought. This revisionist reconstruction rejects the common view that phi-
losophy of science only dawned in North America when positivists migrated,
following the surge of Nazism across Europe. Mainstream American phil-
osophical traditions—such as realism, neo-realism, and critical realism—
had long called for philosophy to take the sciences seriously. In short, from
this perspective, the actual relations between philosophy and psychology in
American academia were already reasonably amiable, promoting a fruitful
spirit of intellectual exchange.
Adjudicating between historical narratives lies beyond my interest and
professional competence. Be that as it may, let us fast-forward to the end of
the story and take a look at the outcome of this revolution in psychology.
The most developed form of mature behaviorism comes from the work
of B. F. Skinner, arguably the most influential psychologist to ever live and
work in the United States. Constructing psychology as the science of beha-
vior, in stark opposition to previous characterizations as the study of mind
or consciousness, Skinner followed the trail blazed by early behaviorists.
Especially influential on the young Skinner was Watson, who had diagnosed
the delayed growth of psychology as the effect of pervasive metaphysical and
epistemological vices. These attitudes were readily identified as a nefarious
legacy of Descartes’s substance dualism, which implied that minds are non-
physical, private, unobservable entities, essentially rendering any serious
11 The following reconstruction is due to Amundson (1983, 1986) and Smith (1986).
One can perhaps paraphrase Chomsky’s insight along the following lines.
What makes Skinner’s project controversial is his deliberate choice to omit
any reference to structures “internal” to the organism, the structures which
govern the relation between inputs and outputs. Whether stimuli and envi-
ronment play a role in shaping behavior is not in question—of course they
do. And, in the absence of neuropsychological data, they might well be the
main or even the only kind of evidence.13 The striking move is the complete
rejection of abilities and capacities germane to the organism itself.
These considerations highlight how Skinner’s strategy mirrors the
approaches of Darwin and Mendel. All these authors effectively set aside the
details of the mechanisms producing the behavior in question. Many readers
will find this perplexing. Behaviorism might survive, in various revised
14 As Skinner (1974, p. 233) remarked in a later book: “The organism is, of course, not empty, and it
cannot be adequately treated simply as a black box, but we must carefully distinguish between what is
known about what is inside and what is simply inferred.”
What is economics? In Chapter One of his Essay On the Nature and Significance
of Economic Science, Lionel Robbins proposes one of the most influential
modern definitions: “Economics is the science which studies human behaviour
as a relationship between ends and scarce means which have alternative uses”
(1932, p. 16). The fundamental axioms of the enterprise of economics, Robbins
Why the human animal attaches particular values in this sense to partic-
ular things, is a question which we do not discuss. That is quite properly a
question for psychologists or perhaps even physiologists. All that we need
to assume as economists is the obvious fact that different possibilities offer
different incentives, and that these incentives can be arranged in order of
their intensity. (Robbins 1932, p. 86)
This is an adage that most students of economics learn at an early stage. Still,
in the ensuing discussion, Robbins adds a remark that many of his colleagues
would now find disconcerting. No economic explanation, he claims, can be
considered adequate, let alone complete, without invoking elements of a sub-
jective or psychological nature, such as preference.16
This ineliminable subjective component in economic explanations raises
issues. From a historical perspective, it strikingly contrasts with the meth-
odological orthodoxy of the day—logical empiricism—which purported to
15 This disclaimer is required because nothing prevents the same individual or research team from
pursuing agendas in both economics and psychology. Still, for Robbins’s point to go through, it is
enough to distinguish psychological questions from economic ones.
16 “But even if we restrict the object of Economics to the explanation of such observable things as
prices, we shall find that in fact it is impossible to explain them unless we invoke elements of a sub-
jective or psychological nature. It is surely clear [ . . . ] that the most elementary process of price de-
termination must depend inter alia upon what people think is going to happen to prices in the future.
[ . . . ] It is quite easy to exhibit such anticipations as part of a general system of scales of preference.
[Footnote omitted.] But if we suppose that such a system takes account of observable data only we
deceive ourselves. How can we observe what a man thinks is going to happen? It follows, then, that
if we are to do our job as economists, if we are to provide a sufficient explanation of matters which
every definition of our subject-matter necessarily covers, we must include psychological elements”
(Robbins 1932, pp. 88–89).
reduce all synthetic knowledge to statements that are testable, at least in prin-
ciple. Mental states are not directly observable. Thus, they cannot be tested,
in practice or in theory. According to the verificationist semantics endorsed
by logical positivism, this turns them into meaningless gibberish.
The methodology of economics vis-à-vis its status as a science quickly be-
came a contentious matter requiring serious discussion. Prominent scholars,
such as Terence Hutchison (1938), maintained that economics, as a bona fide
science, must formulate falsifiable generalizations and subject them to rig-
orous test. This requirement might initially appear plausible, even truistic.
Upon further scrutiny, it poses strictures of all sorts. Conforming economic
methodology to the basic tenets of logical positivism turned out to be a real
challenge, independently of its appeal to psychological elements.
For one thing, several mainstream economic assumptions are heavily
hedged with ceteris paribus qualifications and, hence, not directly testable.
The reason is straightforward. An unqualified generalization such as "cutting
taxes is always followed by a rise in employment rates" can be corroborated
by ensuring that the consequent, a rise in employment rates, invariably
follows the antecedent, a cut in taxation rates. In contrast, the hypothesis that
"all things being equal, cutting taxes is followed by a rise in employment
rates" is harder to assess. When exactly should one deem it falsified? When
should things be considered equal and by what standards?
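The contrast can be made concrete. Against a record of episodes, the unqualified generalization is refuted by a single counterexample, whereas the hedged version never is, because any failure can be blamed on things not having been equal. A toy illustration (the episodes and variable names are invented for the example):

```python
# Each episode: (taxes_cut, employment_rose, other_shocks_present)
episodes = [
    (True, True, False),
    (True, False, True),   # taxes cut, employment fell, but a shock intervened
    (True, True, False),
]

def falsifies_unqualified(data) -> bool:
    """'Cutting taxes is always followed by a rise in employment':
    one tax cut not followed by a rise refutes it outright."""
    return any(cut and not rose for cut, rose, _ in data)

def falsifies_hedged(data) -> bool:
    """'All things being equal, cutting taxes is followed by a rise':
    a counterexample counts only when nothing else intervened,
    a clause elastic enough to absorb any apparent failure."""
    return any(cut and not rose and not shock for cut, rose, shock in data)

print(falsifies_unqualified(episodes))  # True
print(falsifies_hedged(episodes))       # False
```

The second test never fires so long as some disturbing factor can be cited, which is precisely why ceteris paribus clauses sit uneasily with a falsificationist methodology.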
A second and more troubling issue emerged when empirical economic
claims were actually tested. Many turned out to be false. For instance, ac-
cording to a well-known theoretical hypothesis, economic agents operate
under the assumption of marginal cost pricing. However, a survey was
published revealing that real-world firms actually base their calculations on
full-cost pricing. A few years later, another study showed that firms do not
maximize profits by equating marginal costs and marginal revenues, another
core tenet of early neoclassical economics.17 In short, it soon became evident
that economic models provide an inaccurate depiction of business conduct.
If economists were supposed to behave like responsible scientists—which, in
the logico-positivist intellectual milieu of the early twentieth century, meant
responsible physicists—they were not doing a very good job.
This impasse triggered various kinds of responses. Economists associ-
ated with the Austrian School—such as Frank Knight, Carl Menger, Ludwig
von Mises, and Friedrich Hayek—willingly accepted the conclusion that the
17 These results are reported by Hall and Hitch (1939) and Lester (1946), respectively.
For an insightful philosophical discussion of idealization—which will also be discussed more sys-
tematically in Chapter 5 and Chapter 7—see Appiah (2017).
assess neoclassical economics tout court. Rather, I wish to bring to light some
methodological presuppositions of contemporary economics.
One observation is especially important from our perspective. As noted,
the combination of Friedman’s “as if ” approach with the revealed preference
view of utility effectively made economic theory kosher from a positivist
standpoint. Yet, in doing so, it also completely shielded economic models
from psychological influence and critique. The rationale is simple. In neo-
classical economics, preference is not something that is open to further anal-
ysis. The identification of preference with choice turns both into raw data
that explain the behavior of a rational—that is, a logically consistent—agent.
In a nutshell, choice, preference, and utility are all black boxes.
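The revealed preference view reduces to a simple behavioral check: if an agent ever chooses a when b was available, no later choice may reverse that ranking. A minimal sketch of the consistency test (the function and data names are mine, offered as illustration rather than as the economists' formal apparatus):

```python
def consistent(choices) -> bool:
    """choices: a sequence of (chosen, rejected) pairs revealed over time.
    Returns False if any revealed ranking is later reversed; this is
    the behavioral requirement that makes the theory falsifiable."""
    revealed = set()
    for chosen, rejected in choices:
        if (rejected, chosen) in revealed:
            return False  # b chosen over a after a was chosen over b
        revealed.add((chosen, rejected))
    return True

print(consistent([("a", "b"), ("a", "c")]))  # True: no reversal
print(consistent([("a", "b"), ("b", "a")]))  # False: inconsistent agent
```

Note what the check does not mention: no feelings, no utilities, no psychological mechanism. Preference is exhausted by the observed pattern of choices, which is exactly the black-boxing at issue.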
This screening off of psychology had a triple effect on twentieth-century
economics. First, identifying preference with choice provided a clear, simple,
and operational way to talk about reasons and preferences. Complications
arising from the idiosyncrasy of human psychology and the neural imple-
mentation of these processes, both of which, at that time, were far beyond
our ken, could be unabashedly swept under the rug. Second, if, from an eco-
nomic perspective, all there is to preference is revealed choice, then psycho-
neural discoveries about the human motivational system, in principle,
cannot shed any light on economics. We can effectively talk about behavior
without mentioning its causes. In addition, and more radically, specifying
such causes will leave our understanding of economic behavior untouched.
Third, the isolation of economics from psychology was a powerful and effec-
tive strategy to screen off the core assumptions of neoclassical theory from
potential attacks. Specifically, the superiority of free markets could not be
questioned by appealing to the psychological states of economic agents. One
could not argue, for instance, that the preferences that led to choice were
somehow normatively deficient.
The insulation of economics might have been relatively inconsequential
during the days of logical positivism, when psychology and neuroscience
were still relatively young, underdeveloped, and had little to contribute to
the study of economic phenomena. To be sure, the critical examination of
the relation between preference and behavior has a long history, harking
back to Plato and Aristotle. Still, until recently, we lacked any systematic
theory which might shed light on how these choice-mechanisms are actually
implemented in the human mind, allowing for more accurate predictions
and sharper explanations. Things have changed quite drastically over the last
few decades. Following the cognitive revolution, proposals regarding the im-
pact of psychology and neuroscience on economics have emerged.
The claim underlying contemporary “psycho-neuro-economics” (PNE) is
that new insights into how minds and brains actually frame, compute, and re-
solve problems challenge fundamental assumptions regarding the behavior
of economic agents.24 For instance, behavioral economics has denounced
the lack of reference points in expected utility theory. In doing so, it has
emphasized how real humans, as opposed to economic agents (“econs”),
use heuristics and biases in making decisions, rather than calculating ex-
pected utilities. Neuroeconomics has also made some progress revealing
how and where mechanisms governing choice are implemented in the brain,
unveiling further inconsistencies with conventional wisdom. In short, the
story goes, classical economic notions, such as risk aversion, time preference,
and altruism, are due for a makeover. Is it finally time to open the black boxes
that were so skillfully crafted by Robbins, Friedman, and their colleagues?
Supporters of PNE answer affirmatively. This is clearly stated in one of the
founding manifestos of neuroeconomics (Camerer et al. 2005, p. 53):
27 To wit, an influential neuroeconomist, Camerer (2010, p. 43), offers a broad definition of eco-
nomics as “the study of the variables and institutions that influence economic choices, choices with
important consequences for health, wealth, and material possession and other sources of happi-
ness.” While Gul and Pesendorfer (2008) do not offer an explicit comprehensive definition of eco-
nomics, they narrow the field substantially with assertions such as “standard economics focuses on
revealed preference because economic data comes in this form” (p. 8). It should be evident that, on
the former definition, it becomes hard to dismiss the relevance of psychological discoveries on ec-
onomic choices. In contrast, the latter delimitation virtually dismisses the import of psychology on
economics by definition, by severely constraining what may and may not count as economic data.
regulation of gene expression and the principles governing the new “omics”
fields in biology, initially left untouched, started to emerge.
The third case history explored the history of psychology. Until the nine-
teenth century, psychologists typically considered mental states to be pri-
vate conscious phenomena not amenable to public scrutiny. In an attempt
to endow their discipline with a stricter scientific methodology, behaviorists
proposed that psychology restrict itself to seeking laws linking stimuli to be-
havior. The rationale was that only what is publicly observable is a fit subject
for science, essentially excluding mental states, as traditionally conceived,
from rigorous scientific examination. From our perspective, the black-
boxing of mental states enacted by behaviorism is interesting for two reasons.
First, it provides yet another clear and historically significant instance of the
widespread use of black boxes in scientific practice. Second, and more im-
portant, this methodological segregation of mental states highlights some
aspects of the general strategy that were not evident in the biological cases.
My final example came from the field of economics. In eighteenth- and
nineteenth-century Britain, economics and psychology were considered two
branches of a single subject: moral philosophy. Understanding the behavior
of markets presupposed a reasonably accurate psychological portrait of in-
dividual agents. It is thus unsurprising that seminal works by authors such
as Smith, Mill, Edgeworth, Marshall, and Pigou included discussions of how
mental states affect economic interactions. With the development of neoclas-
sical economics, however, feelings and other psychological states came to be
set aside, effectively isolating economics from fields such as psychology and,
later, neuroscience. Why did this occur? Simply put, feelings came to be per-
ceived as useless constructs, meant to predict behavior, but which could only
be inferred from behavior itself. In the 1940s, the concepts of ordinal utility
and revealed preference eliminated the superfluous intermediate step of pos-
iting unmeasurable feelings, equating unobserved preferences with observed
choices. The risk of circularity was avoided by requiring consistency in be-
havior. Once an agent reveals a preference for a over b, they ought not sub-
sequently choose b over a. This made the theory falsifiable, conforming
economics to positivist strictures. This “as if ” approach makes perfect sense
as long as the brain is considered a black box. Yet, following almost a century
of separation, economics has begun reimporting insights from psychology.
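The consistency requirement just mentioned can be made concrete with a small sketch. Assuming each observed choice is recorded as a (chosen, rejected) pair, the following toy Python fragment (the function name and data are illustrative, not drawn from the text) flags violations of the revealed-preference constraint:

```python
# Toy sketch of the revealed-preference consistency requirement:
# once an agent reveals a preference for a over b, she ought not
# subsequently choose b over a. Names and data are illustrative.

def warp_violations(choices):
    """choices: a list of (chosen, rejected) pairs, in temporal order.
    Returns the choices that reverse a previously revealed preference."""
    revealed = set()
    violations = []
    for chosen, rejected in choices:
        if (rejected, chosen) in revealed:
            violations.append((chosen, rejected))
        revealed.add((chosen, rejected))
    return violations

# A consistent agent never reverses a revealed preference.
assert warp_violations([("a", "b"), ("a", "c")]) == []

# An inconsistent agent reveals a over b, then picks b over a.
assert warp_violations([("a", "b"), ("b", "a")]) == [("b", "a")]
```

This is what makes the theory testable: the constraint can fail, and whether it does is a matter of observable behavior alone.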
Behavioral economics is now a prominent fixture on the landscape and has
spawned applications to various areas of economics, such as game theory,
labor economics, public finance, law, and macroeconomics. Assessing
that yes, it does make sense. Setting important divergences aside, all these re-
markable case studies from the history of science converge on the same core
phenomenon. This is the identification of some principle that, for various
reasons, is deemed dispensable from a set of explanations. It is now time to
delve in greater detail into the nature of this practice. How exactly do we con-
struct a black box? How do we determine what to include and what to leave
out? How does one judge whether the construction has achieved its intended
purpose?
The following chapters aim to provide a more general, precise, and sys-
tematic philosophical analysis of the practice underlying all the examples
described in the previous pages. A precise definition of black boxes will have
to wait until Chapter 5. First, we shall explore the foundations of an impor-
tant scientific construct: the concept of placeholder.
4
Placeholders
§4.1. Introduction
Where do we stand on our conceptual map? The main goal of this book is
to present and examine a form of explanation called “black-boxing.” Black
boxes, I contend, promise to reconcile the main insights of both reduc-
tionism and antireductionism, thereby revealing the productive role of ig-
norance in science. The historical excursus in Chapter 3 played a dual role.
First, it showed that the construction of black boxes is widespread across the
sciences. Second, it emphasized how this nuanced and complex practice may
accomplish a variety of functions. Darwin, Mendel, and the subsequent bi-
ological syntheses illustrate how black boxes effectively set aside questions
and problems that, for the time being, are intractable. Skinner and his fellow
radical behaviorists constructed black boxes to expunge entities, such as
mental states, that were deemed unsuitable for scientific investigation, in
spite of their hallowed place in traditional nineteenth-century psychology.
Modern economics tells a different story. Friedman and other neoclassical
* “The learned doctor asks me / The cause and reason why / Opium sends to sleep. To him I make
reply / Because there is in it / A virtue dormitive / The nature of which is / The senses to allay.”
Translated by Sir William Maddock Bayliss.
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0004
Before getting on with it, a quick note to readers. We are now embarking
on a foundational analysis of black-boxing that will keep us busy until the
end of Chapter 5. Empirically oriented scholars with little interest in phil-
osophical discussions should consider glancing over the next two sections
and focus on the illustrations and systematization in the second half of the
chapter. Still, I should stress that abstract discussions of dispositional proper-
ties and explanations are crucial to fully comprehend the role of placeholders
in science.
1 Strictly speaking, talking about the “fitness” of x is ambiguous, as it may refer either to the fitness
of trait x or to the fitness of organisms bearing trait x. Nevertheless, for the sake of brevity, I shall often
make unqualified references to the “fitness of x,” leaving it up to the reader to determine the appro-
priate interpretation, depending on the context.
2 The philosophical overview of fitness throughout this section draws from influential theoretical
discussions, such as Godfrey-Smith (2014) and, especially, Sober (1984, 2000).
predation. For this reason, p-variants are more prevalent, in the envisioned
ecosystem, than q-types. Still, as a result of a wildfire, a large portion of the
p-population is wiped out. In this hypothetical scenario, more q-organisms
survive than p ones, and will therefore leave behind more offspring, at least
for a few generations. Yet, this is due to chance and not—as the defini-
tion of fitness as actual survival and reproduction patterns implies—due to
differences in fitness between p-types and q-types.
The extent to which these theoretical considerations affect scientific prac-
tice is debatable. Working biologists often employ actual frequency patterns
as a proxy for fitness, and this seems to work reasonably well. Is there re-
ally a pressing need for change? At the field-work level, the answer may well
be negative. Things are different, however, from a pedagogical perspective.
Here, more accurate and perspicuous definitions of biological fitness, which
overcome the conceptual limitations just rehearsed, contribute to making
the content of evolutionary theory crisper and clearer.
So, what is fitness? Can we find more adequate definitions? Various
proposals have been developed in the literature. An influential one is the
“propensity interpretation of fitness.”3 On this view, fitness refers neither to
the actual number of offspring spawned by an individual or trait relative to a
reference point, nor to the physical constitution of organisms. Moreover, it is
not assumed as a primitive, undefined term of evolutionary theory. Rather,
in the words of its main proponents, “fitness can be regarded as a complex
dispositional property of organisms. Roughly speaking, the fitness of an or-
ganism is its propensity to survive and reproduce in a particularly specified
environment and population” (Mills and Beatty 1979, p. 270).
The propensity interpretation is not devoid of controversy. For one, pro-
viding precise mathematical characterizations of an organism or trait’s ex-
pected number of offspring is no trivial task. Furthermore, ascribing an
expected number of offspring to members of a population is not necessarily
a reliable predictor of evolutionary change. Nonetheless, let us ignore these
difficulties. Mills and Beatty’s dispositional interpretation of fitness does mit-
igate the problematic implications mentioned earlier. Readers who find this
analysis misguided are advised to skip ahead to the examples that follow. As
we shall see, fitness is just an instance of a much broader phenomenon.
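The propensity interpretation can be illustrated with a toy Monte Carlo simulation. The sketch below (all parameters are invented for illustration) treats fitness as an expected number of offspring, estimated by averaging over many possible runs rather than read off any single actual outcome:

```python
# Toy model of fitness as a propensity: the expected number of
# offspring of a type, estimated over many simulated runs. All
# parameters are invented for illustration.
import random

def expected_offspring(survival_prob, litter_size,
                       trials=100_000, seed=0):
    """Estimate the propensity of a type whose bearers survive with
    probability `survival_prob` and, if they survive, leave
    `litter_size` offspring."""
    rng = random.Random(seed)
    survivors = sum(rng.random() < survival_prob for _ in range(trials))
    return survivors * litter_size / trials

# p-types have the higher propensity to survive and reproduce...
fit_p = expected_offspring(0.8, 10)
fit_q = expected_offspring(0.5, 10)
assert fit_p > fit_q

# ...yet any single actual run (after a wildfire, say) can come
# apart from the propensities, which is why actual frequencies are
# only a proxy for fitness.
```

The point is purely structural: the propensity is defined over a range of possible outcomes, not identified with the outcome that actually occurs.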
With all of this in mind, set definitions of fitness aside and focus on the
second issue raised at the outset. What role does fitness play in evolutionary
3 For a rival approach, which I shall not discuss here, see Rosenberg (1983).
4 Simply put, an explanation provides causal information, broadly construed, about how or why an
event, or pattern of events, occurs. An explication, in contrast, spells out a semantic analysis, a defini-
tion of the concept in question.
Cashing out the argument presented at the end of the previous section
requires us to delve into the nature of a class of properties that has received
much attention in philosophy: dispositions. While, so far, I have appealed to
an intuitive understanding of dispositions, a succinct definition will make
the following discussion more rigorous. Dispositional properties express
the capacity, the ability, or the tendency of entities to act, under particular
circumstances. For instance, solubility captures how salt dissolves in water
and fragility underlies the tendency of glass to shatter when struck with
force. Mills and Beatty’s propensity analysis defines fitness along the same
lines. The fitness of organisms and traits, on their view, is a propensity to sur-
vive and reproduce, relative to specific environments. This section discusses
whether and how dispositions contribute to scientific explanations.
It is common to presuppose, more or less explicitly, that dispositions
causally explain the behavior of entities that satisfy them. Indeed, many
philosophers would consider this a truism to be taken for granted.5 Is the
solubility of salt not the reason that salt dissolves in water? Does a dropped
vase not shatter because it is fragile? From this perspective, it is hardly sur-
prising that, although Sober treats fitness as a placeholder standing in for,
and replaceable by, a deeper analysis, he distinguishes it from the idea of a
dormitive virtue, that is, an empty, question-begging circular analysis.
Things, however, are not quite that simple. How exactly does a disposition
account for the behavior of the entities to which it is ascribed? What kind of
knowledge does it provide, and how does it convey it? On what basis should
the explanatory power of solubility, fragility, and fitness be distinguished
from the empty idea of virtus dormitiva? For the sake of simplicity, let us
begin by focusing on mundane examples: salt, glass, and the like. In due time,
I shall generalize the argument to scientific dispositions like fitness.
In his celebrated masterpiece, The Imaginary Invalid, the French play-
wright Molière mocks a group of physicians who purport to uncover the
sleep-inducing quality of opium by ascribing to it a virtus dormitiva. What
triggers the humorous effect in this goofy attempt? The issue cannot be fal-
sity. Opium, after all, does have the power to put people to sleep. The problem
is rather vacuity, complete lack of informativeness. Saying that substance
5 The reason, I argue elsewhere, is an assumption typically presupposed without further motiva-
tion, namely, that dispositions are properties of entities (Nathan 2015a).
x has a dormitive virtue cannot explain why x puts people to sleep because
ascribing a dormitive virtue to x is just a fancier way of saying that x induces
sleep. The entailment is analytic—or quasi-analytic6—turning the alleged ex-
planation into a restatement. Molière skillfully plays with this intuitive idea
that paraphrases are explicative, not explanatory.
None of this is especially novel or controversial. The aspect of Molière’s in-
sight that I want to stress is how broadly applicable it is. Having a dormitive
virtue is being able to induce sleep. Thus, virtus dormitiva cannot explain
sleep-induction because having a virtus dormitiva is synonymous with
having the power to induce sleep. This being so, is ascribing solubility
to x not just to say that x has the capacity to dissolve? Is the meaning of fragility
not expressible as a tendency to shatter when struck? Answering in the affirmative
entails that solubility cannot account for the dissolving of salt and that shat-
tering is not causally explained by fragility. Setting aside the wit—or lack
thereof—these examples are structurally analogous to the mockery staged
by Molière in The Imaginary Invalid. Counterintuitive as it may sound, it
appears that none of these dispositions is explanatory after all, at least, not on
any causal reading of explanation.
What about scientific dispositions such as biological fitness? Recall that,
on the propensity interpretation, stating that a-flies are “fitter” than b-flies
means that the former have a higher propensity to survive and reproduce
than the latter. Then, can fitness attributions causally explain the distribu-
tion of traits in the population of Drosophila? Is fitness any more informative
6 The unorthodox expression “quasi-analytic” calls for elucidation, which will take us on a slight
detour into analytic philosophy. Readers with little interest for such technicalities are advised to skip
this footnote entirely. In the central decades of the twentieth century a close connection was noted
between dispositions and subjunctive conditionals. Thus, the fragility of a glass entails that said ob-
ject would shatter, if struck. This observation was generalized into a “simple conditional analysis”
of dispositions (SCA): x is disposed to D when C iff x would D if it were the case that C. Basic as it
may sound, this thesis was endorsed by eminent philosophers, including Ryle, Goodman, Quine,
and Mackie. However, there is now a widespread consensus that this analysis is fatally flawed be-
cause the connection between disposition and entailed conditional breaks down in cases of “finkish”
dispositions, where objects temporarily lose or acquire dispositions, or when the manifestation
of a disposition is “masked” or “mimicked.” The philosophical community is still divided on what
should replace the SCA. Some responded by replacing the “simple” conditional with a more sophisti-
cated one. Others, such as myself, attempted to salvage the SCA (Nathan 2015a). Some explored the
prospects of non-conditional analyses. Others abandoned altogether the search for explication in
favor of a non-reductive account of dispositions. An assessment of these routes lies beyond the scope
of this work. The point is that most authors accept the existence of a connection between dispositions
and subjunctives. How is this connection to be cashed out? Advocates of conditional analyses, in
the original or revised forms, will presumably maintain that the relation between dispositions and
associated behaviors is analytic. Naysayers will likely resist identifying such a connection with full-
fledged analyticity. What is it then? For lack of a better term, I call this weaker form of entailment
“quasi-analytic.”
The remainder of this section dispels the apparent paradox. I will stick to
common wisdom and argue that both theses are ultimately correct. Qua
placeholder, fitness is perfectly explanatory, in a causal sense. But fitness can
also be analyzed as a dispositional property and, as such, it provides no causal
explanation. Reconciling these two prima facie incompatible claims will re-
quire disambiguating the notion of a placeholder. I will do so by drawing a
distinction between two kinds of placeholders in scientific explanations.
Before doing so, let us explore a different route. Attentive readers will have
surely noted that, to avoid the contradiction (v), it is sufficient to reject one
7 This is a nod to Hempel (1945) and Davidson (1970), who present apparent paradoxes and pur-
port to resolve them by reconciling seemingly incompatible assumptions.
two propositions. First, dispositions are explicative but not causally ex-
planatory. Dispositions are like dormitive virtues: they capture behaviors
without explaining them. Second, dispositions are higher-level properties
that stand in for unspecified mechanisms without being identical to them.
As such, I shall argue, they may provide bona fide causal explanations.
My strategy to enact this reconciliation and dispel the paradoxical flavor
is to diagnose an ambiguity in the notion of placeholder. Dispositions like
“fitness” and other expressions denoting supervenient properties may
stand in for two different sets of entities. On the one hand, they may take
the place of mechanisms that produce specific patterns of behavior. When
this is the case, the placeholder functions as a difference-maker. On the
other hand, higher-level properties may stand in for the target range of
behaviors that one is attempting to explain, in which case the placeholder
functions as a frame. It explicates and lays out these behaviors but does not
causally explain them.
Section 4.4 illustrates this frame vs. difference-maker distinction
with examples that should be familiar to most readers. Next, section
4.5 provides a more systematic characterization of these two types of
placeholders, which will provide the key for our general analysis of black
boxes in Chapter 5.
Before moving on, let me stress again that expressions such as “mech-
anism” and “behavior” should be interpreted in the broadest and most ecu-
menical sense. In the present context they are essentially blanket terms that
designate objects, processes, structures, physical interactions, and many
other kinds of entities and activities that function either as explanantia or
explananda in science. Also, as noted in Chapter 1, the idea of a property or
description being “higher vs. lower level” or “macro vs. micro” is always rela-
tive to a choice of level of explanation.
§4.4.1. Solubility
Imagine your five-year-old niece querying you about what happened to that
handful of salt crystals that you just poured into a boiling pot of water. You
reply by promptly noting that salt is soluble and, as such, it dissolves in water.
What kind of question did she ask? And what sort of answer did you provide?
Note that the issue, as just posed, is ambiguous, as your young interlocutor
could be asking either of two questions. On the one hand, she could be in-
quiring into the behavior of salt. On the other, she could be wondering about
what produces the behavior in question. Accordingly, your appeal to solu-
bility provides two different, but equally legitimate types of answers.
First, suppose that your niece is vaguely familiar with salt. She knows
what salt is. She has seen salt crystals in shakers and tasted salt in food.
Yet, she does not know much about its behavior. What happens when one
immerses salt in water? Does it float? Does it explode? Does it become invis-
ible? Does anything happen at all? Appealing to solubility effectively answers
this question: salt dissolves in water, a piece of information which could also
be obtained by observing salt in water. Here, the ascription of solubility to
salt is perfectly informative without being causally explanatory in the least.
Solubility indicates the relevant behavior, what happens to salt, without
saying anything at all about why or how the behavior itself occurs.
Next, consider a variant of this scenario where your young niece has an
altogether different question in mind. She has learned in school that certain
substances, such as salt and sugar, dissolve in water. She has also been shown
that not all substances are water-soluble. Glass, sand, and plastic, for in-
stance, are not. Intrigued by these remarks, she would like to understand this
difference in behavior. Note that both her question and your answer may be
phrased in exactly the same terms. “What happened to salt in water? Salt
is water-soluble: it dissolved.” Yet, the implicit contrast class is completely
different. In the former case, she wants to know that salt dissolves in water, as
opposed to floating, exploding, changing color, remaining unaltered, etc. In
this latter case, she wonders in virtue of what salt and sugar typically dissolve
in water, whereas sand, glass, and plastic do not. Hence, pointing out that salt
dissolves will not cut it. We need to say something about why or how it does.
§4.4.2. Fitness
We are now in a position to address the puzzle introduced in section 4.2. The
stage was set by outlining two widely accepted theses. First, fitness is typi-
cally defined as a dispositional property, namely, a propensity of organisms
allow flies with genotype a to produce a thicker thorax, providing better in-
sulation during colder winter months.
In sum, fitness can be defined dispositionally, as the propensity of
organisms and traits to survive and reproduce. Thus construed, fitness may
function as two distinct kinds of placeholders. First, it may stand in for dis-
tribution patterns. This lays out the explanandum by stating that traits vary
and by how much, without addressing why they do. Second, fitness can also
act as a placeholder standing in for the properties that underlie the produc-
tion of the traits in question. As noted by Sober, this shallower explanation
can be eliminated and replaced by a deeper account, which dispenses with
the concept of fitness altogether. The bottom line, once again, is that there are
two kinds of placeholders: frames and difference-makers. As we shall see in
Chapter 6, keeping them distinct is the key to understanding the role of fit-
ness in biological explanations, from Darwin to our day and age.
I have no quibbles with any of this. Still, I do want to emphasize that this is
only part of the story. Mental states play two distinct and equally important
roles within psychological investigations. On the one hand, as just noted,
mental states may function as difference-makers in the production of beha-
vior. On the other hand, mental states may be frames standing in for patterns
of behaviors to be explained. Allow me to clarify.
Imagine that Taylor and Dana spend a lot of time together. They make
dinner plans. They go out dancing. They talk on the phone daily. And they
often blush in the presence of one another. All these behaviors—which, note,
constitute a wildly heterogeneous bunch—can be explained, at least provi-
sionally, by positing that Taylor and Dana are in love with each other. Could
one provide lower-level explanations of these phenomena? Of course! For in-
stance, we could provide a detailed account of how, in the presence of Taylor,
Dana’s body releases adrenaline, speeding up heart rate and dilating blood
vessels, improving blood flow and oxygen delivery in facial veins. We now
have two explanations of Dana’s blushing: a higher-level one and a lower-
level one. Does this story have a familiar ring to it? If so, it is likely because it
mirrors our discussion of fitness from section 4.2.
The precise relation between these two explanations is a controversial
matter. On the one hand, reductionists will argue that once a more detailed
physiological explanation is obtained, there is no need to use the expres-
sion “being in love” to account for blushing. The mental state can be reduced
to a deeper, lower-level analysis, thereby eliminating it. Antireductionists
will retort that no such reduction can be enacted. We shall return to the re-
ductionism vs. antireductionism debate in Chapter 10. For the time being,
the issue is that both stances can be reconciled with the claim that mental
states are placeholders, difference-makers standing in for mechanisms pro-
ducing behavior. The point of contention is the nature and character of
the mechanisms explaining the behavior. Reductionists will characterize
them at lower levels—brain processes, environmental stimuli, and the like.
Antireductionists will provide higher-level characterizations, appealing to
functional descriptions, qualia, or other macro-descriptions. But the mental-
states-qua-placeholder thesis is perfectly consistent with both stances, in-
cluding radical forms of materialism and a hard-core dualism rejecting
mind-body supervenience.
There is also, however, a different way in which “being in love” is a place-
holder. It can function as a frame, that is, it may stand in for a range of
behaviors that we are trying to explain. To illustrate, imagine that a team of
than his first. (Whether this amount of utility is less than the utility he receives
from his first unit of oranges is a separate question, which we shall set aside.)
In general, it is assumed that Bill will consume a mixture of these fruits, and
he will do so until MUx/px = MUy/py for all x and y to maximize total utility
obtained. Without getting into technicalities, we can simplify this by positing
that Bill assigns a higher utility to apples than oranges: u(a) > u(o), and these
utilities can only be ranked, as opposed to quantified precisely. Note that this
claim is affected by the same kind of ambiguity that characterizes all previous
examples. On the one hand, the proposition that u(a) > u(o) may function as
a frame. In this former sense, it is a placeholder for the relevant pattern of be-
havior in need of explanation. On the other hand, that same proposition may
act as a difference-maker. In this latter sense, it is a placeholder standing in
for mechanisms that underlie and produce these observable patterns of pos-
sible and actual behaviors. How does this work?
First, imagine that the goal is to explain why Bill consistently prefers apples
to oranges. What is going on in Bill’s head when he is presented with this
basic choice? The notion of utility provides a preliminary answer: Bill assigns
a higher utility to apples than oranges. Note that, in the expression u(a) >
u(o), “u” is a placeholder. It stands in for whatever psychological mechanism
underlies the choice at hand. Just as we saw in the cases of solubility, fitness,
and mental states, this coarse-grained depiction can be replaced by a richer
micro-explanation that makes no reference to utility, but only describes the
psycho-neural processes that trigger the behavior in question. In a nutshell,
in this first instance, utility is a difference-maker.
Next, consider a slightly different scenario. We embark on a systematic
study of Bill’s fruit-related behavior. On day one, Bill is presented with a
choice between apples and oranges, and he selects the former. The following
morning, confronted with a similar range of options, he still picks apples
over oranges. On day three, the store is out of apples, and Bill picks pears over
oranges. On day four, the store is out of oranges, and Bill selects apples over
pears. It should be obvious that this depiction is somewhat simplified. Real-
life agents face choice selections that are exponentially more complex and
involve numerous other variables. Nonetheless, this basic scenario should
suffice to establish the main point. Can we systematize all these behaviors
in a coherent fashion? Utility provides a simple and convenient way of doing
so. Consider the proposition that u(a) > u(p) > u(o), which states that Bill
assigns a higher marginal utility to apples than pears and that his utility for
pears is higher than the value assigned to oranges. This simple claim allows
may converge in their use of utility as a frame, that is, on the kind of behavior
that falls within the domain of economics.
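The systematization just described, subsuming Bill's scattered choices under a single ordinal ranking, can also be sketched mechanically. In the following toy Python fragment (the function name and data are illustrative), a ranking is recovered from pairwise choices, on the assumption that the revealed preferences are consistent:

```python
# Recover an ordinal ranking, such as u(apple) > u(pear) > u(orange),
# from observed pairwise choices. Data and names are illustrative.

def ordinal_ranking(choices):
    """choices: a list of (chosen, rejected) pairs. Returns the items
    ordered from most to least preferred, assuming the revealed
    preferences contain no cycles."""
    items = {x for pair in choices for x in pair}
    preferred_to = {x: set() for x in items}
    for chosen, rejected in choices:
        preferred_to[chosen].add(rejected)
    ranking = []
    remaining = set(items)
    while remaining:
        # Pick an item that nothing still unranked is preferred to.
        top = next(x for x in remaining
                   if not any(x in preferred_to[y] for y in remaining))
        ranking.append(top)
        remaining.remove(top)
    return ranking

# Bill's four days of fruit choices, as described in the text:
observed = [("apple", "orange"), ("apple", "orange"),
            ("pear", "orange"), ("apple", "pear")]
assert ordinal_ranking(observed) == ["apple", "pear", "orange"]

# The recovered ranking rationalizes every observed choice.
rank = {x: i for i, x in enumerate(ordinal_ranking(observed))}
assert all(rank[c] < rank[r] for c, r in observed)
```

In this use, utility functions as a frame: the ranking merely lays out the pattern of behavior in compact form; it does not say why Bill chooses as he does.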
On this note, what counts as “economic” behavior? How does one pick
out the set of phenomena to be covered? The central aim of economics is
to explore the consequences of choice. Which choices? Whenever options
are characterized by differences in utility, such choices can be studied
from an economic standpoint. At this general level of description, NCE
and PNE converge on Robbins’s definition, presented in section 3.5 of
Chapter 3: “Economics is the science which studies human behaviour as a
relationship between ends and scarce means which have alternative uses.”
Utility frames the behavior to be explained by both approaches. How are
these patterns to be accounted for? Here the two theories diverge. One side
opts for an idealized “as if ” mathematical model. The other side cashes out
a mechanistic model. In sum, the two approaches may be subsumed under
the same frame, the same choice of object of explanation. But the underlying
selections of difference-makers are clearly distinct.
§4.4.5. Phlogiston
former sense, “phlogiston” is a frame. It picks out the class of events that the
theory is supposed to account for and, as such, it is explicative without being
explanatory.
On the other hand, “phlogiston” may refer to the principle, substance, or
process that causes and underlies the reaction itself. The theory posits the ex-
istence of a single principle, phlogiston, that is emitted in all cases of combus-
tion. In this second sense, “phlogiston” is employed as a difference-maker. It
picks out whatever it is that makes a difference to why and how the process
of combustion occurs. As such, it is perfectly explanatory. It just so happens
that it provides an inaccurate, superseded causal explanation.
In sum, qua frame, phlogiston theory provides valid explananda; we just
no longer use the terms of this old theory to describe them. Qua difference-
maker, it provides discarded explanantia, replaced by atomic chemistry.
Incidentally, why exactly has phlogiston theory been discarded? If “rich
in phlogiston” is synonymous with “combustible,” and combustion is a real
phenomenon in nature, then why was the concept of phlogiston eventually
eliminated and replaced by atomic chemistry? The obvious reason is that talk
about “phlogiston” is not perspicuous enough. In particular, it introduces
a nonexistent element, phlogiston. In doing so, it fails to address a number
of important distinctions. By positing a single substance that is emitted in
all cases of combustion, phlogiston theory lumps into a single category
events which should be kept distinct, like the burning of wood and the
heating of metals. Unsurprisingly, modern atomic chemistry fares much
better on this score. By differentiating between, say, the combustion of wood
and coal and the oxidation of mercury, iron, and other metals, it provides
more precise causal-mechanistic explanations and picks out a more perspic-
uous range of behaviors. Still, these observations raise further issues. Why is
the transition from phlogiston theory to atomic chemistry a progressive one?
Addressing this and related questions requires a more systematic discussion
of black boxes. We will return to it in Chapter 9. Now, it is time to wrap up
our discussion of placeholders.
Time to tie up some loose ends. This chapter began by presenting the notion
of biological fitness. I outlined two widespread theses. First, fitness is com-
monly defined as a dispositional property. It is the propensity of an organism
comic effect skillfully crafted by Molière. Yet, when virtus dormitiva stands in
for the mechanisms that sedate people, appeals to dormitive virtue become
unproblematically explanatory. Sure, they might not be very enlightening.
Virtus dormitiva may be closer to solubility than to fitness, mental states, or
utility. But a vague, general explanation is an explanation, nonetheless.
It is worth stressing that my notion of placeholder is quite broad, and in-
tentionally so. Specifically, placeholders may range from having much to very
little content. To wit, the higher-level properties discussed in this chapter—
solubility, fitness, mental states, utility, phlogiston, and the like—provide
a minimal specification of the structure of the mechanisms represented.
Nevertheless, while all supervenient properties are placeholders, it is also pos-
sible to provide much richer, more detailed, characterizations of the un-
derlying properties. This anticipates an important point to be developed in
Chapter 5. My definition of a black box, based on the dual role of placeholders
introduced in the present chapter, encompasses not only stereotypical “black
boxes,” but also less opaque concepts, such as gray boxes and quasi-transparent
boxes, as long as these supervene on lower, physical levels.
In conclusion, how does all of this fit into the philosophy of science? What
implications does our analysis of placeholders have vis-à-vis the role and con-
tribution of higher-level properties to scientific inquiries? Many authors have
embedded these considerations into the broader context of the reductionism
vs. antireductionism debate. Sober, for one, generalizes his discussion of bio-
logical fitness to the conclusion that all properties in the special sciences su-
pervene on physical properties. This, in turn, raises an overarching question
about the nature of scientific explanation. If the special sciences supervene
on physics, is there a fundamental physical explanation for any phenomenon
explained by other sciences? If higher level properties are placeholders that
can be replaced by more detailed lower-level descriptions, are these micro-
level explanations always deeper than their macro-counterparts?9 These are
precisely the types of issues that we have previously encountered in Chapters 1
9 Sober explores two kinds of answers. First, he suggests that, while all non- fundamental
explanations retain practical value, in principle, they are all disposable. This seems like a modest
form of reductionism, since it acknowledges that, while physics is not yet able to explain everything,
in principle, it could. Next, he goes on to provide a second answer, which falls in line with sophisticated antireductionism. Even if it were possible to best explain all individual token events at the most fundamental level, explanations of types cannot be paraphrased at lower levels, either
in practice or in principle. For a similar argument, from the standpoint of physics, see Batterman
(2002).
and 2. Once again, we are hopelessly haunted by the Homeric hazard. What is
our doom? Are we going to get devoured by Scylla or drowned by Charybdis?
By now, readers will not be surprised to hear that I intend to explore a different route. There is no need to pick your poison. We can have our cake and
eat it too. Specifically, reductionists are absolutely correct that it is always pos-
sible to replace a causal-mechanistic explanation of how and why a certain
phenomenon occurs with a more detailed, lower-level, micro-depiction. At
the same time, this does not invalidate the antireductionist tenet that many
explanations in the special sciences are “autonomous,” in the sense that they
stand alone perfectly well, without the need to enrich them with additional
micro-details. Showing how these seemingly incompatible claims can be rec-
onciled is the ultimate task of this book. Before getting there, we have ways to
go. Since my positive proposal will focus on black-boxing, the first move is to
elucidate this strategy. With this target in sight, the following chapter identi-
fies and discusses the constitutive stages of this form of explanation.
5
Black-Boxing 101
§5.1. Introduction
Chapter 4 began a more systematic investigation of the black boxes first iden-
tified by our historical excursus in Chapter 3. Specifically, I introduced a dis-
tinction between two kinds of placeholders in science. Frames stand in for
explananda, patterns of behavior in need of explanation. Difference-makers
stand in for explanantia, mechanisms that produce and explain the patterns
in question. Both kinds of placeholders will play a pivotal role in my anal-
ysis of black boxes, developed with an eye on the following issues. What is a
black box and how is it constructed? How does a single concept accomplish
so many tasks, across a variety of fields? Can black boxes explain the inter-
play between productive ignorance and knowledge?
Here is the punchline. This chapter sets out to decompose the black-boxing
strategy into three constitutive phases. The first step involves sharpening the
explanandum by placing the object of explanation in the appropriate context.
This is typically accomplished by constructing a frame, a placeholder that
stands in for the pattern(s) to be covered. For this reason, I refer to this phase
as the framing stage. The second step, the difference-making stage, provides
a causal explanation of the explanandum, now appropriately framed. This,
simply put, involves identifying the relevant difference makers, that is,
placeholders standing in for the mechanisms that produce the patterns under
scrutiny. The third step, the representation stage, determines how these differ-
ence makers should be characterized, which features of the mechanism are to
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0005
science must pay attention to both its scientific and philosophical precursors.
Having said this, frustrating my audience is neither kind nor wise on my
part. Empirically minded readers with little interest in philosophical debates
may want to consider glossing over the detailed discussion of the three stages
of black-boxing, focus on the succinct summary in section 5.5, and dig into
the real-life scientific examples and applications in Chapter 6.
1 From this D-N perspective, also known as the “covering-law model,” the explanation of an event
consists in a formal derivation of an explanandum from a set of laws and initial conditions. Thus,
one can explain why the match lit by observing that, given initial conditions, such as the presence of
oxygen in the air, the structure of the match, and the striking of the match, together with the relevant
laws of nature, the explanandum event was logically bound to happen. The problem of asymmetry
was noted and presented, through a variety of equivalent examples, by various authors, especially
Michael Scriven and Sylvain Bromberger.
2 As Kitcher and Salmon (1987, p. 316) put it in their scathing review of van Fraassen’s theory of
explanation, “Failing to appreciate that arguments are explanations [ . . . ] only relative to context, we
assess the explanatory merits of the derivations by tacitly supposing contexts that occur in everyday
life. With a little imagination, we can see that there are alternative contexts in which the argument we
dismiss would count as explanatory.”
and whether the explanans is ultimately successful.3 In some cases, the sa-
lient alternatives may be stated explicitly. Why did you finish the pizza but
not the broccoli? More typically, the underlying contrast class is presupposed
at a tacit level. Dretske’s example is a case in point. Whether the question is
why Clyde (as opposed to someone else) lent Alex $300, why Clyde lent (as opposed to gave) Alex $300, or why Clyde lent Alex $300 (as opposed to some other sum) is unspecified. It must be inferred from the context. Similarly, Sutton's
remark that banks are where the money lies is a perfectly good explanation
of why Sutton robs banks, but only provided that his commitment to robbing
is assumed. It should be clear from the context that the priest is not willing to
take that premise for granted.
Time to take stock. All requests for explanation are subject to interpre-
tation. Even simple, ordinary, everyday questions—Why did the match
light? Why did Clyde lend Alex $300? Why does Sutton rob banks?—are
tacitly equipped with assumptions, “given that” clauses, and pragmatic
presuppositions that place them in the appropriate context so that they
can be parsed, processed, and addressed. In other words, the object of ex-
planation is never an event simpliciter. The explanandum is always an event
together with a contrast class, that is, a set of alternatives C1 . . . Cn that deter-
mine what counts as an acceptable explanation of E.4
3 What exactly is a contrast space? Garfinkel (1981, p. 40) describes its basic structure as follows. If
Q is some state of affairs, a contrast space for Q is a set of states [Qa] such that (i) Q is one of the Qa;
(ii) every Qa is incompatible with every other Qb; (iii) at least one element of the set must be true;
and (iv) all of the Qa have a common presupposition, that is, there is a P such that, for every Qa, Qa
entails P.
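Garfinkel's four conditions are compact enough to restate symbolically (a sketch; the notation is introduced here for illustration and is not Garfinkel's own):

```latex
% Contrast space for a state of affairs Q (after Garfinkel 1981, p. 40);
% notation introduced here for illustration.
% A contrast space for Q is a set \{Q_a\}_{a \in A} such that:
\begin{align*}
&\text{(i)}   && Q \in \{Q_a\}_{a \in A}
  && \text{$Q$ is one of the alternatives}\\
&\text{(ii)}  && \neg (Q_a \wedge Q_b) \ \text{for all } a \neq b
  && \text{mutual incompatibility}\\
&\text{(iii)} && \textstyle\bigvee_{a \in A} Q_a
  && \text{at least one alternative holds}\\
&\text{(iv)}  && \exists P\, \forall a \in A:\ Q_a \vDash P
  && \text{common presupposition}
\end{align*}
```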
4 Why must explanations thus limit our options? The answer, Garfinkel suggests, "lies in our need to have a limited negation, a determinate sense of what will count as the consequent's 'not' happening" (1981, p. 30). This point is elucidated with the help of an example. Suppose that one morning,
I go out for a drive. I am going over 110 mph when I round a bend where a truck has stalled. Unable to
stop in time, I crash into the truck. Later, you chastise me: if you hadn’t been speeding, you would not
have crashed. That is true, I retort. But then, had I not had breakfast, I would have reached that spot
before the truck. So, had I not eaten breakfast, I would not have crashed. Why do you not blame me
for having breakfast? What makes my reply fishy? Here is Garfinkel’s answer (1981, pp. 30–31): “My
claim is based on the assertion that if something (eating breakfast) had not happened, the accident
would not have happened. The problem is, what is going to count as that accident’s not happening? If
‘that accident’ means, as it must if my statement is going to be true, ‘that very accident,’ that concrete
particular, then everything about the situation is going to be necessary for it: the shirt I was wearing,
the kind of truck I hit, and so forth, since if any one of them had not occurred, it would not have
been that accident. But this is absurd, and in order to escape this absurdity and not have everything
be necessary for the accident, we must recognize that the real object of explanation is not my having
had that accident. [ . . . ] [T]he real object of explanation is an equivalence class under [a] relation
[determined by a set of ‘irrelevant’ perturbations]. The equivalence relation determines what is going
to count as the event’s not happening.” As Michael Strevens has brought to my attention, appealing
to blameworthiness as a test for causal or explanatory relevance is problematic, as these features do
not invariably go hand in hand. Causally explanatory factors like icy or oily roads may not attract
blame. Vice versa, non-explainers such as negligence sans adverse effects can be morally relevant.
Causal relevance will be discussed in detail in section 5.3. The important point, for the time being, is
simply that explanatory relevance depends on the contrast class. This is independent of the specifics
of Garfinkel’s example.
5 For a classic discussion of these “interfield theories,” see Darden and Maull (1977).
the most appropriate one may be selected. Genetic models do not come easy!
A viable solution must recognize the need for a cheaper, tentative scaffolding
that provides an initial sketchy characterization of what we are trying to ex-
plain, informative enough to get the inquiry going, without making it overly
expensive. How is this done without overburdening researchers with endless
requests for alternative models?
This problem has a simple way out. Recall from Chapter 4 the concept of
frame: a placeholder that stands in for a range of behaviors in need of expla-
nation. Frames are the key to solving our conundrum. To specify an expla-
nandum, one need not spell out an entire model. It is sufficient to provide
the relevant “frame,” that is, a shorthand description of the class of behaviors
that the model will then set out to explain. This preliminary scaffolding,
this coarse-grained characterization of the explanandum, is what kickstarts
an exploration, suggesting how to move the inquiry further. Are these
placeholders permanent, or are they progressively eliminated as science
advances? This will be addressed in Chapter 10. For the time being, let me
illustrate the main idea with an intuitive example. More realistic applications
to actual scientific research are postponed until Chapter 6.
Why did Clyde lend Alex $300? Dretske's question is subject to interpretation. How does one determine whether the object of explanation is why Clyde lent Alex $300 (rather than some other sum), why Clyde lent (rather than gave) Alex $300, or why Clyde (rather than someone else) lent Alex $300?
Garfinkel proposes an effective strategy to clarify the explanandum: specify
a contrast class. Do we want to know why Clyde lent Alex $300 as opposed
to $100 or $500, why Clyde lent Alex $300 as opposed to gifting the money,
or why Clyde as opposed to Scrooge lent Alex $300? These coarse-grained
characterizations are frames: placeholders that stand in for patterns of beha-
vior, thereby sharpening our explananda.
Three brief comments. First, providing the appropriate frame, that is, specifying the contrast class, does not, in and of itself, answer the question at hand. All it does is pinpoint which question we are trying to
answer, which events are to be explained. Second, identifying the relevant
contrast class clearly falls short of providing a complete model. A full-fledged
model of why Clyde lent Alex $300 as opposed to $100 or $500 will specify
why Alex needs the money, the relation between Alex and Clyde, how Clyde
feels about Alex’s financial situation, and much else. Yet, framing the ques-
tion by specifying a contrast class is the first, preliminary step. It indicates the
object of explanation, indirectly clarifying how the explanandum ought to be
addressed and many other implicit features of the model. Third, the relation
The first step toward the construction of a black box involves clarifying the
structure of the explanandum. As we have seen in section 5.2, this is a com-
plex endeavor, digging deep into the pragmatics of languages and theories.
Fortunately, this aim can be achieved, in a fairly painless fashion, by pro-
viding a frame, a placeholder that stands in for the patterns to be covered.
Once the inquiry is in focus, the next task is spelling out the explanation
itself.
The topic of explanation has spawned a hefty literature that cannot be
adequately reviewed here, let alone critically discussed. Given our present
concerns, I shall focus on a prominent form of scientific explanation: causal
explanation. How does one provide a causal explanation? The idea, simply
put, is to explain an event by identifying the set of causes that produce it.6
6 Here is David Lewis’s (1986, p. 217) influential formulation: “to explain an event is to provide
some information about its causal history.” As the quote suggests, Lewis’s contention goes further than
mine. He maintains that all explanation is causal. I wish to remain agnostic on this controversial
tenet. I assume the platitude that causal explanation is one form of explanation, without presup-
posing that all explanation is causal.
results whenever we have multiple potential causes of an event, but only one
of them should be identified as the explanation of the event itself.7 Any viable
analysis of causal explanation must provide a principled distinction between
explanatorily relevant factors and irrelevant ones.
How does one supplement minimalism with an appropriate selection
principle? Strevens distinguishes two families of strategies. One-factor ap-
proaches develop a more nuanced concept of cause. From this standpoint,
minimalism is correct that all causes explain. Still, not all influences are gen-
uine causes. The cold weather and the pungent aroma are causes of my sip-
ping coffee. Pluto and the Big Bang are not. A different path is followed by
two-factor approaches, which agree with minimalism that all influences are
causal, but stress that not all causes are explanatory. Pluto and the Big Bang
might well be causes of my sipping coffee, or so the suggestion runs. Yet, an
independent selection rule, a principle of causal relevance, will indicate that
these causes play no role in the explanation in question.
Which theory of causal explanation best serves our purposes? Several
options are available in the philosophical literature. The following is a
non- comprehensive list. Regularity theories follow Hume in analyzing
causal relations in terms of uniformities in nature. Statistical theories treat
difference-making causes as factors that alter the probability of the expla-
nandum. Counterfactual theories and manipulability theories view causes
as conditions sine qua non for their effects: had the cause not occurred, the
effect would not have occurred either. Finally, process theories and mecha-
nistic theories view causes as physical connections, understood in terms of
their capacity to transmit marks, exchange conserved quantities, or produce
particular effects. Each strategy can be adapted to explanation in either the
one-factor or two-factor guise. On the former reading, the theory specifies
what counts as a cause of the explanandum event. On the latter interpreta-
tion, it suggests which causes are relevant to the explanation.
Strevens finds none of these accounts of relevance fully satisfactory. His
own theory of explanation is founded on a different criterion for assessing
the significance of causal factors. This is the “kairetic” recipe which, simply
put, goes like this. Begin by taking a complete causal model of the production
7 Strevens illustrates this with the story of Rasputin. Rasputin’s assassins first attempted to poison
him and failed. Next, they shot him and failed again. Finally, they successfully drowned him in a
frozen river. Strevens considers several minimalist responses but argues that they all ultimately
miss the mark. I concur and refer to him for further discussion. For structurally analogous scientific
examples, see Nathan (2014).
of the explanandum, that is, the enormous description of all the causal
influences leading up to the events to be explained. Next, make the descrip-
tion as abstract as possible without violating the following two conditions.
First, it is important not to undermine the explanans’ entailment of the ex-
planandum, that is, one should not make the depiction so general that the
explanandum-event no longer follows from the specification of causes and
initial conditions. Second, the model must remain a causal model so that,
for example, one cannot abstract away by replacing everything with a prop-
osition to the effect that the explanandum occurred. If this process is applied
correctly, what remains in the model after this gradual process of abstraction
is completed are all and only the factors that actually made a difference to the
occurrence of the event that we are trying to explain.
An example should help drive the point home. Consider a simple causal explanation: a piece of butter melts because it is heated. Note that a lot
of properties will influence how the butter melts: its shape and weight, the outside temperature, the humidity, and many other factors. Yet, none of these features makes a difference to the explanandum, namely that the butter, in fact, melts.
Therefore, they can be removed from the description without invalidating
its status as a causal explanation. In contrast, the claim that the frying pan is
heated above the melting point of butter cannot be abstracted away because it
does make a difference to the occurrence of the effect under scrutiny.
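The gradual abstraction just described can be caricatured in a few lines of code (a toy sketch, not Strevens's actual procedure: the "model," the factor names, and the entailment test are all stipulated here for illustration):

```python
# Toy sketch of the kairetic abstraction step (illustrative only; the
# factors and the entailment test are stipulated, and Strevens's actual
# account is far subtler).

def entails_melting(factors):
    """Hypothetical entailment test: in this toy model, the butter's
    melting follows just in case the pan is above butter's melting
    point and the butter sits in the pan."""
    return {"pan_above_melting_point", "butter_in_pan"} <= factors

def kairetic_abstraction(model, entails):
    """Strip every factor whose removal preserves the entailment of
    the explanandum; whatever survives is a difference-maker."""
    kept = set(model)
    for factor in sorted(model):
        trial = kept - {factor}
        if entails(trial):  # still entailed? the factor made no difference
            kept = trial
    return kept

full_model = {
    "pan_above_melting_point",  # the pan is hot enough
    "butter_in_pan",            # the butter sits in the pan
    "butter_shape_cube",        # shape: irrelevant to melting
    "high_humidity",            # ambient humidity: irrelevant
    "jupiter_gravity",          # Jupiter's pull: irrelevant
}
print(sorted(kairetic_abstraction(full_model, entails_melting)))
# ['butter_in_pan', 'pan_above_melting_point']
```

The irrelevant factors drop out one by one, while removing either genuine difference-maker would break the entailment, so both survive the abstraction.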
So, which criterion or criteria of causal relevance should we employ to
provide and assess causal explanations? Shall we opt for a regularity or sta-
tistical approach? A counterfactual or manipulability account? A process-
mechanistic analysis? While each theory has its virtues and limitations, all of
them will get the job done, one way or another. Fortunately, we are not forced
to choose. I will not commit to any specific theory of causation or causal
explanation. For the sake of illustration, I often borrow Strevens’s kairetic
account, that—setting aside the issue of whether and how it is possible, in
practice, to perform the required operations of abstraction—provides a clear,
simple, and compelling perspective. Nonetheless, I stress that nothing hinges
on this particular choice. My analysis of black boxes can be paraphrased,
mutatis mutandis, in terms of any theory of causal explanation, as long
as it provides a recipe for identifying factors that make a difference to the
outcome.
Next, what kind of entity produces the explanandum event? Depending
on the specifics of the system under investigation, it may be a set of objects,
events, processes, actions, or something else altogether. To avoid the tedious
8 In some cases, causes and effects occur roughly at the same level, as when we say that the match
lit because it was struck. But the causal mechanisms producing an event may also occur at finer or
coarser levels. Consider accounts of the lighting of the match which appeal to chemical reactions trig-
gered when phosphorus on the head reacts with potassium chlorate mix on the side of the matchbox.
A precise characterization of levels of explanation is no trivial endeavor (Craver 2007). I appeal to an
intuitive characterization of levels, on the assumption that causes must be “commensurate” to their
effects.
On the one hand, the nature of the mechanism could be known. Nevertheless,
one may choose to leave out details for the sake of pragmatic convenience.
Suppose that you see fragments of ceramic scattered across my office floor. You
ask what happened and I reply that I dropped my mug. I just provided you with
a rudimentary causal explanation. Pointing to the pieces of ceramic provides the
frame. The dropping of the mug is the difference-maker. Could I say more? Of
course. I could present both explanans and explanandum in greater detail by
clarifying that it was my favorite mug, or that I dropped it accidentally on my
way to class. With the help of some elementary physics, I could also tell you
more about why and how the mug broke. Still, most interlocutors will be sat-
isfied with my preliminary explanation. Additional information about the tra-
jectory and the force of the mug hitting the ground, or my exact intentions, can
be omitted for the sake of convenience, without affecting the adequacy of this
coarse account.
Note that this is precisely the situation in Sober’s Drosophila scenario. We
can describe the relation between type-a and type-b flies in terms of fitness,
although we could also provide a deeper explanation of the evolution of the
population that dispenses with the concept of fitness altogether.
On the other hand, there are cases where the precise nature of the mech-
anism is unknown. It must therefore be left out as a matter of necessity.
Consider sport-related discussions, common in bars across the globe. Your
friend Jim is offering an unsolicited reconstruction of the last Superbowl,
passionately explaining how the outcome was determined by coaching strat-
egies. The winning coach’s wisdom, the panegyric goes, was the difference-
maker of the game. Jim can provide some evidence. He can pinpoint a few
successful plays and offer a broad-brushed sketch of the strategy itself. Yet,
the precise mechanisms that govern the amazingly complex relation between
tactics and game results cannot be laid out precisely, by Jim or by anyone
else. Here, the details are omitted not by choice, but by necessity. Does this
mean that Jim’s explanation is wanting? Not really. Assuming that Jim is, in
fact, correct, about the difference-makers of the game, the explanandum has
been causally explained. Sure, there are myriad other factors that have not
been covered. Precisely which aspects of the coaching strategy are respon-
sible for the outcome? How did the coach’s decisions affect the game? Could
the overall strategy be improved? These, however, are altogether different
explananda. These questions are framed very differently, and thus require an
altogether different causal explanation.
Scientific analogs of this latter situation can be found in the work of Darwin
and Mendel, who were in no position to accurately describe the mechanisms
underlying their explanations. More contemporary examples involve neurodegenerative diseases, such as Huntington's or Alzheimer's, and certain types of
cancer, whose molecular basis is only partially understood.
Time to take stock. All explanations need to be contextualized. Providing the
appropriate setup is no easy task, especially in the case of scientific explanations
that presuppose a host of inter-field and intra-field relations. Pragmatic con-
venience dictates that we employ a shortcut, developing some preliminary
scaffolding that specifies the object of explanation in a fast and frugal fashion,
while omitting lots of detail. This can be typically achieved by constructing a
frame, a placeholder standing in for a range of behaviors in need of explanation.
This is the first step of the black-boxing strategy, the “framing stage.” The next
step involves spelling out the causal explanation, a specification of the variables
that make a difference to the target explanandum, thus contextualized. For this
reason, I refer to this second phase as the “difference-making stage.” The un-
derlying causal mechanisms that produce the behavior in question are typically
quite complex. The specifics of how mechanisms are implemented are often
irrelevant. They can be omitted, because of either ignorance or convenience.
The causal explanation provides abstract depictions of these mechanisms. I
refer to such descriptions as difference-makers: placeholders standing in for the
mechanisms that produce the behavior in question, as noted in Chapter 4.
We are now in a position to fully appreciate the importance of the framing
stage in the causal-explanatory process. Consider the lighting of the match.
Which factors should we enlist among the difference-makers, relative to this
choice of explanandum? I will follow the kairetic recipe. Alternatively, feel free
to pick your difference-making theory of choice. Begin with a complete spec-
ification of all the causal influences leading to the event in question. This web
of influence will be enormous, including all sorts of minor disturbances. Next,
make the description as abstract as possible, removing all the factors that do not
affect the occurrence of the explanandum. Counterfactual reasoning dictates
that, say, the gravitational pull of Jupiter can be omitted because, had Jupiter not
been there, the match would still have lit. Similarly, we can take out the color of the
matchbox, and many other irrelevant features. In contrast, the striking cannot
be removed because, intuitively, had the match not been struck, it would not
have lit. This shows that the striking is, indeed, a cause of the lighting.
Wait. Sure, had the match not been struck, it would not have lit. But had no
oxygen been present in the atmosphere, the match would not have lit either.
And had the match been faulty, or made of plastic, it would also not have
lit. This is Goodman’s problem of counterfactuals.9 Why do we treat striking
as the cause of lighting, but not oxygen or the absence of manufacturing
defects? The answer, once again, lies in the framing process, which will influence the selection of causes and the structure of the model, feeding back on the nature of the explanandum.
My central contention—the relativization of all causal statements to a
frame of reference—is hardly novel. In his classic study, The Cement of the
Universe, J. L. Mackie (1974, pp. 34–35) notes that causal statements “are
commonly made in some context against a background which includes the
assumption of some causal field.”10 Strictly speaking, then, what is caused
is not an event simpliciter, but an event relative to a causal field. This has
implications for our understanding of causal explanation. In assessing what
caused event E relative to field F, some conditions that play a necessary role
in the production of E can be dismissed as having no genuine causal role.
These are the conditions that are part of F. But, if we evaluated the same state-
ment relative to a different field, call it “G,” then parts of F may now be said
to cause E and, vice versa, some of the causes of E relative to F might become
part of G, fading into the background.
Analogous considerations have been revamped by Strevens (2008).
Echoing Mackie, Strevens treats the “given that” clauses which constitute the
framework as part of a fixed portion of the explanation against which spe-
cific difference-makers are evaluated. More precisely, a state of affairs s is part
of the framework F of explanandum E if all causal explanations of E must
9 In Fact, Fiction, and Forecast, Goodman introduces the “problem of counterfactuals” as the
task of defining “the circumstances under which a given counterfactual holds, while the opposing
conditional with the contradictory consequent fails to hold” (1955, p. 4). Simple as this may ap-
pear, Goodman is quick to point out two formidable difficulties. A first issue, the “problem of law,”
challenges us to describe the nature of the systematic connection between the antecedent and the
consequent of a subjunctive, given that such connection will typically not be a matter of logic but a
natural, physical, or causal law. This endeavor is rendered thorny by the threat of circularity. Laws are
counterfactual-supporting; accidents are not. Appealing to counterfactuals to identify laws and, si-
multaneously, employing laws to analyze counterfactuals generates a vicious circle or infinite regress.
Goodman’s second difficulty, more directly pertinent to our concerns, is known as the “problem
of relevant conditions,” or, alternatively, the “problem of co-tenability.” The connection between
antecedent and consequent in a counterfactual presupposes the occurrence of stable background
conditions, often left implicit. “Had the match been struck, it would have lit” only holds provided that
the match is well made and dry, oxygen is present, wind is absent, etc. Specifying which features must
be taken in conjunction with the antecedent to infer the consequent is a long-standing philosophical
problem. The connection between the context-dependency of explanation and subjunctives is noted
by van Fraassen (1980).
10 Mackie attributes the introduction of causal fields to John Anderson’s “The Problem of
Causation” (1938) and draws a connection to Russell’s (1913) “causal environment.”
11 Two points are worth stressing. First, Strevens is more explicit than Mackie that frameworks
can be introduced in discourse in a number of ways. It may be presupposed, implied, stated overtly,
etc. Second, for Strevens, the introduction of a framework is optional. Many explanatory claims, he
argues, are not relative to any framework and, therefore, they specify absolute as opposed to relative
relations of difference-making. I respectfully disagree. I maintain that all explanations, no matter
how simple or apparently unambiguous, presuppose a host of background assumptions. At the same
time, my argument does not require this stronger claim. The main point, for present purposes, is the
weaker observation that all scientific explanations presuppose a framework.
12 Giere develops these views in his classic Explaining Science: A Cognitive Approach (1988). A sim-
ilar perspective has been defended by Cartwright (1983), who maintains that the fundamental laws
of modern physics, such as Schrödinger’s equation, are not true. In her celebrated phrase, they “lie.”
Giere agrees with the content of Cartwright’s proposal but opts for a different reformulation. For him,
the general laws of physics cannot tell lies about the world because they are not statements about
the world at all. They are, as Cartwright herself sometimes suggests, parts of the characterization of
theoretical models, which may represent real systems. Here, I focus on Giere’s presentation because
his stance on models is more developed and explicit than Cartwright's. Be that as it may, the relevant issue, for our present purposes, is not so much whether or not models "lie," but that they do not provide accurate representations.
13 Contrary to the positivist conception of theories as interpreted axiomatic systems, discussed
in Chapter 2, for Giere theories are not well-defined entities. No set of necessary and sufficient
conditions determines which models and hypotheses belong to a theory.
respects and degrees.” Unlike models, theoretical hypotheses are true or false,
depending on whether or not the asserted relation actually holds. However,
as Giere (1988, p. 81) is quick to point out, “that theoretical hypotheses can
be true or false turns out to be of little consequence. To claim a hypothesis
is true is to claim no more or less than that an indicated type and degree of
similarity exists between a model and a real system. We can therefore forget
about truth and focus on the details of the similarity. A ‘theory of truth’ is not
a prerequisite for an adequate theory of science.”
In sum, Giere characterizes models as abstract and idealized systems,
vehicles for representing the world based on similarity. Giere was neither
the first nor the last to recognize the importance of models in scientific
practice. His main insight was to replace the old correspondence relation
between model and reality with a relation of stipulation.
Models are defined—created by their description. Consequently, whether
a model sufficiently resembles the world is not a brute fact, but a collec-
tive decision of the members of the appropriate portion of the scientific
community.
With this in mind, we can now consider, in greater detail, the issue of sci-
entific representation. How do models represent their target systems? And
how do we assess whether or not they succeed in doing so? Answering these
questions will require us to discuss the anatomy of models.
Giere’s account, originally developed with an eye to mathematics and
physics, is difficult to apply to less formal branches of sciences, such as parts
of biology, neuroscience, and psychology. For this reason, I will adopt a
broader view of representation, which generalizes Giere’s proposal while re-
taining its spirit. Weisberg (2013) characterizes models as interpreted struc-
tures. Unpacking this definition entails decomposing models into their two
basic constituents: structures and interpretations.
Weisberg distinguishes three types of models—concrete, mathematical,
and computational—based on the underlying structure. Concrete models
are real physical entities that stand in some representational relation to some
real or imagined system, the model’s “intended target.” Mathematical models
employ formalized structures to represent states and relations between states.
Finally, computational models represent causal properties of their targets by
relating these causes to procedures. Setting these differences to the side, at
the basic ontological level, all models are structures.
The second constitutive feature of models is their interpretation. Weisberg
breaks down the interpretation of a model—what he calls the model’s
[Figure: a beads-on-a-string model of the gene, labeled at three levels—GENE, DNA, NUCLEOTIDE]
model represent single strands of DNA or double helices? Are ribosomes in-
cluded? Which properties convey information (number of beads, proximity)
and which should not be taken literally (size, shape, texture)?
The third and fourth components of construal are its fidelity criteria. These
determine how similar the model must be to the world, and in what respects,
for the representation to be considered “adequate.” Weisberg distinguishes
two types of criteria. Dynamical fidelity criteria specify how close the output
of the model—the values of its dependent variables—must be to the output of
the real-world phenomenon. This is the “error tolerance” of
the system, how far off predictions made on the basis of our beads on a string
are from the behavior of real genes. The second family of fidelity criteria are
representational.14 Typically, these provide standards for evaluating how well
the causal structure of the model maps onto the structure of the world, for
the representation to be adequate. Do our large, colored, spherical beads pro-
vide a good enough approximation of real biochemical molecules? Can this
simple structure capture the complexities of gene replication? What are the
salient analogies and differences between the causal structure of the world
and our representation of it?15
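Weisberg’s notion of error tolerance can be sketched as a simple predicate. This is a hedged illustration only: the function name and the numbers (predicted vs. observed offspring-ratio frequencies) are mine, not Weisberg’s.

```python
def dynamically_faithful(model_output, observed, tolerance):
    """Dynamical fidelity: every model prediction matches the
    corresponding observation to within the stated tolerance."""
    return all(abs(m - o) <= tolerance for m, o in zip(model_output, observed))

# Hypothetical example: predicted 9:3:3:1 phenotype frequencies
# against (made-up) observed frequencies from a breeding experiment.
predicted = [0.5625, 0.1875, 0.1875, 0.0625]
observed = [0.56, 0.19, 0.19, 0.06]
print(dynamically_faithful(predicted, observed, tolerance=0.01))  # True
```

The point of the sketch is that fidelity is always relative to a chosen tolerance: tighten it to 0.001 and the very same model fails the test.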
We are now in a position to state the final step of the black-boxing strategy.
The representation stage determines which features of causal explanations
should be portrayed and which can be idealized and abstracted away.
Was this not already done at the difference-making stage? Not quite. The
goal of difference-making is, first, to identify the factors that make a differ-
ence to the occurrence of an event of choice and, second, to distinguish be-
tween genuine causes and conditions that may fade into the background.
Once these causes have been pinpointed, there is still a question of how they
should be represented. Precisely what goes in the model, and in what ways?
Representation is achieved by constructing a model, an interpreted structure
that represents parts of the world based on relations of similarity.
Following Weisberg, I distinguished three kinds of structures—concrete,
mathematical, and computational—and four aspects of interpretation.
Assignment and scope map the structure onto the model. Dynamical and
representational fidelity criteria determine whether the representation
provides an adequate, informative characterization of the system under
scrutiny.
This is all admittedly vague. Invoking the notion of similarity, in and of it-
self, does not resolve the issue of representation, which constitutes one of the
thorniest and most resilient open problems in the philosophy of science.16
Similarly, appealing to models does not, ipso facto, tell us how the expla-
nandum should be represented and explained. Unfortunately, constructing
and assessing models cannot be done from the proverbial armchair. It
requires a painstaking combination of theoretical and empirical work. Still,
the work of Giere and Weisberg provides an effective depiction of the repre-
sentation stage. Metaphorically speaking, it involves constructing the frame-
work that offers the biggest “bang for the buck” relative to the explanatory
purposes at hand.17
Before moving on, a brief comment concerning the role of models in sci-
entific inquiry. Models have been the object of much discussion in the phi-
losophy of science.18 Yet, traditionally, scholars have focused on a particular
aspect of models, namely, their role as explanantia and explananda.19 While
models are a worthwhile object of investigation, and their contribution to the
explanation of a phenomenon or range of phenomena is certainly significant,
this is not the entire story. As we have seen throughout the chapter, models
play a variety of other roles. For one, models frame the explanandum, making
it more precise and perspicuous. Second, models provide the “toolbox” of
explanantia. They specify the array of all the entities, laws, mechanisms, and
This chapter has dissected the practice of black-boxing into three phases.
The first step, the framing stage, involves sharpening the object of expla-
nation. Specifying the explanandum in detail presupposes a full-fledged
model, which is not easy to obtain, especially in actual scientific practice.
Fortunately, to get the inquiry going, it is sufficient to employ a frame, that
is, a coarse-grained placeholder that stands in for patterns of behaviors in
need of explanation which, in principle, could be described at a finer scale.
The second step, the difference-making stage, provides a causal explanation
of the target by specifying which features of the explanans make a difference
to the occurrence of the explanandum.20 While there are various strategies
for doing so, I borrowed Strevens’s effective kairetic approach. Many of these
difference-makers may be left unpacked. In some cases, the micro-structure
is omitted because of mere convenience, to draw the boundaries of a field, or
to insulate a concept from empirical refutation. Other times, the decision is
dictated by ignorance, as these details are actually unknown. The third and
20 As discussed at length, I do not have any novel account of causal explanation to offer, and I do
not wish to take a stance on the question of whether all explanation is truly “causal.” Still, it is worth
stressing that the notion of difference-making adopted here is weak: it does not presuppose any phys-
ical interaction. Hence, my notion of black box can be applied to mathematical, statistical, and other
forms of abstract explanation.
and idealized interpreted structure. The model also determines how these
difference-makers should be represented—which aspects of the underlying
mechanisms must be explicitly depicted, and which ones can fade into the
background. When detail is omitted, this is done on the assumption that it
does not affect the autonomy of higher-level explanations. No wonder it took
us so long to cover all this ground. Phew!
Second, my characterization places both frames and difference-makers
on their own, independent, and mutually exclusive spectra. These
placeholders range from general, coarse-grained depictions—“the mech-
anism responsible for genetic inheritance” or “the range of behaviors
typically described as ‘falling in love’ ”—to detailed descriptions of the un-
derlying entities and activities. Hence, my general definition of a “black box”
encompasses also less-opaque constructs, sometimes dubbed “gray boxes”
or “semitransparent boxes,” which shall be discussed, at greater length, in
Chapter 7. What all these placeholders have in common, from my perspec-
tive, is that they result from a process of abstraction and idealization.
Third, as anticipated in section 5.1, these stages are necessary and jointly
sufficient for the construction of a black box. Not all valuable scientific work
requires all three phases. But all black boxes do.21 Yet, these steps should
not be viewed as chronologically ordered. Sure, there is a natural progres-
sion which begins by framing the explanandum, then provides the explana-
tion, which is finally represented in a model. Elegant as it is, this conception
of “well-ordered science” is just a regulative ideal. Actual practice is messy.
Research teams often begin by hypothesizing a causal explanation, which is
then found wanting, leading to the revision of the explanandum. Or a re-
vised representation of difference-makers might lead to a revisitation of
the explanandum, which, in turn, changes the nature of the causal explana-
tion. In short, the implementation of these steps may involve permutations,
repetitions, and feedback loops. All of this is captured by Firestein’s (2012,
p. 19) gloss: “ ‘Let’s get the data and then we can figure out the hypothesis,’
I have said to many a student worrying too much about how to plan an
21 Some readers might accept that all three steps are required to construct a black box, understood
as a difference-maker. But why does the construction of a black box qua frame require difference-
making and representation? Is the framing process not sufficient? My reason for answering in the
negative is simple. While a frame, per se, is a preliminary exploration, a black box is always relativized
to an entire model. And this requires the processes of difference-making and representation, in addi-
tion to framing.
22 Broadly speaking, data mining can be characterized as the practice of examining large databases
in order to generate new information. Data mining is controversial because it tends to seek out large-
scale correlations and set aside any causal-mechanistic underpinning.
6
History of Science, Black-Boxing Style
The history of science, after all, does not just consist of facts and
conclusions drawn from facts. It also contains ideas, interpretations of
facts, problems created by conflicting interpretations, mistakes and so
on. [ . . . ] This being the case, the history of science will be as complex,
chaotic, full of mistakes, and entertaining as the ideas it contains and
these ideas in turn will be as complex, chaotic, full of mistakes, and en-
tertaining as the minds of those who invented them.
—Paul Feyerabend, Against Method, p. 3
§6.1. Introduction
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0006
of the role and significance of black boxes across scientific practice and,
second, to discuss its philosophical applications and implications.
We see in these facts some deep organic bond, prevailing throughout space
and time, over the same areas of land and water, and independent of their
physical conditions. The naturalist must feel little curiosity, who is not led
to inquire what this bond is. This bond, on my theory, is simply inheritance,
that cause which alone, as far as we positively know, produces organisms
quite like, or, as we see in the cases of varieties, nearly like each other. The
dissimilarity of the inhabitants of different regions may be attributed to
modification through natural selection, and in a quite subordinate de-
gree to the direct influence of different physical conditions. (1859 [2008],
pp. 257–258)
Thus, in a few short statements, Darwin provides the key to one of the great
mysteries of science: the question of biogeography. How is this possible?
Some readers might feel inclined to downplay the brevity of Darwin’s
analysis by appealing to its obviousness. From a historical perspective, this
would be a mistake. Darwin’s view, deeply rooted in contemporary bio-
logical thinking, might sound like a truism to a modern audience. Yet, one
should not ignore the originality of the proposal and the stark opposition
confronting evolutionary theory since its inception, and which continues
inexorably to this day, at least in some circles. Darwin’s insight challenged
some of the most basic and entrenched, but still widely debated, beliefs of
its age. These include the belief that organic diversity is the result of divine
creation, that the Earth is much younger than it is, that natural phenomena
must be explained via teleological concepts like design, and an anthropocen-
trism placing humans at the pinnacle of the scala naturae. In addition to his
contribution to discrediting these hallowed ideas, Darwin also bolstered a
number of key concepts, such as the replacement of essentialism with popu-
lation thinking, the principle of natural selection, geographic speciation, and
the understanding of evolution as a process. Some will surely quibble with
Mayr’s somewhat hyperbolic conclusion that “no other philosopher or sci-
entist has had as great an impact on the thinking of modern man as Darwin”
(1988, p. 194). Yet, it is hard to overstate how evolution by natural selection
constitutes a traumatic rupture with the worldview of orthodox Christians,
natural theologians, laypeople, as well as many philosophers and scientists
born and raised in the nineteenth-century intellectual milieu.
These considerations bring us back to our original question. Given this
complex thicket of conceptual innovation and intellectual battles, how is it
1 To be clear, I am not arguing that contemporary evolutionary theory has nothing to say about
chance or the origins of life. It does. My point is that Darwin was in no position to answer these
questions. In this sense, the title of his masterpiece is misleading. He offers lots of insights on the evo-
lution of species, but not much about their “origin.”
2 I refer to this as a “rational reconstruction” because, to the best of my knowledge, Darwin him-
self never explicitly identifies these three steps. Still, reformulating the explanation in contemporary
terms allows us to identify and characterize its distinctive aspects.
too complex for the explanation at hand? Solving these problems is the task
of the representation phase of the black-boxing strategy.
The key to Darwin’s representation is his preliminary characterization
of the theory. In the initial chapters of Origin, the English naturalist breaks
down evolution by natural selection to its basic constituents: competition,
variation, fitness, and heritability. These are all placeholders in a causal ex-
planation represented in a model. These are Darwin’s black boxes.3
To illustrate, recall from section 4.2 of Chapter 4 that fitness plays a dual
role in evolution. On the one hand, it is a frame that captures distributions
of organisms and traits. On the other hand, it is a difference-maker that
stands in for mechanisms producing the distribution in question. Analogous
considerations apply to competition, variation, heritability, and other
key constituents of evolutionary theory. All of them figure prominently as
placeholders, frames, and difference-makers, in Darwin’s evolutionary
explanations.
In conclusion, are Darwin’s explanations successful? Generations of
biologists agree on a positive answer and it is not hard to see why. Darwin
framed the right questions about the evolution of species, identified the
correct difference-makers, and appropriately represented their salient
aspects in his model. Sure, there is much that Darwin did not know or
even got plainly wrong. At the time Origin was published, Darwin was
ignorant about the mechanisms of variation and inheritance, and his sub-
sequent speculative theory strikingly misses the mark. Nevertheless, igno-
rance and mistakes do not affect his main accomplishments. How can this
be so?
We now have a sketch of an answer. The model does the work ex-
pected from a scientific theory: framing, explaining, and representing
phenomena. Two kinds of placeholders allow Darwin to provide an ex-
planation that is simple, elegant, and concise. Frames pick out patterns
of behavior needing to be covered. Difference-makers stand in for
mechanisms that produce these patterns, regardless of whether these pro-
cesses are known, unknown, or partly identified. These placeholders are
Darwin’s black boxes.
3 In choosing my words, I am perfectly aware that the expression “Darwin’s black box” is some-
times used as a scornful dismissal or undermining of Darwinian explanations (Behe 1996). My
perspective is diametrically opposite. As Mayr promptly recognized, Darwin’s black boxes are what
enhance his explanations, in the face of what the great naturalist could not know.
4 I borrow this expression, and much else, from Griffiths and Stotz (2013). Excellent discussions of
classical genetics can also be found in Allen (1975, 1978); Darden (1991); Morange (1998); Waters
(2004, 2006); Weber (2005); Falk (2009); and Dupré (2012).
5 Some scholars have raised doubts about Mendel’s results, suggesting they may be “too good to be true.” Assessing
claims of Mendel’s data being “fabricated” transcends my historical competence. Fortunately, from
our present standpoint, settling this issue is not necessary to establish my main philosophical point.
This shows that, when Mendel talks about “experiments in plant hybridiza-
tion,” the expression is what I call a “frame.” It is a coarse-grained placeholder
that stands in for a particular range of behaviors in need of coverage. The ob-
ject of explanation is not inheritance simpliciter. The range of explananda is
carefully selected and embedded in a specific framework.
This point can be reinforced by looking at how Mendel’s own work was
subsequently developed in the twentieth century. Mendelian ratios are typ-
ically presented and explained by appealing to two of Mendel’s laws: the
law of segregation and the law of independent assortment.6 This reconstruc-
tion, however, is not quite accurate, from a historical perspective. This is
because, right after the “rediscovery” of Mendel’s work, the two laws were
not considered conceptually distinct. Initially, the focus was on segregation.
Independent assortment was viewed as a simple extension of segregation to
two-character cases or “dihybrid crosses.” This view was eventually aban-
doned when exceptions to independent assortment began to appear.
In 1905, Bateson, Saunders, and Punnett were able to confirm Mendel’s
law of segregation for various characters in sweet peas. At the same time,
when they tried to perform two-character crossings with purple vs. red
flowers and long vs. round pollen, they encountered deviations from the 9:3:3:1
ratios. Simply put, the problem was that the traits “purple” and “long” dis-
played a tendency to be inherited together, and so did “red” and “round.”
6 As noted in section 3.3 of Chapter 3, paraphrased in contemporary terms, the law of segregation
states that each parent passes on only one allele to each offspring. The law of independent assortment
says that which allele an organism gets from a parent in one locus has no effect on which allele it
gets in another locus. Mendel’s third law, the law of dominance, tells us that, for every pair of alleles,
one allele is “dominant,” the other “recessive.” Organisms who have a copy of each will look just like
organisms that have two copies of the dominant allele.
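The two laws paraphrased in this note suffice to recover the classic 9:3:3:1 dihybrid ratio by brute enumeration. A minimal sketch in Python; the variable names and the uppercase-for-dominant encoding are my own illustrative conventions, not Mendel’s.

```python
from itertools import product
from collections import Counter

# Each parent is a dihybrid (Aa, Bb). By segregation, each gamete
# carries one allele per gene; by independent assortment, all four
# gamete types are equally likely.
gametes = list(product("Aa", "Bb"))

counts = Counter()
for g1, g2 in product(gametes, repeat=2):  # 16 equally likely offspring
    # Law of dominance: the uppercase (dominant) allele masks the recessive.
    pheno_a = "A" if "A" in (g1[0], g2[0]) else "a"
    pheno_b = "B" if "B" in (g1[1], g2[1]) else "b"
    counts[pheno_a + pheno_b] += 1

print(counts)  # Counter({'AB': 9, 'Ab': 3, 'aB': 3, 'ab': 1})
```

Breaking independent assortment—as in the linked sweet-pea traits Bateson, Saunders, and Punnett observed—amounts to making some gamete types in this enumeration more likely than others, which is exactly what distorts the 9:3:3:1 ratio.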
the two organisms. Johannsen clarified this by introducing the term “gene,”
referring to the entities which constitute the organism’s genotype. Each gene
comes in a variety of alternative forms (“alleles”) and an organism contains a
number of places (“loci”), one for each gene. Thus, a genotype is the combi-
nation of all the pairs of alleles, one pair at each locus.
So, what does all of this mean? The heart of Mendelian genetics—the
gene—has a distinctive status. It is not observable. But it is more than a mere
unobservable posit to explain data. Mendel’s factors are a tool for predicting
and explaining Mendelian ratios in breeding patterns. They may stand in for
these patterns of behavior or, alternatively, for the underlying mechanisms
that make a difference to these variants. It was only natural for many
geneticists to hope that the gene, thus construed, would eventually be shown
to exist. Yet, as T. H. Morgan noted in his 1934 Nobel lecture, the centrality of
genes in early genetics did not depend, in any significant way, on their status
as physical particles. This, in short, is the theoretical role of genes, their rep-
resentation in the model of classical genetics.7
A comprehensive historical reconstruction lies beyond the scope of this
work. From our perspective, the important point is that genes are essentially
placeholders in a causal explanation represented in a model. Genes have a
multifaceted nature. They may stand in for various mechanisms or patterns
of behavior. They can be frames or difference-makers. Accordingly, these
details may be integrated in different ways or appropriately omitted. This is
why they play so many different roles. Genes are Mendel’s black boxes.
7 The historian Raphael Falk (2009) sums up the situation by saying that the gene of Mendelian
genetics has two separate identities: as a hypothetical material entity and as an instrumental entity.
The future development of genetics was the result of the interplay between these roles. Griffiths and
Stotz suggest a different reading: “Most recent scholars agree that the real achievement of classical
Mendelian genetics was not a theory centered on a few principles of high generality, but rather an
experimental tradition in which the practice of hybridising organisms and making inferences from
patterns of inheritance was used to investigate a wide range of biological questions” (2013, p. 17).
“Classical genetics was not a theory under test, or a theory that was simply applied to produce pre-
dictable results. It was a method of expanding biological knowledge” (2013, p. 19). Similar points
have been raised by Darden (1991); Waters (2004, 2006); and Weber (2005).
state and its relation to environmental and behavioral variables must now be
explained. And this is problematic. Mental states cannot be observed directly
and are therefore underdetermined by evidence. This lack of depth is evidenced
by Freud’s metaphorical language. Speculative mythological stories depicting
the mysteries of the mind are so enthralling that, regardless of their accuracy
or plausibility, they end up stealing the show. This, Skinner argues, distracts
from the difference-making factors that truly cause behavior and that can thus
be used to effectively manipulate conduct. Teaching parents to raise well-func-
tioning adults requires focusing on actual child-rearing practice, not fanciful
tales about their mental life. In short, behavioral and environmental stimuli are
the difference-makers responsible for behavior and, as such, they are the key to
efficacious interventions.
We are now in a position to fully appreciate the radical behaviorist stance
in action. Skinner’s dismissal of mentalistic psychology, of both Cartesian
and Freudian ilk, rests on a general methodological argument known as the
theoretician’s dilemma.8 Return to our psychological causal chain of the E → M
→ B form. What is the relation between these relata? Logic dictates that either
mental states link environmental stimuli and conduct deterministically, or they
must do so in a non-deterministic fashion. In the first case, Skinner argues,
one can fully explain B without mentioning M at all. Internal mental states, in-
cluding neurophysiological processes, turn out to be completely unnecessary
for explaining behavior. Alternatively, if the M → B relation is indeterministic,
then M itself becomes useless and thus, Skinner claims, it should be elimi-
nated. Either way, references to mental states do not contribute to psychological
explanations and should be omitted.9 Environmental stimuli are all we need to
account for behavior.
In sum, no mental properties mediate the relation between independent
and dependent variables. References to consciousness and other superfluous
mental states are eliminated once the controlling independent variables are
properly understood. All mechanisms producing behavior are characterized
in terms of environmental stimuli—E in our schema.
8 The moniker “theoretician’s dilemma” might sound puzzling to some readers. The rationale un-
derlying this terminological choice is that, abiding by the conventions of logical empiricism, Skinner
employs the expression “theoretical term” to refer to any unobservable object, law, or process. Thus,
the name “theoretician’s dilemma” comes from its application to non-observable entities in science.
9 As Flanagan (1991) notes, the argument, as stated, is neutral with respect to whether to eschew
E or M. The fortified version of the dilemma thus runs as follows. One ought to avoid referring to
mental events whenever possible because such references are logically eliminable and because they
are epistemically problematic and useless in practice.
10 As Leahey (2018, p. 363) puts it, “Skinner assumed that physiology would ultimately be able to
detail the physical mechanisms controlling behavior, but that analysis of behavior in terms of func-
tional relationships among variables is completely independent of physiology. The functions will re-
main even when the underlying physiological mechanisms are understood.”
11 For related analyses, see Chomsky (1959); Dennett (1981); and Sober (2015).
12 The St. Petersburg paradox, also known as the “St. Petersburg lottery,” is a puzzle related to prob-
ability and decision theory. A fair coin is flipped until it comes up heads the first time, at which
point the player wins 2^n, where n is the number of times the coin was flipped. How much should
one be willing to pay for playing this game? The expected monetary value approaches infinity. And
yet, it would be rejected by virtually any gambler. While the paradox was first invented by Nicolaus
Bernoulli in 1713, it takes its name from its resolution by Daniel Bernoulli, Nicolaus’s cousin, who at
the time was a resident of the eponymous Russian city.
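The divergence is easy to see computationally: each possible outcome contributes exactly one unit to the expectation, so truncating the sum at k flips yields exactly k. A minimal sketch (function names mine):

```python
import random

def st_petersburg_payoff(rng):
    """Play one round: flip a fair coin until heads on flip n; win 2^n."""
    n = 1
    while rng.random() < 0.5:  # tails: keep flipping
        n += 1
    return 2 ** n

def truncated_expected_value(k):
    """Expected payoff, counting only games decided within k flips.
    Each term (1/2)^n * 2^n equals 1, so the sum is exactly k—
    hence the full expectation grows without bound."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, k + 1))

print(truncated_expected_value(10))    # 10.0
print(truncated_expected_value(1000))  # 1000.0
print(st_petersburg_payoff(random.Random(42)))
```

The tension driving the paradox is visible here: the unbounded expectation comes from astronomically large payoffs with vanishingly small probabilities, while a simulated round almost always pays a modest amount.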
13 This, of course, is only one among many equivalent formulations of the axioms under-
lying the concept of rational choice presupposed in the neoclassical model. A broader, alternative
characterization—which is closer to the original formalization provided by von Neumann and
Morgenstern—states that an agent must choose in accordance with a system of preferences that sat-
isfies the following properties. (a) The system is complete and consistent. (b) Any object which is a
combination of other objects with stated probabilities is never preferred to every one of these other
objects, nor is every one of them ever preferred to the combination. (c) If object a is preferred to ob-
ject b, and b to object c, there will be some probability combination of a and c such that the individual
is indifferent between it and b. This formulation shows more clearly that there is little difference be-
tween the plausibility of this hypothesis and the typical indifference-curve explanation of risk-less
choices. Yet, it is more abstract than the one provided in the main text.
14 Here is an example of behavior that would contradict the hypothesis. Imagine an individual who
is willing to pay more for a gamble than the maximum amount she could win—for instance, a gam-
bler who is willing to pay $1 for a chance to win, at most, 99¢. Such an agent displays inconsistent
preferences. Therefore, her behavior is economically “irrational” and, as such, it cannot be captured
in terms of a monotonic utility function.
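The inconsistency in this example can be checked mechanically: since every net outcome of the overpriced gamble is a loss, no increasing utility function over wealth ranks it above abstaining. A hedged sketch; the particular gamble probabilities and utility functions are arbitrary illustrations of my own.

```python
import math

def expected_utility(u, outcomes):
    """outcomes: list of (probability, net change in wealth)."""
    return sum(p * u(w) for p, w in outcomes)

# Pay $1.00 for a gamble paying at most $0.99: every net outcome is negative.
gamble = [(0.5, -1.00 + 0.99), (0.5, -1.00 + 0.00)]
dont_play = [(1.0, 0.0)]

# Three arbitrary increasing utility functions all agree: don't play.
for u in (lambda w: w, lambda w: math.atan(w), lambda w: w ** 3):
    assert expected_utility(u, gamble) < expected_utility(u, dont_play)
print("no increasing utility function rationalizes the overpriced gamble")
```

This is why such behavior cannot be captured by a monotonic utility function: monotonicity alone, before any further axiom, already rules it out.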
any, empirical import? Different readers will likely disagree on this judg-
ment, based on their background and general methodological assumptions.
Unfortunately for him, mental properties do play a crucial role in the framing
and production of action. His capital sin was not black-boxing per se, which
is inevitable and important across the sciences. His mistakes pervade all
three phases: framing wrong explananda, providing incomplete causal
explanations, and representing them in inadequate models.
Are Friedman’s black boxes more like Darwin’s and Mendel’s, or more
like Skinner’s? Is neoclassical economics the crown jewel of the social sci-
ences or a regressive research project? This remains controversial, dividing
economists and social scientists into opposing camps. I shall not attempt an
answer here. I do, however, contend that the present framework provides
the resources to recast, in more perspicuous and fruitful terms, the ongoing
debate between neoclassical economists and psycho-neural economists. In
doing so, it provides the foundation for more productive exchanges.
We are now in a position to comprehend, to a greater degree, the cen-
tral claim advanced in Chapter 3. The history of science, I maintained, is a
history of black boxes, constantly unwrapping old ones and constructing new
ones. This process can be broken down into three constitutive steps: framing
the explanandum, constructing a causal explanation, and representing these
placeholders in a model. This chapter emphasized the prominent role of
the three phases of black-boxing in the history of science. With all of this in
mind, we can now move on to the philosophical payoff of all our hard work.
Chapter 7 begins this final portion of the book with a critical discussion of
the notion of mechanism, “black-boxing style.”
7
Diet Mechanistic Philosophy
§7.1. Introduction
Chapter 5 broke down the construction of a black box into three constitutive
steps. The framing stage sharpens the object of explanation. The difference-
making stage provides a causal analysis of this explanandum by identifying
the factors which significantly influence its production. The representation
stage optimizes the explanatory “bang for the buck” by embedding the causal
narrative into a suitable model. This led to the definition of black boxes as
placeholders in causal explanations represented in models. I optimistically
hope that readers find this intuitive enough. Still, this one-liner embeds var-
ious technical notions that required some elucidation.
Chapter 6 applied this three-step recipe to the case studies first introduced
in Chapter 3. We saw that there may be various, often conflicting reasons for
idealizing or abstracting away the structural details from a causal explana-
tion. Sometimes, the underlying mechanisms are unknown, partially identi-
fied, or incorrectly understood. This situation was exemplified by the stories
of Darwin and Mendel. The English evolutionist and the Bohemian geneticist
were perfectly aware that there must be some biological apparatus governing
the process of heredity. Still, whether or not they were personally invested in
finding it, neither was able to do so successfully. In other circumstances, de-
tail may be omitted because it is, rightly or wrongly, deemed irrelevant to the
target at hand. This was the case with radical behaviorism. Skinner noted ex-
plicitly that treating organisms as black boxes is a drastic oversimplification.
Nevertheless, he considered mental states and other “internal” psychological
variables as disposable intermediaries that make little to no self-standing
explanatory contribution.
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0007
Section 7.5 draws a connection between mechanisms and black boxes. Section 7.6
discusses some implications of the “diet” approach, and section 7.7 wraps up
the discussion with concluding remarks.
The term “mechanism” has been used extensively throughout this book.
Recall our distinction between two types of placeholders. Frames stand in
for behaviors in need of explanation. Difference-makers stand in for the
mechanisms that produce the behaviors in question. Such use of “mech-
anism” falls perfectly in line with the definition presented in section
7.2: a collection of entities and activities generating a specific phenomenon.
Indeed, as we shall now see, the analogies between the new wave of mecha-
nistic philosophy and black-boxing cut much deeper than this shared min-
imal definition.
In their recent monograph, Craver and Darden (2013) provide an influ-
ential analysis which breaks down the discovery of mechanisms into four
main phases. First, one must characterize the phenomenon under investi-
gation. This involves providing a precise-enough description of the behavior
in need of explanation. The second step involves representing the mech-
anism which produces the behavior in question. This is done by constructing
a schema which generates a space of possible mechanisms responsible for
the explanandum. Third comes the evaluation of this schema. In the fourth
revisitation phase, the initial representation can be amended and improved.
This process should have a familiar ring to it. Craver and Darden’s four
stages resemble the three steps of the strategy detailed in Chapter 5. These
similarities are hardly haphazard. They are the consequence of a shared
scientific outlook. Indeed, black-boxing echoes many important insights
stressed by the new wave of mechanistic philosophers, beginning with
its broadly naturalistic stance. This section shows how all three steps in-
herent to the construction of black boxes—framing, difference-making, and
representation—are present, in some form or degree, in the neo-mechanistic
literature and, more generally, across contemporary philosophy of science.
Does this mean that black-boxing is just the repackaging of old ideas and
well-known adages? No, it does not. Section 7.4 will stress some key points
where the present account departs, quite drastically, from traditional mecha-
nistic approaches. Before doing so, however, we should focus on similarities.
Begin with the first stage of the black-boxing strategy. Recall from sec-
tion 5.2 of Chapter 5 that framing involves specifying and contextualizing
the object of explanation. Making the explanandum fully explicit requires
the construction of a model, which is no trivial task, especially in the context
of a scientific hypothesis. Fortunately, in order to get the inquiry going, it is
sufficient to specify a frame, a preliminary scaffolding that provides a coarse-
grained depiction of a range of behaviors in need of explanation.
Framing is the preliminary step in model-construction. Craver and
Darden (2013, p. 62) posit an analogous process in the building of mecha-
nistic models:
The search for mechanisms must begin with at least a rough idea of the
phenomenon that the mechanism explains. A complete characterization
of a phenomenon details its precipitating, inhibiting, and modulating
conditions, as well as noting nonstandard conditions in which it can (be
made to) occur and any of its by-products. During discovery episodes, a
purported phenomenon might be recharacterized or discarded entirely as
one learns more about the underlying mechanisms. Lumping and splitting
are two common ways of revising one’s characterization of the phenom-
enon in the search of mechanisms.
The issue here is not merely the truism that, in order to discover the mech-
anism underlying a certain kind of behavior, one must first determine which
behavior is under investigation. The contention is stronger. Mechanists
are adamant in stressing that all mechanisms are mechanisms for some
phenomenon. This entails that the very nature and boundaries of a mech-
anism depend, in large part, on the phenomenon being produced.3
In short, the inherently perspectival identity of mechanisms and the ex-
planatory relativity underlying the black-boxing strategy point to the same
core phenomenon. The significance of a framing process lies in sharp-
ening the explanandum and determining the nature and boundaries of the
explanans.
Moving on to the second stage of black-boxing, difference-making
provides a causal explanation of the framed explanandum by identifying the
factors that make a significant contribution to its occurrence. Once again,
I do not have a novel account of causal explanation to offer. Quite frankly,
I am skeptical that any single, monolithic definition can be nuanced
and general enough to subsume the broad range of phenomena classified as
causes. Still, black-boxing, as presented here, need not commit to any specific
approach to causal explanation. Any theory that specifies difference-makers
of an event may be employed in the construction of a black box.
These observations raise questions. How should we understand the rela-
tion between mechanisms and causation? In what ways is the black-boxing
recipe similar to and different from the new wave of mechanistic philosophy?
The exact connection between mechanisms and causation is a controver-
sial matter. In general, three categories of approaches can be distinguished.
First, there are authors who argue that mechanisms are the key for under-
standing the nature of causation. For instance, Machamer (2004), Bogen
(2008), and Glennan (2017) have articulated theories of causation, all of
which take mechanisms to be truth-makers for causal claims. A second
family of views encompasses various responses which reject the need for a
distinctively “mechanistic” approach to causation. Craver’s (2007) anal-
ysis of explanation in neuroscience, for one, borrows from manipulability
theory, a non-reductive approach falling within the difference-making camp.
3 Glennan (2017, p. 44) is explicit on this point: “The fundamental point is that boundary
drawing—whether spatial boundaries between parts of mechanisms or between a mechanism and its
environment, or temporal boundaries between the start and endpoints of an activity or mechanical
process—has an ineliminable perspectival element. But the perspectives from which these bound-
aries are drawn are not arbitrary or unconstrained. The perspective is given by identifying some phe-
nomenon. This phenomenon is a real and mind-independent feature of the world, and there are real
and independent boundaries to be found in the entities and activities that constitute the mechanism
responsible for that phenomenon.” Similarly, Craver and Darden (2013, p. 52) stress that “character-
izing the phenomenon to be explained is a vital step in the discovery of mechanisms. Characterizing
the phenomenon prunes the space of possible mechanisms (because the mechanism must explain
the phenomenon) and loosely guides the construction of this hypothesis space (because certain phe-
nomena are suggestive of possible mechanisms).”
Third, and finally, there are philosophers, such as Bechtel and Richardson
(2010) and many of their collaborators, who view mechanisms primarily as
epistemic and explanatory constructs. Consequently, they try to avoid alto-
gether the thorny ontological issue of causation. In short, there is no offi-
cial consensus among mechanists regarding the relation between causes and
mechanisms. This remains an important open issue.
It should be evident that the black-boxing strategy fits quite naturally with
the second and third paths just delineated. Any account of difference-making
can be squared with the story sketched in section 5.4 of Chapter 5 and, if
mechanisms have nothing to say about causation, we can pick our favorite
alternative. This, however, is not to say that black-boxing is flatly at odds with
mechanistic or process-based accounts of causation. Causal relevance—the
heart and soul of difference-making—is such a central concept that all the-
ories of causation and causal explanation, mechanistic or otherwise, must
say something about it. After all, can we really do science without difference-
makers or some surrogate?
In sum, the relation between mechanisms and causation is complicated.
Some contend that mechanistic philosophy has something to say about the
nature of causation. Others rest content with supplementing mechanistic
insights with preexisting concepts. Either way, any theory of causation and
causal explanation worth its salt requires a notion of relevance. If my story of
difference-making is not quite it, it must be something rather similar.
Finally, let us consider the third and final phase of black-boxing. The rep-
resentation stage seeks to embed the causal narrative into a suitable model.
This is neither original nor controversial. The importance of models and
other vehicles of representation is hardly novel. As mentioned in section 5.4,
the idea that models function as mediators between theories and the world
has become rather commonplace, at least among philosophers of science.
The new wave of mechanistic philosophy has incorporated and devel-
oped these insights by focusing—unsurprisingly—on the representation of
mechanisms. Mechanistic models describe entities and activities respon-
sible for specific behaviors. A distinction is commonly drawn between two
components. The “phenomenon description” provides a model of the expla-
nandum; the “mechanism description” depicts the system that produces the
behavior in question. Once again, this dichotomy should have a familiar ring
to it. It roughly corresponds to the two kinds of placeholders introduced in
Chapter 4. Phenomena descriptions are what I called “frames.” Mechanism
descriptions pick out difference-makers that bring about the explanandum.
Figure 7.1. [Textbook diagram of gene transcription in the cellular environment; only the label “RNA” survives.]
following analogy. Imagine that I hold a picture of Mont Blanc and utter,
“That mountain is beautiful!” Am I talking about the picture or the mountain
itself? Obviously, since the photograph is supposed to faithfully mirror
reality, I am referring to both: the mountain depicted in the photo is
beautiful. The same applies to genes and cells. When I claim that gene
transcription requires the unwinding of the DNA double helix, this claim holds in the
model and in reality. In short, the question whether “mechanism” refers to
models or reality is wrongheaded. It refers to both.
Intuitive as it may seem, this response is overly simplistic. Let me be very clear
that I do not intend to deny that models purport to represent bits of the world.
Of course they do. Similarly, I know better than to question the existence of a
physical reality—idealism, as noted earlier, is out of the equation. My point is
rather that the model-world relation is much more nuanced, complex, and in-
direct than the photograph analogy would suggest. And, as we shall see, this has
robust philosophical implications.
To begin, take a closer look at the diagram in Figure 7.1. This representa-
tion of the cellular environment is inaccurate in several respects. First, DNA is
depicted as a self-standing double helix in which all base pairs are available and
accessible to proteins. Second, all enzymes and transcription factors are either
absent or tacitly assumed to be present in just the right quantity. Third, there are
no repression mechanisms, such as DNA methylation, that could potentially in-
terfere with gene transcription. Finally, the cellular environment is represented
as a uniform, spacious system in which molecules are free to roam aimlessly
without impediments.
In short, this simplified model represents a set of ideal conditions that are
seldom or never instantiated in reality.4 Real-life cells, it goes without saying,
are way more complicated. DNA is tightly coiled around histones in ways
that prevent enzymes from accessing regulatory or structural regions of the
gene. Methyl-groups hide nucleotides from transcription factors. Proteins
required for the expression of the gene are often absent or inaccessible. And,
finally, molecular interactions do not occur in a void but, rather, in a crowded
ecosystem. These details are not minor omissions or irrelevant by-products.
They are what turn formless blobs of cells into finely tuned organisms ca-
pable of selectively coordinating the transcription and translation of nu-
merous genes in the right place and at the right time.
These considerations invite a natural question. Why would biology
textbooks deliberately provide simplified representations that do not ac-
curately represent real organisms? The obvious answer is that biological
organisms are extremely complex. Describing all the components of an actual
cell would complicate the representation exponentially. The oversimplifica-
tion, from this standpoint, is a small price to pay in exchange for tractability.
There is surely something true captured by this intuitive answer.
Pragmatic convenience is definitely a factor. But is this all there is to it? The
answer is negative. To see why, it is crucial to draw a distinction between
abstraction, the omission of detail, and idealization, the deliberate misrep-
resentation of detail. Tractability may explain the presence of abstraction
in the model. But how do we account for the deliberate distortion in the
exact places where the mechanistic descriptions gain force? Why present a
model that, as mentioned earlier, captures neither necessary nor sufficient
conditions for gene expression?5 It is one thing to leave things out. But
why the flat misrepresentation of reality? In short, perspicuousness is only
half of the story, and not the controversial part. So, the issue remains: why
would textbooks intentionally provide inaccurate representations?
4 More precisely, whether these ideal conditions are “seldom” or “never” instantiated depends on
how strictly we interpret the representation itself (Nathan 2015). On a looser reading, while real-
life cells sometimes instantiate the conditions portrayed in the diagram, they often do not. From a
stricter perspective, one could say that the conditions depicted here are never realized. However,
some real-life conditions resemble the situation more closely than others. For our purposes, we need
not choose between these two alternatives. The important point, from the present standpoint, is that
real cellular environments are substantially different from the diagrammed ones which, sensu stricto,
fail to provide necessary or sufficient conditions for the expression of genes.
5 For a detailed discussion of how mechanistic explanations often intentionally distort the central
difference-makers in causal explanations, see Love and Nathan (2015).
6 This has a paradoxical flavor, given the instrumental role of mechanists in replacing deductive-
nomological approaches in favor of more naturalistic analyses of explanation.
7 For instance, Glennan (2017, Ch. 3) criticizes attempts to distinguish between “phenomenal” and
“mechanistic” models. He does so on the grounds that separating these functions requires an appeal
to the intentions of the modeler.
8 The concept of multiple-model idealization is borrowed from Weisberg’s (2013, Ch. 6) three
kinds of idealization. In addition to the multiple-models approach, Weisberg adds two more variants.
The first is Galilean idealization, the practice of introducing distortions for the sake of simplifying
theories. The second, minimalist idealization, corresponds to the practice of introducing only causal
factors that make a difference to the occurrence of an event or phenomenon. All three kinds of ide-
alization play a prominent role within the black-boxing strategy. Multiple-models follow from the
framing stage. If models are relativized to the framing of an explanandum, then reframing the object
of the explanation along alternative lines will require a different model, with varying abstractions and
idealizations. Minimalist idealization occurs at the difference-making stage, when the causes that
significantly influence the (framed) explanandum are distilled. Galilean idealization chiefly pertains
to the representation stage, when the causal explanation is embedded in an appropriate model, which
should be presented as perspicuously as possible.
9 This point was originally developed in Love and Nathan (2015). For a more overarching critique
of the ontic conception of explanation, see Wright and Van Eck (2018).
The previous section (§7.4) attempted to draw a wedge between the meta-
physics and the epistemology of mechanisms. Specifically, I argued that
mechanisms should not be characterized as ontological posits. Mechanisms
are not things “out there” in the world. Rather, mechanisms are best under-
stood as epistemic constructs, model-theoretic vehicles of representation.
Section 7.6 will argue that, by adopting this deflationary stance, one may ef-
fectively respond to some concerns that have been raised against traditional
mechanistic theory. Before doing so, the present section discusses the con-
nection between mechanisms and black boxes. After clarifying the nature of
the relation in question, I compare and contrast my conception of black boxes
with two influential accounts taken from the two most systematic accounts
of black boxes in the philosophical literature, to the best of my knowledge.
The upshot of our discussion is that mechanisms are best understood
as vehicles of representation. Given that models may depict real systems
at different scales, it seems reasonable to insist that mechanisms are a kind
of placeholder. After all, a depiction, verbal or pictorial, “stands in” for the
system it purports to represent. All of this is, hopefully, rather intuitive.
Attentive readers, however, will surely note that this claim is in tension with
some tenets advanced in the first half of this book. Let me explain.
Back in Chapter 5, black boxes were defined as placeholders in causal
explanations represented in models. Previously, Chapter 4 distinguished
two kinds of placeholders: frames, which stand in for patterns of behaviors
in need of explanation, and difference-makers, which stand in for the
mechanisms that produce the behavior in question. It follows that black boxes
of one kind—namely, difference-makers—are, for all intents and purposes,
placeholders standing in for mechanisms, framed and represented in a model.
Figure 7.2. Mechanisms and black boxes are relative to levels of explanation. [Diagram labels: Mn+1, Mn, Mn−1; Ln+1, Ln, Ln−1.]
These issues will be addressed in Chapter 10. In the meantime, in the re-
mainder of this section, I further clarify my perspective on the relation be-
tween mechanisms and black boxes by comparing and contrasting it with
two influential depictions due, respectively, to Craver and Strevens.
In a classic discussion, Hanson (1963, Ch. 2) draws a distinction between
three different kinds of “boxes” in science: black, gray, and glass. Simply put,
a black box is the representation of a phenomenon where the component
functions are unknown. A gray box is a model where all components are
specified in some degree of detail. Finally, a glass box is a depiction of the
target system that is complete or, more modestly, complete for the purposes
at hand. All relevant components are exhaustively depicted. Elaborating on
this distinction, Craver characterizes the process of scientific discovery as the
gradual transformation of a black box into a gray box and, eventually, into a
glass box.10 This progressive opening of boxes is part of the general process of
turning a “mechanistic sketch”—an incomplete depiction that characterizes
parts, activities, and features of a mechanism while leaving gaps—into a
“complete mechanistic model,” that is, a comprehensive description of all
the components and activities of a mechanism. From this standpoint, a black
box is part of a mechanistic sketch. It is something that marks a “hole” in
our knowledge of how things work. This, Craver notes, does not necessarily
invalidate the description in question. Ignorance is not a crime: just ask
Darwin! Yet, these gaps are often masked by filler terms that create a mis-
leading impression that the explanation in question is complete.
Black boxes, question marks, and acknowledged filler terms are innocuous
when they stand as place-holders for future work or when it is possible to
replace the filler term with some stock-in-trade property, entity, activity, or
mechanism (as is the case for “coding” in DNA). In contrast, filler terms are
barriers to progress when they veil failures of understanding. If the term
“encode” is used to stand for “some-process-we-know-not-what,” and if the
provisional status of that term is forgotten, then one has only an illusion of
understanding. For this reason, neuroscientists often denigrate the authors
of black-box models as “diagram makers” or “boxologists.” (Craver 2007,
pp. 113–114)
Begin by comparing the black-boxing strategy with the “more detail is al-
ways better” approach. From this standpoint, adding any kind of specificity
to a representation invariably increases its explanatory power. The outcome
of this process of including micro-constituents, taken to the limit, is a truly
complete mechanistic model that contains no black boxes or gray boxes. All
complex components have been broken down to their fundamental building
blocks, whatever that may mean. Thus understood, the mechanistic ap-
proach is fundamentally at odds with the black-boxing strategy advanced in
this book. It roughly corresponds to my crude characterization of radical re-
ductionism, from the first two chapters.
Nevertheless, it should be stressed that this “the more detail the better”
approach is not the perspective articulated and defended by Craver and his
collaborators. Indeed, Craver and Kaplan (2020) explicitly reject the “all
details are necessary” thesis as “ridiculously stringent” (p. 310). They clarify
that their position should rather be understood along the lines of a “more
relevant details are better,” according to which “if model M contains more
explanatory relevant details than M* about the [Salmon-Completeness]
mechanism for P versus P′, then M has more explanatory force than M* for P
versus P′, all things equal” (2020, p. 303).
We need not delve into the specifics of the “3M” approach.12 The impor-
tant point, for present purposes, is that, if not all details matter, then there
will be some aspects of the mechanism that are inevitably left out of the
representation. Determining which features are relevant and which are not
requires the three phases characterized in detail in Chapter 5—framing,
difference-making, and representation—or something along those lines.
What this goes to show is that my black-boxing recipe does not merely en-
compass what is commonly described as a “black box,” that is, a represen-
tation where no underlying feature has been uncovered. In addition, it also
captures placeholders where the underlying mechanisms or pattern of be-
havior is only partially understood, or whose representation is more or less
deliberately left incomplete. These correspond to the Hanson-Craver gray
12 Simply put, “3M” captures the “model-to-mechanism mapping” requirement that makes the
commitment to a mechanistic view of explanation explicit: “In successful explanatory models in
cognitive and systems neuroscience (a) the variables in the model correspond to components, ac-
tivities, properties, and organizational features of the target mechanism that produces, maintains,
or underlies the phenomenon, and (b) the (perhaps mathematical) dependencies posited among
these variables in the model correspond to the (perhaps quantifiable) causal relations among the
components of the target mechanism” (Kaplan and Craver 2011, p. 611). For a more nuanced, con-
trastive formulation, see Craver and Kaplan (2020, p. 297).
boxes and, indeed, their “glass boxes,” once these are rightly understood in a
contextual fashion.
Another influential discussion of black boxes has been provided by
Strevens in his book Depth, where he articulates the “kairetic” strategy, the
account of causal explanation outlined in Chapter 5. Strevens claims that
explanations containing black boxes do not stand on their own. To make
his point, he begins by distinguishing two ways in which a scientific expla-
nation can be treated as “standalone.” First, an explanation stands alone if
it specifies all the difference-makers for its explanandum. An explanation
containing black boxes may technically be complete in this former sense, as
long as the black boxes stand in for the mechanisms that produce the expla-
nandum event. A second, stronger kind of standalone explanation provides
a full understanding of the explanandum in an unqualified, absolute fashion.
A “deep” standalone explanation of this kind provides an exhaustive under-
standing of the explanandum.13 Deep standalone explanations contain no
black boxes. All filler terms have been unraveled; all mysteries unveiled. Thus,
on Strevens’s picture, explanations containing black boxes are technically
complete, but only in the former, derivative sense. Black-boxed explanations,
at best, confer partial, qualified, understanding of their explanandum. This
is because they require the support of a mechanism-containing framework
in order to stand alone. These technical remarks call for some clarification.
What exactly does he have in mind?
Strevens clarifies the sense in which black boxes are only complete in a de-
rivative sense by distinguishing two situations, depending on whether or not
the black box in question is embedded in an explanatory framework. Focus,
initially, on the latter scenario, where the black box is not framed. Strevens
argues that a causal model containing a black box outside of the context of
a framework is deficient, for two reasons. First, since—on Strevens’s view,
not mine—black boxes are causally empty, they have no causal powers. Black
boxes cannot entail their target because they lack the capacity to produce
anything; they have no mechanistic components. Second, since black boxes
are multiply realizable, they are typically incohesive. The range of systems
that black boxes stand in for is so broad that the box itself does not consti-
tute a homogeneous entity. For these reasons, an explanation containing a
13 The limiting case of these “deep” standalone explanations is what Strevens calls “exhaus-
tive” standalone explanations, that is, maximally elongated, possibly infinite, indefinitely large
characterizations of the causal history of the explanandum. There seem to be some analogies here
with Craver’s notion of a “complete mechanistic model.”
black box does not stand alone. Only a “deep standalone explanation,” which
contains no black boxes and exhaustively describes all of its fundamental
constituents, can survive outside of a framework.
Now, shift to the second, alternative scenario. When a black box is
understood as a placeholder for a mechanism within the context of a
framework, the two problems discussed in the previous paragraph do not arise.
This is because the box stands in for a specific mechanism, specified by the
framework. This makes the black box cohesive, and it bestows upon it some
causal capacities. Thus, Strevens claims, assuming that everything else is in
order, an explanation containing a black box may stand alone. Thanks to the
tacit support of the framework, and in virtue of the causal powers of the un-
derlying homogeneous mechanism, black-boxed explanations can be causal,
cohesive, and complete. There is, however, a catch. In these circumstances,
what is being explained is not an explanandum simpliciter, an absolute event.
Rather, the object of explanation is the occurrence of something given that a
certain mechanism is in place. The framework-relativity of the explanation
thus limits the explanatory power of the model. To explain a phenomenon,
given that so-and-so is in place, Strevens claims, is a lesser achievement than
explaining a phenomenon in an absolute sense. The ultimate goal of science
is unqualified understanding, which can only be achieved via framework-
independent explanatory models. Once the interior of all black boxes is illu-
minated, their darkness will finally be dispelled.
Time for a brief summary. Strevens’s and Craver’s positions on scien-
tific explanation diverge in several important respects. For instance, while
Strevens advances an overtly reductionist stance, Craver stresses the antire-
ductionist flavor of his mechanistic approach as an integrative perspective.
Correspondingly, their analyses of black boxes go in different directions.
Nevertheless, they share some important features. For one, both philosophers
treat black boxes as placeholders standing in for mechanisms. Second, on
both views, black boxes seem to have no “productive” role to play in scientific
practice. Sure, when used correctly, they allow researchers to proceed in the
face of ignorance, postponing further inquiry until more auspicious times.
When things go south, they become an impediment to true comprehension.
Either way, we are always better off when a black box is replaced by a gray box
and, eventually, by a glass one. In other words, on both accounts, black-
boxing is something that we must come to terms with. But it is unclear that
willingly introducing a black box brings any added value to scientific know-
ledge and understanding. In Chapter 10, I shall argue that black boxes play
14 To be precise, Strevens notes, this becomes more of a problem when one is explaining a regu-
larity or a generalization, as opposed to an individual event, where the presence of a cohesive causal
mechanism can be usually taken for granted.
This brings us to the third and most significant difference. Strevens and
Craver both maintain that explanations containing black boxes are not ex-
haustive. But exhaustive with respect to what? Presumably, relative to the un-
derlying layer of mechanisms that, if described in detail, would render the
explanation complete. It should now be evident why this idea makes little
sense from my perspective. If the difference between mechanisms and black
boxes is only a difference in levels of description, replacing the latter with
the former will make the explanation no more exhaustive or complete. If all
scientific explanations are perspectival, they will inevitably contain boxes.
And the presence of a black box in a model does not necessarily signal the
incompleteness of the model in question. Now, to be sure, we can always
open the box and provide more detail. Yet, as we shall see in Chapter 10, this
presupposes a shift in explanandum. We are not invariably explaining more
and better. We are explaining something altogether different.
One final remark. What about mechanistic philosophers who explicitly advocate a less ontological mechanistic approach? Is my “diet” mechanistic philosophy just a version of the metaphysically sober, epistemic conception of mechanisms advanced by Bechtel and his collaborators? It seems to me that the answer is negative. While the traditional epistemic conception of mechanism explicitly dampens its ontological implications, it remains committed to controversial mechanistic tenets, such as decomposition and localization.15 In contrast, the diet approach, which treats mechanisms as placeholders, also relaxes these methodological constraints. While my black-boxing recipe recognizes mechanistic explanation as one form of explanation in science, it imposes hardly any strictures on how it works. That is what ultimately distinguishes it from extant forms of mechanistic philosophy.
In conclusion, we can all agree that black boxes are placeholders. These
placeholders may stand in for mechanisms (difference-makers), but they can
also stand in place of patterns of behavior in need of explanation (frames).
They may be introduced because of ignorance, but also to implement and
promote a specific research program. As I have defined them—placeholders
in causal explanations represented in models—black boxes are broad,
encompassing what are traditionally conceived as black, gray, and glass
boxes. Black boxes are mechanisms, and may also stand in for mechanisms,
as long as we relativize black boxes and mechanisms to different explanatory
levels. In this respect, I disagree with both Strevens’s idea of a “standalone”
16 For a general overview and discussion of this difficulty, see Boniolo (2013).
But it hardly goes far enough. Furthermore, I agree with Nicholson that the
scientifically important notion is the concept of causal mechanism, which
includes heuristic devices that facilitate the explanation of phenomena. The
abstract and idealized entities postulated in models are often quite different
from the entities that are represented, and the two must be kept separate.
My solution is simple. Draw an explicit distinction between real enti-
ties and processes in the world, on the one hand, and mechanisms, under-
stood as theoretical representations, on the other. By treating mechanisms
as placeholders in causal explanations framed in models, we can eschew the
confusion noted by Nicholson. Characterized as placeholders, mechanisms
shed their ontological implications. They become boxes: black, gray, or glass,
depending on the amount of detail represented or idealized away.
Finally, let us address a third concern. In a recent article, Franklin-Hall
(2016) provides a general assessment of the philosophical impact of the
mechanistic wave. Judging by both the language of the new mechanists and
the influence of their work, one would legitimately expect that the new mech-
anism has served up a bevy of solutions. In contrast, she argues, with respect
to the central task of elucidating the nature of explanation, the movement
as a whole has hardly delivered on its promises. Interestingly, her critique is
not centered on the falsity or incorrectness of the central mechanistic tenets.
Rather, the main concern is that “mechanistic explanatory accounts offered
to date—even in their strongest formulations—have failed to move beyond
the simple and uncontroversial slogan ‘some explanations show how things
work.’ In particular, I argue that proposed constraints on mechanistic expla-
nation are not up to the task required of them: namely, that of distinguishing
acceptable explanations from those that, though minimally mechanistic, are
incontrovertibly inadequate” (2016, p. 42).
Franklin-Hall concludes that, although the new mechanistic attempt to take
scientific practice seriously—and, more generally, its naturalistic stance—is
important and admirable, much remains to be done. We need to keep looking
under the hood of explanatory practice and detail its workings. The reason is
that “rather than opening the black boxes of the scientific enterprise—with re-
spect to causation, part individuation, and explanatory level—philosophers
have largely taken those practices for granted” (2016, p. 71).
I wholeheartedly agree. Describing the mechanistic structure of a system
is only one step in a long and tortuous journey. Prediction, explanation, abduc-
tion, and other inferences require framing, identifying difference-makers,
abstraction, idealization, modeling, and many other forms of representation.
The focus should be on the nature of these epistemic practices, not on the no-
tion of mechanism, characterized as an ontological category.
There is one point of detail where I depart from Franklin-Hall’s assess-
ment. In her final footnote, she writes: “There is one mildly ironic excep-
tion to my general diagnosis. The only putative black box that mechanists
have opened is the scientists’ concept of ‘mechanism.’ On reflection, this
focus was imprudent. Not every concept used by scientists is meaty, and not
every term reflects a genuine black box; ‘mechanism’ is not a theoretical term
within the science, but a mere pointer, a placeholder—similar perhaps to the
philosopher’s term ‘conception’ ” (2016, p. 71, fn. 17). I would rephrase the
point in slightly different terms. A mechanism is, indeed, a pointer, or placeholder. But, precisely for this reason, it is also a black box: a placeholder that stands in for a range of behaviors or for whatever produces them. Once again, the power of black boxes should not be underestimated.
In conclusion, it is time to tie up some loose ends. The new mechanistic phi-
losophy is a welcome contribution to the philosophy of science. First, it has
promoted a healthy naturalistic attitude, which calls philosophers to draw
attention to the details of actual scientific practice. In addition, it offers a fur-
ther, lucid analysis of why physics should not be heralded as the paradigm of
science. Mechanistic disciplines, such as biology and neuropsychology, are
inherently different from mathematized and law-based physics. My black-
boxing approach owes a substantial intellectual debt to the new mechanists.
Still, framing, difference-making, and representation drive a wedge between ontology and epistemology, which do not go hand in hand, as is too frequently assumed. Hence, we should not use the same term—“mechanism”—to refer to both. In short, black-boxing tempers the ontological implications of mechanistic approaches to science.
At the root of the trouble lies the notion of mechanism itself. Mechanisms
have a two-sided nature. On the one hand, philosophers tend to adopt an
ontology according to which mechanisms are a fundamental category or nat-
ural kind: they are real stuff in the real world. On the other hand, mechanisms
constitute a model-theoretic construction; they are a vehicle of scientific rep-
resentation. From this latter standpoint, neo-mechanists emphasize the sig-
nificance of discovering, modeling, and studying these entities. Mechanistic
8
Emergence Reframed
§8.1. Introduction
Let us retrace our steps back to the early stages of our journey. Reductionism
contends that science invariably advances by descending to lower
levels. Antireductionism flatly rejects this tenet. Some explanations,
antireductionists counter, cannot be enhanced by breaking them down any
further. But why should this be so? What makes explanations “autonomous”?
A popular way of cashing out the autonomy thesis, the core of antireduc-
tionism, involves the concept of emergence.
The main intuition underlying emergence is simple. As systems become
increasingly complex, they begin to display properties which, in some sense,
transcend the properties of their parts. As such, they exhibit behavior that
cannot be predicted by, explained with, or reduced to laws that govern sim-
pler entities. Common wisdom has it that there is more to a team than just a bunch of players, and a musical symphony is not merely an arrangement of notes. The main task of a philosophical analysis of emergence is to spell out the “in some sense” qualifier. In what ways, if any, do emergents transcend aggregative properties of their constituents? How should one understand the alleged unpredictability, non-explainability, or irreducibility of the resulting behaviors of teamwork, musical compositions, and other sophisticated ensembles? Answering these questions might look simple. But doing so has challenged scientists and philosophers alike for quite some time.
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0008
The general idea of wholes transcending the sums of their parts is hardly
novel. It can already be found in Aristotle’s Metaphysics. More modern
discussions of emergence trace their roots back to J. S. Mill’s distinction be-
tween “homopathic” and “heteropathic” laws and have been shaped into
their contemporary form in the early 1900s by the work of philosophers such
as Broad, Alexander, and Lovejoy, and scientists like Morgan and Sperry. Set aside during the heyday of logical positivism, when it was regularly dismissed as confused and teetering on incoherence, emergence has, over the last few decades, worked its way back into the mainstream.
The comeback of emergence onto the main stage has had a polarizing ef-
fect, dividing the camp into enthusiastic supporters and skeptical naysayers.
Authors with antireductionist tendencies, scientists and philosophers, appeal
to emergence to capture a form of non-reductive physicalism. Reductionists,
on the other hand, treat emergence as an obscure, muddled notion that
should be definitively consigned to oblivion. Once again, we are stuck between
Scylla and Charybdis. But is this the only path? Is emergence truly incompat-
ible with reduction? I suggest a negative answer.
This chapter presents, motivates, and defends a recharacterization of
emergence and its role in scientific research, grounded in our analysis
of black boxes. Here is the plan. Section 8.2 sets the stage by providing an
overview of extant accounts. Section 8.3 examines some ways in which
emergence is employed in complexity science and concludes that current
definitions have trouble accommodating such uses. Readers previously fa-
miliar with the debate should consider skipping directly to section 8.4, which
offers a constructive proposal. Emergents, I maintain, are best conceived as
black boxes: placeholders in causal explanations represented in models. My
suggestion has the welcome implications of bringing together various usages
of emergence across domains and reconciling emergence with reduction.
This, however, does come at a cost. It requires abandoning a rigid perspective
according to which emergence is an intrinsic or absolute feature of systems,
in favor of a more contextual approach that relativizes the emergent status of
a property or behavior to a specific explanatory frame of reference. The final
sections discuss refinements and implications (§8.5), as well as a few ade-
quacy conditions and advantages (§8.6) of the proposal.
Before moving on, two disclaimers are in order. First, my proposal does not
presuppose that all accounts of emergence should be judged from a scientific
perspective or, more generally, that science provides the ultimate standards
for philosophy. I simply maintain that making sense of scientific concepts is a
1 While I focus on synchronic dependence between emergents and their basis, these relations can
also be characterized diachronically or dynamically (Humphreys 2016).
concepts, such as causal powers. One may thus distinguish two forms of
metaphysical emergence. “Strong emergence” occurs when higher-level
properties synchronically depend upon their basal conditions and, yet,
these macro-properties display distinctive causal powers that are absent
from lower levels. A less demanding conception, which fits in better with
the physicalist outlook by eschewing controversial forms of “downward
causation,” retains the synchronic clause. Yet, it replaces the assumption
that emergents have novel causal powers with the requirement that the
powers of emergents must be a proper subset of the set of powers found at
the basal level. “Weak emergence,” thus defined, avoids the controversial
implication that new causes may arise at higher levels. Still, it purports to
vindicate a form of non-reductive physicalism, as higher and lower levels
can be distinguished via an application of the indiscernibility of identicals
principle.5
Intuitive as they may seem, definitions of emergence appealing to causal
powers are hardly unproblematic. For one, the precise individuation of
powers constitutes a thorny challenge. The issue is that virtually any de-
scription of any phenomenon involves properties, causal or otherwise,
that are lost by ascending or descending levels. How do we determine
“novelty,” and on what basis? Are the liquidity of water and the elasticity
of rubber “distinctive” powers, or do they restate the capacities of their
constituent molecules? The bottom line is that, in spite of the struggle to avoid the vagueness of reductionism, appeals to causal powers are plagued by problems that are, at root, identical: properties and their linguistic descriptions do not wear their level and status on their sleeves. Consequently, providing a set of clear-cut individuation criteria and illuminating analyses is no trivial endeavor.
We shall return to causal powers and metaphysical strategies in sections
8.3 and 8.4. Before doing so, let us introduce the alternative route.
The most influential epistemic analysis of emergence was offered by Hempel and Oppenheim, who rejected the idea of emergence as an ontological property inherent in specific systems. They characterized emergence as an epistemological stance, indicative of the scope of our knowledge at a given time.
5 Simply put, the indiscernibility of identicals states that identical entities must share all properties.
Thus, if some object a lacks some property of object b, the two objects a and b can be deemed onto-
logically distinct.
admittedly fails to do justice to the extant and growing literature on the topic.
Still, I hope to have done enough to motivate the significance of two general
methodological questions. First, do extant accounts capture the multifarious
ways in which emergence is employed across disciplines, effectively pro-
viding a unifying definition? Second, if they do not, should we keep using
a loaded concept like “emergence,” or would we be better off replacing it with
allegedly clearer and less controversial expressions, such as “unpredictable”
and “unexplainable”? The next two sections address both issues, in turn.
The previous section suggested that providing a clear, compelling, and un-
controversial definition of emergence is no trivial endeavor. There are sev-
eral promising accounts currently on the market, which I have categorized
as metaphysical vs. epistemic. While all of them pinpoint important aspects
of emergence, the question is whether any one is general and well-rounded
enough to fully capture this historically elusive concept.
Despite the lack of an uncontroversial, widely accepted analysis, the term
“emergence” continues to be employed extensively both in philosophy and
across the natural and social sciences. Fascinating examples come from com-
plexity theory, a growing discipline which encompasses a diverse array of
interdisciplinary approaches aimed at understanding and explaining sophis-
ticated biological and psychological systems. This section introduces and
discusses some illustrations. Our goal is to examine whether, and to what ex-
tent, the theoretical accounts of emergence surveyed in the preceding section
map onto empirical practice.
To get the ball rolling, let us focus our attention on a specific subfield of
complexity science: systems neuroscience. In a contribution to a recent
volume discussing prospects and challenges of contemporary neurosci-
ence, Olaf Sporns (2015) argues that, to fully understand the workings of the
brain, neuroscience must “shift perspective, towards embracing a view that
squarely acknowledges the brain as a complex networked system, with many
levels of organization, from cells to cognition that are individually irreduc-
ible and mutually interconnected” (p. 90). What motivates this shift in per-
spective, especially given the historically successful record of brain studies?
Sporns acknowledges that the so-called neuron doctrine—the hallowed tenet
that neurons are the fundamental computational units in the brain—has
provided much insight. Yet, it is now evident that the power of neurons
derives from their collective action as parts of networks, bound together by
connections which facilitate their interaction, competition, and cooperation.
Comprehending the brain requires grasping these interconnections.
Sporns describes this breakthrough, his envisioned shift in perspective, as
the replacement of an old-fashioned account of neural circuits with a more
sophisticated alternative. Traditionally, the actions of a neural circuit were
assumed to be fully determined by the additive sum of highly specific point-
to-point exchanges of information among its elements, typically, individual
neurons. Consequently, the structure of the entire circuit was taken to be
decomposable—in principle, given complete data—into neat sequences of
individual causes and effects. Paraphrasing this into philosophical jargon,
the overall function of a circuit was assumed to be the resultant of the col-
lective interactions of individual components. In contrast, Sporns maintains,
cutting-edge approaches to complexity theory and network neuroscience
emphasize how global outcomes cannot be broken down into aggregates of
localized causes and how the functioning of the system as a whole transcends
the functioning of each individual element. In short, the lesson that neuro-
science imported from systems biology is that the behavior of cells depends
on gene regulatory networks, signal networks, and metabolic pathways,
which shape and govern interactions among individual molecules.
How is any of this relevant to our discussion? The answer is that, at this
point in Sporns’s presentation, our old friend emergence enters the pic-
ture. The proposed shift in perspective concerning neural circuits purports
to capture configurations of complex networks, which do not arise at lower
levels of organization. These are typically global states of brain dynamics in
which huge collections of neurons engage in coherent and collective beha-
vior through local interactions, individually quite weak, but aggregatively
powerful enough to generate large-scale patterns. Sporns mentions “neural
synchronization”—the coordinated, synchronized firing of large numbers of
neurons—and the “critical state,” a dynamic regime where systems engage
in a wide range of flexible and variable tasks. These, he claims, are emergent
phenomena.
Unsurprisingly, these neural processes become quite complex very fast.
Technical details need not concern us here.9 The question that I want to ad-
dress is a philosophical one. Can we adequately conceptualize the emergence
9 For an extensive but readable overview of these findings, see Sporns (2011, 2012).
follow the movie because she does not speak German”) to serious path-
ological conditions such as aphasia, autism, or dyslexia. In short, what all
emergents have in common is that their micro-structure can be idealized and
abstracted away in the context of macro-explanations. They are black boxes, and this, I contend, makes all of them emergent.
What about neural systems? A similar analysis provides the conceptual
resources to capture the theoretical shift in neuroscience described and
prescribed by Sporns. Neural synchronization, the critical state, and other
networks can already be explained, at least partially, as witnessed by Sporns’s
own research. Furthermore, brain networks and their powers are neither
novel nor “irreducible” to individual interacting parts. What else constitutes
a circuit, in addition to a large aggregation of neurons, their additive and
non-additive interactions, and background conditions? The instructive
lesson imparted by contemporary neuroscience is that a complete descrip-
tion of brain function in terms of individual neurons and their relations is
not required for brain systems to do the explanatory heavy lifting. The reason
is that the core elements and processes that explain the workings of the brain
are difficult to describe at lower levels of organization, and thus require a shift
to a broader perspective. These macro-building blocks are, indeed, consti-
tuted by these neurons. But such configurations are stable and autonomous
enough to be identified and represented as wholes, without breaking them
down further to more fundamental units. In sum, the emergence of neural
states resides in the role of complex circuits in producing and maintaining
neuro-psychological states, which does not require a detailed account of how
these processes are physically realized at the unitary level. Thus framed, these
circuits can be effectively black-boxed. It is their status as black boxes that
makes them emergent.
I conclude this array of illustrations by briefly noting that my proposal
captures the essence of many other traditional examples of emergence.
Hempel and Oppenheim were correct, of course, that life and consciousness
are phenomena for which we could not fully account. Unfortunately, despite
scientific strides accomplished during the fifty years or so since the publica-
tion of their essay, we still cannot claim to have completely uncovered the
material underpinnings of these properties. At the same time, our incapacity
to explain qualia has not frustrated attempts by psychologists and cognitive
scientists to study conscious mental states, any more than the elusive mys-
tery of life has prevented biologists from studying, systematizing, and cat-
egorizing living organisms. Life and consciousness are emergent because
Section 8.4 outlined my emergents qua black boxes account and illustrated
it with various traditional and more cutting-edge examples. This section
refines the view by addressing three implications and a potential objection.
Beginning with the first implication, it follows from our discussion that
there are various distinct families of emergent properties. More specifically,
emergents can be arranged along two orthogonal dimensions. On the one
hand, emergents can be distinguished by the rationale for omitting detail: ne-
cessity vs. pragmatic convenience. On the other hand, emergents can be cat-
egorized depending on whether they function as explananda or explanantia,
that is, whether they play the role of frames or difference-makers relative to
an explanatory context. Let me consider both cases, in turn.
First, the micro-structure of an emergent property may be left out because
it is currently unavailable or lacking tout court. The situation is exemplified
by Hempel and Oppenheim’s treatment of qualia. Subjective mental phe-
nomena cannot (yet) be accounted for at a physical or biochemical level and,
for all we know, a satisfactory account may well be beyond the grasp of sci-
ence. Here, including the required detail is not an option. At the same time,
there are also situations in which, while the microstructure is known, it is
irrelevant for the explanation and may thus be omitted for the sake of con-
venience. This is the case with many aspects of isomers, protein folding, and
liquidity, where the amount of included detail is a matter of choice. These
two motivations for abstraction and idealization—necessity vs. pragmatic
[Table: emergents classified by rationale for omitting detail, necessity vs. convenience]
This section wraps up our discussion by, first, presenting some adequacy conditions and, subsequently, some advantages of my proposal.
Bedau (2012) puts forward three useful criteria to assess any account of
emergence. First, he contends, the analysis should be clear and explicit, in
13 The virtual ubiquity of emergence has been noted and defended from trivialization concerns by
other authors, such as Humphreys (2016).
14 This form of fundamentalism may well be what Morrison has in mind when she claims that
“epistemic independence—the fact that we need not appeal to micro-phenomena to explain macro
processes—is not sufficient for emergence since it is also a common feature of physical explanations
across many systems and levels” (Morrison 2012, p. 161).
one for sure. But there are several other reasons why structural details might
not be included in macro-explanations. All these are different, but equally le-
gitimate, cases of emergence.
A third and final advantage of my framework is that it preserves the cen-
trality of prediction and explanation in emergence, while avoiding Kim’s
trivialization worries and related concerns about emergence being spooky,
mysterious, or otherwise problematic. Emergents play an important predic-
tive and explanatory role across scientific and philosophical domains. Still,
contrary to both classic and contemporary accounts, the emergence of a
property or process is not inextricably tied to obscurity, lack of explanation,
non-derivability sans simulation, etc. To be clear, emergence can be associated with any of the preceding, as well as with causal powers, a mix of fundamen-
tality and dependence, or various other metaphysical features. Yet, none of
these conditions is necessary or sufficient for emergence. Despite substantial
ontological and epistemic differences, the common core feature of all these
properties, which makes all of them “emergents,” is their role as black boxes
in explanation.
in the philosophy of science: the question of scientific progress. For the time
being, I hope to have motivated and inspired an overarching and coherent
reframing of emergence that, nonetheless, leaves room for—and, indeed,
justifies—various metaphysical and epistemic assumptions, which have a
long and hallowed place in the history of philosophy.
9
The Fuel of Scientific Progress
§9.1. Introduction
1 As Hacking (1983) aptly notes, Kuhn did not single-handedly engineer this transformation in the
history and philosophy of science. When Structure was first published in 1962, similar themes were
being expressed by a number of voices and the history of science was forming itself as a self-standing
discipline. The fundamental transformation in philosophical perspective was that science was finally
becoming a historical phenomenon.
2 See, for instance, Lakatos (1970); Shapere (1974); Laudan (1977); and Kitcher (1993).
black boxes can be used to enrich the conception of progress underlying refer-
ential models, while mitigating the perilous consequences of meaning holism.
Section 9.5 borrows some insights from referential accounts of theory-change
to sharpen and develop the three-part recipe presented in Chapter 5. Finally,
section 9.6 wraps up our discussion of scientific progress with a few concluding
remarks.
Before getting down to business, allow me to preempt a potential ob-
jection.3 Some readers may take my call to revisit traditional questions of
progress, incommensurability, sense, and reference as a step in the wrong
direction. These are topics reminiscent of 1970s philosophy of science which,
like most of philosophy back then, took its cue from the philosophy of lan-
guage. Many have come to view the divorce from the issue of language, and
its replacement with an increased attention to empirical practice, as one
of the great achievements of contemporary philosophy of science. This
being so, should we really resurrect these tired old issues of meaning? Even
philosophers of language have finally set them aside, and for good reason.
To be clear, I have no intention of dragging philosophy of science back to
its positivistic roots. Nor do I want to disavow the current naturalistic stance,
as my attention to various case studies will hopefully testify. Nevertheless, the
question of progress is a central constituent of any account of science worth
its salt. And to the best of my knowledge, it has not been solved. It has been
quietly swept under the rug, where it has lain dormant ever since. My goal
is hardly to rehash the good old linguistic turn. It is to bring important phil-
osophical issues back onto the main stage. And if an honest, “naturalized”
discussion of progress requires reviving long lost concepts such as meaning,
reference, and incommensurability—is there a better alternative currently
on the table? —then so be it. Let’s not throw out the baby with the bath water.
If you just can’t stomach it, flip to the following chapter. No hurt feelings!
Let me begin by setting things straight. Section 9.1 casually referred to “the”
logical positivist account of science. Strictly speaking, this is an oversimpli-
fication. There admittedly is no uniform, monolithic depiction of science
collectively adopted by all positivists. Schlick, Carnap, Hempel, Neurath,
with established theory. Against the assumptions of positivists and their tra-
ditional adversaries, confirmation, verification, and falsification play a rel-
atively marginal role in everyday “normal” scientific practice. For instance,
contrary to Popper’s dictum, mismatches between theory and observation
should not be considered falsifications. Rather, they are routinely treated as
“anomalies,” that is, nagging counterexamples to be captured and explained
away by the reigning paradigm. Nevertheless, despite these substantial
differences, Kuhn’s normal science can be easily reconciled with Nagel’s
model. In particular, normal science may be viewed as a cumulative collec-
tion of statements and concepts within a specific domain. Progress is tanta-
mount to any contribution to this growing body of knowledge. Yet, this is
only one half of Kuhn’s story and not the most exciting one.
Kuhn observed that sometimes anomalies stubbornly resist resolution.
Rather than washing away, they pile up, and a few may come to be viewed as
especially pressing. As researchers try to fix the problem, counterexamples
accumulate. Consequently, the field enters a state of crisis. The typical way
out of this quagmire is the development of a fresh start, equipped with a host
of novel tools and concepts. As the new framework rapidly establishes it-
self and makes progress, superseded questions and ideas are set aside and,
eventually, they are forgotten. When this happens, when an older paradigm
is replaced by a newer one, we have what Kuhn dubs a scientific revolution.5
The new paradigm eventually crystallizes into normal science, adopting a
consensus-forging role, producing fresh anomalies, which eventually trigger
a crisis, followed by another revolution, and so on, in a continuous cycle.
The outcome of the revolution—the new paradigm—typically exhibits dif-
ferent goals and interests compared to its predecessor. The revamped normal
science may ask innovative questions, postulate novel concepts, posit dis-
tinctive laws, and so forth. Still, the occurrence of a revolution, by itself, does
not jeopardize the overall rational or progressive trajectory of science. The
threat to progress and rationality stems from the very nature of paradigm
shifts, which Kuhn famously compares to religious conversions and gestalt
switches in psychology. Members of the new paradigm, he claims, “live in
a different world” from their predecessors. They speak different languages
that cannot strictly be compared or translated into each other. The only way
to transition from one to the other is via a eureka-style intuition, as opposed
to deliberative reasoning. This is where trouble begins. “Living in a different
world” has substantial implications for progress.
On Kuhn’s picture, sensu stricto, where novel paradigms carry new lan-
guages with them, one might not even be able to convey the ideas of the
replaced theory in the language of the replacing one. Indeed, Kuhn initially
suggested that there is literally no way of specifying a theory-neutral lan-
guage in which to express and compare the two frameworks. This is a striking
departure from the old cumulative model of scientific knowledge. Recall that
Nagel did recognize the need for theory change. Still, despite the occasional
setback or difference in focus or perspective, the new theory always takes the
success of its predecessor under its wing, while eschewing some problems,
failures, and misconceptions. The basic idea is that the two theories can be
rationally compared, and the “better” one is selected. After all, substantial
change in theory is warranted if and only if the new paradigm explains the
known data and predicts new observations more accurately than its prede-
cessor. This, Kuhn claims, is what happens in the context of normal science.
Yet, he continues, when we zoom out and focus on larger time scales, science
does not work that way. After a revolution has swept in, a substantial compo-
nent of the old paradigm is dismissed and forgotten. It eventually becomes
accessible only to historians who, through slow and painstaking work, are
able to reconstruct the discarded Weltanschauung.
This, simply put, is Kuhn’s notorious concept of incommensurability, which
leads to conceptual relativism, the controversial doctrine that the language
used in a field of science changes so radically during a revolution that the
old and the new theories are not mutually inter-translatable.6 Etymologically,
“incommensurability” derives from ancient Greek mathematics, where the
term had a precise meaning, denoting two lengths which have no common
measure.7 Kuhn borrows this idea and applies it, figuratively, to the compar-
ison of scientific paradigms. The notion of incommensurability may be used
to describe different phenomena, with various controversial implications.
6 As Kitcher (1978) remarks, it is interesting that Kuhn and Feyerabend, vigorous champions of
the relevance of history for the philosophy of science, have also advanced theses which imply that the
task of the historian of science cannot be successfully completed.
7 Two lengths p and q have a common measure if some whole number x of copies of one can be laid
against exactly y copies of the other, thereby measuring p in terms of q, or vice versa. Not all lengths
are commensurable, as the Pythagoreans shockingly discovered when they realized that the diagonal
of a square is “incommensurable” with the length of its sides.
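The footnote’s notion of a common measure can be put in modern notation. The following is a brief sketch; the symbols p, q, x, and y follow the footnote, while the common unit u and the irrationality argument are standard textbook additions, not part of the original text.

```latex
% Commensurability in modern terms: two lengths share a common measure
% exactly when their ratio is rational.
Two lengths $p$ and $q$ are \emph{commensurable} iff there is a unit $u$
and positive integers $x, y$ such that
\[
  p = x\,u \quad\text{and}\quad q = y\,u,
  \qquad\text{hence}\qquad \frac{p}{q} = \frac{x}{y} \in \mathbb{Q}.
\]
% The Pythagorean discovery: the side and diagonal of a square are not
% so related.
For a square with side $s$, the diagonal satisfies $d^2 = s^2 + s^2$, so
$d = \sqrt{2}\,s$. If $d$ and $s$ were commensurable, we could write
$\sqrt{2} = x/y$ in lowest terms, giving $x^2 = 2y^2$; then $x$ is even,
say $x = 2k$, so $y^2 = 2k^2$ and $y$ is even as well, contradicting
lowest terms. Hence $d$ and $s$ share no common measure.
```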
222 Black Boxes
hard to come by. Furthermore, and even more problematically, the definitions
of many theoretical terms appeal to other, even more technical notions.
For instance, learning the standard definition of “electron” as a negatively
charged subatomic particle presupposes that the subject already grasps
the concept of charge, which is arguably more complex and theory-laden
than “electron” itself. These and similar difficulties suggest a different,
more plausible alternative. We explain theoretical terms by specifying a
theory. The meaning of words like “electron” and “black hole” is provided
by their position within the structure of the entire corresponding theory.
From this holistic standpoint—pioneered by Duhem, endorsed by many
logical positivists, and developed to its fullest extent by Quine8—it follows
that, say, “mass” does not mean the same thing in classical and relativistic
physics, and “planet” is a different concept for Ptolemy and Copernicus.
The reason should now be evident. If the meaning of a theoretical term
is determined by an entire corresponding theory, changing the theory
thereby changes the meaning of all its relevant concepts.
At first blush, these conclusions will strike some readers as inconse-
quential and, perhaps, quite plausible. Did Einstein’s groundbreaking
intuitions not change the very meaning of “mass” by relativizing it to a
frame of reference? Meanings are constantly in flux. All this is fine. Yet,
problems arise as soon as we try to compare theories. If terms like “elec-
tron” and “mass” get their meaning from their place within a network
of statements and laws, then, as noted, when the theory is modified,
the meaning of these terms changes as well. But if the meaning of core
concepts varies from theory to theory, how can we compare the theories
themselves? If, in principle, theories never talk about the same thing, then
we have no common measure to assess them. Hence arise the notorious
problems of incommensurability and theory change, which turn Nagel’s
plausible doctrine of subsumption, as well as the very possibility of “cru-
cial” tie-breaking scientific experiments, into a logical impossibility. In
what follows, it is this third and strongest meaning-incommensurability
that will be the focus of our attention.
The thesis of meaning-incommensurability was met with cries of out-
rage. Some dubbed it untenable because it rests on the fundamentally
incoherent idea of incompatible conceptual schemes. Others rejected it
on the grounds that there is clearly enough similarity of concepts across
8 See Duhem (1954); Carnap (1956a); Hempel (1965, 1966); and Quine (1953).
9 The first critique was famously explored by Davidson (1974). The second route was developed
primarily by Shapere (1966); Achinstein (1968); and Kordig (1971).
10 Hacking (1983) suggests that Kuhn did not originally intend to address the issue of rationality at
all. Things are different, however, in the case of Feyerabend, a philosopher whose radical ideas often
overlap substantially with Kuhn’s. Feyerabend is a longtime foe of dogmatic rationality. For him, there
is no canon of rationality, no privileged class of “good reasons,” and no absolute paradigm of science.
There are many rationalities, as opposed to a single one, and the choice among them can never be
fully objective.
The Fuel of Scientific Progress 225
11 Frege’s classic distinction between Sinn and Bedeutung comes from his 1892 article. Putnam’s
well-known proposal is detailed in “The Meaning of ‘Meaning’ ” (1975b).
kinds—does not vary. Thus, the fundamental identity for expressions is de-
termined neither by senses nor by stereotypes. It comes from reference.
With this in mind, it becomes clear how it is possible for concepts
embedded in radically different theories to nevertheless talk about the same
things. This happens when a term maintains the same stable reference across
Kuhnian paradigm shifts. Thus, Democritus, Newton, Laplace, Thomson,
Lorentz, Bohr, and Millikan, all of whom have very different theoretical
presuppositions concerning physical particles, can be talking about the same
entities when they use the term “electron.” In short, this referential approach
provides the resources to address the meaning of theoretical terms without
being lured into problems—or pseudo-problems—of incommensurability
and relativism.
Things, unfortunately, are not that simple. Putnam’s sketch works rea-
sonably well for “success stories,” where authors with very different
presuppositions may nonetheless use a term to refer to the same natural
kind: electrons, viruses, gravity, and the like. But how do we apply this model
to concepts, such as acid, which have competing definitions and are very
likely not to be homogeneous natural kinds? Even more problematically,
proponents of theories positing nonexistent entities, such as aether and ca-
loric, seemingly communicate their ideas just as well as researchers with dif-
ferent views about “real” theoretical entities, like viruses and electrons. How
do we account for this observation? How do we explain the substantial con-
tribution of these theories to contemporary science, despite the lack of stable
referential relations across the board? Some dismiss these examples on the
grounds that “the notion of meaning is ill-adapted to philosophy of science.
We should worry about kinds of acids, not kinds of meaning” (Hacking 1983,
p. 85). Others have taken up the challenge by revising Putnam’s insights.
An influential refinement has been offered by Kitcher.12 Situating himself
within a philosophical tradition which feels uncomfortable with intensional
entities like Fregean senses, Kitcher urges the benefits of a “referential” ap-
proach to the semantics of scientific terms. As noted, the development of an
extensionalist (referential) account of conceptual change was already well
under way at the time Kitcher was writing.13 However, Kitcher argues, referential
change, in and of itself, is neither necessary nor sufficient for conceptual
12 Kitcher develops his views over a series of publications. The following reconstruction brings to-
gether aspects from Kitcher (1978, 1982, and 1993).
13 The idea of cashing out incommensurability in terms of reference was not new; it had been
proposed over a decade earlier by Scheffler (1967), and refined by Putnam, among others.
14 A trivial kind of conceptual relativism sans reference change occurs when languages contain
non-overlapping expressions. More interestingly, reference-change may occur without incommen-
surability, if shifts in reference can be specified in the new language.
15 Major contributions are Kripke (1972); Donnellan (1970, 1974); and Putnam (1973).
16 More precisely, following Kripke and Donnellan, Kitcher assumes that this process of fixing ref-
erence takes the form of a “historical explanation.” The referent of a token is that entity which figures
in the appropriate way in a correct explanation of the production of the token in question. Any such
explanation will consist in a direct or indirect description of a sequence of events that begins with a
primal act of baptism and whose terminal member is the production of the token.
17 This “principle of humanity,” which Kitcher borrows from Richard Grandy (1973), is also known
in philosophy as the “principle of charity.”
set of initiating events? The reason is that scientists who engage in different
projects frequently find it useful to initiate their tokens by different events.
For instance, these tokens may correspond to the different ways in which
a chemical substance or reaction can be produced. When this occurs, as it
frequently does, we may reasonably conclude that the term in question is
“theory laden.”
What exactly is Kitcher alluding to? Setting technicalities aside, a simple
example should help drive the point home. Consider the concept of gene, in
the context of Mendelian genetics. When classical geneticists talked about
“genes,” what exactly were they referring to? The obvious answer is that genes
corresponded to units of function. The effects of the genotype of an organism
on its phenotype, from this standpoint, could be traced back to the effect
of a gene or set of genes. This is true. However, it can hardly be the entire
story. Within the theory of classical Mendelian genetics, genes also played at
least two other theoretical roles.18 First, genes were taken to be units of recombination,
in the sense that changes in linkage relationships either separated
genes that were previously linked or linked genes which previously segre-
gated independently. Finally, genes were characterized as the units of muta-
tion: changes in genes give rise to new alleles, variants of that same gene. In
short, as we shall discuss in greater detail in the following section, scientists
within and across eras and fields can use the same term, “gene,” to pick out
very different entities. This can be captured effectively by claiming that “gene”
is a theory-laden term with a multifaceted reference potential. What makes
classical geneticists a “linguistic community” is their shared disposition to
use the term “gene” to refer to the same entities or events.
The thesis that theory-laden scientific expressions have a heterogeneous
reference potential—that is, these terms may refer to different entities in dif-
ferent contexts—is the conceptual core of Kitcher’s extensionalist solution
to both incommensurability and scientific progress. I take up both issues,
in turn.
Begin by focusing on the former issue: incommensurability. The heter-
ogeneous reference potential of an expression depends, at least in part, on
the theoretical presuppositions which pertain to the particular paradigm
to which the expression belongs. Hence, it will often occur that the refer-
ence potential of a term belonging to the language of a theory cannot be
matched by any expression in its post-revolutionary successors, when these
18 For an insightful discussion of these theoretical roles, see Griffiths and Stotz (2013).
The previous section briefly covered some influential approaches to the time-
worn issue of scientific progress. Referential models have two substantial
strengths. First, they allow talk about theoretical change without positing
senses or other intensional entities that many authors find ontologically suspicious.
Second, they pave the way for heeding Kuhn’s and Feyerabend’s call to
take seriously the history of science, while eschewing the more radical and
problematic implication of their view: meaning incommensurability.
The strategy outlined in section 9.3 is not devoid of controversial
implications. Kuhn himself was sympathetic to Kitcher’s proposal that the
language of modern chemistry can successfully identify the referents of the
key expressions of phlogiston theory. Yet, Kuhn did not fully accept Kitcher’s
characterization of reference-determination as a bona fide “translation” and
the related suggestion to bring talk about incommensurability to a close.
We may paraphrase Kuhn’s qualms along the following lines. Referential
agreement may well be a necessary condition for comparing theories and
ideas across paradigms. But is it sufficient? Kuhn answers in the negative.
Communication requires more than a shared interpretation based on exten-
sional semantics. It presupposes a true “translation,” that is, agreement on
what is said about the referents. Thus, formulations and resolutions of in-
commensurability must go beyond mismatches in reference potential.20
In fairness, Kitcher does not flatly identify translation with referential
agreement. As we shall shortly see, he acknowledges the difficulty and offers
a possible way out. Still, Kuhn’s general point deserves to be emphasized. His
remarks can be turned into an adequacy condition, posing a dilemma-style
argument for any general account of conceptual change. Purely referential
models dodge incommensurability by showing how speakers with radically
different theories can nevertheless talk about the same stuff. But reducing
translation to referential agreement, Kuhn notes, is problematic. Successful
communication requires convergence on what is said about the referents.
This requires positing something akin to Fregean senses. But, first, senses are
ontologically spooky. Second, and more important, senses are what trigger
the problem of incommensurability in the first place, when coupled with the
modest holistic assumption that the meaning of theoretical terms is deter-
mined by their position within the structure of the entire theory.
In short, here is the dilemma. On the one hand, reference alone is insuf-
ficient to characterize inter-paradigm communication and, hence, the ad-
vancement of science. On the other hand, richer, intensional accounts solve
the problem of communication but lead us back to forms of incommensura-
bility. Can we find a conception of progress that is substantial enough to do
the trick, but sufficiently slender to avoid unpalatable consequences?
Section 9.6 will take a look at Kuhn’s own stab at the problem—an admittedly
metaphorical characterization of the intensional
component of translation in terms of culture and the structure of language.
Meanwhile, let us focus on Kitcher’s proposal, which, while not entirely explicit
on this score, offers a promising starting point for a solution.
At various points, Kitcher suggests that his notion of reference potential
captures something of Frege’s non-referential dimension of meaning and
the quasi-holistic dictum that concept formation and theory formation go
hand in hand.21 Allow me to elaborate. The notion of sense posited by Frege
is supposed to play (at least) two fundamental roles. First, the sense is what
20 My reading of Kuhn’s (1982) response to Kitcher is inspired by Carey (1991, 2009). “The problem
with Kitcher’s argument is that it identifies communication with agreement on the referents of terms.
But communication requires more than agreement on referents; it requires agreement on what is said
about the referents. The problem of incommensurability goes beyond mismatch of referential poten-
tial” (Carey 1991, p. 462).
21 Contrary to common wisdom, modest holism—typically associated with Quine and Kuhn—
was accepted and endorsed by mature logical empiricists such as Hempel (1966).
combustion. The core assumption was that all flammable substances are rich
in a “principle”—a substance, namely, phlogiston—which is imparted to the
air during combustion.
The challenge confronting historians and philosophers of science is to explain
how it is that many of Priestley’s insights constitute accurate and significant
scientific discoveries, given the non-referential nature of the theory’s core
concepts: phlogiston and derivative notions. This calls for elucidation.
How do we explain the transformations involving, say, the burning of
fuel and the heating of metal? Priestley and his colleagues had an interesting
story to tell. When we burn a log en plein air, the phlogiston contained in the
cellulose is released in the atmosphere, leaving ash behind. Similarly, when
an iron bar is heated, the combination of heat, metal, and air causes iron to
release phlogiston into the surroundings, leaving the “calx” of the metal as a
residue. Even more impressively, the theory also provided an account for the
reverse of these reactions. Heating the red calx of mercury in an air-rich con-
tainer results in a combination of mercury and a different kind of air, which
Priestley called “dephlogisticated air.” How does this work? Phlogiston theory
provided an ingenious explanation. Heating the red calx of mercury causes
the calx to take up the phlogiston contained in the atmosphere, turning
“normal” air into “dephlogisticated air.” Priestley’s explanations were backed
up by successful predictions and retrodictions. For instance, because phlo-
giston is released during the process of combustion, the residue (ash, calx, or
the like) should weigh less than the original substance: wood, metal, etc. For
similar reasons, the surrounding air should be altered by the reaction. Both
predictions are actually borne out and were experimentally verified.
Despite these partial—albeit remarkable—successes, phlogiston
theory did not withstand scrutiny. Eventually, the new quantitative chemistry
of Lavoisier provided better explanations of these and related phenomena.
We now have conclusive evidence against the existence of any principle or
substance being emitted in all reactions of combustion. Oversimplifying a
bit, from the modern perspective, heating an iron bar produces metal oxide
and releases into the atmosphere air that is poor in oxygen. Similarly, heating
mercury oxide, what Priestley calls the “red calx of mercury,” produces mer-
cury and releases pure oxygen into the surrounding air. In short, phlogiston
theory was eventually superseded and replaced by atomic chemistry. But the
question remains: how can we explain the partial successes of phlogiston
theory, given the non-referential nature of its core concepts?
reference potential sheds light on black boxes. The following section applies
the conceptual resources inherited from our discussion of conceptual change
to re-examine the role of black boxes in the success stories of Darwin and
Mendel, the mistakes of behaviorism, and tales like neoclassical economics,
where the final verdict is yet to be uttered.
Let us begin with evolutionary biology. The relation between Darwin’s own
approach and contemporary evolutionary theory is not ordinarily viewed as
one of “incommensurability.” Indeed, whether Kuhn’s account of scientific
revolutions applies to biology at all can be, and has been, questioned.23 Still,
as we shall presently see, the unfolding of the theory of evolution since its in-
ception raises puzzles similar to the ones discussed in previous sections.
Consider Darwin’s own views of inheritance, as presented in Chapter 27
of The Variation of Animals and Plants under Domestication (cf. §3.2). According
to his “provisional hypothesis of pangenesis,” both the transmission of
heritable qualities and the unfolding of ontogeny are caused by invisible
particles. These microscopic entities, called “gemmules,” are thrown off by
cells and, when supplied with sufficient nutrients, can multiply by self-division.
Just a few years after the publication of this conjecture, Weismann
23 For a clear and insightful discussion of this issue, see Godfrey-Smith (2003).
conclusively showed that gemmules do not exist. In this respect, they re-
semble phlogiston. Just as phlogiston is the heart of Priestley’s theory of com-
bustion, gemmules play a central role in Darwin’s views on inheritance. As we
saw, Priestley’s theory raises a puzzle: how can a non-referential concept—
namely, phlogiston—contribute to scientific discoveries? The same question
can be asked with respect to Darwin’s pangenesis, with the additional com-
plication that, contrary to phlogiston theory, Darwinian evolution has not
been superseded. What is going on here?
One option is to flatly ignore Darwin’s claims in Variation and focus on
the earlier, more influential framework in Origin, where the English sci-
entist remained agnostic on the nature of the mechanisms of inheritance.
Rather than solving the conundrum, this whiggish historical reconstruction
merely sweeps the dust under the rug. After all, Darwin did make this mis-
taken claim about the nature of inheritance. And his errors did not affect
the explanatory success of his theory. The central question is: why did these
mistakes leave the overall success of the theory unscathed?
A much more promising strategy borrows Kitcher’s context-sensitive no-
tion of reference potential. From this standpoint, we can recognize how the
reference of various tokens of “gemmules” varies with the circumstances.
Thus, for example, when Darwin speculates that “the gemmules in their dor-
mant state have a mutual affinity for each other, leading to their aggregation
either into buds or into the sexual elements” (Variation, p. 374), it seems rea-
sonable to interpret him as asserting a mistaken claim about nonexistent
entities, namely, gemmules. Now, contrast this with Darwin’s assertion that
gemmules “are supposed to be transmitted from the parents to the offspring,
and are generally developed in the generation which immediately succeeds,
but are often transmitted in a dormant state during many generations and are
then developed” (Variation, p. 374). By applying the principle of humanity,
we may surmise that, here, “gemmule” refers to (what are now known as)
genes and other cytological gears, making Darwin’s statement true, or ap-
proximately so. Following Kitcher, we conclude that this referential hetero-
geneity makes the concept of gemmule “theory laden.”
The black- boxing strategy extends and develops this point, fur-
ther explaining why and how reference potential works the way it does.
Specifically, when does “gemmule” refer to genes? When does it fail to refer?
The answer to this question should now be evident. Darwin’s notion of gem-
mule is part of a much broader theoretical framework. His explanandum is
not a generic one: how are traits inherited? As we saw in section 6.2 of Chapter 6,
Genes provide another interesting case study for conceptual change. In the
course of the twentieth century, the reference potential of the term “gene”
has been significantly altered in response to experimental and theoretical
innovations. This raises interesting puzzles. On the one hand, geneticists since
Mendel have always been talking about the same things, namely, chromo-
somal segments. On the other hand, it is clear that, over decades of research,
tokens of “gene” refer to distinct entities. On various occasions, different
chromosomal segments may be picked out. The question becomes: what are
the principles that determine the specific referent of each token?
In his detailed analysis, Kitcher (1982) distinguishes two biological
characterizations of “genes.” The first strategy, prominent in Mendelian genetics,
identifies genes by focusing on their function in producing phenotypic effects.
In its early usage among Morgan’s Drosophila group, “gene” or “factor” referred
to a set of chromosomal segments, each of which plays a distal causal role in the
determination of phenotypic traits. Because of ambiguity in the specification of
these functional roles, the concept gene rapidly acquired a heterogeneous ref-
erence potential, where different tokens could pick out different segments, in
a hypothesis-relative fashion. This classical gene concept was molded into its
definitive form in the 1950s, when Benzer introduced “cistrons” into the pic-
ture. The second approach, common in molecular biology, identifies genes by
focusing on their proximate action. On this view—first articulated in the 1930s
in the context of Beadle’s “one gene–one enzyme hypothesis”—chromosomal
segments are pinpointed according to their functional roles at the earlier stages
of the process, as opposed to using relatively indirect mutation and recombina-
tion tests.
These well-known considerations, Kitcher argues, show the existence of
many concepts of gene, determined by alternative decisions at the pheno-
typic level. These concepts are not in competition. Different ways of dividing
chromosomes—on a spectrum ranging from codons coding for single amino
acids to lengthy DNA sequences coding for multiple polypeptides—will be best
suited to serve one’s interests depending on the particular research project at
hand. From this standpoint, asking which criterion of segmentation corres-
ponds to the concept of gene is an ill-posed question. As anticipated in section
9.3, the term is theory-laden and its reference varies across contexts.
The black- boxing strategy preserves the spirit of Kitcher’s historico-
philosophical reconstruction. In addition, it further explains how and why the
notion of reference potential captures Frege’s sense qua mode of presentation
without lapsing into the quagmire of radical incommensurability.
Alternative gene concepts underlie different theoretical and experimental
purposes. Classical genetics and molecular biology are not addressing the
same questions.24 This becomes crystal-clear at the first framing stage of the
black-boxing recipe. Classical geneticists were perfectly aware that genes are
not sufficient causal bases for complex phenotypic traits such as eye color or
wing shape. From a Mendelian perspective, genes are supposed to account
for variation among members of a population. Molecular genetics, in con-
trast, presupposes an altogether different framework, which aims at unrav-
eling the mechanisms underlying ontogeny. With this in mind, it is hardly
surprising that genes do not play the same difference-making role in these
causal explanations. Classical genetic models represent genes as difference-
makers for phenotypic traits. Molecular biological models, in contrast, rep-
resent genes as difference-makers for the production of polypeptides. Hence,
these two accounts should not be viewed as competing. Classical and molec-
ular genetics should be assessed independently, on their own ground.
At this point, as some attentive readers may have noted, we face a problem
of a different sort. If classical and molecular biology presuppose different
concepts of gene, then are they really talking about the same kind of stuff?
Following Putnam’s insight, developed in section 9.3, it may be tempting to
note that both theories refer to the same sets of entities, namely, genes. But
is this a plausible response? As Kitcher rightly notes, asking which concept
of segmentation corresponds to “the” concept of gene is an ill-posed ques-
tion. But, then, what does it mean to say that both theories are talking about
genes? What kind of genes? The issue, simply put, is that we are now led back
into the swamp of meaning incommensurability. If the meaning of theoret-
ical terms is theory-dependent, the meaning will change with the theory.
But this seems simply preposterous. As many authors have noted, molecular
biology extends and deepens the explanations of classical genetics. But this
presupposes that there is a common measure to assess them. What is it? Once
again, black boxes to the rescue.
Both classical genetics and molecular biology treat genes as black boxes.
Importantly, they are not the same black box. Morgan and colleagues frame
and represent genes as the units of mutation, recombination, and pheno-
typic function. Within the framework pioneered by Beadle and later refined
by Watson, Crick, and subsequent molecular biologists, genes are framed
and represented as functional units responsible for the production of proteins. At
the same time, at broader levels of description, viewed as the mechanisms
24 This point, presented cogently albeit in a different context, by Waters (1990, 2007), will be devel-
oped, in greater detail, in Chapter 10.
underlying inheritance and variation, all these units can be subsumed under
a common description. It is these broader frames and difference-makers
that constitute the trait d’union between theories that make very different
assumptions. What is the relation between these levels of description? This is
a question that will be addressed in Chapter 10. And, once again, the notion
of incommensurability will play a central role. Before getting there, however,
we still have a few more examples to cover.
Black boxes capture conceptual progress in Darwin and Mendel. Can analogous
considerations shed light on the strengths and weaknesses of sophisticated
radical behaviorism? Skinner is frequently faulted for his decision
to black-box mental states. As we saw in section 6.4 of Chapter 6, this criticism
is mistaken. Skinner did deliberately introduce abstractions and idealizations
in his psychological models. His admittedly oversimplified characteriza-
tion of stimuli and behavior is a good illustration. But this, in and of itself, is
no more problematic than, say, Darwin’s often casual appeals to traits, fitness,
and the mechanisms of inheritance. Furthermore, unlike Darwin’s, Skinner’s
explanations do not involve any systematic introduction of nonexistent enti-
ties or non-referring terms. This basic parallel raises a third problematic ob-
servation. The principle of humanity permits a charitable reinterpretation of
some tokens of “phlogiston” and “gemmules” to refer to oxygen and genes,
respectively. Could one not rephrase Skinner’s behavioral states so as to in-
clude psychological and neural processes, as we understand them today? The
problem is that now it seems hard to celebrate Darwinism as a success story
while discarding behaviorism as an outdated theory of mind. But this is bla-
tantly absurd! Where did our analysis go wrong?
The appropriate reaction is not to throw in the towel and settle for an “an-
ything goes” relativism. Skinner was, indeed, guilty of mistakes that did not
undermine Darwin’s and Mendel’s approaches. But, in order to pinpoint
these shortcomings, we need to move beyond referential models of meaning,
even when reference is—rightly—understood in a context-sensitive fashion.
The lack of progress that eventually turned radical behaviorism into a re-
gressive research program involves not what the theory talks about. All its
key terms are perfectly referential. The problem pertains to the structure of
Skinner’s causal explanations of human conduct. Allow me to elaborate.
244 Black Boxes
25 These presuppositions, often left implicit, are discussed explicitly in Glimcher (2011).
the precise nature of their realizers. Darwin, Mendel, and Crick have very
different conceptions of the gene in mind—indeed, Darwin and Mendel did not
talk about “genes” at all. The same can be said about Priestley and Lavoisier.
These authors all share a coarse description of a set of explananda and the
underlying mechanisms. In short, Kuhn’s preservation of taxonomy can be
fruitfully understood as the unfolding of black boxes across theories. These
boxes capture the structure that remains constant across paradigm shifts, for
instance in the shift from the classical physics of Newton to Einstein’s relativ-
istic framework.
Time to tie up some loose ends. An account of progress is a central compo-
nent of any serious analysis of science. For a long time, philosophers assumed,
more or less implicitly, that the advancements of science could be depicted
as a gradual accumulation of truth and knowledge. Kuhn, Feyerabend, and
their colleagues are rightly credited with a decisive dismissal of this simplistic
view. However, their pars construens was not quite as persuasive as their pars
destruens. Referential models, the most popular solution to radical holism
and meaning incommensurability, are not structured enough to capture sci-
entific progress. Successful translation requires more than mere agreement
on reference. The black-boxing strategy attempts to blend the objectivity of
purely referential models with a mild—but inevitable—form of meaning ho-
lism to sketch a notion of progress that allows us to compare frameworks and
avoid the bogey of incommensurability.
In addition, a discussion of progress sheds further light on historical
examples. Darwin’s and Mendel’s work reveals how, as long as the struc-
ture of the black box is identified correctly, failure to pinpoint the under-
lying mechanisms does not affect the success and fruitfulness of a scientific
hypothesis. Skinner’s shortcomings warn of the danger of mis-framing and
mis-representing a causal explanation. Finally, the debate between classical
and psycho-neural economists shows that identity of reference is neither a
necessary nor a sufficient condition for progress. Utility is understood very
differently across these frameworks. But, at a coarse level of description, it
plays the same functional role as frame and difference-maker.
One final remark. Hopefully, the discussion in the previous three chapters
shows that my analysis of black boxes is not an attempt to rebuild philosophy
of science entirely from scratch. This prominent albeit neglected construct
captures a number of traditional themes—such as mechanisms, emergence,
and progress—and recasts them in a new light. At the same time, talk about
meaning, reference, and incommensurability does not necessarily drag us
Our long excursus into the nature and structure of black boxes began with
an old dusty image, which was mainstream well into the twentieth century
and is still popular in some circles. This is the figurative depiction of science
as a slow, painstaking accumulation of truths. The goal of scientific research,
from this hallowed perspective, is to provide an accurate and complete de-
scription of the universe, or some portion of it. In this sense, the develop-
ment of science is akin to the erection of a wall or the tiling of a mosaic. The
building blocks of science are basic facts about the universe we inhabit.
Over the years, scholars—scientists, philosophers, historians, and many
others—have vocally denounced the misleading nature of this analogy. At
best, it is a drastic oversimplification. At worst, it utterly misses the mark.
Either way, it should not be taken too seriously. Popper’s suggestive image of
the scientist as a knight in shining armor battling the evil forces of darkness
has followed suit and faded away. Of course, truth, knowledge, and objec-
tivity remain important goals for the scientific enterprise. But they are only
one side of the story. The remainder involves what we do not know, what we
cannot grasp, what we get wrong. In a word, what is missing from the “brick-
by-brick” model of science is the productive role of ignorance.
* “You were not caught by my device //When you were snared like this tonight. //Who holds the
Devil hold him tight! //He can’t expect to catch him twice.” Translation by W. Kaufmann.
Black Boxes. Marco J. Nathan, Oxford University Press. © Oxford University Press 2021.
DOI: 10.1093/oso/9780190095482.003.0010
Sailing through the Strait 251
1 Thus, for instance, Mayr (2004), an outspoken antireductionist, calls this basic tenet “analysis”
and contrasts it explicitly with “reduction.”
2 I explore these issues, more systematically, in a series of articles: Nathan (2012, 2015b); Nathan
and Del Pinal (2016, 2017).
to sugar and tea, stevia and soda, and many more substances. Details about
H2O and NaCl increase the depth of the explanation. But adding precision
will ipso facto impact its breadth, its generality. Given that salt and sugar have
different chemical structures, making the model too similar to the former
substance will make it inapplicable to the latter. In short, there is a trade-off
between accuracy and generality, and the golden mean is not invariably, or
even typically, found at the lowest, most fundamental levels. This, simply put,
fuels the methodological autonomy of the special sciences, the heart and soul
of contemporary antireductionism.
Viewed in this light, the choice between reductionism and antireduc-
tionism may well turn out to be an ill-posed one. Sophisticated versions of
both these influential stances stress different, equally legitimate aspects.
Philosophy tends to view them as incompatible, but this is a mistake. As we
saw, when we focus on concrete examples, initially apparent differences fade
away. Try asking whether current science better conforms to reductionist or
antireductionist standards, and the matter tends to lose substance.
The stalemate becomes especially evident in the biological sciences, where
much is known about the structural implementation of function. Can all bi-
ology be reduced to molecular biochemistry? The trouble with this question
is that it presupposes a rigorous definition of “molecular property” that
we currently lack and are unlikely ever to find. The crux of the
disagreement is whether our best biological models are molecular, and this
ultimately depends on whether we classify explanations as “molecular.” This
is a semantic, terminological issue, not a substantive one.
Philosophers of mind, psychology, neuroscience, and economics should ac-
cept the dire lesson from their colleagues working on biology. Empirical dis-
coveries are unlikely to solve long-standing philosophical disputes over the
mind-body problem or the relation between economics and psychology. The
reason is not the—indisputable—complexity of these fields. The issue is the na-
ture of reduction which, contrary to common wisdom, is murky. All of these
observations point to the same conclusion: it is time to move on.
Section 10.2 recapped the argument, originally spelled out in section 2.6 of
Chapter 2, that the current standoff between reductionism and antireduc-
tionism is not as substantive as it is typically taken to be. Our present goal is
3 Recall from Chapter 1 that this characterization of levels as “coarse-grained” vs. “fine-grained,”
“micro” vs. “macro,” or “higher” vs. “lower,” should be understood as relativized to a specific choice of
explanandum.
4 For instance, Glennan (2017) notes: “Because New Mechanists emphasize the ways in which
the organized activities and interaction of parts explain the behavior of wholes, it might seem that
New Mechanist ontology is committed to a pluralist (and reductionist) view” (p. 56, italics added).
Glennan is adamant in stressing that his approach avoids radical “nothing-but” forms of reduc-
tionism. Still, it has a clear reductionist flavor. Similarly, Godfrey-Smith (2014) notes, “This kind of
[mechanistic] work is “reductionist,” in a low-key sense of that term: the properties of whole systems
are explained in terms of the properties of their parts, and how these parts are put together” (p. 16).
5 In the words of another prominent advocate of new mechanism, “I argue that the mosaic model
of the unity of neuroscience, based on the search for mechanistic explanations, is better suited than
reduction to the descriptive, explanatory, and epistemic projects for which these classic models were
designed” (Craver 2007, p. 233, italics added).
and their micro-counterparts have the same explanandum, the same object
of explanation. What most discussants fail to note is that, by transforming
higher-level questions into lower-level ones, we are not providing different,
competing answers to the same query. Inquiries framed at different levels are,
for all intents and purposes, different questions. Failure to acknowledge this
simple—albeit powerful—point leads to much confusion, including viewing
reduction and autonomy as antithetical, when they are not. This is the ful-
crum of my argument. I now need to put some meat on these bare bones.
What is the relation between hypotheses at different levels? Borrowing a
Kuhnian metaphor, one could say that explanations with substantially dif-
ferent scope are typically “incommensurable.” As we saw in Chapter 9,
incommensurability was one of the main tenets of Kuhn’s philosophy of sci-
ence. His claim that scientific paradigms are “incommensurable” is often
understood as a radical form of conceptual relativism. This, simply put, is
the claim that a paradigm shift alters the language of a theory so drastically
that superficially similar claims turn out not to be mutually translatable
across paradigms. This, as we discussed at length in the previous chapter,
is too strong. Paradigms are seldom, if ever, incommensurable in this ex-
treme sense.
Our excursus into the nature of black-boxing draws attention to a subtler,
more compelling, and less devastating form of incommensurability. This is
the claim that hypotheses and explanations framed at different levels cannot
be compared directly. They are always mediated by models, theories, and
other vehicles of representation. This context-relativity boils down to a sort
of testing holism, albeit one whose consequences are fairly mild. Basically,
it is always possible to adjudicate between competing hypotheses directly,
when these alternatives are embedded within the same model. Nevertheless,
whenever one is trying to contrast explanations pertaining to different
models, what is compared are not the individual hypotheses themselves, but
a broader theory or paradigm. I should stress that my point is not the rad-
ical claim that hypotheses can never be compared across models. That would
be tantamount to advocating a radical form of meaning incommensurability
and giving up on the progress of science. My suggestion is that when one
compares, say, a Newtonian explanation with its relativistic counterpart, one
is not assessing hypotheses directly. Rather, in a more Duhemian fashion,
one judges the explanatory forces of two entire paradigms. As we shall see in
the following section, a similar relation holds when we contrast genetic vs.
molecular explanations, or psychological vs. neuroscientific ones.
6 For an excellent example of this kind of “external” inquiry, see Benacerraf (1965).
7 Coming from a similar perspective, Bickle (2003, Ch. 1) resurrects a Carnap-inspired internal-
external distinction as part of meta-science. For a discussion of the revived importance of Carnap’s
distinction in contemporary ontology, see Chalmers et al. (2009).
Section 10.3 argued that the black-boxing strategy reconciles autonomy and
reduction, the signature traits of antireductionism and reductionism, re-
spectively. Let us see how this works in practice by revisiting, one final time,
the case studies that accompanied us throughout our journey together.
To get started, take one last look at Darwin’s explanations. Recall that his ex-
plicit target is the distribution of organisms and traits around the world. His
evolutionary explanans, simply put, is descent with modification, fueled by
natural selection. Is this a story of reductionism or antireductionism?
Here is a reductionist rendition of the tale. Darwin is surely correct that
descent with modification is the central factor in explaining distributions
of organisms and traits across the globe. Evolution by natural selection is,
indeed, the principal frame and difference maker. But how should we un-
derstand evolution by natural selection? Darwin himself breaks down the
process into four key ingredients: variation, competition, fitness, and herit-
ability. This was an important insight. But it was only the beginning. In the
wake of Darwin’s groundbreaking work, progress was gradually achieved by
decomposing these broad concepts into more fundamental components.
This is precisely what one would expect from a reductionist perspective.
To further elaborate, consider Sober’s example of frequency changes in a
population of Drosophila. Initially, variation at the population level may be
explained by positing that type A flies are “fitter” than type B ones. As the
system is studied further, details emerge. In the imaginary case at hand, it
turns out that the fitter type A is characterized by a chromosome inversion
that produces a thicker thorax, better insulating the organism, which makes
it more resistant to cold weather. As Sober notes, at this point, appeals to “fit-
ness” become disposable. Fitness attributions can be replaced by descriptions
of the mechanisms producing the relevant frequencies. And, indeed, this
more precise depiction provides a deeper account of changes at the popula-
tion level. In short, “fitness” is explanatory. But describing the mechanisms
responsible for differential survival is more explanatory. And further decom-
posing these genetic mechanisms into more fundamental constituents will
make the model even more powerful. This downward trajectory, the reduc-
tionist says, is captured by the reductive outlook.
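Sober's point can be put in computational terms. The sketch below is purely illustrative: the fitness values, the `fitness_from_mechanism` helper, and the insulation numbers are invented for this example, not taken from Sober or from the text. It shows how a bare fitness coefficient drives frequency change at the population level, and how that same coefficient can later be derived from a toy description of the underlying mechanism, rendering the "fitness" placeholder disposable.

```python
def next_freq(p, w_a, w_b):
    # One generation of selection: the frequency p of type A is weighted
    # by its fitness w_a, type B by w_b, then renormalized.
    mean_w = p * w_a + (1 - p) * w_b
    return p * w_a / mean_w

# Black-boxed version: "type A is fitter than type B" enters the model
# as a bare coefficient, with no account of why.
p = 0.5
for _ in range(10):
    p = next_freq(p, w_a=1.1, w_b=1.0)
print(round(p, 3))  # ~0.722: type A spreads through the population

# Unboxed version: the same kind of coefficient derived from a toy
# mechanism -- a chromosome inversion yields a thicker, better-insulating
# thorax, which lowers cold-weather mortality. Hypothetical numbers.
def fitness_from_mechanism(insulation, cold_mortality=0.3):
    return 1.0 - cold_mortality * (1.0 - insulation)

w_a = fitness_from_mechanism(insulation=0.9)  # inversion: thick thorax
w_b = fitness_from_mechanism(insulation=0.6)  # wild type
```

Either way the population-level dynamics are the same; what changes is whether the coefficient is posited or explained.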
Not so fast, the antireductionist retorts. Sure, the chromosomal descrip-
tion is more detailed than the preliminary ascription of fitness. Still, is this, in
and of itself, a vindication of reductionism? Accuracy is an important aspect
of explanation. But it is not the only factor at play. Shifts between levels in-
volve a trade-off in explanatory power. Micro-descriptions are more precise;
macro-depictions are more general. To wit, what do the fittest Bacteriophage
λ, the fittest Drosophila melanogaster, and the fittest Homo sapiens have in
common, which sets them apart from other members of their species? Good
luck answering this question by pinpointing shared biochemistry. And
insisting that, from a biophysical perspective these organisms have nothing
in common just goes to show that moving down on our levels of descriptions
is a compromise. Some features are gained, while others are lost. In short,
lower-level depictions are not invariably more powerful than higher-level
ones. They merely focus on different aspects. Fitness is just an instantiation
of a broader phenomenon. Darwin’s explanations in the Origin, which are
still widely accepted by evolutionists, vindicate the antireductionist story.
At this point, reductionists will presumably point out all the contributions
of molecular biology, broadly construed, to the study of evolution.
Sophisticated antireductionists should agree. But they also maintain that
what reductionists insist on dubbing “reduction” is really a case of multilevel
integration. Modest reductionists, in turn, will hold their ground, claiming
that this is all part of a molecular basis. And—here we are, back right where
we started! Both parties recognize Darwin’s remarkable contributions and
the progress that has been made since. The disagreement is whether this
should be recounted as a story of “reductionism” or “autonomy.”
How about trying something different? Consider the situation from the
perspective of black-boxing. First, note that variation, competition, fit-
ness, and heritability are placeholders in causal explanations framed in
Our second case study focuses on the development of genetics, from Mendel’s
pioneering insights to the contemporary landscape. Originally showcasing
the limitations of Nagel’s account, the relation between classical and molec-
ular genetics has grown into a poster child for antireductionism.
Mendel’s explicit targets are the observable inheritance patterns, some-
times referred to as “Mendelian ratios.” Each organism, Mendel hypothe-
sized, inherits two “factors”—subsequently called “genes”—one from each
parent. When these factors are different, one is always expressed preferen-
tially over the other. These factors are passed on, unchanged, to the next gen-
eration. Why would this well-known story support antireductionism?
To restrict the scope of the discussion, consider Mendel’s second law, the
“law of independent assortment,” which, simply put, states that genes located
on non-homologous chromosomes assort independently. This generaliza-
tion, as Kitcher maintains, is best explained at the cytological level. To be
sure, the process of meiosis can be described, in much greater detail, in bi-
ochemical terms. But does this knowledge enhance the original cytological
depiction? Antireductionists answer in the negative.
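For concreteness, independent assortment can be illustrated with a short sketch. The allele names and the dominance rule below are schematic stand-ins (uppercase is treated as dominant), not drawn from the text: in a dihybrid cross of two double heterozygotes, counting all equally likely gamete pairings recovers the classic 9:3:3:1 phenotypic ratio.

```python
import itertools
from collections import Counter

def gametes(genotype):
    # genotype: one allele pair per locus, e.g. [("A", "a"), ("B", "b")].
    # Independent assortment: a gamete draws one allele per locus, with
    # all combinations equally likely (loci on non-homologous chromosomes).
    return list(itertools.product(*genotype))

def cross(parent1, parent2):
    # Count offspring phenotypes over all equally likely gamete pairings.
    counts = Counter()
    for g1 in gametes(parent1):
        for g2 in gametes(parent2):
            phenotype = tuple(
                min(a1, a2)  # uppercase sorts first, so the dominant
                for a1, a2 in zip(g1, g2)  # allele is expressed
            )
            counts[phenotype] += 1
    return counts

# Dihybrid cross of two double heterozygotes (AaBb x AaBb)
dihybrid = [("A", "a"), ("B", "b")]
result = cross(dihybrid, dihybrid)
print(result)  # 9:3:3:1 over 16 equally likely combinations
```

Nothing in the sketch mentions chromosomes or meiosis; the cytological story explains *why* the gamete combinations are equiprobable, which is precisely the level at which Kitcher locates the explanation.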
Many reductionists disagree, complaining that this perspective is simply
outdated. Molecular biology has greatly improved the study of gene repli-
cation, expression, mutation, and recombination. Insisting that explanatory
power is, in principle, immune from lower-level revision and enrichment
is simply wrongheaded. Mendel’s insight was on the right track. But much
remained to be done. What is the physical structure of a gene? How are these
traits inherited, transmitted, and expressed? We now have a much deeper
understanding of genes and other genetic processes. Mendel’s insights have
not just been vindicated. They have been expanded and explained.
The problem with this response, from an antireductionist perspective,
is that it sets up a straw man. Of course, biochemistry and other molecular
insights enhance our knowledge of genetics. This was already acknowledged
in Kitcher’s pioneering “1953” article, where molecular biology is presented
as an “explanatory extension” of classical genetics. The point, stressed by
antireductionists ever since, is that higher-level natural kinds do not count
as natural kinds at all from a lower-level standpoint. Recall how the classical
Mendelian concept of a gene was supposed to fulfill three roles. It was in-
tended as the unit of mutation, the unit of recombination, and the unit of
function. No single biochemical entity fulfills all three. Hence, reductionism
distorts the success of classical pre-molecular genetics. We need an antire-
ductionist outlook to do justice to these groundbreaking discoveries.
Now it is time for reductionists to complain. Multiple-realizability did
pose an inescapable trap for classical reduction, which required a series of
entire theories, frameworks, paradigms. And the choice between them is “ex-
ternal,” that is, pragmatic. Thus construed, Mendelian genetics is both auton-
omous and reducible to molecular biology.
not too much success can be boasted. To be sure, much progress has been
achieved in the unraveling of psychological and neural mechanisms under-
lying higher and, especially, lower cognition. But has this advanced the phil-
osophical debate over the reduction of mind? If so, the news has not yet been
broken, as there seems to be no more consensus on this matter than there was
in the 1950s.
It is worth recalling that, when confronted with this lack of resolution,
scientists and philosophers alike tend to respond by pointing to the com-
plexity of the human brain, with its astronomical number of connections
among billions of cells. I obviously do not question how hard it is to study
brains and minds. I am, however, skeptical that the complexity of the under-
lying structure is responsible for lack of tangible progress. The main culprit is
the notion of (anti)reductionism itself. This should be old news. What’s novel
is that we can now finally see how to overcome the false dichotomy.
Consider behaviorism from the perspective of black boxes. Watson and
Skinner’s attempt to place psychology on more secure methodological
grounds had a momentous impact on the entire field. First, it stressed the
tight connection between mental states and behavior. Second, and more
important, it contributed to making psychology more “scientific.” Their decision
to black-box mental states was indeed revolutionary. By setting psycho-neural
patterns and mechanisms aside, Skinner and colleagues were able to
draw attention to the importance of stimuli, operant conditioning, and other
forms of environmental effects on human conduct. At the same time, rad-
ical behaviorists believed that the content of this black box had no substan-
tial place in psychology. This was their crucial mistake. In the wake of the
cognitive revolution, it became increasingly clear that mental dispositions,
and other psychological states, play an irreplaceable role in the explanation
of human conduct. As we saw, Watson did embrace a simplistic and naïve
reductionism. Not so Skinner, who was guilty of neither reductionism nor
antireductionism, which are external, pragmatic stances. His error is best un-
derstood in terms of the black-boxing strategy. His explanandum is framed
incorrectly. Consequently, the difference makers are misidentified, and the
ensuing causal model of behavior is lacking. All of this is independent of re-
ductionism and autonomy. Skinner’s shortcomings lie in the details of his
model of behavior—the construction of his black box.
The heart and soul of neoclassical economics is the attempt to predict and ex-
plain economic behavior on the basis of choice-related data. The ingredients
for this simple, yet controversial, recipe include formal postulates like the
weak and general axioms of revealed preference (“Warp” and “Garp”), as
well as the ideal of rationality at the core of Expected Utility Theory.
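The weak axiom can be stated, in its abstract choice-function form, as a simple consistency check on choice data. The sketch below is a simplified illustration (the menus and items are invented, and the textbook WARP is usually stated for budget sets rather than finite menus): it flags pairs of observations that reveal contradictory preferences.

```python
def warp_violations(observations):
    # observations: list of (chosen, menu) pairs, where menu is the set
    # of available alternatives and chosen is the alternative picked.
    # WARP, informally: if x is ever chosen while y is available, then
    # y must never be chosen from a menu that also contains x.
    revealed = set()  # (x, y): x was chosen while y was available
    for chosen, menu in observations:
        for alt in menu:
            if alt != chosen:
                revealed.add((chosen, alt))
    return {(x, y) for (x, y) in revealed if (y, x) in revealed}

consistent = [
    ("apple", {"apple", "banana"}),
    ("apple", {"apple", "banana", "cherry"}),
]
inconsistent = consistent + [("banana", {"apple", "banana"})]

print(warp_violations(consistent))    # set(): no contradiction
print(warp_violations(inconsistent))  # apple/banana revealed both ways
```

The point of the formal postulate is exactly this: utility is black-boxed, and only the consistency of observable choices is tested.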
The past few decades have witnessed the rise of alternative approaches,
which I subsumed under the moniker “psycho-neural economics.” The basic
insight is that details concerning how human minds and brains actually
frame, compute, and resolve problems challenge fundamental assumptions
regarding the behavior of agents. For instance, behavioral economists stress
how people rely heavily on heuristics, biases, and reference points when
evaluating potential outcomes, as opposed to computing expected utilities.
And, clearly, from the standpoint of psycho-neural economics, heuristics,
biases, and reference points are not the means by which rational agents ap-
proximate the predictions of expected utility theory. Certainly Friedman, or
someone on his behalf, could make the argument that psychological data of
various kinds do not disconfirm or otherwise call into question his “as if ”
approach.
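The contrast between computing expected utilities and reference-dependent evaluation can be made concrete with a toy example. The loss-aversion coefficient and the piecewise value function below are simplified stand-ins, not the calibrated Kahneman-Tversky model:

```python
def expected_utility(lottery, utility):
    # lottery: list of (probability, outcome) pairs
    return sum(p * utility(x) for p, x in lottery)

def reference_dependent_value(lottery, reference=0.0, loss_aversion=2.25):
    # Toy prospect-theory-style evaluation (assumed parameters): outcomes
    # are coded as gains or losses relative to a reference point, and
    # losses are weighted more heavily than equal-sized gains.
    def value(x):
        gain = x - reference
        return gain if gain >= 0 else loss_aversion * gain
    return sum(p * value(x) for p, x in lottery)

# A fair coin flip over +100 / -100: zero expected value, yet a
# reference-dependent agent evaluates it negatively and declines.
flip = [(0.5, 100.0), (0.5, -100.0)]
print(expected_utility(flip, utility=lambda x: x))  # 0.0
print(reference_dependent_value(flip))              # -62.5
```

The two models agree on the choice data in many cases but disagree here, which is why reference points are evidence about the mechanism rather than a mere redescription of expected utility.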
An informed prediction of the trajectory of economics lies beyond
my interest and professional competence. The important point, for pre-
sent purposes, is that the debate between neoclassical and psycho-neural
economists can be reconstructed along the reductionism vs. antireduc-
tionism lines. But should it? Unsurprisingly, I offer a negative answer and an
alternative.
The reductionist story goes like this. Characterizing utility in terms of re-
vealed choice was an effective strategy early in the twentieth century, which
allowed economics to rest on more secure methodological grounds, making
it kosher from a scientific—read: positivist—standpoint. Specifically, it
allowed economists to transform utility ascriptions from spooky unquanti-
fiable mental states to empirically testable hypotheses. This effective strategy
faced some shortcomings. In particular, it was founded on a series of “as if ”
models, introducing various unrealistic psychological assumptions. This
was a necessary move decades ago, when little was known about how this
information is actually processed. The landscape, however, has changed
drastically. We now know a lot more about the psychological mechanisms
8 Some neuroeconomists, such as Glimcher (2011), are explicit and unapologetic about their re-
ductive endeavors. Other authors prefer to replace talk of “reduction” with the “integration” of psy-
chology, economics, and neuroscience (Craver and Alexandrova 2008). Setting aside differences
among these strategies, the central point is that it is important to reconcile the claims of economics
with the best theories and data from psychology and neuroscience, as these fields have the potential
to mutually inform each other.
9 Recall from section 4.4 of Chapter 4 that, following the deductive-nomological model, Friedman
treated prediction and explanation as two sides of the same coin.
Before moving on, let me briefly address our final example: the case of
phlogiston. This is another peculiar case study since, unlike the others, it
involves an entity that has been purged from our scientific ontology. This
raises a host of distinctive questions and challenges.
Recall how phlogiston theory purported to provide an explanation of the
process of combustion. What happens when a log is burned? What is the na-
ture of the reaction that turns wood into ashes? What accounts for changes
in mass, alterations to the surrounding air, and so forth? The keystone of
phlogiston theory is the postulation of an entity—phlogiston—that is sup-
posedly emitted in all cases of combustion. This concept, we now all know, is
not a natural kind. There is no substance that is emitted in all cases of com-
bustion. There is no such thing as phlogiston. Still, as we saw in Chapter 9,
the progress of the theory itself raises interesting conundrums. The question
that I want to address here is this: is the transition from phlogiston theory to
atomic chemistry a reductionist or an antireductionist story?
From a radical reductionist perspective, the natural conclusion to draw is
that phlogiston has been eliminated or reduced to concepts and entities pos-
tulated by modern atomic chemistry. The causal role played by phlogiston
in the old theory is now performed, for the most part, by the oxidation of
organic compounds. The problem, in brief, is that this seems to throw out the
baby with the bath water. Sure, we can all agree that there is no phlogiston
out there in the world. Still, phlogiston theory did make strikingly accurate
predictions. How does a nonexistent substance explain anything at all? How
do we account for the partial success of the theory?
Traditional antireductionism provides a diametrically opposite perspec-
tive, according to which phlogiston theory has been rephrased in terms of
modern chemistry. This solves the issue of success, which undermines the
eliminativist story. If what Priestley and colleagues called “phlogiston” actu-
ally refers to, say, oxygen and other chemical elements, then there is no mys-
tery as to why appeals to phlogiston are partially explanatory. But we now face
trouble explaining why the theory came to be discarded at all. Furthermore,
if not even phlogiston counts as an instance of elimination, does any theoret-
ical entity in science ever come to be thrown out?
In short, uncompromising forms of both reductionism and antireduc-
tionism miss the mark. Case histories like this one require a more nuanced
and subtle middle ground that recognizes how parts of a theory have been
eliminated or reduced, whereas other parts have been integrated and
rephrased in contemporary terms. The question is: which parts and why?
On Kitcher’s view, theory-laden terms, such as “phlogiston,” have a com-
plex reference potential. The meaning and denotation of these expressions
is context-dependent: they may refer to a plurality of entities, processes, and
activities, as well as to nothing at all, depending on the circumstances and the
intentions of both the speaker and the surrounding linguistic community.
Chapter 9 argued that Kitcher’s notion of reference potential and my black-
boxing strategy mutually reinforce each other. When does “phlogiston” refer
to oxygen? When does it pick out something else? When does it flatly fail
to refer? The key to answering these questions is to note that theory-laden
expressions like “phlogiston” are embedded in a rich and complex theoretical
thicket. As such, their framing is crucial. This process singles out the presence
or absence of phlogiston as the key factor in the process of combustion.
10 These considerations, ça va sans dire, hardly exhaust the deep and extensive issue of ignorance.
Another interesting question concerns whether there is ignorance that is, in principle, insoluble. In
other words, are there questions whose answers we do not currently know, and have
no way of ever knowing—something akin to Emil du Bois-Reymond’s ignoramus et
ignorabimus mentioned in Chapter 1? Or
do all human questions have an answer, at least in principle? This is a fascinating debate, albeit one
tangential to the scope of this work.
a slow, painstaking accumulation of truth has dominated the scene for the
better part of the past century, and it is still dominant in textbooks. The anti-
reductionist image of a dappled universe might not be the appropriate sub-
stitute. Black-boxing suggests a different image. How about recharacterizing
the scientific world as a set of matryoshka, that is, nested Russian dolls
(Figure 10.1)? This suggests that the relation between explanatory levels is
neither one of layer-cake-style progressive reduction nor one of complete
autonomy. The right metaphor is one of dynamic containment, where each
discipline constitutes a system in and of itself that, however, is constrained
by what goes on “outside” and, in turn, constrains what goes on “inside.” This
interplay, I believe, is an important message to convey to young students of
science, in books as in class, and to the educated public in general. What goes
on at lower levels does, indeed, constrain what happens at higher levels. But
the converse is also true. Framing the right macro-explananda is instru-
mental for raising the appropriate micro-questions. The general lesson to be
learned—and taught—is that the advancement of science requires coopera-
tion, not elimination.
In conclusion, this book sets the foundations and explores the boundaries
of what is—hopefully—a fecund and exciting research project. What I have
done here barely scratches the surface of a deep, unfathomed ocean. Much
remains to be done to adequately study all the workings and implications of
[Figure 10.1. The matryoshka model: nested explanatory levels, from economics through neuropsychology and biology to physics.]
Box, G. E. (1976). “Science and Statistics.” Journal of the American Statistical Association
71(356), 791–799.
Burge, T. (2013). “Modest Dualism.” In Cognition through Understanding: Philosophical
Essays, Vol. 3, pp. 471–488. Oxford: Oxford University Press.
Camerer, C. F. (2010). “The Case for Mindful Economics.” In A. Caplin and A. Schotter
(Eds.), The Foundations of Positive and Normative Economics, pp. 43–69.
New York: Oxford University Press.
Camerer, C. F., G. Loewenstein, and D. Prelec. (2005). “Neuroeconomics: How
Neuroscience Can Inform Economics.” Journal of Economic Literature 43, 9–64.
Carey, S. (1999 [1991]). “Knowledge Acquisition: Enrichment or Conceptual Change?”
In E. Margolis and S. Laurence (Eds.), Concepts: Core Readings, pp. 459–487.
Cambridge, MA: Bradford.
Carey, S. (2009). The Origin of Concepts. New York: Oxford University Press.
Carnap, R. (1938). “Logical Foundations of the Unity of Science.” In O. Neurath, R.
Carnap, and C. Morris (Eds.), International Encyclopedia of Unified Science, pp. 42–62.
Chicago: University of Chicago Press.
Carnap, R. (1956a). “Empiricism, Semantics, and Ontology.” In Carnap, Meaning and
Necessity (2nd ed.), pp. 205–221. Chicago: University of Chicago Press.
Carnap, R. (1956b). Meaning and Necessity (2nd ed.). Chicago: University of Chicago Press.
Carroll, S. B. (2005). Endless Forms Most Beautiful: The New Science of Evo Devo.
New York: Norton.
Cartwright, N. (1980). “The Truth Doesn’t Explain Much.” American Philosophical
Quarterly 17(2), 159–163.
Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Clarendon.
Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science. Cambridge:
Cambridge University Press.
Chalmers, D. J. (2006). “Strong and Weak Emergence.” In P. Davies and P. Clayton (Eds.),
The Re-Emergence of Emergence, pp. 244–254. New York: Oxford University Press.
Chalmers, D. J. (2012). Constructing the World. Oxford: Oxford University Press.
Chalmers, D. J., D. Manley, and D. Wasserman (Eds.) (2009). Metametaphysics: New
Essays in the Foundations of Ontology. Oxford: Oxford University Press.
Chomsky, N. (1959). “Review of Verbal Behavior by B.F. Skinner.” Language 35(1), 26–58.
Chomsky, N. (2012). The Science of Language: Interviews with James McGilvray. New York:
Cambridge University Press.
Churchland, P. (1986). Neurophilosophy. Cambridge, MA: MIT Press.
Churchland, P. M. (1979). Scientific Realism and the Plasticity of Mind.
Cambridge: Cambridge University Press.
Craver, C. F. (2007). Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience.
New York: Oxford University Press.
Craver, C. F., and A. Alexandrova. (2008). “No Revolution Necessary: Neural Mechanisms
for Economics.” Economics and Philosophy 24, 381–406.
Craver, C. F., and L. Darden. (2013). In Search of Mechanisms: Discoveries across the Life
Sciences. Chicago: University of Chicago Press.
Craver, C. F., and D. M. Kaplan. (2020). “Are More Details Better? On the Norms of
Completeness for Mechanistic Explanation.” British Journal for the Philosophy of
Science 71(1), 287–319.
Crick, F. H. (1994). The Astonishing Hypothesis: The Scientific Search for the Soul.
New York: Scribners.
Culp, S., and P. Kitcher. (1989). “Theory Structure and Theory Change in Contemporary
Molecular Biology.” British Journal for the Philosophy of Science 40, 459–483.
Darden, L. (1991). Theory Change in Science: Strategies from Mendelian Genetics.
Oxford: Oxford University Press.
Darden, L., and N. Maull. (1977). “Interfield Theories.” Philosophy of Science 44, 43–64.
Darwin, C. (1859). On the Origin of Species (2008 ed.). New York: Oxford University Press.
Davidson, D. (1970). “Mental Events.” In L. Foster and J. Swanson (Eds.), Experience and
Theory, pp. 79–101. London: Duckworth.
Davidson, D. (1974). “On the Very Idea of a Conceptual Scheme.” Proceedings and
Addresses of the American Philosophical Association 47, 183–198.
DeMartino, G. F. (2000). Global Economy, Global Justice: Theoretical Objections and Policy
Alternatives to Neoliberalism. New York: Routledge.
Dennett, D. C. (1981). “Skinner Skinned.” In Dennett, Brainstorms: Philosophical Essays
on Mind and Psychology, pp. 53–70. Cambridge, MA: Bradford, MIT Press.
Dennett, D. C. (1987). The Intentional Stance. Cambridge, MA: Bradford, MIT Press.
Dennett, D. C. (1991). Consciousness Explained. Boston: Little, Brown.
Dizadji-Bahmani, F., R. Frigg, and S. Hartmann. (2010). “Who’s Afraid of Nagelian
Reduction?” Erkenntnis 73, 393–412.
Donnellan, K. (1970). “Proper Names and Identifying Descriptions.” Synthese 21,
335–358.
Donnellan, K. (1974). “Speaking of Nothing.” Philosophical Review 83, 3–31.
Dretske, F. (1973). “Contrastive Statements.” Philosophical Review 82, 411–437.
Duhem, P. M. (1954). The Aim and Structure of Physical Theory. Princeton, NJ: Princeton
University Press.
Dupré, J. (1993). The Disorder of Things. Cambridge, MA: Harvard University Press.
Dupré, J. (2012). Processes of Life: Essays in the Philosophy of Biology. New York: Oxford
University Press.
Falk, R. (2009). Genetic Analysis: A History of Genetic Thinking. Cambridge: Cambridge
University Press.
Fazekas, P. (2009). “Reconsidering the Role of Bridge Laws in Inter-Theoretic Relations.”
Erkenntnis 71, 303–322.
Feyerabend, P. K. (1993 [1975]). Against Method (3rd ed.). London and New York: Verso.
Firestein, S. (2012). Ignorance: How It Drives Science. New York: Oxford University Press.
Flanagan, O. (1991). The Science of the Mind (2nd ed.). Cambridge, MA: MIT Press.
Fodor, J. (1974). “Special Sciences (Or: The Disunity of Science as a Working Hypothesis).”
Synthese 28, 97–115.
Fodor, J. A. (1999). “Let Your Brain Alone.” London Review of Books 21.
Franklin-Hall, L. R. (2008). From a Microbiological Point of View. Ph.D. thesis, Columbia
University, New York.
Franklin-Hall, L. R. (2016). “New Mechanistic Explanation and the Need for Explanatory
Constraints.” In K. Aizawa and C. Gillett (Eds.), Scientific Composition and Metaphysical
Ground: New Directions in the Philosophy of Science, pp. 41–74. London: Palgrave
MacMillan.
Franklin-Hall, L. R. (forthcoming). “The Causal Economy Account of Scientific
Explanation.” Minnesota Studies in the Philosophy of Science.
Frege, G. (1892). “On Sinn and Bedeutung.” In M. Beaney (Ed.), The Frege Reader, pp.
251–271. Oxford: Blackwell.
For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on
occasion, appear on only one of those pages.
abstraction, 15, 26, 70–72, 75, 109–10, 116, 120–34, 136–37, 170–76, 187–90, 202–11, 236, 243–45, 258, 272–73, 275–76
Achinstein, Peter, 224n.9
advancement of science. See scientific progress
Alexander, Samuel, 192
Alexandrova, Anna, 272n.8
Allais, Maurice, 72–73
allele. See gene
Allen, Garland E., 144n.4
Amundson, Ron, 63n.11
Anderson, John, 125n.10
antireductionism, 6–12, 15–16, 19–22, 24–48, 49, 82–83, 99, 107–8, 163, 191–92, 201, 206–7, 209, 213–14, 248–49, 251–79
Appiah, Kwame Anthony, 71–72n.20
Aristotle, 73–74, 192
autonomy. See epistemic autonomy
Barnes, Elizabeth, 196n.8, 211–12
Bateson, William, 144, 146–47
Batterman, Robert, 107n.9, 212–13
Beadle, George W., 241, 242–43
Beatty, John, 86, 89, 91
Becher, Johann J., 103–4
Bechtel, William, 164, 168–69, 170, 175–76, 185, 258
Bedau, Mark, 196, 200, 209–10
behaviorism. See psychology
Behe, Michael, 143n.3
Benacerraf, Paul, 260n.6
Benzer, Seymour, 241
Bernoulli, Daniel, 157
Bernoulli, Nicolaus, 157n.12
Bickle, John, 37n.16, 40–41n.22, 218n.3, 261n.7
biochemistry. See chemistry
biogeography, 138–40
biology, 7–8, 16–17, 34–46, 50–61, 66, 77–80, 84–88, 91–93, 115–16, 138–49, 172–74, 238–43, 256, 263–68
biometrics, 148–49
Bogen, Jim, 168–69
Bohr, Niels, 226
Boniolo, Giovanni, 186n.16
Boring, Edwin G., 62n.10
Box, George E.P., 186
bridge laws, 26–29, 35, 36–37, 45–46, 253
bridge principles. See bridge laws
Broad, Charlie Dunbar, 192
Bromberger, Sylvain, 112n.1
Burge, Tyler, 9n.5
Camerer, Colin, 74, 75–76, 76n.27
capacity. See disposition
Carey, Susan, 233n.20
Carnap, Rudolf, 22n.8, 28n.6, 62–63, 216, 218–19, 223n.8, 260–62, 265
Carroll, Sean, 58–59, 60–61
Cartesian dualism. See dualism
Cartwright, Nancy, 28n.5, 32n.12, 127n.12, 131n.18, 174–75, 190, 191
causal explanation. See scientific explanation
causal mechanism. See mechanism
causal power, 90–91, 97, 106–7, 182–84, 194–96, 198–203, 211–13
Cavendish, Henry, 103–4, 227, 236
Chalmers, David, 9n.5, 196n.8, 261n.7
Chemero, Anthony, 185
137, 155–60, 161, 162–63, 208, 237–38, 244–46, 248, 271–73
  “psychoneural” economics, 18–19, 74–76, 78–80, 102–3, 159, 161, 208, 244–46, 248, 271–73
Edgeworth, Francis Y., 78–79
Einstein, Albert, 223, 247–48
eliminative materialism, 40–41n.22, 269
Ellsberg, Daniel, 72–73
emergence, 19–22, 32n.12, 163, 191–214, 215, 248–49, 252
  epistemic vs. metaphysical emergence, 193–97, 199, 201–2, 208–9, 213–14
  strong emergence, 32n.12, 194–95, 201
  weak emergence, 194–95, 200, 201
epistemic autonomy, 8–21, 25, 31–48, 76, 108, 133–34, 136–37, 191–96, 204–14, 248–49, 251–79
evolution by natural selection, 50–55, 58, 60–61, 77, 84–86, 92–93, 115, 137, 138–43, 160, 238–40, 244, 263–65
  principle of natural selection, 50–51, 85, 138, 139
evolutionary-developmental biology. See developmental synthesis
expected utility theory. See utility
explanatory autonomy. See epistemic autonomy
explanatory extension, 37, 232, 266
explanatory relativity, 110–18, 125–26, 175–76, 261
explanatory relevance, 36n.13, 112–16, 119–26, 169, 184
external question. See internal vs. external questions
Falk, Raphael, 144n.4, 149n.7
Fazekas, Peter, 31n.11
fertility, 17–18, 84–88, 90–92, 96–98, 145
Feyerabend, Paul K., 136, 216–17, 221n.6, 222, 224n.10, 224, 227, 230–32, 248
Firestein, Stuart, 1, 3, 4–5, 48, 115, 134–35, 276–77
Fisher, Ronald, 57
fitness, 17–18, 50–53, 83–94, 96–99, 101, 104–8, 123, 140–43, 160–61, 184, 208, 239–40, 243, 263–65
  propensity interpretation of, 17–18, 83, 86–92, 96–98, 105–6
Flanagan, Owen, 61n.9, 152–53
Fodor, Jerry A., 27n.4, 29, 36–38n.18
fragility, 89–91, 92–93
frame, 17–18, 83, 94–107, 109–10, 116–18, 124, 126–27, 132–35, 136–37, 140–60, 166–69, 177–79, 203–8, 236–37, 239–40, 242–48, 252, 261–68, 273–75
framing stage, 18, 109–10, 111–18, 124–26, 132–35, 136–37, 141–43, 150, 160, 162, 166–71, 181–82, 188–89, 207, 237, 238, 246, 252, 264–65, 274–75, 278–79
  Darwin’s framing stage, 140–41
  Friedman’s framing stage, 155–59
  Mendel’s framing stage, 145–47, 241–42
  Skinner’s framing stage, 150–51
Franklin-Hall, Laura R., 44n.26, 131n.17, 188–89
Frege, Gottlob, 225, 226–27, 233–34, 237, 241, 246
Fregean sense, 225–27, 233–34, 237, 241, 246–47
Freud, Sigmund, 61, 152–53
Friedman, Michael, 11n.6
Friedman, Milton, 70–75, 82–83, 102, 135, 155–61, 238, 244–45, 252, 271, 272
fruit fly. See Drosophila
functional organization. See functionalism
functionalism, 13–14, 40, 42–46, 64–67, 98–99, 129–30, 155, 180, 194–96, 236–37, 241–43, 248
functionalization. See functionalism
Galilei, Galileo, 71–72n.20
Galison, Peter, 32n.12
Galton, Francis, 56–57, 148–49
Garfinkel, Alan, 37n.14, 113–14, 117, 131–32
gemmules, 52–53, 147–48, 238–40, 243, 246
gene, 16–17, 18–19, 35, 40, 42–46, 56–61, 77–78, 115–16, 129–30, 144–49, 159–60, 171–74, 207–8, 230, 239–48, 261, 266–68
genetics, 16–19, 30–31, 34–36, 40, 51n.4, 53, 55–61, 66–67, 77–79, 115, 129–30, 137, 144–49, 159–60, 207–8, 230, 240–43, 244, 245, 252, 255, 261–62, 265–68
natural kind, 27n.4, 29, 36, 92–93, 190, 225–36, 266, 273–74
natural selection. See evolution by natural selection
neural network, 197–204
neural state. See psychological state
neural synchronization, 123, 198–200, 211
Neurath, Otto, 218–19
neuropsychology. See neuroscience
neuroscience, 7–8, 39–40, 45–46, 66–67, 73–75, 78–79, 80, 154, 168–69, 181n.12, 189, 193, 197–201, 204, 212–13, 217, 254–56, 258n.5, 272–73
  systems neuroscience, 197–201, 204
Newton, Isaac, 8–9, 77, 226, 247–48, 259
Nicholson, Daniel, 187–88
normal science vs. revolutionary science, 219–21
ontogeny, 52–54, 58–61, 67, 77–78, 147–49, 203, 238–39, 241–42
Oppenheim, Paul, 28n.6, 123, 195–96, 199, 205–6
Orzack, Steven, 14
pangenesis, 52–53, 147–48, 238–39, 240
Paracelsus, 222
paradigm. See scientific paradigm
Pareto, Vilfredo, 245
Pavlov, Ivan, 62, 64n.12
Pearson, Karl, 148–49
Pesendorfer, Wolfgang, 75n.26, 75–76, 76n.27
phenotype, 40n.21, 51n.5, 144–49, 230, 241–42
philosophy of mind, 13–14, 36–37, 40, 45–46, 98, 206, 269–70
philosophy of science, 6–11, 19–22, 24–26, 31, 38–39, 49, 80, 102, 107–8, 112, 131, 163–67, 189, 213–14, 215–32, 248–49, 251–53, 259–61
phlogiston, 103–7, 226–39, 243, 247, 273–75
phylogeny, 53–54, 57, 59, 77–78
physicalism, 25, 27n.4, 32–33, 36–39, 40, 62, 64, 99, 192, 194–95, 210, 254, 269
physics, 7–12, 24–43, 47, 56, 62–65, 69–72, 123, 128, 147, 154, 189, 211–13, 222–23, 226, 229, 247–48, 253–55, 272, 275–76
  physical property, 84–97, 105–8, 128–33, 194, 204, 205–6, 209, 262, 269
Piccinini, Gualtiero, 180
Pigou, Arthur C., 78–79
Pillsbury, Walter, 62
Pinker, Steven, 13–14
Place, Ullin Thomas, 40, 269
placeholder, 17–20, 77–78, 82–108, 109–10, 117–18, 122–24, 132–35, 136–61, 162–69, 177–90, 202–9, 236–44, 252, 264–67, 275–76
Plato, 73–74
Popper, Karl R., 80, 216, 218–20, 250
Pott, Johann H., 103–4
powers. See causal powers
preference, 67–68, 72–248
Priestley, Joseph, 103–4, 227–28, 230–31, 235–37, 238–39, 274
principle of charity. See principle of humanity
principle of humanity, 229, 234, 239–40, 243, 247–48
progress. See scientific progress
propensity. See disposition; fitness
protein, 43–44, 60, 172–73, 199–211, 242–43, 261
Provine, William B., 58n.8
psychological state, 16–17, 18–19, 34, 40, 45–46, 61–62, 66–67, 68–69, 72–73, 75, 78–80, 82–83, 98–101, 106–7, 151–55, 160, 162–63, 194, 198–200, 204–5, 211, 229, 234, 243–44, 246, 268–70, 271–72
psychology, 3, 7–8, 27, 36–42, 45–46, 61–68, 73–80, 82–83, 98–100, 128, 137, 149–55, 189, 193, 212–13, 220–21, 243–44, 252–56, 268–70, 272–73
  psychological behaviorism, 61–67, 78–80, 98, 100, 115, 137, 149–55, 160, 162–63, 237–38, 243–44, 268–70
Ptolemy, 222–23
Punnett, Reginald C., 146–47
Putnam, Hilary, 9n.5, 28n.6, 29, 32–37, 38–39, 41, 133, 137, 215, 225–26, 228, 242, 255, 257, 262–63
Smart, John Jamison Carswell, 40, 269
Smith, Adam, 78–79
Smith, Laurence D., 63n.11
Sober, Elliott, 39n.19, 85n.2, 86–88, 89, 91, 96–97, 98, 106–8, 123, 154n.11, 263–64
social sciences, 7–8, 36–37, 161, 197
solubility, 89–93, 95–96, 98–101, 104–7, 255–56
speaker’s intention, 175n.7, 228–29, 234, 236, 237–38, 247–48, 275
special sciences, 8–11, 30, 36–41, 93, 107–8, 187, 253–56
Spencer, Herbert, 85
Sperry, Roger Walcott, 192
Sporns, Olaf, 197–99, 200, 201, 204
square-peg-round-hole example, 33–36, 39, 47, 133, 137, 255, 262–63
St. Petersburg paradox, 157
Stahl, George E., 103–4
Stebbins, George L., 57
Stotz, Karola, 144n.4, 149n.7, 230n.18
Strevens, Michael, 36n.13, 41n.23, 114–15n.4, 119–22, 125–26, 130n.14, 131–33, 141, 179, 182–86
supervenience, 17–18, 25, 37–39, 83–108, 210, 254
supervenience emergentism, 194
Suppes, Patrick, 30n.8, 131n.18
survival. See viability
Taylor, Elanor, 196n.6
tendency. See disposition
theoretician’s dilemma, 153–54, 244, 268
theory change. See scientific progress
theory-ladenness, 196–97, 222–23, 229–31, 237–46, 274–75
Thomson, Joseph J., 226
Thorndike, Edward, 62
Tolman, Edward C., 62–63, 150
Toulmin, Stephen, 30
transmission of traits. See heredity
truth, 1–3, 5, 6, 70, 127–28, 174, 190, 215–16, 248, 250–51, 272, 278–79
unification. See unity of science
unity of science, 28–29, 30–31, 231
utility, 72–79, 100–3, 106–7, 155–60, 208, 211, 244–46, 248, 271–73
  expected utility theory, 72–74, 156–59, 208, 271
  marginal utility, 100–2, 157
  revealed preference view of utility, 72–76, 78–79, 102, 155, 159, 245, 271
van Eck, Dingmar, 175n.9
van Fraassen, Bas, 30n.8, 112–13, 125n.9, 131n.16
van Strien, Marij, 9n.4
variation, 18–19, 50–60, 66–67, 77–80, 84–97, 138–47, 160, 238–43, 263–65
  principle of variation, 50–51
  principle of variation in fitness, 50–51
viability, 17–18, 84–88, 90–92, 96–98, 105–6
virtus dormitiva, 82, 87, 89–94, 97, 99–100, 104, 106–7
von Mises, Ludwig, 69–70
von Neumann, John, 74–75, 157, 158n.13
Waters, C. Kenneth, 40n.21, 144n.4, 149n.7, 242n.24
Watson, James D., 242–43
Watson, John B., 62, 63–64, 79–80, 149–51, 154, 155, 270
Weber, Marcel, 144n.4, 149n.7
“wedding cake” model of science, 7–8, 193–94
Weed, Douglas L., 14
Weinberg, Steven, 9n.5
Weisberg, Michael, 128–31, 175n.8
Weismann, August, 53, 56–57, 147–49, 238–39
William of Occam, 186
Wilson, Charles E., 131n.17
Wilson, Jessica, 196n.8
Wimsatt, William, 3–4, 11n.7, 131n.18, 164
Wittgenstein, Ludwig, 47, 100
Wright, Cory, 175n.9
Wright, Sewall, 57
Wundt, Wilhelm, 61
Zador, Anthony, 199n.10