
Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds

Shaun Nichols and Stephen P. Stich

https://doi.org/10.1093/0198236107.001.0001
Published: 2003 Online ISBN: 9780191600920 Print ISBN: 9780198236108

Downloaded from https://academic.oup.com/book/3935/chapter/145531015 by Universidad Nacional Autonoma de Mexico user on 02 August 2022
CHAPTER

p. 150
4 Reading One's Own Mind 
Shaun Nichols, Stephen P. Stich

https://doi.org/10.1093/0198236107.003.0004 Pages 150–199


Published: September 2003

Abstract
The most widely held account of self-awareness is the “theory theory”, according to which self-awareness is a theory-mediated process which depends on the same “theory of mind” that underlies the attribution of mental states to others. The chapter distinguishes several versions of the theory theory of self-awareness and presents an alternative account, according to which self-awareness is subserved by a monitoring mechanism that is independent of the theory of mind. The chapter also describes and disputes the prominent arguments for the theory theory, including a well-known argument based on parallels between the development of first person and third person mindreading. Finally, it is argued that clinical findings on autism and schizophrenia seem to favor the view that the mechanism for self-awareness is independent of the theory of mind.

Keywords: appearance/reality distinction, autism, detecting vs. reasoning, development, dissociations, phenomenology, psychopathology, schizophrenia, self-awareness, theory theory, Simon Baron-Cohen, Alison Gopnik, Alvin Goldman
Subject: Philosophy of Science, Philosophy of Mind

4.1. Introduction

The idea that we have special access to our own mental states has a distinguished philosophical history. Philosophers as different as Descartes and Locke agreed that we know our own minds in a way that is quite different from the way in which we know other minds. In the latter half of the twentieth century, however, this idea came under serious attack, first from philosophy (Sellars 1956) and more recently from researchers working on mindreading. In the previous chapter, we developed an account of how people read other people's minds. This has been the focus of most of the work on mindreading. However, a number of psychologists and philosophers have also proposed accounts of the mechanisms underlying the attribution of mental states to oneself. This process of reading one's own mind or becoming self-aware will be our primary concern in this chapter.

We will start by examining what is probably the account of self-awareness that is most widely held among psychologists, an account which we will call the ‘theory theory of self-awareness’ (TTSA). The two basic ideas of this account are (1) that one's access to one's own mind depends on the same cluster of cognitive mechanisms that underlie the capacity to attribute mental states to others, and (2) that those mechanisms include a rich body of information about the mind which plays a central role in both third-person and first-person mindreading. Though many authors have endorsed the theory theory of self-awareness (Gopnik 1993; Gopnik and Wellman 1994; Gopnik and Meltzoff 1994; Perner 1991; Wimmer and Hartl 1991; Carruthers 1996; Frith 1994; Frith and Happé 1999), it is our contention that advocates of this account have left their theory seriously underdescribed. In the next section, we will suggest three different ways in which the theory might be elaborated, all of which have significant shortcomings. In Section 4.3, we will present our own theory of self-awareness, the Monitoring Mechanism theory, and compare its merits to those of the TTSA. Advocates of the TTSA argue that it is supported by evidence about psychological development and psychopathologies. In Section 4.4 we will review the developmental arguments and try to show that none of the evidence favours the TTSA over our Monitoring Mechanism theory. Indeed, we will maintain that a closer look at the evidence on development actually provides arguments against the TTSA. In Section 4.5 we will review the arguments from psychopathologies and we will argue that none of the evidence favours the TTSA over our Monitoring Mechanism theory. Then, in Section 4.6, we will marshal some additional evidence on psychopathologies to provide an argument in favour of the Monitoring Mechanism theory. On our account, but not on the TTSA, it is possible for the mechanisms subserving self-awareness and reading other people's minds to be damaged independently. And, we will suggest, this may well be just what is happening in certain cases of schizophrenia and autism. After making our case against the theory theory of self-awareness and in favour of our theory, we will consider two other theories of self-awareness to be found in the recent literature. The first of these, discussed in Section 4.7, is Robert Gordon's ‘ascent routine’ account (Gordon 1995b, 1996), which, we will argue, is clearly inadequate to explain the full range of self-awareness phenomena. The second is Alvin Goldman's (1993a, b, 1997, 2000) phenomenological account which, we maintain, is also underdescribed and admits of two importantly different interpretations. On both of these interpretations, we'll argue, the theory is singularly implausible. That's where we're headed. But before we do any of this, there is a good deal of background that needs to be set in place.

We begin by drawing a distinction that was left implicit in preceding chapters. Mindreading skills, in both the first-person and the third-person cases, can be divided into two categories which, for want of better labels, we will call detecting and reasoning.

• Detecting is the capacity to attribute current mental states to someone.

• Reasoning is the capacity to use information about a person's mental states (typically along with other
information) to make predictions about the person's past and future mental states, her behaviour, and
her environment.

So, for instance, one might detect that another person wants ice cream and that the person thinks the closest place to get ice cream is at the corner shop. Then one might reason from this information that, since the person wants ice cream and thinks that she can get it at the corner shop, she will go to the shop. The distinction between detecting and reasoning is an important one because some of the theories we will be considering offer integrated accounts on which detecting and reasoning are explained by the same cognitive mechanism. Other theories, including ours, maintain that in the first-person case, these two aspects of mindreading are subserved by different mechanisms.
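By way of illustration only, the ice-cream case might be rendered in pseudo-computational form, with the two capacities implemented as separate functions. The function names, the evidence format, and the toy attribution rules are all invented for this example; nothing here is part of any formal apparatus in the mindreading literature.

```python
# Toy model of the detecting/reasoning distinction. Detecting attributes
# current mental states to a target; reasoning uses already-attributed
# states (typically along with other information) to predict behaviour.

def detect(evidence: dict) -> list[tuple[str, str]]:
    """Attribute current mental states on the basis of evidence."""
    states = []
    if evidence.get("gazes_at") == "ice cream stand":
        states.append(("desire", "ice cream"))
    if evidence.get("says") == "the corner shop is closest":
        states.append(("belief", "the corner shop is the closest place for ice cream"))
    return states

def reason(states: list[tuple[str, str]]) -> str:
    """Predict behaviour from attributed mental states."""
    wants_ice_cream = ("desire", "ice cream") in states
    knows_where = any(kind == "belief" and "corner shop" in content
                      for kind, content in states)
    if wants_ice_cream and knows_where:
        return "she will go to the corner shop"
    return "no prediction"

states = detect({"gazes_at": "ice cream stand",
                 "says": "the corner shop is closest"})
print(reason(states))  # she will go to the corner shop
```

The point of separating the two functions is just to mark that, on some of the theories discussed below, they are subserved by distinct mechanisms and so could in principle be damaged independently.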

Like the other authors we'll be considering, we take it to be a requirement on theories of self-awareness that they offer an explanation for:

1. the obvious facts about self‐attribution (e.g. that normal adults do it easily and often, that they are
generally accurate, and that they have no clear idea of how they do it);

2. the often rather un‐obvious facts about self‐attribution that have been uncovered by cognitive and
developmental psychologists.

However, we do not take it to be a requirement on theory building in this area that the theory address philosophical puzzles that have been raised about knowledge of one's own mental states. In recent years, philosophers have had a great deal to say about the link between content externalism and the possibility that people can have privileged knowledge about their own propositional attitudes (e.g. McLaughlin and Tye 1998; Wright et al. 1998). These issues are largely orthogonal to the sorts of questions about underlying mechanisms that we will be discussing in this chapter, and we have nothing at all to contribute to the resolution of the philosophical puzzles posed by externalism. But in the unlikely event that philosophers who worry about such matters agree on solutions to these puzzles, we expect that the solutions will fit comfortably with our theory.

4.2. The Theory Theory of Self‐Awareness

As noted earlier, among psychologists the theory theory of self-awareness is the prevailing account. And, of course, two of the leading accounts of how we understand other people's minds are also ‘theory theories’. Before setting out TTSA, it will be useful to review in broad outline how theory theory accounts of third-person mindreading propose to explain our capacity to read other people's minds, stressing some important points on which the scientific-theory theory of third-person mindreading and the modular account of third-person mindreading agree.

According to both the scientific-theory theory and the modularity theory, the capacity to detect other people's mental states relies on inferences that invoke a rich body of information about the mind. For scientific-theory theorists, this information is acquired and stored in much the same way that scientific theories are, while for modularity theorists, the information is innate and stored in a mental module. Although this distinction was quite important to the concerns of the previous chapter, in the present chapter little turns on the distinction and thus it can be safely ignored. So in this chapter we will often use ‘ToMI’ to refer to the body of information about the mind (the ‘Theory of Mind Information’) that is exploited in third-person mindreading, regardless of whether this information is akin to a scientific theory or housed in a module. According to theory theorists of both stripes, when we detect another person's mental state, the process involves an information-mediated (or ‘theory-mediated’) inference that makes use of the information in ToMI. The inference can also draw on perceptually available information about the behaviour of the target and about her environment, and information stored in memory about the target and her environment. A sketch of the mental mechanisms invoked in this account is given in Figure 4.1.
Fig. 4.1. Theory theory of detecting others' mental states

Both versions of the theory theory also maintain that ToMI is both the information about the mind that underlies the capacity to detect other people's mental states and the information about the mind that underlies the capacity to reason about other people's mental states and predict their behaviour. So for theory theorists, reasoning about other people's mental states and predicting their behaviour is also a theory-mediated inference process, where the inferences draw on beliefs about (inter alia) the target's mental states. Of course, some of these beliefs will themselves have been produced by detection inferences. When detecting and reasoning are depicted together we get Figure 4.2.

Fig. 4.2. Theory theory of detecting and reasoning about others' mental states
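The central claim of this architecture, namely that a single body of information mediates both detection and reasoning, can be illustrated schematically. In the toy sketch below, one shared store stands in for ToMI and is consulted by both a detection step and a prediction step. The rule format and the example generalizations are invented purely for illustration; no theory theorist is committed to anything this simple.

```python
# ToMI modeled as one shared store of generalizations about the mind,
# consulted both when detecting mental states from evidence and when
# reasoning from attributed states to predicted behaviour.

TOMI = {
    # detection generalizations: observable evidence -> attributed state
    "detection": {
        "reaches for umbrella": ("believes", "it is raining"),
        "says 'I'm thirsty'": ("desires", "a drink"),
    },
    # reasoning generalizations: attributed state -> predicted behaviour
    "reasoning": {
        ("believes", "it is raining"): "will carry the umbrella outside",
        ("desires", "a drink"): "will look for something to drink",
    },
}

def detect(evidence: str):
    """Theory-mediated inference from behaviour/environment to a state."""
    return TOMI["detection"].get(evidence)

def predict(state):
    """Theory-mediated inference from an attributed state to behaviour."""
    return TOMI["reasoning"].get(state)

state = detect("reaches for umbrella")
print(state)           # ('believes', 'it is raining')
print(predict(state))  # will carry the umbrella outside
```

What matters for the arguments that follow is only the structural point: the same information store sits behind both inferential steps, so damage to it should impair detection and reasoning together.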

In Chapter 3 we argued that both versions of the theory theory of third-person mindreading are inadequate, because simulation-style processing is also crucial to mindreading. For the first six sections of this chapter, however, we will try to simplify matters by ignoring our critique and assuming, for argument's sake, that some version of the account of third-person mindreading depicted in Figure 4.2 is correct. We maintain that even if all third-person mindreading depends on ToMI, that still will not provide the advocates of TTSA with the resources to accommodate the facts about self-awareness. So we ask the reader to bear in mind that, until Section 4.7, we will be assuming that all third-person mindreading depends on ToMI.
4.2.1. Reading one's own mind: three versions of the TTSA
The theory theory account of how we read other minds can be extended to provide an account of how we read our own minds. Indeed, both the theory theory for understanding other minds and the theory theory for self-awareness seem to have been first proposed in the same article by Wilfrid Sellars (1956). The core idea of the theory theory account of self-awareness is that the process of reading one's own mind is largely or entirely parallel to the process of reading someone else's mind. Advocates of the TTSA maintain that knowledge about one's own mind, like knowledge about other minds, comes from theory-mediated (or information-mediated) inferences, and the information that mediates these inferences is the same for self and other—it is ToMI. In recent years many authors have endorsed this idea; here are two examples:

Even though we seem to perceive our own mental states directly, this direct perception is an illusion. In fact, our knowledge of ourselves, like our knowledge of others, is the result of a theory. . . . (Gopnik and Meltzoff 1994: 168)

. . . if the mechanism which underlies the computation of mental states is dysfunctional, then self-knowledge is likely to be impaired just as is the knowledge of other minds. The logical extension of the ToM deficit account of autism is that individuals with autism may know as little about their own minds as about the minds of other people. This is not to say that these individuals lack mental states, but that in an important sense they are unable to reflect on their mental states. Simply put, they lack the cognitive machinery to represent their thoughts and feelings as thoughts and feelings. (Frith and Happé 1999: 7)

Unfortunately, advocates of the theory theory account of self-awareness are much less explicit than one would like, and unpacking the view in different ways leads to significantly different versions of the theory. But all of them share the claim that the processes of reasoning about and detecting one's own mental states will parallel the processes of reasoning about and detecting others' mental states. Since the process of detecting one's own mental states will be a central concern in what follows, it is especially important to be very explicit about the account of detection suggested by the theory theory of self-awareness. According to the TTSA:

1. Detecting one's own mental states is an information-mediated or theory-mediated inferential process. The information, here as in the third-person case, is ToMI.

2. As in the third-person case, the information-mediated or theory-mediated process which enables people to detect their own mental states draws on perceptually available information about one's own behaviour and environment. The inference also draws on information stored in memory about oneself and one's environment.

At this point the TTSA can be developed in at least three different ways. So far as we know, advocates of the TTSA have never taken explicit note of these distinctions. Thus it is difficult to determine which version a given theorist would endorse.

TTSA version 1


TTSA version 1 (for which our code name is the crazy version) proposes to maintain the parallel between
detecting one's own mental states and detecting another person's mental states quite strictly. The only
information used as evidence for the inference involved in detecting one's own mental state is the
information provided by perception (in this case, perception of oneself) and by one's background beliefs (in
this case, background beliefs about one's own environment and previously acquired beliefs about one's own
mental states). This version of TTSA is sketched in Figure 4.3.
Fig. 4.3. Theory theory of self-awareness, version 1

Of course, we typically have much more information about our own behaviour and our own prior mental
states than we do about the behaviour and prior mental states of others, so even on this version of the TTSA
we may well have a better grasp of our own mind than we do of other minds (see e.g. Gopnik 1993: 94).
However, the mechanisms underlying self‐awareness are supposed to be the same mechanisms that
underlie awareness of the mental states of others. Thus this version of the TTSA denies the widely held view
that an individual has some kind of special or privileged access to his own mental states.

We are reluctant to claim that anyone actually advocates this version of the TTSA, since we think it is a view that is hard to take seriously. Indeed, the claim that perception of one's own behaviour is the prime source of information on which to base inferences about one's own mental states reminds us of the old joke about the two behaviourists who meet on the street. One says to the other, ‘You're fine. How am I?’ The reason the joke works is that it seems patently absurd to think that perception of one's behaviour is the best way to find out how one is feeling. It seems obvious that people can sit quietly without exhibiting any relevant behaviour and report on their current thoughts. For instance, people can answer questions about current mental states like ‘what are you thinking about?’ Similarly, after silently working through a problem in their heads, people can answer subsequent questions like ‘how did you figure that out?’ And we typically assume that people are correct when they tell us what they were thinking or how they just solved a problem. Of course, it is not just one's current and immediately past thoughts that one can report. One can also report one's own current desires, intentions, and imaginings. It seems that people can easily and reliably answer questions like: ‘what do you want to do?’; ‘what are you going to do?’; ‘what are you imagining?’ People who aren't exhibiting much behaviour at all are often able to provide richly detailed answers to these questions.

These more or less intuitive claims are backed by considerable empirical evidence from several research programmes in psychology. Using ‘think aloud’ procedures, researchers have been able to corroborate self-reports of current mental states against other measures. In typical experiments, subjects are given logical or mathematical problems to solve and are instructed to ‘think aloud’ while they work the problems. For instance, people are asked to think aloud while multiplying 36 times 24 (Ericsson and Simon 1993: 346–7). Subjects' responses can then be correlated with formal analyses of how to solve the problem, and the subject's answer can be compared with the correct answer. If the subject's think-aloud protocol conforms to the formal task analysis, that provides good reason to think that the subject's report of his thoughts is accurate (Ericsson and Simon 1993: 330). In addition to these concurrent reports, researchers have also explored retrospective reports of one's own problem solving. For instance Ericsson and Simon discuss a study by Hamilton and Sanford in which subjects were presented with two different letters (e.g. R–P) and asked whether the letters were in alphabetical order. Subjects were then asked to say how they solved the problem. Subjects reported bringing to mind strings of letters in alphabetical order (e.g. LMNOPQRST), and reaction times taken during the problem solving correlated with the number of letters subjects recollected (Ericsson and Simon 1993: 191–2).
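As a purely illustrative rendering of the strategy subjects reported, the letter-pair task might be modelled as follows. The function name and the use of a ‘letters scanned’ count as a crude stand-in for reaction time are assumptions of this sketch, not part of Ericsson and Simon's task analysis.

```python
# Toy rendering of the reported strategy: bring to mind the stretch of
# alphabet spanning the two letters and scan it. The number of letters
# scanned is a rough proxy for the observed reaction-time correlation.
import string

def alphabetical_check(a: str, b: str) -> tuple[bool, int]:
    """Return (pair is in alphabetical order, letters brought to mind)."""
    alphabet = string.ascii_uppercase
    i, j = alphabet.index(a), alphabet.index(b)
    letters_scanned = abs(j - i) + 1
    return i < j, letters_scanned

print(alphabetical_check("R", "P"))  # (False, 3)
print(alphabetical_check("A", "C"))  # (True, 3)
```

The relevant feature is just that pairs spanning more of the alphabet require more letters to be brought to mind, which is what the reaction-time data corroborated.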

So, both commonsense and experimental studies confirm that people can sit quietly, exhibiting next to no overt behaviour, and give detailed, accurate self-reports about their mental states. In light of this, it strikes us as simply preposterous to suggest that the reports people make about their own mental states are being inferred from perceptions of their own behaviour and information stored in memory. For it is simply absurd to suppose that there is enough behavioural evidence or information stored in memory to serve as a basis for accurately answering questions like ‘what are you thinking about now?’ or ‘how did you solve that math problem?’ Our ability to answer questions like these indicates that version 1 of the TTSA cannot be correct since it cannot accommodate some central cases of self-awareness.

TTSA version 2
Version 2 of the TTSA (for which our code name is the underdescribed version) allows that in using ToMI to infer to conclusions about one's own mind there is information available in addition to the information provided by perception and one's background beliefs. This additional information is available only in the first-person case, not in the third-person case. Unfortunately, advocates of the TTSA tell us very little about what this alternative source of information is. And what little they do tell us is unhelpful, to put it mildly. Here, for instance, is an example of the sort of thing that Gopnik has said about this additional source of information:

One possible source of evidence for the child's theory may be first-person psychological experiences that may themselves be the consequence of genuine psychological perceptions. For example, we may well be equipped to detect certain kinds of internal cognitive activity in a vague and unspecified way, what we might call ‘the Cartesian buzz’. (Gopnik 1993: 11, emphasis added)

We have no serious idea what the ‘Cartesian buzz’ is, or how one would detect it. Nor do we understand how detecting the Cartesian buzz will enable the ToMI to infer to conclusions like: I want to spend next Christmas in Paris or I believe that the Brooklyn Bridge is about eight blocks south of the Manhattan Bridge. Figure 4.4 is our attempt to sketch version 2 of the TTSA. We won't bother to mount a critique against this version, apart from observing that without some less mysterious statement of what the additional source(s) of information are, the theory is too incomplete to evaluate.
Fig. 4.4. Theory theory of self-awareness, version 2

TTSA version 3
There is, of course, one very natural way to spell out what is missing in version 2. What is needed is some
source of information that would help a person form beliefs (typically true beliefs) about his own mental
states. The obvious source of information would be the mental states themselves. So, on this version of the
TTSA, the ToMI has access to information provided by perception, information provided by background
beliefs, and information about the representations contained in the Belief Box, the Desire Box, etc. This version of
the TTSA is sketched in Figure 4.5.

Fig. 4.5. Theory theory of self-awareness, version 3

Now at this juncture one might wonder why the ToMI is needed in this story. If the mechanism subserving self-awareness has access to information about the representations in the various attitude boxes, then ToMI has no serious work to do. So why suppose that it is involved at all? That's a good question, we think. And it is also a good launching pad for our theory. Because on our account Figure 4.5 has it wrong. In detecting one's own mental states, the flow of information is not routed through the ToMI system. Rather, the process is subserved by a separate self-monitoring mechanism.

4.3. Reading One's Own Mind: The Monitoring Mechanism Theory

In constructing our theory about the process that subserves self‐awareness we have tried to be, to borrow a
phrase from Nelson Goodman (1983: 60), ‘refreshingly non‐cosmic’. What we propose is that we need to add
another component or cluster of components to our account of cognitive architecture, a mechanism (or
mechanisms) that serves the function of monitoring one's own mental states.

4.3.1. The Monitoring Mechanism and propositional attitudes


Recall what the theory of self-awareness needs to explain. The basic facts are that when normal adults believe that p, they can quickly and accurately form the belief I believe that p; when normal adults desire that p, they can quickly and accurately form the belief I desire that p; and so on for other basic propositional attitudes like intend and imagine. In order to implement this ability, no sophisticated body of information about the mind like ToMI is required. To have beliefs about one's own beliefs, all that is required is that there be a Monitoring Mechanism (MM) that, when activated, takes the representation p in the Belief Box as input and produces the representation I believe that p as output. This mechanism would be trivial to implement. To produce representations of one's own beliefs, the Monitoring Mechanism merely has to copy representations from the Belief Box, embed the copies in a representation schema of the form: I believe that __, and then place the new representations back in the Belief Box. The proposed mechanism (or perhaps a distinct but entirely parallel mechanism) would work in much the same way to produce representations of one's own desires, intentions, and imaginings. Although we propose that the MM is a special mechanism for detecting one's own mental states, we maintain that there is no special mechanism for what we earlier called reasoning about one's own mental states. Rather, reasoning about one's own mental states depends on the same ToMI as reasoning about others' mental states. As a result, our theory (as well as the TTSA) predicts that, ceteris paribus, where the ToMI is deficient or the relevant information is unavailable, subjects will make mistakes in reasoning about their own mental states as well as others'. Our account of the process subserving self-awareness for beliefs is sketched in Figure 4.6.
Fig. 4.6. Monitoring mechanism theory of self-awareness for beliefs
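Since the copy-and-embed operation is claimed to be trivial to implement, it may help to have a minimal sketch of it on the table. The class and function names below are invented for the example, and the guard against re-embedding existing self-attributions is a simplification we add to keep the pass from iterating without bound; it is not part of the MM proposal itself.

```python
# Minimal sketch of the Monitoring Mechanism for beliefs: copy a
# representation p from the Belief Box, embed it in the schema
# "I believe that __", and place the result back in the Belief Box.

class BeliefBox:
    def __init__(self):
        self.representations: list[str] = []

    def add(self, p: str) -> None:
        self.representations.append(p)

def monitor(box: BeliefBox) -> None:
    """One MM pass: add a self-attribution for each first-order belief.

    Skipping representations that are already self-attributions is a
    simplifying assumption, made here only to keep the toy loop finite.
    """
    for p in list(box.representations):   # iterate over a copy
        if p.startswith("I believe that"):
            continue
        meta = f"I believe that {p}"
        if meta not in box.representations:
            box.add(meta)

box = BeliefBox()
box.add("it is raining")
monitor(box)
print(box.representations)
# ['it is raining', 'I believe that it is raining']
```

A parallel mechanism (or the same one with different schemas, ‘I desire that __’, ‘I intend that __’) would handle the other attitudes. Note how little the sketch needs: no generalizations about the mind, and hence nothing resembling ToMI.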

Since our theory maintains that reasoning about one's own mental states relies on ToMI, we can readily accommodate findings like those presented by Nisbett and Wilson (1977). They report a number of studies in which subjects make mistakes about their own mental states. However, the kinds of mistakes that are made in those experiments are typically not mistakes in detecting one's own mental states. Rather, the studies show that subjects make mistakes in reasoning about their own mental states. The central findings are that subjects sometimes attribute their behaviour to inefficacious beliefs and that subjects sometimes deny the efficacy of beliefs that are, in fact, efficacious. For instance, Nisbett and Schachter (1966) found that subjects were willing to tolerate more intense shocks if they were given a drug (actually a placebo) and told that the drug would produce heart palpitations, irregular breathing, and butterflies in the stomach. Although being told about the drug had a significant effect on the subjects' willingness to take shocks, most subjects denied this. Nisbett and Wilson's explanation of these findings is, plausibly enough, that subjects have an incomplete theory regarding the mind and that the subjects' mistakes reflect the inadequacies of their theory (Nisbett and Wilson 1977). This explanation of the findings fits well with our account too. For on our account, when trying to figure out the causes of one's own behaviour, one must reason about mental states, and this process is mediated by the ToMI. As a result, if the ToMI is not up to the task, then people will make mistakes in reasoning about their own mental states as well as others' mental states.

In this chapter, we propose to remain agnostic about the extent to which the information about the mind in ToMI is innate. However, we do propose that the MM (or cluster of MMs) is innate and comes on-line fairly early in development—significantly before ToMI is fully in place. During the period when the Monitoring Mechanism is up and running but ToMI is not, the representations that the MM produces can't do much. In particular, they can't serve as premisses for reasoning about mental states, since reasoning about mental states is a process mediated by ToMI. So, for example, ToMI provides the additional premisses (or the special purpose inferential strategies) that enable the mind to go from premisses like I want q to conclusions like: If I believed that doing A was the best way to get q, then (probably) I would want to do A. Thus our theory predicts that young children can't reason about their own beliefs in this way.

Although we take no stand on the extent to which ToMI is innate, we maintain (along with many theory theorists) that ToMI comes on-line only gradually. As it comes on-line, it enables a richer and richer set of inferences from the representations of the form I believe (or desire) that p that are produced by the MM. Some might argue that early on in development, these representations of the form I believe that p do not really count as having the content I believe that p, since the concept (or ‘proto-concept’) of belief is too inferentially impoverished. On this view, it is only after a rich set of inferences becomes available that the child's I believe that p representations really count as having the content I believe that p. To make a persuasive case for or against this view, one would need a well-motivated and carefully defended theory of content for concepts. And we don't happen to have one. (Indeed, one of us is inclined to suspect that much recent work aimed at constructing theories of content is deeply misguided (Stich 1992, 1996).) But, with this caveat, we don't have any objection to the claim that early I believe that p representations do not have the content I believe that p. If that's what your favourite theory of content says, that's fine with us. Our proposal can be easily rendered consistent with such a view of content by simply replacing the embedded mental predicates (e.g. ‘believe’) with technical terms ‘bel’, ‘des’, ‘pret’, etc. We might then say that the MM produces the belief that I bel that p and the belief that I des that q; and that at some point further on in development, these beliefs acquire the content I believe that p, I desire that q, and so forth. That said, we propose to ignore this subtlety for the rest of the chapter.

The core claim of our theory is that the MM is a distinct mechanism that is specialized for detecting one's
11
own mental states. However, it is important to note that on our account of mindreading, the MM is not the
only mental mechanism that can generate representations with the content I believe that p. Representations
of this sort can also be generated by ToMI. Thus it is possible that in some cases, the ToMI and the MM will
produce conflicting representations of the form I believe that p. For instance, if ToMI is deficient, then in some
cases it might produce an inaccurate representation with the content I believe that p which conflicts with
accurate representations generated by the MM. In these cases, our theory does not specify how the conflict
will be resolved or which representation will guide verbal behaviour and other actions. On our view, it is an
open empirical question how such conflicts will be resolved.
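Purely by way of illustration (this is our own gloss, not part of the theory), the two routes to a self‐attribution of belief can be sketched in a few lines of code. The function names and the candy/pencils contents are ours; the MM is modelled as a trivial copy‐and‐embed operation, and the ToMI as a deficient mechanism that attributes to the self whatever it currently takes to be true:

```python
# Illustrative sketch only: the Monitoring Mechanism (MM) copies each
# first-order belief p into the schema 'I believe that p'; a deficient ToMI
# can generate a conflicting self-attribution from what it takes to be true.

def mm_output(belief_box):
    """MM: embed each representation in the Belief Box in the belief schema."""
    return {f"I believe that {p}" for p in belief_box}

def tomi_output(taken_to_be_true):
    """A deficient ToMI: self-attribute whatever is currently taken to be true."""
    return {f"I believe that {p}" for p in taken_to_be_true}

belief_box = {"there is candy in the box"}
taken_to_be_true = {"there are pencils in the box"}

mm = mm_output(belief_box)             # accurate self-attribution
tomi = tomi_output(taken_to_be_true)   # inaccurate self-attribution

# The two mechanisms can disagree; the theory leaves resolution open.
conflict = mm != tomi
```

Nothing in the sketch decides which representation guides behaviour; as the text says, that is left as an open empirical question.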

4.3.2. The Monitoring Mechanism and perceptual states


Of course, the MM theory is not a complete account of self‐awareness. One important limitation is that the
MM is proposed as the mechanism underlying self‐awareness of one's propositional attitudes, and it is quite
likely that the account cannot explain awareness of one's own perceptual states. Perceptual states
obviously have phenomenal character, and there is a vigorous debate over whether this phenomenal
character is fully captured by a representational account (e.g. Tye 1995; Carruthers 2000; Block 2003). If
perceptual states can be captured by a representational or propositional account, then perhaps the MM can
be extended to explain awareness of one's own perceptual states. For, as noted above, our proposed MM
simply copies representations into representation schemas; for example, it copies representations from the
Belief Box into the schema ‘I believe that __’. However, we are sceptical that perceptual states can be
entirely captured by representational accounts, and as a result, we doubt that our MM theory can adequately
explain our awareness of our own perceptual states. Nonetheless, we think it is plausible that some kind of
monitoring account (as opposed to a TTSA account) might apply to awareness of one's own perceptual
states. Since it will be important to have a sketch of such a theory on the table, we will provide a brief outline
of what the theory might look like.

In specifying the architecture underlying awareness of one's own perceptual states, the first move is to posit
a ‘Percept Box’. This device holds the percepts produced by the perceptual processing systems. We propose
that the Percept Box feeds into the Belief Box in two ways. First and most obviously, the contents of the
Percept Box lead the subject to have beliefs about the world around her, by what we might call a Percept‐to‐
Belief Mediator. For instance, if a normal adult looks into a quarry, her perceptual system will produce
percepts that will, ceteris paribus, lead her to form the belief that there are rocks down there. Something at
least roughly similar is presumably true in dogs, birds, and frogs. Hence, there is a mechanism (or set of
mechanisms) that takes percepts as input and produces beliefs as output. However, there is also, at least in
normal adult humans, another way that the Percept Box feeds into the Belief Box—we form beliefs about our
percepts. For example, when looking into a quarry I might form the belief that I see rocks. We also form
beliefs about the similarity between percepts—for example, this toy rock looks like that real rock. To explain
this range of capacities, we tentatively propose that there is a set of Percept‐Monitoring Mechanisms that
take input from the Percept Box and produce beliefs about the percepts. We represent this account in Figure
4.7. Note that the PMM will presumably be a far more complex mechanism than the MM. For the PMM must
take perceptual experiences and produce representations about those perceptual experiences. We have no
idea how to characterize this further in terms of cognitive mechanisms, and as a result, we are much less
confident about this account than we are about the MM account.

Fig. 4.7

Percept‐monitoring mechanism theory
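The two pathways out of the Percept Box can be rendered as a toy sketch (the function names are our own labels for the mechanisms just described, and the quarry example follows the text):

```python
# Toy sketch of the two routes from the Percept Box into the Belief Box.
# Names are illustrative labels, not the authors' formalism.

def percept_to_belief_mediator(percept):
    """Form a belief about the world on the basis of a percept."""
    return f"there are {percept} down there"

def percept_monitoring_mechanism(percept):
    """PMM: form a belief about the percept itself."""
    return f"I see {percept}"

percept_box = ["rocks"]
belief_box = []

for percept in percept_box:
    belief_box.append(percept_to_belief_mediator(percept))    # world-directed
    belief_box.append(percept_monitoring_mechanism(percept))  # percept-directed
```

The sketch deliberately makes the PMM look as simple as the mediator; as the text stresses, the real PMM must operate on perceptual experiences themselves, and how it does so is left uncharacterized.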

4.4. Developmental Evidence: The Theory Theory of Self‐Awareness vs. the Monitoring Mechanism Theory

In this section and the one to follow, we will discuss the empirical arguments for and against the theory
theory account of self‐awareness. But before we present those arguments, it may be useful to provide a brief
reminder of the problems we have raised for various versions of the TTSA:

1. Version 1 looks to be hopelessly implausible; it cannot handle some of the most obvious facts about
self‐awareness.

2. Version 2 is a mystery theory; it maintains that there is a special source of information exploited in
reading one's own mind, but it leaves the source of this additional information unexplained.

3. Version 3 faces the embarrassment that if information about the representations in the Belief Box &
Desire Box is available, then no rich body of information about the mind is needed to explain self‐
awareness; ToMI has nothing to do.
We think that these considerations provide an important prima facie case against the TTSA, though we also
think that, as in any scientific endeavour, solid empirical evidence might outweigh the prima facie
considerations. However, it is our contention that the empirical evidence produced by advocates of TTSA
does not support their theory over our Monitoring Mechanism theory. Rather, we shall argue, in some cases
both theories can explain the data about equally well, while in other cases the Monitoring Mechanism
theory has a clear advantage over the TTSA.

The best‐known and most widely discussed argument for the theory theory of self‐awareness comes from
developmental work charting the relation between performance on mindreading tasks for oneself and for
others. The TTSA predicts that subjects’ performance on mindreading tasks should be about equally good
(or equally bad) whether the tasks are about one's own mental states or the mental states of another person.
In perhaps the most systematic and interesting argument for the TTSA, Gopnik and Meltzoff maintain that
there are indeed clear and systematic correlations between performance on mindreading tasks for self and
for others (see Table 4.1, reproduced from Gopnik and Meltzoff 1994: table 10.1). For instance, Gopnik and
Meltzoff note that children succeed at perceptual mindreading tasks for themselves and others before the
age of 3. Between the ages of 3 and 4, children begin to succeed at desire mindreading tasks for self and for
others. And at around the age of 4, children begin to succeed at the false belief task for self and for others.
‘The evidence’, Gopnik and Meltzoff maintain,

suggests that there is an extensive parallelism between children's understanding of their own
mental states and their understanding of the mental states of others. . . . In each of our studies,
children's reports of their own immediately past psychological states are consistent with their
accounts of the psychological states of others. When they can report and understand the
psychological states of others, in the cases of pretense, perception, and imagination, they report
having had those psychological states themselves. When they cannot report and understand the
psychological states of others, in the case of false beliefs and source, they do not report that they
had those states themselves. Moreover, and in some ways most strikingly, the intermediate case of
desire is intermediate for self and other. (1994: 179–80)
Table 4.1 Children's knowledge of their own mental states and those of others

States | Others | Self

Easy

Pretence | Before age 3 (Flavell et al. 1987) | Before age 3 (Gopnik and Slaughter 1991)
Imagination | Before age 3 (Wellman and Estes 1986) | Before age 3 (Gopnik and Slaughter 1991)
Perception (Level 1) | Before age 3 (Flavell et al. 1981) | Before age 3 (Gopnik and Slaughter 1991)

Intermediate

Desire | Age 3–4 (Flavell et al. 1990) | Age 3–4 (Gopnik and Slaughter 1991)

Difficult

Source of belief | After age 4 (O'Neill et al. 1992) | After age 4 (Gopnik and Graf 1988)
False belief | After age 4 (Wimmer and Perner 1983) | After age 4 (Gopnik and Astington 1991)

Source: From Gopnik and Meltzoff 1994: 180.

This ‘extensive parallelism’ is taken to show that ‘our knowledge of ourselves, like our knowledge of others,
is the result of a theory’ (Gopnik and Meltzoff 1994: 168). Thus the argument purports to establish a broad‐
based empirical case for the theory theory of self‐awareness. However, on our view quite the opposite is the
case. In the pages to follow we will try to show that the data don't provide any support for the TTSA over the
Monitoring Mechanism theory that we have proposed, and that some of the data that are comfortably
compatible with MM cannot be easily explained by the TTSA. Defending this claim is rather a long project,
but fortunately the data are intrinsically fascinating.

4.4.1. The parallelism prediction


Before we proceed to the data, it is important to be clear about the structure of Gopnik and Meltzoff's
argument and of our counter‐argument in favour of the Monitoring Mechanism theory. If Gopnik and
Meltzoff are right that there is an ‘extensive parallelism’, that would support the TTSA because the TTSA
predicts that there will be parallel performance on parallel mindreading tasks for self and other. According to
the TTSA, in order to determine one's own mental states, one must exploit the same ToMI that one uses to
determine another's mental states. So, if a child's ToMI is not yet equipped to solve certain third‐person
tasks, then the child should also be unable to solve the parallel rst‐person tasks.

By contrast, for many of the tasks we will consider, our theory simply doesn't make a prediction about
whether there will be parallel performance on self‐ and other‐versions of the tasks. On our theory, the
special purpose mechanisms for detecting one's own mental states (MM & PMM) are quite independent
from ToMI, which plays a central role in processes of reasoning about mental states and detecting the
mental states of others. Hence, the ability to detect one's own mental states and the ability to detect
another's mental states need not show similar developmental trajectories, though in some cases they might.
What our theory does predict is that the capacity to detect one's own mental states, though not necessarily
the capacity to reason about them, should emerge quite early, since the theory claims that the MM and the
PMM are innate and on‐line quite early in development. Also, as noted in Section 4.3, our theory allows
for the possibility that the ToMI can be used in attributing mental states to oneself. So it may well turn out
that sometimes subjects produce inaccurate self‐attributions because they are relying on the ToMI. Since
our theory provides no a priori reason to expect extensive parallel performance in detecting mental states in
oneself and others, if there is extensive parallelism our theory would be faced with a major challenge—it
would need to provide some additional and independently plausible explanation for the existence of the
parallelism in each case where it is found. But if, as we shall argue, the parallelism is largely illusory, then it
is the TTSA that faces a major challenge—it has to provide some plausible explanation for the fact that the
parallelism it predicts does not exist.

4.4.2. TTSA meets data


Gopnik and Meltzoff argue for the TTSA by presenting a wide range of cases in which, they maintain, subjects
show parallel performance on self and other versions of mindreading tasks, and at first glance the parallels
look very impressive indeed. However, we will argue that on closer inspection this impression is quite
misleading. In some cases, there really is parallel performance, but these cases do not support the TTSA
over our MM theory, since in these cases both theories do about equally well in explaining the facts; in some
cases, the evidence for parallel performance is dubious; and in several other cases, there is evidence that
performance is not parallel. These cases are of particular importance since they are compatible with the MM
account and prima facie incompatible with the TTSA. In the remainder of this section we will consider each
of these three classes of cases.

Cases where the parallelism is real

The ‘easy’ tasks

There is a range of tasks that Gopnik and Meltzoff classify as easy for other and easy for self. They claim that
pretence, imagination, and perception (level 1 perspective taking) are understood for both self and other
before age 3. At least on some tasks, this claim of parallel performance seems to be quite right. Simple
perceptual tasks provide perhaps the clearest example. Lempers and colleagues (Lempers et al. 1977) found
that 2½‐year‐old children succeeded at ‘level 1’ perspective‐taking tasks, in which the children had to
determine whether another person could see an object or not. As we noted in Section 3.3.3, if a young child is
shown that a piece of cardboard has a picture of a rabbit on one side and a picture of a turtle on the other,
and if the child is then shown the turtle side, the child can correctly answer that the person on the other side
of the cardboard sees the picture of the rabbit. Using similar tasks, Gopnik and Slaughter (1991) found that
3‐year‐old children could also successfully report their own past perceptions. As Gopnik and Meltzoff
characterize it, this task is ‘easy’ for other and ‘easy’ for self, and Gopnik and Meltzoff put forward such
cases as support for the TTSA.

As we see it, however, the fact that level 1 perspective‐taking tasks are easy for other and for self does not
count as evidence for the TTSA over our MM theory. To see why, let us consider first the self case and then
the other case. On our account, MM is the mechanism responsible for self‐awareness of propositional
attitudes and, we have tentatively suggested, another mechanism (or family of mechanisms), the Percept‐
Monitoring Mechanism, underlies awareness of one's own perceptual states. The PMM, like the MM, is
hypothesized to be innate and to come on‐line quite early in development. Thus the PMM is up and running
by the age of 2½, well before ToMI is fully in place. So our theory predicts that quite young children should
be able to give accurate reports about their own perceptual states. Let's turn now to the other case. Both the
TTSA and our theory maintain that the detection of mental states in others depends on ToMI and, like
advocates of TTSA, we think that evidence on visual perspective taking (e.g. Lempers et al. 1977) shows that
part of ToMI is on‐line by the age of 2½. It is of some interest to determine why the part of ToMI that
subserves these tasks emerges as early as it does, though neither the TTSA nor our theory currently has any
explanation to offer. For both theories it is just a brute empirical fact. So here's the situation: our theory
predicts that awareness of one's own perceptions will emerge early, and has no explanation to offer for why
the part of ToMI that subserves the detection of perceptual states in others emerges early. By contrast, TTSA
predicts that both self and other abilities will emerge at the same time, but has no explanation to offer for
why they both emerge early. By our lights this one is a wash. Neither theory has any clear explanatory
advantage over the other.

Much the same reasoning shows that Gopnik and Meltzoff's cases of pretence and imagination do not lend
any significant support to the TTSA over our theory. There is some evidence that by the age of 3 children
have some understanding of pretence and imagination in others (e.g. Wellman and Estes 1986), though as
we will see in Section 4.4.2, there is also some reason for scepticism. However, whatever the ontogeny is for
detecting pretence and imagination in others, the TTSA account can hardly offer a better explanation than
our account, since we agree with advocates of TTSA that ToMI is centrally involved in this process, and
neither we nor the defenders of TTSA have any explanation to offer for the fact that the relevant part of
ToMI emerges when it does. As in the case of perception, our theory does have an explanation for the fact
that the ability to detect one's own pretences and imaginings emerges early, since on our view this process
is subserved by the MM which is up and running by the age of 2½, but we have no explanation for the fact
(if indeed it is a fact) that the part of ToMI that subserves the detection of pretences and imaginings in
others also emerges early. The TTSA, on the other hand, predicts that self and other abilities will both
emerge at the same time, but does not explain why they both emerge early. So here, as before, neither
theory has any obvious explanatory advantage over the other.

Sources of belief

A suite of studies by Gopnik, O'Neill, and their colleagues (Gopnik and Graf 1988; O'Neill and Gopnik 1991;
O'Neill et al. 1992) shows that there is a parallel between performance on source of belief tasks for self and
for others. In the self‐versions of these tasks, children came to find out which objects were in a drawer either
by seeing the object, being told, or inferring from a simple cue. After establishing that the child knows what
is in the drawer, the child is asked ‘How do you know that there's an x in the drawer?’ This question closely
parallels the question used to explore children's understanding of the sources of another's belief (O'Neill et
al. 1992). O'Neill and her colleagues found that while 4‐year‐olds tended to succeed at the other‐person
version of the task, 3‐year‐olds tended to fail it; similarly, Gopnik and Graf (1988) found that 4‐year‐olds
tended to succeed at the self‐version of the task, but 3‐year‐olds tended to fail it. For instance, 3‐year‐olds
often said that their knowledge came from seeing the object when actually they had been told about the
object, and 3‐year‐olds made similar errors when judging the source of another person's knowledge.

These results are interesting and surprising, but they are orthogonal to the issue at hand. The Monitoring
Mechanism posited in our theory is a mechanism for detecting mental states, not for reasoning about them.
But questions about the sources of one's beliefs or knowledge cannot be answered merely by detecting one's
own mental states. Rather, questions about how you gained knowledge fall into the domain of reasoning
about mental states, and that job, we are assuming, is performed by the ToMI. So, on our theory, questions
about sources will implicate the ToMI both for self and other. Hence, our theory, like the TTSA, predicts that
there will be parallel performance on tasks like the source tasks.
The relevant but dubious data

In Gopnik and Meltzoff's table displaying extensive parallelism, there are two remaining cases that cannot
be dismissed as irrelevant. However, we will argue that the cases fall far short of clear support for the TTSA.

False belief

In Chapter 3, we discussed at length the well‐known finding that young children fail the ‘false belief task’
(see Sections 3.2.2 and 3.3). On closely matched tasks, Gopnik and Astington (1988) found a correlation
between failing the false belief task for another and failing it for oneself. Gopnik and Astington (1988)
presented children with a candy box and then let the children see that there were really pencils in the box.
Children were asked, ‘What will Nicky think is in the box?’ and then, ‘When you first saw the box, before we
opened it, what did you think was inside it?’ Children's ability to answer the question for self was
significantly correlated with their ability to answer the question for other. Thus, here we have a surprising
instance of parallel performance on tasks for self and other. This is, of course, just the outcome that the
TTSA would predict. For the TTSA maintains that ToMI is crucial both in the detection of other people's
beliefs and in the detection of one's own. Thus if a child's ToMI has not yet developed to the point where it
can detect other people's beliefs in a given situation, it is to be expected that the child will also be unable to
detect her own beliefs in that context. And this, it appears, is just what the experimental results show.

What about our theory? What explanation can it offer for these results? The first step in answering this
question is to note that in the self version of the false belief task, the child is not actually being asked to
report on her current belief, but rather to recall a belief she had in the recent past. Where might such
memories come from? The most natural answer, for a theory like ours, is that when the child first sees the
box she believes that there is candy in it, and the MM produces a belief with the content I believe that there is
candy in the box. As the experiment continues and time passes that belief is converted into a past tense belief
whose content is (roughly) I believed that there was candy in the box. But, of course, if that were the end of the
story, it would be bad news for our theory, since when asked what she believed when she first saw the box,
the child reports that she believed that there were pencils in the box. Fortunately, that is not the end of the
story. For, as we noted in Section 4.3.1, in our theory MM is not the only mechanism capable of generating
beliefs with the content I believe(d) that p. ToMI is also capable of producing such beliefs, and sometimes
ToMI may produce a belief of that form that will conflict with a belief produced by MM. That, we propose, is
exactly what is happening in the Gopnik and Astington experiment when younger children fail to report
their own earlier false belief. As the results in the other‐version of the task indicate, the ToMI in younger
children has a strong tendency to attribute beliefs that the child actually believes to be true. So when asked
what she believed at the beginning of the experiment, ToMI mistakenly concludes that I believed that there
were pencils in the box. Thus, on our account, there will be two competing and incompatible
representations in the child's Belief Box. And to explain the fact that the child usually relies on the mistaken
ToMI‐generated belief, rather than on the correct MM‐generated belief, we must suppose that the memory
trace is relatively weak, and that when the child's cognitive system has to decide which belief about her past
belief to rely on, the MM‐generated memory trace typically loses.

At this point, we suspect, a critic might protest that this is a singularly unconvincing explanation. There is,
the critic will insist, no reason to think that the MM‐generated memory will typically be weaker than the
ToMI‐generated belief; it is just an ad hoc assumption that is required to get our theory to square with the
facts. And if this were the end of the story, the critic would be right. Fortunately for us, however, this is not
the end of the story. For there is evidence that provides independent support for our explanation and
undercuts the TT account. Recent work by German and Leslie exploring performance on self‐ and other‐
versions of the false belief task indicates that if memory enhancements are provided, young children's
performance on self‐versions improves, while their performance on other‐versions stays about the same. German
and Leslie devised a task in which a child would hide a biscuit and then search for it in the wrong place,
because it had been moved when the child was behind a screen. In one condition, the child was then shown a
videotape of the entire sequence of events—hiding, moving, and searching—and asked, at the appropriate
point, ‘Why are you looking there?’ and then, ‘When you were looking for the biscuit, where did you think
the biscuit was?’ In another condition, after the same hiding, moving, and searching sequence, the
videotape was ‘accidentally’ rewound too far, and the child watched another child in an identical situation.
At the appropriate point, the child was asked, ‘Why was she looking there?’ and ‘When she was looking for
the biscuit, where did she think the biscuit was?’ German and Leslie found that children who were shown
their own mistaken search were much more likely to offer a false belief explanation and to attribute a false
belief than were children who were shown another's mistaken search (German and Leslie, forthcoming).
This fits nicely with our proposed explanation for why young children fail the false belief task for the self.
However, it is difficult to see how an advocate of the TTSA could explain these results. For according to the
TTSA, if the child has a defective or immature ToMI, the child should make the same mistakes for himself
that he does for another. If there is no MM to generate a correct belief which becomes a correct memory,
then giving memory enhancements should not produce differential improvement.
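The proposed competition between the MM‐generated memory and the ToMI‐generated belief can be caricatured in a few lines. The strength values below are invented purely for illustration; the theory itself assigns no numbers, claiming only that the memory trace is relatively weak in young children and can be strengthened by memory enhancements:

```python
# Caricature of the proposed competition between self-attributions.
# Strength values are invented for illustration only.

def report(mm_memory, tomi_belief, mm_strength, tomi_strength):
    """The stronger representation guides the child's verbal report."""
    return mm_memory if mm_strength > tomi_strength else tomi_belief

mm_memory = "I believed that there was candy in the box"
tomi_belief = "I believed that there were pencils in the box"

# Standard task: the memory trace is weak, so the mistaken ToMI attribution wins.
standard = report(mm_memory, tomi_belief, mm_strength=0.2, tomi_strength=0.8)

# Memory-enhanced condition: the MM-generated memory can now win, while
# third-person performance, which has no MM trace, is unchanged.
enhanced = report(mm_memory, tomi_belief, mm_strength=0.9, tomi_strength=0.8)
```

On the TTSA, by contrast, there is no MM trace to strengthen, so nothing in the model predicts the differential improvement that memory enhancement produces.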

Desire

Another source of data that might offer support to the TTSA comes from work on understanding desires.
Gopnik and Meltzoff maintain that 3‐year‐olds are just beginning to understand desire in others, and Gopnik
and Slaughter found that a significant percentage of children make mistakes about their own immediately
past desires. The Gopnik and Slaughter own‐desire tasks were quite ingenious. In one of the tasks, they went
to a daycare centre just before snack time and asked the child whether he was hungry. The hungry child said
‘Yes’ and proceeded to eat all the snack he desired. Then the experimenter asked, ‘When I first asked you,
before we had the snack, were you hungry then?’ (1991: 102). Gopnik and Slaughter found that 30–40 per
cent of the 3‐year‐olds mistakenly claimed that they were in their current desire state all along. This
surprising result is claimed to parallel Flavell et al.'s (1990) finding that a significant percentage of 3‐year‐
olds make mistakes on desire tasks for others. In the Flavell tasks, the child observes Ellie make a disgusted
look after tasting a cookie, and the child is asked ‘Does Ellie think it is a yummy tasting cookie?’ (Flavell et
al. 1990: 918). Gopnik and Meltzoff remark that the ‘absolute levels of performance were strikingly similar’
to the results reported by Flavell et al. (Gopnik and Meltzoff 1994: 179), and they cite this as support for the
parallel performance hypothesis.

The central problem with this putative parallel is that it is not at all clear that the tasks are truly parallel. In
Gopnik and Slaughter's tasks, 3‐year‐olds are asked about a desire that they don't currently have because it
was recently satisfied. It would be of considerable interest to couple Gopnik and Slaughter's own‐desire
version of the hunger task with a closely matched other‐person version of the task. For instance, the
experiment could have a satiated child watch another child beginning to eat at snack time and ask the
satiated child, ‘Is he hungry?’ If the findings on this task paralleled findings on the own‐desire version, that
would indeed be an important parallel. Unfortunately, the putatively parallel task in Flavell et al. that
Gopnik and Meltzoff cite is quite different from the Gopnik and Slaughter task. In the Flavell tasks, the child
is asked whether the target thinks the cookie is ‘yummy tasting’ (Flavell et al. 1990: 918). The task doesn't
explicitly ask about desires at all. Flavell and his colleagues themselves characterize the task as exploring
children's ability to attribute value beliefs. Further, unlike the Gopnik and Slaughter task, the Flavell et al.
tasks depend on expressions of disgust. Indeed, there are so many differences between these tasks that we
think it is impossible to draw any conclusions from the comparison.

In this section we have considered the best cases for the TTSA, and it is our contention that the data we have
discussed do not provide much of an argument in favour of the TTSA. For there are serious empirical
problems with both cases, and even if we ignore these problems, the data certainly don't establish the
‘extensive parallelism’ that the TTSA predicts. Moreover, as we will see in the next section, there are
results not discussed by Gopnik and Meltzoff which, we think, strongly suggest that the parallelism on
which their argument depends simply does not exist.

Evidence against the self‐other parallelism


In this section we will review a range of data indicating that often there is not a parallel between
performance on self and other versions of mindreading tasks. We are inclined to think that these data
completely uproot Gopnik and Meltzoff's parallelism argument, and constitute a major challenge to the
theory theory of self‐awareness.

Knowledge vs. ignorance

In knowledge versus ignorance experiments, Wimmer and colleagues found a significant difference between
performance on closely matched tasks for self and other (Wimmer et al. 1988). After letting children in two
conditions either look in a box or not look in a box, the researchers asked them, ‘Do you know what is in the
box or do you not know that?’ The 3‐year‐olds performed quite well on this task. For the other‐person
version of the task, they observed another who either looked or didn't look into a box. They were then asked:
‘Does [name of child] know what is in the box or does she [he] not know that?’ (1988: 383). Despite the
almost verbatim similarity between this question and the self‐version, the children did significantly worse
on the other‐version of this question (see also Nichols 1993). Hence, we have one case in which there is a
significant difference between performance on a mindreading task for self and performance on the task for
other. And there's more to come.
Pretence and imagination

Gopnik and Meltzoff maintain that children under age 3 understand pretence for others and for self.
Although there are tasks on which young children exhibit some understanding of pretence (e.g. Wellman
and Estes 1986), the issue has turned out to be considerably more complicated. It is clear from the literature
on pretend play that from a young age, children are capable of reporting their own pretences. Indeed,
Gopnik and Slaughter (1991) show that 3‐year‐old children can easily answer questions about their past
pretences and imaginings. Despite this facility with their own pretences, it doesn't seem that young children
have an adequate ‘theory’ of pretence. For instance, Lillard's (1993) results suggest that children as old as 4
years think that someone can pretend to be a rabbit without knowing anything about rabbits. More
importantly for present purposes, although young children have no trouble detecting and reporting their
own pretences (e.g. Leslie 1994a), children seem to be significantly worse at recognizing pretence in others
(Flavell et al. 1987; Rosen et al. 1997). Indeed, recent results from Rosen et al. (1997) indicate that young
children have a great deal of difficulty characterizing the pretences of others. Rosen and his colleagues had
subjects watch a well‐known television show in which the characters were sitting on a bench but pretending
to be on an airplane. The researchers asked the children: ‘Now we're going to talk about what everyone on
Barney is thinking about. Are they thinking about being on an airplane or about sitting on a bench outside
their school?’ (1997: 1135). They found that 90 per cent of the 3‐year‐olds answered incorrectly that
everyone was thinking about sitting on a bench. By contrast, in Gopnik and Slaughter's experiments, 3‐year‐
old children did quite well on questions about what they themselves were pretending or imagining. In one of
their pretence tasks, the child was asked to pretend that an empty glass had orange juice in it; the glass was
turned over, and the child was subsequently asked to pretend that it had hot chocolate in it. The child was
then asked, ‘When I first asked you. . . . What did you pretend was in the glass then?’ (Gopnik and Slaughter
1991: 106). Children performed near ceiling on this task. In Gopnik and Slaughter's imagination task, the
children were told to close their eyes and think of a blue doggie, then they were told to close their eyes and
think of a red balloon. The children were then asked, ‘When I first asked you. . . . , what did you think of
then? Did you think of a blue doggie or did you think of a red balloon?’ (Gopnik and Slaughter 1991: 106).
Over 80 per cent of the 3‐year‐olds answered this correctly. Although the Gopnik and Slaughter pretence and
imagination tasks aren't exact matches for the Rosen et al. task, the huge difference in the results suggests
that children do much better on pretence and imagination tasks for self than they do on pretence and
imagination tasks for another person. Hence, it seems likely that children can detect and report their own
pretences and imaginings before they have the theoretical resources to detect and characterize pretences
and imaginings in others.
Perspective taking

As we noted earlier, children as young as 2½ years are able to succeed at ‘level 1’ perspective‐taking tasks
both for others and for themselves. However, there is a cluster of more difficult perspective‐taking tasks,
‘level 2’ tasks, in which young children do significantly better in the self‐version than in the other‐version.
These tasks require the child to figure out how an object looks from a perspective that is different from
her own current perspective. In one task, for example, the child is shown a drawing of a turtle that looks to
be lying on its back when viewed from one position and standing on its feet when viewed from another
position. The child is asked whether the turtle is on its back or on its feet; then the child is asked how the
person across the table sees the turtle: on its back or on its feet. Children typically don't succeed at these
tasks until about the age of 4. However, contrary to the parallel performance hypothesis, Gopnik and
Slaughter (1991) found that 3‐year‐olds did well on a self‐version of the task. They had the child look at the
drawing of the turtle and then had the child change seats with the experimenter. The child was subsequently
asked, ‘When I first asked you, before we traded seats, how did you see the turtle then, lying on his back or
standing on his feet’ (1991: 106). Gopnik and Slaughter were surprised at how well the 3‐year‐olds did on this
task. They write, ‘Perhaps the most surprising finding was that performance on the level 2 perception task
turned out to be quite good, and was not significantly different from performance on the pretend task.
Seventy‐five percent of the 3‐year‐olds succeeded at this task, a much higher level of performance than the
33% to 50% reported by Masangkay et al. (1974) in the other person version of this task’ (Gopnik and
Slaughter 1991: 107). Here, then, is another example of a mindreading task in which the self‐version of the
task is significantly easier for subjects than the other‐version of the task. So we have yet another case in
which the TTSA's prediction of extensive parallelism is disconfirmed.

4.4.3. What conclusions can we draw from the developmental data?


We now want to step back from the details of the data to assess their implications for the debate between the
TTSA and our Monitoring Mechanism theory. To begin, let's recall what each theory predicts, and why. The
TTSA maintains that ToMI is centrally involved in detecting and reasoning about both one's own mental
states and other people's. But the TTSA makes no claims about when in the course of development various
components of ToMI are acquired or come on‐line. Thus TTSA makes no predictions about when specific
mindreading skills will emerge, but it does predict that any given mindreading skill will appear at about the
same time in self and other cases. MM, by contrast, maintains that ToMI is involved in detecting and
reasoning about other people's mental states and in reasoning about one's own mental states, but that a
separate Monitoring Mechanism (or a cluster of such mechanisms) is typically involved when we detect our
own mental states. MM also claims that the Monitoring Mechanism(s) come on‐line quite early in
development. Thus MM predicts that children will be able to detect (but not necessarily reason about) their
own mental states quite early in development. But it does not predict any particular pattern of correlation
between the emergence of the capacity to detect one's own mental states and the emergence of the capacity
to detect other people's mental states.

Which theory does better at handling the data we have reviewed? As we see it, the answer is clear: MM is
compatible with all the data we have reviewed, while some of the data are seriously problematic for the
TTSA. To make the point as clearly as possible, let's assemble a list of the various mindreading phenomena
we have reviewed:

1. Level 1 perspective taking. This emerges early for both self and other. TTSA predicts the parallel
emergence and is compatible with, but does not predict, the early emergence. MM predicts the early
emergence in the self case and is compatible with but does not predict the early emergence in the
other case. Neither theory has an advantage over the other.

2. Pretence and imagination. It is clear that self‐detection emerges early, as MM predicts. However, there
is some recent evidence indicating that detection and understanding of pretence in others does not
emerge until much later. If this is right, it is a problem for TTSA, though not for MM.

3. Sources of belief. The ability to identify sources of belief emerges at about the age of 4 in both the self
and the other case. Since this is a reasoning problem, not a detection problem, both theories make the
same prediction.

4. False belief. Recent evidence indicates that if memory enhancements are provided, young children do
better on the self‐version of false belief tasks than on the other‐version. This is compatible with MM
but quite problematic for TTSA.

5. Desire. The evidence available does not use well‐matched tasks, so no conclusions can be drawn about
either TTSA or MM.

6. Knowledge vs. ignorance. Three‐year‐olds do much better on the self‐version than on the other‐version.
This is compatible with MM but problematic for TTSA.

7. Level 2 perspective taking. Here again, 3‐year‐olds do better on the self‐version than on the other‐
version, which is a problem for TTSA but not for MM.

Obviously, the extensive parallelism between self and other cases on which Gopnik and Meltzoff rest their
case for the theory theory of self‐awareness is not supported by the data. Conceivably a resourceful advocate
of TTSA could offer plausible explanations for each of the cases in which the parallel predicted by TTSA
breaks down. But in the absence of a systematic attempt to provide such explanations we think it is clear
that the developmental evidence favours our theory of self‐awareness over the TTSA.

4.5. The Evidence from Autism: The Theory Theory of Self‐Awareness vs. The Monitoring Mechanism Theory

In addition to the developmental arguments, several authors have appealed to evidence on autism as
support for a theory theory account of self‐awareness (Baron‐Cohen 1989; Carruthers 1996; Frith and Happé
1999). On our view, however, the evidence from autism provides no support at all for TTSA. Before we
consider these arguments, we need to provide a bit of background to explain why data from autism are
relevant to the issue of self‐awareness. Studies of people with autism have loomed large in the literature in
mindreading ever since Baron‐Cohen, Leslie, and Frith (1985) reported some now famous results on the
performance of autistic individuals on the false belief task. As we noted in Section 3.2.2, Baron‐Cohen and
colleagues compared performance on false belief tasks in normal children, autistic children, and children
with Down's syndrome. They found that autistic subjects with a mean chronological age of about 12 and
mean verbal and non‐verbal mental ages of 9 years, 3 months, and 5 years, 5 months respectively failed the
false belief task (Baron‐Cohen et al. 1985). These subjects answered the way normal 3‐year‐olds do. By
contrast, the control group of Down's syndrome subjects matched for mental age performed quite well on
the false belief task. One interpretation of these results is that autistic individuals lack a properly
functioning ToMI mechanism. Although in Chapter 3 we argued that this does not provide an adequate
explanation of the mindreading deficits in autism, in this section we propose to assume, for argument's
sake, that the interpretation is correct. Our strategy will be to argue that even if it is granted that people with
autism have an impaired ToMI mechanism, the arguments to be considered in favour of TTSA still are not
plausible.

Now, if we assume that individuals with autism have an impaired ToMI, then, since the theory theory
account of self‐awareness claims that ToMI is implicated in the formation of beliefs about one's own
mental states, the TTSA predicts that autistic individuals should have deficits in this domain as well. If
people with autism lack a properly functioning ToMI mechanism and that mechanism is required for self‐
awareness, then autistic individuals should be unable to form beliefs about their own beliefs and other
mental states. In recent papers both Carruthers (1996) and Frith and Happé (1999) have maintained that
autistic individuals do indeed lack self‐awareness, and that this supports the TTSA account. In this section
we will consider three different arguments from the data on autism. One argument depends on evidence
that autistic children have difficulty with the appearance/reality distinction. A second argument appeals to
introspective reports of adults with Asperger's syndrome (autistic individuals with near normal IQs), and a
third, related, argument draws on autobiographical testimony of people with autism and Asperger's
syndrome.

4.5.1. Autism and the appearance/reality distinction


Both Baron‐Cohen (1989) and Carruthers (1996) maintain that the performance of autistic children on
appearance/reality tasks provides support for the view that autistic children lack self‐awareness, and hence
provides evidence for the TTSA. The relevant studies were carried out by Baron‐Cohen (1989), based on the
appearance/reality tasks devised by Flavell and his colleagues. Using those tasks, Flavell and his colleagues
found that children have difficulty with the appearance/reality distinction until about the age of 4 (Flavell et
al. 1986). For instance, after playing with a sponge that visually resembles a piece of granite (a ‘Hollywood
rock’), most 3‐year‐olds claim that the object both is a sponge and looks like a sponge. Baron‐Cohen found
that autistic subjects also have difficulty with the appearance/reality distinction. When they were allowed to
examine a piece of fake chocolate made out of plastic, for example, they thought that the object both looked
like chocolate and really was chocolate. ‘In those tasks that included plastic food,’ Baron‐Cohen reports, ‘the
autistic children alone persisted in trying to eat the object long after discovering its plastic quality. Indeed,
so clear was this perseverative behavior that the experimenter could only terminate it by taking the plastic
object out of their mouths’ (Baron‐Cohen 1989: 594).

Though we find Baron‐Cohen and Flavell et al.’s work on the appearance/reality distinction intriguing, we
are deeply puzzled by the suggestion that the studies done with autistic subjects provide support for the
theory theory account of self‐awareness. And, unfortunately, those who think that these studies do support
the TTSA have never offered a detailed statement of how the argument is supposed to go. At best they
have provided brief hints like the following:

[T]he mind‐blindness theory would predict that autistic people will lack adequate access to their
own experiences as such . . ., and hence that they should have difficulty in negotiating the contrast
between experience (appearance) and what it is an experience of (reality). (Carruthers 1996: 260–1)

[Three‐year‐old children] appear unable to represent both an object's real and apparent identities
simultaneously. . . . Gopnik and Astington (1988) argued that this is also an indication of the 3‐year‐
old's inability to represent the distinction between their representation of the object (its
appearance) and their knowledge about it (its real identity). In this sense, the A‐R distinction is a
test of the ability to attribute mental states to oneself. (Baron‐Cohen 1989: 591)

The prediction that this would be an area of difficulty for autistic subjects was supported, and this
suggests that these children . . . are unaware of the A‐R distinction, and by implication unaware of
their own mental states. These results suggest that when perceptual information contradicts one's
own knowledge about the world, the autistic child is unable to separate these, and the perceptual
information overrides other representations of an object. (Baron‐Cohen 1989: 595)
How might these hints be unpacked? What we have labelled the A/R Argument is our best shot at making
explicit what Carruthers and Baron‐Cohen might have had in mind. Though we are not confident that this is
the right interpretation of their suggestion, it is the most charitable reading we have been able to construct.
If this isn't what they had in mind (or close to it) then we really haven't a clue about how the argument is
supposed to work.

A/R Argument

If the theory theory of self‐awareness is correct then ToMI plays a crucial role in forming beliefs about one's
own mental states. Thus, since autistic subjects do not have a properly functioning ToMI mechanism they
should have considerable difficulty in forming beliefs about their own mental states. So autistic people will
typically not be able to form beliefs with contents like:

(1) I believe that that object is a sponge.

and

(2) I am having a visual experience of something that looks like a rock.

Perhaps (2) is too sophisticated, however. Perhaps the relevant belief that they cannot form but that normal
adults can form is something more like:

(2a) That object looks like a rock.

By contrast, since ToMI is not involved in forming beliefs about the non‐mental part of the world, autistic
subjects should not have great difficulty in forming beliefs like:

(3) That object is a sponge.

To get the correct answer in an appearance/reality task, subjects must have beliefs with contents like (3) and
they must also have beliefs with contents like (2) or (2a). But if the TTSA is correct then autistic subjects
cannot form beliefs with contents like (2) or (2a). Thus the TTSA predicts that autistic subjects will fail
appearance/reality tasks. And since they do in fact fail, this counts as evidence in favour of the TTSA.

Now what we find puzzling about the A/R Argument is that, while the data do indeed indicate that autistic
subjects fail the appearance/reality task, they fail it in exactly the wrong way. According to the A/R
Argument, autistic subjects should have trouble forming beliefs like (2) and (2a) but should have no trouble
in forming beliefs like (3). In Baron‐Cohen's studies, however, just the opposite appears to be the case. After
being allowed to touch and taste objects made of plastic that looked like chocolate or eggs, the autistic
children gave no indication that they had incorrect beliefs about what the object looked like. Quite the
opposite was the case. When asked questions about their own perceptual states, autistic children answered
correctly. They reported that the fake chocolate looked like chocolate and that the fake egg looked like an
egg. Where the autistic children apparently did have problems was just where the A/R Argument says they
should not have problems. The fact that they persisted in trying to eat the plastic chocolate suggests that
they had not succeeded in forming beliefs like (3)—beliefs about what the object really is. There are lots of
hypotheses that might be explored to explain why autistic children have this problem. Perhaps autistic
children have difficulty updating their beliefs on the basis of new information; perhaps they perseverate on
first impressions; perhaps they privilege visual information over the information provided by touch and
taste; perhaps the task demands are too heavy. But whatever the explanation turns out to be, it is hard to see
how the sorts of failures predicted by the TTSA—the inability to form representations like (1), (2), and (2a)
—could have any role to play in explaining the pattern of behaviour that Baron‐Cohen reports.

All this may be a bit clearer if we contrast the performance of autistic children on appearance/reality tasks
with the performance of normal 3‐year‐olds. The 3‐year‐olds also fail the task. But unlike the autistic
children who make what Baron‐Cohen calls ‘phenomenist’ errors (Baron‐Cohen 1989: 594), normal 3‐year‐
olds make what might be called ‘realist’ errors on the same sorts of tasks. Once they discover that the
Hollywood rock really is a sponge, they report that it looks like a sponge. Since there is reason to believe that
ToMI is not yet fully on‐line in 3‐year‐olds, one might think that the fact that 3‐year‐olds make ‘realist’
errors in appearance/reality tasks supports a theory theory account of self‐awareness. Indeed, Alison
Gopnik appears to defend just such a view. The appearance/reality task, she argues,

is another case in which children make errors about their current mental states as a result of their
lack of a representational theory [i.e. a mature ToMI] . . . Although it is not usually phrased with
reference to the child's current mental states, this question depends on the child's accurately
reporting an aspect of his current state, namely, the way the object looks to him. Children report
that the sponge‐rock looks to them like a sponge. To us, the fact that the sponge looks like a rock is
a part of our immediate phenomenology, not something we infer. . . . The inability to understand
the idea of false representations . . . seems to keep the child from accurately reporting perceptual
appearances, even though those appearances are current mental states. (Gopnik 1993: 93)

For our current purposes, the crucial point here is that Gopnik's argument, unlike the A/R Argument, is
perfectly sensible. Three‐year‐olds are not at all inclined to make ‘phenomenist’ errors on these tasks. Once
they have examined the plastic chocolate, they no longer believe that it really is chocolate, and they have no
inclination to eat it. Where the 3‐year‐olds go wrong is in reporting what plastic chocolate and Hollywood
rocks look like. And this is just what we should expect if, as the TTSA insists, ToMI is involved in forming
beliefs about one's own perceptual states.

At this point, the reader may be thinking that we have jumped out of the frying pan and into the fire. In
using Gopnik's argument to highlight the shortcomings of the A/R Argument, have we not also provided a
new argument for TTSA, albeit one that does not rely on data about autistic subjects? Our answer here is that
Gopnik's argument is certainly one that must be taken seriously. But her explanation is not the only one that
might be offered to account for the way in which 3‐year‐olds behave in appearance/reality tasks. The
hypothesis we favour is that though the Percept‐Monitoring Mechanisms that we posited in Section 4.3.2 are
in place in 3‐year‐olds, 3‐year‐olds fail the task because of heavy information‐processing demands in the
standard appearance/reality task. As it happens, there are the beginnings of such a theory in the literature,
and some nice evidence supporting it (Rice et al. 1997). In the standard appearance/reality task, the
successful subject must keep in mind several things at once. She must have in mind the reality of the object
—it's a sponge; she must also have in mind the typical appearance of sponges; further, she must have in
mind the typical appearance of rocks. This constitutes a serious informational load, and perhaps the
informational demands lead younger children to fail. If so, then easing the informational load should
improve the young child's performance. In fact, this is exactly what Rice and colleagues found. They first
presented subjects with a standard appearance/reality task. Subjects were then shown an ordinary rock and
asked to identify the object. After identifying the object, the subject was told to pick it up and feel it. The rock
was placed on the table and the subjects were asked ‘So, what is this really and truly?’ The same
procedure was then done with an ordinary sponge and finally with the sponge‐rock. At the end, all three
objects were on the table, with the sponge‐rock in the middle. The experimenter then pointed to the sponge‐
rock and asked, ‘Now, for real, is this really and truly a rock or is this really and truly a sponge?’ and ‘Now,
when you look at this with your eyes right now, does it look like a sponge or does it look like a rock?’ The
results of this experiment were impressive: 74 per cent of the 3‐year‐olds passed the task. This seems to
support the information‐processing explanation for why young children fail the appearance/reality task. For
in the Rice et al. experiment, the child does not need to keep in mind the typical appearance of rocks and the
typical appearance of sponges. She can simply consult the rock and the sponge that are anking the sponge‐
rock. More importantly for our purposes, the experiment indicates that children do indeed have access to
their percepts and can form beliefs about them. For if they lacked such access, presumably the information‐
processing aids would not help them to perform well on the task.

Let us briefly sum up this section. Our major conclusion is that, while the data about the performance of
autistic subjects on appearance/reality tasks are fascinating, they provide no evidence at all for the TTSA.
Moreover, while some of the data about the performance of normal 3‐year‐olds on appearance/reality tasks
is compatible with the TTSA, more recent data suggest that the difficulty that young children have with
some of these tasks can be traced to heavy information‐processing requirements they impose. So none of
the findings reviewed in this section suggests that TTSA is preferable to our MM theory.

4.5.2. Introspective reports and autobiographies from adults with Asperger's syndrome

The next two arguments we will consider are much more direct arguments for the TTSA, but, we maintain,
no more convincing. Carruthers (1996) and Frith and Happé (1999) both cite evidence from a recent study
on introspective reports in adults with Asperger's syndrome (Hurlburt et al. 1994). People with Asperger's
syndrome have normal intelligence levels, but they have a cluster of social deficits that has led researchers
to regard Asperger's syndrome as a type of autism (e.g. Frith 1991). The study on introspective reports is
based on a technique for ‘experience sampling’ developed by Russell Hurlburt. Subjects carry around a
beeper and are told, ‘Your task when you hear a beep is to attempt to “freeze” your current experience “in
mind,” and then to write a description of that experience in a . . . notebook which you will be carrying. The
experience that you are to describe is the one that was occurring at the instant the beep began . . .’ (Hurlburt
1990: 21).

Hurlburt and his colleagues had three adults with Asperger's syndrome carry out this experience sampling
procedure (Hurlburt et al. 1994). All three of the subjects were able to succeed at simple mindreading. The
researchers found that the reports of these subjects were considerably different from reports of normal
subjects. According to Hurlburt and colleagues, two of the subjects reported only visual images, whereas it is
common for normal subjects also to report inner verbalization, ‘unsymbolized thinking’, and emotional
feelings. The third subject didn't report any inner experience at all in response to the beeps.

Carruthers maintains that these data suggest ‘that autistic people might have severe difficulties of access to
their own occurrent thought processes and emotions’ (1996: 261). Frith and Happé also argue that the
evidence ‘strengthens our hypothesis that self‐awareness, like other awareness, is dependent on ToM’
(Frith and Happé 1999: 14).

As further support for the theory theory account of self‐awareness, Frith and Happé appeal to several
autobiographical essays written by adults with autism or Asperger's syndrome (1999). They argue that these
autobiographies indicate that their authors have significant peculiarities in self‐consciousness. Here are
several examples of autobiographical excerpts quoted by Frith and Happé:

‘When I was very young I can remember that speech seemed to be of no more significance than any
other sound. . . . I began to understand a few single words by their appearance on paper . . .’ (Jolliffe
et al. 1992: 13, quoted in Frith and Happé 1999: 15)

‘I had—and always had had, as long as I could remember—a great fear of jewellery. . . I thought
they were frightening, detestable, revolting.’ (Gerland 1997: 54, quoted in Frith and Happé 1999:
16)

‘It confused me totally when someone said that he or she had seen something I had been doing in a
different room.’ (Gerland 1997: 64, quoted in Frith and Happé 1999: 17)
4.5.3. What conclusions can we draw from the data on introspection in autism?

We are inclined to think that the data cited by Carruthers (1996) and Frith and Happé (1999) provide a novel
and valuable perspective on the inner life of people with autism. However, we do not think that the
evidence lends any support at all to the TTSA over the MM theory that we advocate. Quite to the contrary, we
are inclined to think that if the evidence favours either theory, it favours ours.

What the data do strongly suggest is that the inner lives of autistic individuals differ radically from the inner
lives of most of us. Images abound, inner speech is much less salient, and autistic individuals almost
certainly devote much less time to thinking or wondering or worrying about other people's inner lives. As we
read the evidence, however, it indicates that people with autism and Asperger's syndrome do have access to
their own inner lives. They are aware of, report, and remember their own beliefs and desires as well as their
occurrent thoughts and emotions.

Hurlburt, Happé, and Frith (1994) revisited


In the experience sampling study, there were a number of instances in which subjects clearly did report
their occurrent thoughts. For example, one of the subjects, Robert, reported that

he was ‘thinking about’ what he had to do today. This ‘thinking about’ involved a series of images
of the tasks he had set for himself. At the moment of the beep, he was trying to figure out how to
find his way to the Cognitive Development Unit, where he had his appointment with us. This
‘trying to figure out’ was an image of himself walking down the street near Euston station.
(Hurlburt et al. 1994: 388)

On another occasion, Robert reported that he was

‘trying to figure out’ why a key that he had recently had made did not work. This figuring‐out
involved picturing an image of the key in the door lock, with his left hand holding and turning the
key . . . . The lock itself was seen both from the outside . . . and from the inside (he could see the
levers inside the lock move as the blades of the key pushed them along). (Hurlburt et al. 1994: 388)

A second subject, Nelson, reported that

he was ‘thinking about’ an old woman he had seen earlier that day. This thinking‐about involved
‘picturizing’ (Nelson's own term for viewing an image of something) the old woman. . . . There was
also a feeling of ‘sympathy’ for this woman, who (when he actually saw her earlier) was having
difficulty crossing the street. (Hurlburt et al. 1994: 390)

In all three of these cases it seems clear that the subjects are capable of reporting their current thinking and,
in the latter case, their feelings. Though, as we suggested earlier, it may well be the case that the inner lives
that these people are reporting are rather different from the inner lives of normal people.

Perhaps even more instructive is the fact that Hurlburt and his colleagues claim to have been surprised at
how well the subjects did on the experience sampling task. Hurlburt et al. write: ‘While we had expected a
relative inability to think and talk about inner experience, this was true for only one of the subjects, Peter,
who was also the least advanced in terms of understanding mental states in the theory of mind battery’
(1994: 393). Moreover, even Peter, although he had difficulty with the experience sampling method, could
talk about his current experience. Thus Frith and Happé (1999: 14) report that ‘Although Peter was unable to
tell us about his past inner experience using the beeper method, it was possible to discuss with him current
ongoing inner experience during interviews.’ So, far from showing that the theory theory account of self‐
awareness is correct, these data would seem to count against the TTSA. For even Peter, who is likely to have
had the most seriously abnormal ToMI, was capable of reporting his inner experiences.

It is true that all of the subjects had some trouble with the experience sampling task, and that one of them
could not do it at all. But we think that this should be expected in subjects whose ToMI is functioning poorly,
even if, as we maintain, the ToMI plays no role in self‐awareness. Advocates of TTSA maintain that ToMI plays
a central role in detecting mental states in other people and in reasoning about mental states—both their
own and others’. And we are in agreement with both of these claims. It follows that people who have poorly
functioning ToMI mechanisms will find it difficult to attribute many mental states to other people and will
do little or no reasoning about mental states. So thoughts about mental states will not be very useful or
salient to them. Given the limited role that thoughts about mental states play in the lives of people with
defective ToMI mechanisms, it is hardly surprising that, when asked to describe their experience, they
sometimes do not report much. An analogy may help to make the point. Suppose two people are asked to
look at a forest scene and report what they notice. One of the two is an expert on the birds of the region and
knows a great deal about their habits and distribution. The other knows comparatively little about birds and
has little interest in them. Suppose further that there is something quite extraordinary in the forest scene;
there is a bird there that is rarely seen in that sort of environment. We would expect that that bird would
gure prominently in the expert's description, though it might not be mentioned at all in the novice's
description. Now compared to autistic individuals, normal subjects are experts about mental states. They
know a lot about them, they think a lot about them, and they care a lot about them. So it is to be expected
p. 187 that autistic subjects—who have a comparatively impoverished grasp of mental states—will often fail to
spontaneously mention their own mental states even if, like the person who knows little about birds, they
can detect and report their own mental states if their attention is drawn to them by their interlocutor.

Autobiographies revisited
In the cases of autobiographical reflections, again, we maintain, a number of the examples cited by Frith
and Happé are prima facie incompatible with the conclusion they are trying to establish. In the
autobiographies, adults with autism or Asperger's syndrome repeatedly claim to recall their own childhood
thoughts and other mental states. This is evident in the three quotes from Frith and Happé that we
reproduced in Section 4.5.2, and in this respect, the passages from Frith and Happé are not at all unusual.
Here are three additional examples of autobiographical comments from adults with Asperger's syndrome:

‘I remember being able to understand everything that people said to me, but I could not speak back. . . . One day my mother wanted me to wear a hat when we were in the car. I logically thought to myself that the only way I could tell her that I did not want to wear the hat was to scream and throw it on the car floor.’ (Grandin 1984: 145)

‘When I was 5 years old I craved deep pressure and would daydream about mechanical devices which I could get into and be held by them. . . . As a child I wanted to feel the comfort of being held, but then I would shrink away for fear of losing control and being engulfed when people hugged me.’ (Grandin 1984: 151)

‘I didn't talk until I was almost five, you know. Before I started talking I noticed a lot of things, and now when I tell my mother she is amazed I remember them. I remember that the world was really scary and everything was over‐stimulating.’ (Reported in Dewey 1991: 204)

If these recollections are accurate, then these individuals must have been aware of their own mental states
even though, at the time in question, they could not reliably attribute beliefs to other people.
4.6. Double Dissociations and the Monitoring Mechanism Theory

We have argued that the evidence from autism does not support the theory theory of self‐awareness over our
theory. Indeed, it seems that the evidence provides support for our theory over the TTSA. In this section, we
want to strengthen the case for the Monitoring Mechanism theory by arguing that it provides a natural
explanation of a pattern of evidence on autism and certain other psychopathologies.

p. 188 One important difference between our MM theory and all versions of the TTSA is that on our theory there is a theoretically motivated way to divide mindreading tasks and the mechanisms underlying them into two distinct categories. One category includes the Monitoring Mechanisms which are responsible for the detection of one's own mental states. The other category includes a heterogeneous collection of mental mechanisms which subserve detection of other people's mental states, reasoning about other people's mental states, and reasoning about one's own mental states. Thus on our theory it is possible for one or more of the mechanisms in the first category to malfunction, causing a deficit in one or more aspects of first‐person mental state detection, while the mechanisms in the second category continue to function normally. It is also possible for the opposite pattern of breakdowns to occur, leading to a deficit in one or more aspects of third‐person mental state detection, or in reasoning about mental states, while first‐person detection is intact. On the TTSA, by contrast, this sort of ‘double dissociation’ would be much harder to explain. The central idea of TTSA is that the process of reading one's own mind is largely or entirely parallel to the process of reading someone else's mind, and that ToMI plays a central role in both. Thus any pathology that disrupts first‐person mindreading might be expected to disrupt third‐person mindreading, and vice versa—particularly if that pathology damaged ToMI. So one way to support our theory over the TTSA would be to find the kind of double dissociation that our theory leads us to expect, but TTSA cannot easily account for.

Do double dissociations of this sort occur? We propose that they do. In autism, we maintain, third‐person mindreading is seriously defective, though first‐person mental state detection is not significantly impaired. By contrast, in patients exhibiting certain ‘first‐rank’ symptoms of schizophrenia, first‐person mental state detection is disrupted while third‐person mindreading is not.

4.6.1. Autism: intact first‐person detection and impaired third‐person mindreading
Much of the case for autism as one‐half of the needed double dissociation has already been made. In Chapter 3 we recounted a number of studies indicating that people with autism have considerable difficulty in attributing beliefs and thoughts to other people, though they are much better at attributing desires. And, as we argued in Section 4.5, none of the evidence cited by advocates of TTSA indicates that autism involves a deficit in the ability to detect one's own mental states. Indeed, some of the data suggested just the opposite. The adults with Asperger's syndrome who were asked to recount their immediate experiences did show an appreciation of what was happening in their minds (Hurlburt et al. 1994). Further, in the autobiographical p. 189 excerpts, the adults claim to recall their own beliefs and thoughts from childhood. Also, there is no evidence that autistic children or adults have any trouble recognizing their thoughts and actions as their own. (The importance of this point will emerge below.)

There is some additional experimental evidence that further confirms our contention that the ability to detect one's own mental states is spared in autism. In a recent set of studies, Farrant and colleagues found that autistic children did remarkably well on ‘metamemory’ tests (Farrant et al. 1999). In metamemory tasks, subjects are asked to memorize a set of items and subsequently to report on the strategies they used to remember the items. In light of arguments from defenders of the TTSA, the experimenters expected autistic children to perform much worse than non‐autistic children on metamemory tasks: ‘On the basis of evidence that children with autism are delayed in passing false belief tasks and on the basis of arguments that mentalizing and metacognition involve related processes, we predicted that children with autism would show impaired performance relative to controls on false belief tasks and on metamemory tasks and that children's performances on the two types of task would be related’ (Farrant et al. 1999: 108). However, contrary to the researchers' predictions, there was no significant difference between the performance of autistic children and non‐autistic children on a range of metamemory tasks. In one task, the subject was asked to remember a set of numbers that were given. The children were subsequently asked, ‘What did you do to help you to remember all the numbers that I said?’ Like the other children in the study, most of the autistic children answered this question with some explanation that adverted to thinking, listening, or exploiting a strategy. For instance, one autistic child explained that to remember the string of numbers he was given, which included a 6 followed by an 8, ‘I did 68, then the rest, instead of being six, eight, you put 68.’ Indeed, Farrant et al. claim that it is clear from the data that ‘there was no relation between passing/failing false belief tasks and the categories of response given to the metamemory question’ (Farrant et al. 1999: 118, 119). Although the results flouted the experimenters' TTSA‐based prediction, they fit perfectly with the Monitoring Mechanism theory. For the Monitoring Mechanism can be intact even when the mental mechanisms subserving third‐person belief attribution are damaged. While it will of course be important to get further empirical confirmation, these findings and those cited earlier indicate that people afflicted with autism do indeed manifest one of the patterns of dissociation that our theory expects to find.

4.6.2. Passivity experiences in schizophrenia: impaired first‐person detection and intact third‐person mindreading
Are there cases in which we find the opposite pattern? That is, are there individuals whose ability to detect p. 190 their own mental states is impaired, but whose third‐person mindreading abilities are spared? Although the data are often fragmentary and difficult to interpret, we think there might actually be such cases. Schizophrenia has recently played an important role in the discussion of mindreading, and we think that certain kinds of schizophrenia might involve damage to the Monitoring Mechanism that does not affect other components of the mindreading system.

There is a cluster of symptoms in some cases of schizophrenia sometimes referred to as ‘passivity experiences’ or ‘first rank symptoms’ (Schneider 1959) ‘in which a patient's own feelings, wishes or acts seem to be alien and under external control’ (Frith 1992: 73–4). One first‐rank symptom of schizophrenia is delusions of control, in which a patient has difficulty recognizing that certain actions are her own. For example, one patient reported:

‘When I reach my hand for the comb it is my hand and arm which move, and my fingers pick up the pen, but I don't control them. . . . I sit there watching them move, and they are quite independent, what they do is nothing to do with me. . . . I am just a puppet that is manipulated by cosmic strings. When the strings are pulled my body moves and I cannot prevent it.’ (Mellor 1970: 18)

Another first‐rank symptom is ‘thought withdrawal’, the impression that one's thoughts are extracted from one's mind. One subject reported: ‘I am thinking about my mother, and suddenly my thoughts are sucked out of my mind by a phrenological vacuum extractor, and there is nothing in my mind, it is empty’ (Mellor 1970: 16–17).

At least some symptomatic schizophrenics have great difficulty in reporting their current thoughts. Russell Hurlburt had four schizophrenic patients participate in a study using Hurlburt's experience sampling method (see Section 4.5.2). Two of these subjects reported experiences and thoughts that were strange or ‘goofed up’. One of the patients, who was symptomatic throughout the sampling period (and whose symptoms apparently included first‐rank symptoms), seemed incapable of carrying out the task at all. Another patient was able to carry out the task until he became symptomatic, at which point he could no longer carry out the task. Hurlburt argues that these two subjects, while they were symptomatic, did not have access to their inner experience (Hurlburt 1990: 239). Hurlburt writes:

What we had expected to find, with Joe, was that his inner experiences were unusual—perhaps with images that were ‘goofed up’ as Jennifer had described, or several voices that spoke at once so that none was intelligible, or some other kind of aberrant inner experience that would explain his pressure of speech and delusions. What we found, however, was no such thing; instead, Joe could not describe any aspects of his inner experience in ways that we found compelling. (Hurlburt 1990: 207–8)

What is especially striking here is the contrast between this claim and Hurlburt et al.'s finding about the adults with Asperger's syndrome discussed in Section 4.5. Hurlburt (1990) expected the symptomatic p. 191 schizophrenics to be able to report their inner experiences, and Hurlburt et al. (1994) expected the adults with Asperger's syndrome to be unable to report their inner experiences. What they found, however, was just the opposite. The symptomatic schizophrenics could not report their inner experiences, and the adults with Asperger's syndrome could.

These findings on schizophrenia led Christopher Frith to suggest that in schizophrenics with first‐rank symptoms, there is a deficit in ‘central monitoring’ (e.g. Frith 1992: 81–2).19 Frith's initial account of central monitoring does not specify how the monitoring works, but in recent work, Frith suggests that the way to fill out his proposal on central monitoring is in terms of mechanisms underlying mindreading.

Many of the signs and symptoms of schizophrenia can be understood as arising from impairments
in processes underlying ‘theory of mind’ such as the ability to represent beliefs and intentions.
(Frith 1994: 148)

To have a ‘theory of mind’, we must be able to represent propositions like ‘Chris believes that “It is raining”’. Leslie (1987) has proposed that a major requirement for such representations is a
mechanism that decouples the content of the proposition (It is raining) from reality . . . I propose
that, in certain cases of schizophrenia, something goes wrong with this decoupling process. . . .
Failure of this decoupling mechanism would give rise . . . to . . . the serious consequence . . . that the
patient would no longer be able to represent mental states, either their own or those of others. I have
suggested previously (Frith 1987) that patients have passivity experiences (such as delusions of
control and thought insertion) because of a defect in central monitoring. Central monitoring
depends on our being aware of our intention to make a particular response before the response is
made. In the absence of central monitoring, responses and intentions can only be assessed by
peripheral feedback. For example, if we were unable to monitor our intentions with regard to
speech, we would not know what we were going to say until after we had said it. I now propose that
p. 192 this failure of central monitoring is the consequence of an inability to represent our own mental
states, including our intentions. (Frith 1994: 154, emphasis added)

Hence Frith now views the problem of central monitoring in schizophrenia as a product of a deficit in part of the mindreading system that is also responsible for third‐person mindreading (Frith 1994). Indeed, Frith characterizes schizophrenia as late‐onset autism (1994: 150).

Although we are intrigued by Frith's initial suggestion that passivity experiences derive from a deficit in central monitoring, we are quite sceptical of his claim that the root problem is a deficit in a part of the mindreading system that is also implicated in third‐person mindreading. We think that a better way to fill out Frith's hypothesis is in terms of the Monitoring Mechanism. That is, we suggest that certain first‐rank symptoms or passivity experiences might result from a deficit in the Monitoring Mechanism that is quite independent of any deficit in the remainder of the mindreading system. And, indeed, Frith's subsequent empirical work on schizophrenia and mindreading indicates that schizophrenics with passivity experiences do not have any special difficulty with standard third‐person mindreading tasks. Frith and Corcoran (1996) write, ‘It is striking that the patients with passivity features (delusions of control, thought insertion, etc.) could answer the theory of mind questions quite well. This was also found by Corcoran et al. (1995) who used a different kind of task’ (Frith and Corcoran 1996: 527). Of course, this is exactly what would be predicted by our theory since we maintain that the mechanism for detecting one's own intentions is independent from the mechanism responsible for detecting the mental states of others. Hence, there's no reason to think that a deficit in detecting one's own intentions would be correlated with a deficit in detecting mental states in others.

We maintain that, as with autism, our theory captures this range of data on schizophrenia comfortably. Contra Frith's proposal, schizophrenia does not seem to be a case, like autism, in which third‐person mindreading is damaged; rather, it is more plausible to suppose that in schizophrenic individuals with passivity experiences, it is the Monitoring Mechanism that is not working properly. If this is right, then it is plausible that we have found the sort of double dissociation that our theory predicts. In autism, there is a deficit in third‐person mindreading but not in first‐person mental state detection. In schizophrenic subjects with first‐rank symptoms, first‐person mental state detection is severely impaired but third‐person mindreading is not. This, we think, provides yet another reason to prefer the MM theory to the theory theory account of self‐awareness.20

p. 193
4.7. The Ascent Routine Theory

Although the TTSA is the most widely accepted account of self‐awareness in the recent literature, there are two other accounts that are also quite visible, though neither seems to have gained many advocates. In this section and the next we will briefly consider each of these accounts.

Our MM account appeals to an innate cognitive mechanism (or a cluster of mechanisms) specialized for detecting one's own mental states. One might want to provide an account of self‐awareness that is more austere. One familiar suggestion is that when we are asked a question about our own beliefs, ‘Do you believe that p?’, we treat the question as the simple fact‐question, ‘p?’ This kind of account was proposed by Evans (1982), but in recent years it has been defended most vigorously by Robert Gordon, who labels the move from belief‐question to fact‐question an ‘ascent routine’. ‘Self‐ascription’, Gordon maintains, ‘relies . . . on what I call ascent routines. For example, the way in which adults ordinarily determine whether or not they believe that p is simply to ask themselves the question whether or not p’ (Gordon 1996: 15). Gordon goes on to propose that the account can be extended to other sorts of self‐attributions, including even self‐attributions of pain (Gordon 1995b, 1996).
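The core of Gordon's proposal can be illustrated with a minimal sketch (our illustration, not Gordon's; the names `WorldModel` and `answer_belief_question` are hypothetical): the belief‐question ‘Do you believe that p?’ is answered by redirecting to the fact‐question ‘p?’, with no inward‐looking monitoring of one's own states.

```python
# Illustrative sketch (our construction, not Gordon's): an "ascent routine"
# answers the belief-question "Do you believe that p?" by answering the
# fact-question "p?" against the agent's first-order view of the world.

class WorldModel:
    """A toy stand-in for an agent's first-order view of the world."""
    def __init__(self, facts):
        self.facts = set(facts)

    def answer_fact_question(self, p):
        # "p?" -- answered by consulting the world model directly.
        return p in self.facts

def answer_belief_question(world, p):
    # The ascent routine: "Do you believe that p?" is treated as "p?".
    # No self-monitoring mechanism is consulted anywhere.
    return world.answer_fact_question(p)

world = WorldModel({"it is raining"})
print(answer_belief_question(world, "it is raining"))   # True
print(answer_belief_question(world, "the sun is out"))  # False

# Goldman's objection, in schematic form: there is no analogous
# fact-question to substitute for "Do you hope that p?" -- answering
# "p?" tracks belief, not hope.
```

The sketch also makes the objections below easy to see: nothing in the routine generalizes to ‘What are you thinking about?’ or ‘Do you hope that p?’, since those questions have no corresponding fact‐question.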

This account has the virtue of emphasizing that, for both children and adults, questions like ‘Do you think that p?’ and ‘Do you believe that p?’ may not be interpreted as questions about one's mental state, but as questions about p. Similarly, statements like ‘I believe that p’ are often guarded assertions of p, rather than p. 194 assertions about the speaker's mental state.21 These are facts that must be kept in mind in interpreting the results of experiments on mindreading and self‐awareness.

Alongside these virtues, however, the ascent routine also has clear, and we think fatal, shortcomings. As
Goldman (2000) points out, the ascent routine story doesn't work well for attitudes other than belief.

Suppose someone is asked the question, ‘Do you hope that Team T won their game yesterday?’ (Q1). How is she supposed to answer that question using an ascent routine? Clearly she is not supposed to ask herself the question, ‘Did Team T win their game yesterday?’ (Q2), which would only be relevant to belief, not hope. What question is she supposed to ask herself? (Goldman 2000: 183)

The ascent routine strategy doesn't work any better for lots of other important cases of self‐attribution. In addition to questions like ‘Do you believe that p?’, we can answer questions about current mental states like ‘What are you thinking about?’ But in this case, it is hard to see how to rework the question into an ascent routine. Similarly, as we noted in Section 4.2.1, people can give accurate retrospective reports in response to questions like ‘How did you figure that out?’ We can see no way of transforming these questions into fact‐questions of the sort that Gordon's theory requires. This also holds for questions about current desires, intentions, and imaginings, questions like: ‘What do you want to do?’; ‘What are you going to do?’; ‘What are you imagining?’ Our ability to answer these questions suggests that the ascent routine strategy simply cannot accommodate many central cases of self‐awareness. There is no plausible way of recasting these questions so that they are questions about the world rather than about one's mental state. As a result, the ascent routine account strikes us as clearly inadequate as a general theory of self‐awareness.

4.8. The Phenomenological Theory

For the last decade, Alvin Goldman has been advocating a ‘phenomenological model for the attitudes’ (Goldman 1993b: 23; see also Goldman 1997, 2000). According to Goldman, in order to detect one's own mental states, ‘the cognitive system [must] use . . . information about the intrinsic (nonrelational) and categorical (nondispositional) properties of the target state’ (1993a: 87). Goldman then goes on to ask ‘which intrinsic and categorical properties might be detected in the case of mental states?’ His answer is as p. 195 follows: ‘The best candidates, it would seem, are so‐called qualitative properties of mental states—their phenomenological or subjective feelings (often called “qualia”)’ (1993a: 87).22 So, on this view, one detects one's own mental states by discerning the phenomenological properties of the mental states—the way those mental states feel.

Goldman is most confident of this phenomenological approach when the mental states being detected are not propositional attitudes but rather what he calls ‘sensations’. ‘Certainly,’ he argues, ‘it is highly plausible that one classifies such sensations as headaches or itches on the basis of their qualitative feel’ (1993a: 87). Goldman suggests that this account might also be extended to propositional attitudes, though he is rather more tentative about this application.

Whether the qualitative or phenomenological approach to mental concepts could be extended from sensations to attitudes is an open question. Even this prospect, though, is not beyond the bounds of credibility. There is no reason why phenomenological characteristics should be restricted to sensory characteristics, and it does indeed seem to ‘feel’ a particular way to experience doubt, surprise, or disappointment, all of which are forms of propositional attitudes. (1993a: 88; see also 1993b: 25, 104)

We are inclined to think that the idea of extending the phenomenological approach from sensations to
propositional attitudes is much less of an ‘open question’ than Goldman suggests. Indeed, as a general
theory of the self‐attribution of propositional attitudes, we think that it is quite hopeless.

4.8.1. Two versions of Goldman's proposal


To explain our scepticism, let us begin by noting that there are two quite different ways in which Goldman's proposal might be elaborated:
1. The Weaker Version claims that we (or our cognitive systems) detect or classify the type of a given
mental state by the qualitative or phenomenological properties of the mental state in question. It is
the qualitative character of a state that tells us that it is a belief or a desire or a doubt. On the weaker
version, however, the qualitative properties of propositional attitudes do not play a role in detecting
the content of propositional attitudes.

2. The Stronger Version claims that we (or our cognitive systems) detect or classify both the type and the content of a given mental state by the qualitative or phenomenological properties of the mental state p. 196 in question. So it is the qualitative character of a state that tells us that it is a belief or a desire and it is also the qualitative character that tells us that it is the belief that there is no greatest prime number or the desire that the Democrats win the next election.

If one speaks, as we just did, of qualitative or phenomenological qualities ‘telling us’ that a state is a belief
or that its content is that there is no greatest prime number, it is easy to ignore the fact that this is a metaphor.
Qualitative states don't literally ‘tell’ anybody anything. What is really needed, to make a proposal like
Goldman's work, is a mental mechanism (or a pair of mental mechanisms) which can be thought of as
transducers: they are acted upon by the qualitative properties in question and produce, as output,
representations of these qualitative properties (or, perhaps more accurately, representations of the kind of
state that has the qualitative property). So, for example, on the Weaker Version of the theory, what is needed
is a mechanism that goes from the qualitative property associated with belief or doubt to a representation
that the state in question is a belief or doubt. On the Stronger Version, the transducer must do this for the
content of the state as well. So, for instance, on the Stronger Version, the transducer must go from the
qualitative property of the content there is no greatest prime number to a representation that the state in
question has the content there is no greatest prime number. Figure 4.8 is an attempt to depict the mechanisms
and processes required by Goldman's theory.

Fig. 4.8. Phenomenological model of self‐awareness


p. 197 4.8.2. Critique of Goldman's theory
As we see it, the Weaker Version of Goldman's proposal is not a serious competitor for our MM theory, since
the Weaker Version does not really explain some of the crucial facts about self‐awareness. At best, it explains
how, if I know that I have a mental state with the content p, I can come to know that it is a belief and not a
hope or desire. But the Weaker Version doesn't even try to explain how I know that I have a mental state
with the content p in the rst place. So as a full account of self‐awareness of propositional attitudes, the
Weaker Version is a non‐starter.

The Stronger Version of Goldman's model does attempt to provide a full account of self‐awareness of
propositional attitudes. However, we think that there is no reason to believe the account, and there is good
reason to doubt it. The Stronger Version of Goldman's theory requires a phenomenological account of the
awareness of content as well as a phenomenological account of the awareness of attitude type. Goldman
does not provide a detailed defence of the phenomenological account of content awareness, but he does
sketch one argument in its favour. The argument draws on an example proposed by Keith Gunderson (1993).
Goldman discusses the example as follows:

If I overhear Brown say to Jones, ‘I'm off to the bank,’ I may wish to know whether he means a spot for fishing or a place to do financial transactions. But if I say to someone, ‘I'm off to the bank,’ I cannot query my own remark: ‘To go fishing or to make a deposit?’ I virtually always already know. . . . The target article mainly supported a distinctive phenomenology for the attitude types. Gunderson's example supports distinctive phenomenology for different contents. (Goldman 1993b: 104)

We think this argument is wholly unconvincing. It is true that we typically know the interpretation of our
own ambiguous sentences. However, this doesn't even begin to show that belief contents have distinctive
phenomenologies. At best it shows that we must have some mechanism or strategy for obtaining this
knowledge. The MM theory can quite comfortably capture the fact that we typically know the
interpretations of our own ambiguous sentences, and it does so without resorting to phenomenological
features of content. As far as we can tell, then, there is no reason to adopt the phenomenological account of
content. Moreover, there are two rather obvious reasons to prefer the MM account to the Stronger Version of
the Phenomenological Theory.

On an account like Goldman's there must be mechanisms in the mind that are sensitive to phenomenological or qualitative properties—i.e. mechanisms that are causally affected by these qualitative properties in a highly sensitive and discriminating way. The qualia of a belief must lead the mechanism to produce a representation of belief. The qualitative properties of states with the content Socrates is wise must p. 198 cause the mechanism to produce representations with the content Socrates is wise. Now we don't wish to claim that there are no mechanisms of this sort or that there couldn't be. But what is clear is that no one has a clue about how such mechanisms would work. No one has even the beginning of a serious idea about how a mechanism could be built that would be differentially sensitive to the (putative) qualitative properties of the contents of propositional attitude states. So, for the moment, at least, the mechanisms that Goldman needs are quite mysterious. The mechanism that our theory needs, by contrast, is simple and straightforward. To generate representations of one's own beliefs, all that the Monitoring Mechanism has to do is copy representations in the Belief Box, embed them in a representation schema of the form I believe that __, and then place this new representation back in the Belief Box. The analogous sort of transformation for representations in a computer memory could be performed by a simple and utterly unmysterious mechanism.23
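The ‘simple and utterly unmysterious mechanism’ alluded to here can be sketched in a few lines (our illustration; modelling Belief Box representations as Python strings is a simplifying assumption for the sketch, not part of the authors' proposal):

```python
# A minimal sketch of the Monitoring Mechanism (MM) as described in the
# text: copy a representation from the Belief Box, embed it in the schema
# "I believe that __", and place the result back in the Belief Box.
# Representing beliefs as strings is our simplification.

def monitoring_mechanism(belief_box):
    """Add 'I believe that p' for each representation p already in the box."""
    for p in list(belief_box):      # copy, so we can extend while iterating
        self_ascription = f"I believe that {p}"
        if self_ascription not in belief_box:
            belief_box.append(self_ascription)
    return belief_box

box = ["there is no greatest prime number"]
monitoring_mechanism(box)
print(box)
# ['there is no greatest prime number',
#  'I believe that there is no greatest prime number']
```

The point of the sketch is only that the MM's transformation is computationally trivial, in contrast with the content‐sensitive qualia transducers that Goldman's Stronger Version would require.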

The preceding argument is simply that it would be trivial to implement a mechanism like the MM, whereas no one has the faintest idea how to implement the mechanisms required for Goldman's account or how such mechanisms could work. Of course, this is hardly a knock‐down argument against Goldman's account. If it were independently plausible that phenomenology is the basis for awareness of one's own propositional attitudes, then the mysteriousness of the transducers would simply pose a challenge for cognitive scientists to figure out how such a mechanism could work. However, far from being independently plausible, it seems to us that the phenomenological account is phenomenologically implausible—to say the least! To take the Stronger Version of Goldman's proposal seriously, one would have to assume that there is a distinct feel or quale for every type of propositional attitude, and a distinct quale for every content (or at least for every content we can detect). Now perhaps others have mental lives that are very different from ours. But from our perspective this seems to be (as Jerry Fodor might say) crazy. As best we can tell, believing that 17 is a prime number doesn't feel any different from believing that 19 is a prime number. Indeed, as best we can tell, neither of these states has any distinctive qualitative properties. Neither of them feels like much at all. If this is right, then the Strong Version of the Phenomenological Theory is every bit as much a non‐starter as the Weak Version.

p. 199
4.9. Conclusion

The empirical work on mindreading provides an invaluable resource for characterizing the cognitive
mechanisms underlying our capacity for self‐awareness. However, we think that other authors have drawn
the wrong conclusions from the data. Contrary to the claims of those who advocate the TTSA, the evidence
indicates that the capacity for self‐awareness is not subserved by the same mental mechanisms that are
responsible for third‐person mindreading. It is much more plausible, we have argued, to suppose that self‐
awareness derives from a Monitoring Mechanism that is independent of the mechanisms that enable us to
detect other people's mental states and to reason about mental states. Other authors have attempted to use
the intriguing evidence from autism and young children to support the TTSA. But we have argued that the
evidence from psychopathologies and from developmental studies actually suggests the opposite. The
available evidence indicates that the capacity for understanding other minds can be dissociated from the
capacity to detect one's own mental states and that the dissociation can go in either direction. If this is right,
it poses a serious challenge to the TTSA, but it ts neatly with our suggestion that the Monitoring
Mechanism is independent of third‐person mindreading. Like our Monitoring Mechanism theory, the ascent
routine and the phenomenological accounts are also alternatives to the TTSA; but these theories, we have
argued, are either obviously implausible or patently insufficient to capture central cases of self‐awareness.
Hence, we think that at this juncture in cognitive science, the most plausible account of self‐awareness is
that the mind comes pre‐packaged with a set of special‐purpose mechanisms for reading one's own mind.

Notes
1 For more on Sellars's role in this challenge to the traditional view, see Stich and Ravenscroft (1994).
2 Content externalism is the view that the content of one's mental states is determined at least in part by factors external to
one's mind. In contemporary analytic philosophy, the view was motivated largely by Putnam's Twin Earth thought
experiments (Putnam 1975) that seem to show that two molecule‐for‐molecule twins can have thoughts with di erent
contents or meanings, apparently because of their di erent external environments.
3 As we noted in the previous chapter (Section 3.4), the term ʻtheory theoryʼ has been used both as a label for what we have
been calling the ʻscientific‐theory theoryʼ and as a label for all information‐rich accounts of mindreading, including
modular theories. In this book we have adopted the latter, more inclusive, reading of ʻtheory theoryʼ.
4 We will also sometimes use ʻToMIʼ as a label for the mental mechanism or mechanisms that house and exploit this
information. Where the distinction is important, the context will make clear which is intended.
5 Though the argument is a bit messier, the case against TTSA is even stronger if one drops the simplifying assumption that
all third‐person mindreading invokes ToMI. The details are left as an exercise for the reader.
6 To give an idea of how this works, here is an excerpt from Ericsson and Simon's instructions to subjects in think‐aloud
experiments: ʻIn this experiment we are interested in what you think about when you find answers to some questions that
I am going to ask you to answer. In order to do this I am going to ask you to THINK ALOUD as you work on the problem
given. What I mean by think aloud is that I want you to tell me EVERYTHING you are thinking from the time you first see the
question until you give an answer.ʼ (Ericsson and Simon 1993: 378.)
7 For retrospective reports, immediately after the subject completes the problem, the subject is given instructions like the
following: ʻNow I want to see how much you can remember about what you were thinking from the time you read the
question until you gave the answer. We are interested in what you actually can REMEMBER rather than what you think you
must have thought. If possible I would like you to tell about your memories in the sequence in which they occurred while
working on the question. Please tell me if you are uncertain about any of your memories. I don't want you to work on
solving the problem again, just report all that you can remember thinking about when answering the question. Now tell
me what you remember.ʼ (Ericsson and Simon 1993: 378.)
8 There are, of course, much more complicated propositional attitudes like disappointment and Schadenfreude. We
postpone discussion of these ʻthickʼ propositional attitudes until Chapter 5 (see Section 5.1.2).
9 Apart from the cognitive science trappings, the idea of an internal monitor goes back at least to David Armstrong (1968)
and has been elaborated by William Lycan (1987) among others. However, much of this literature has become intertwined
with the attempt to determine the proper account of consciousness, and that is not our concern at all. Rather, on our
account, the monitor is just a rather simple information‐processing mechanism that generates explicit representations
about the representations in various components of the mind and inserts these new representations in the Belief Box.
10 Recall that, for simplicity in evaluating TTSA, we are ignoring the role of processes that don't exploit ToMI in third‐person
mindreading and assuming that all third‐person mindreading depends on ToMI.
11 As we have presented our theory, the MM is a mechanism that is distinct from ToMI. But it might be claimed that the MM
that we postulate is just a part of the ToMI mechanism. Here the crucial question to ask is whether it is a ʻdissociableʼ part
which could be selectively damaged or selectively spared. If the answer is no, then we will argue against this view in
Section 4.6. If the answer is yes (MM is a dissociable part of the ToMI mechanism) then there is nothing of substance left to
fight about. That theory is a notational variant of ours.
12 Similarly, Baron‐Cohen (1991a) found that in people with autism, there are correlations between failing the false belief
task for other and failing the task for self.
13 Some theorists, most prominently Fodor (1992), have explained the results in the other‐version of the task by claiming
that young children do not use the ToMI in these tasks. They arrive at their answer, Fodor argues, by using a separate
reality‐biased strategy. We need take no stand on this issue, since if Fodor is correct then it is plausible to suppose that the
same reality‐biased strategy generates a mistaken I believed that there were pencils in the box representation in the self‐
version of the task.
14 In Chapter 2, we argued against Leslie's view that the young child has a notion of pretend, and in this chapter we maintain
that the young child has an early‐emerging capacity to detect her own pretences. However, there is no inconsistency here.
For Leslie's notion of pretend is explicitly the ToMI notion that one uses to ascribe pretence to others. Our claim in Chapter
2 is that, contra Leslie, there is no reason to think that the 18‐month‐old child has the ToMI pretend concept. Although
there is no good reason to think that young children have the ToMI pretend concept, in this chapter our claim is that there
is reason to think that 3‐year‐olds have a concept of pretend that is delivered by the monitoring mechanism. And there is
no reason to think that even this concept is available to the 18‐month‐old.
15 Gopnik and Meltzoff have also produced results that suggest a disparity between performance on self‐ and other‐versions
of a very simple perspective‐taking task. They found that when 24‐month‐olds were asked to hide an object from the
experimenter, they ʻconsistently hid the object egocentrically, either placing it on the experimenter's side of the screen or
holding it to themselves so that neither they nor the experimenter could see itʼ (reported in Gopnik and Meltzoff 1997:
116). Given that Gopnik and Meltzoff characterize the child's performance as ʻegocentricʼ, it seems quite likely that the
children would succeed at versions of this task that asked the child to hide the object from herself. Hence, one expects
that children would perform significantly better on a self‐version of the task than on the other‐version of the task. If in fact
the 2‐year‐old child can't solve the hiding task for another person, but can solve it for self, then this looks like another
case that counts against the extensive parallelism predicted by the TTSA.
16 It is worth noting that perseveration is quite common in autistic children in other domains as well.
17 Hurlburt and colleagues describe ʻunsymbolized thoughtsʼ as ʻclearly‐apprehended, differentiated thoughts that occurred
with no experience of words, images, or other symbols that might carry the meaning. Subjects sometimes referred to the
phenomenon as “pure thought”. In such samples the subjects could, in their sampling interviews, say clearly what they
had been thinking about at the moment of the beep, and thus could put the thought into words, but insisted that neither
those words nor any other words or symbols were available to awareness at the moment of the beep, even though the
thought itself was easily apprehended at that moment.ʼ (Hurlburt et al. 1994: 386)
18 Though we have been assuming that autism involves a serious ToMI deficit, it is important to note that that assumption
plays no substantive role in the argument set out in this paragraph. All we really need to claim is that individuals with
autism have serious deficits in the ability to detect and reason about some mental states in other people.
19 Frith used a series of error correction experiments to test the hypothesis that passivity experiences result from a deficit in
central monitoring. Frith and colleagues designed simple video games in which subjects had to use a joystick to follow a
target around a computer screen. The games were designed so that subjects would make errors, and the researchers were
interested in the subjectsʼ ability to correct the errors without external (visual) feedback indicating the error. Normal
people are able to rapidly correct these errors even when they don't get feedback. Frith takes this to indicate that normal
people can monitor their intended response, so that they don't need to wait for the external feedback. Thus, he suggests,
ʻIf certain patients cannot monitor their own intentions, then they should be unable to make these rapid error correctionsʼ
(Frith 1992: 83). Frith and others carried out studies of the performance of schizophrenics on these video game tasks. The
researchers found that ʻacute schizophrenic patients corrected their errors exactly like normal people when visual
feedback was supplied but, unlike normal people, often failed to correct errors when there was no feedback. Of particular
interest was the observation that this disability was restricted to the patients with passivity experiences: delusions of
control, thought insertion and thought blocking. These are precisely the symptoms that can most readily be explained in
terms of a defect of self‐monitoringʼ (Frith 1992: 83). Mlakar et al. (1994) found similar results. Thus, there seems to be
some evidence supporting Frith's general claim that passivity experiences derive from a defect in central monitoring.
20 The idea that a component of the mindreading system responsible for first‐person detection can be selectively damaged
while the component of the system responsible for analogous third‐person detection remains intact might apply to
detection of mental states other than propositional attitudes like beliefs and desires. For instance, alexithymia is a clinical
condition in which subjects have great difficulty discerning their own emotional states. One researcher characterizes the
condition as follows: ʻWhen asked about feelings related to highly charged emotional events, such as the loss of a job or
the death of a family member, patients with alexithymia usually respond in one of two ways: either they describe their
physical symptoms or they seem not to understand the questionʼ (Lesser 1985: 690). As a result, patients with this
condition often need to be given instruction about how to interpret their own somatic sensations. ʻFor instance, they need
to understand that when one is upset or scared, it is normal to feel abdominal discomfort or a rapid heart beat. These
sensations can be labeled "anger" or "fear"ʼ (1985: 691). Thus alexithymia might be a case in which subjects have
selective damage to a system for monitoring one's own emotions. Of course, to make a persuasive case for this, one would
need to explore (among other things) these subjectsʼ ability to attribute emotions to other people. If it turns out that
patients with alexithymia can effectively attribute emotions to others but not to themselves, that would indicate that
alexithymia might indeed be caused by damage to a monitoring mechanism. We think that these kinds of questions and
experiments only become salient when we keep careful track of the distinction between first‐person mental state
detection and the cluster of mindreading abilities which, according to us, are independent of the Monitoring Mechanism
system. (We are grateful to Robert Woolfolk for suggesting this interpretation of alexithymia.)
21 Claims like this are, of course, commonplace in the philosophical literature on the ʻanalysisʼ of belief. For example,
Urmson maintains that ʻbelieveʼ is a ʻparenthetical verbʼ and that such verbs ʻare not psychological descriptionsʼ (Urmson
1956: 194). Rather, ʻwhen a man says, “I believe that he is at home” or “He is, I believe, at home”, he both implies a
(guarded) claim of the truth, and also implies a claim of the reasonableness of the statement that he is at homeʼ (Urmson
1956: 202).
22 Since Goldman regards these phenomenological properties as ʻintrinsicʼ, he rejects the higher‐order account of
consciousness advocated by Rosenthal (1992), Carruthers (2000), and others (see Goldman 2000: 179).
23 It might be argued that the PMM that we posit in Section 4.3.2 is just as mysterious as the mechanism that Goldman's
theory requires. However, nothing in our account of the PMM requires that it is sensitive to qualitative properties of
percepts. But even if it turns out that the PMM is sensitive to qualitative properties, we are inclined to think that the
objection that we are proposing in this paragraph still has some force, since Goldman's account invokes a rather
mysterious mechanism for generating beliefs about one's own beliefs and desires when a very unmysterious one would do
the job.
