Self-Deception
First published Tue Oct 17, 2006; substantive revision Mon Mar 5, 2012
Virtually every aspect of the current philosophical discussion of self-deception is a
matter of controversy, including its definition and paradigmatic cases. We may say
generally, however, that self-deception is the acquisition and maintenance of a belief
(or, at least, the avowal of that belief) in the face of strong evidence to the contrary
motivated by desires or emotions favoring the acquisition and retention of that belief.
Beyond this, philosophers divide over whether self-deception is intentional, whether
self-deceivers recognize that the belief being acquired is unwarranted on the available
evidence, whether self-deceivers are morally responsible for their self-deception, and
whether self-deception is morally problematic (and, if so, in what ways and under what
circumstances). The discussion of self-deception and its associated puzzles gives us
insight into the ways in which motivation affects belief acquisition and retention. And
yet insofar as self-deception represents an obstacle to self-knowledge, one with
potentially serious moral implications, it is more than an interesting
philosophical puzzle. It is a problem of particular concern for moral development, since
self-deception can make us strangers to ourselves and blind to our own moral failings.
1. Definitional Issues
2. Intentionalist Approaches
   2.1 Temporal Partitioning
   2.2 Psychological Partitioning
3. Non-Intentionalist Approaches
   3.1 Intentionalist Objections
4. Twisted Self-Deception
5. Morality and Self-Deception
   5.1 Moral Responsibility for Self-Deception
   5.2 The Morality of Self-Deception
6. Collective Self-Deception
1. Definitional Issues
What is self-deception? Traditionally, self-deception has been modeled on interpersonal
deception, where A intentionally gets B to believe some proposition p, all the while
knowing or believing truly ~p. Such deception is intentional and requires the deceiver
to know or believe ~p and the deceived to believe p. One reason for thinking self-deception is analogous to interpersonal deception of this sort is that it helps us to
distinguish self-deception from mere error, since the acquisition and maintenance of the
false belief is intentional not accidental. If self-deception is properly modeled on such
interpersonal deception, self-deceivers intentionally get themselves to believe p, all the
while knowing or believing truly ~p. On this traditional model, then, self-deceivers
apparently must (1) hold contradictory beliefs, and (2) intentionally get themselves to
hold a belief they know or believe truly to be false.
The traditional model of self-deception, however, has been thought to raise two
paradoxes: One concerns the self-deceiver's state of mind, the so-called static
paradox. How can a person simultaneously hold contradictory beliefs? The other
concerns the process or dynamics of self-deception, the so-called dynamic or
strategic paradox. How can a person intend to deceive herself without rendering her
intentions ineffective? (Mele 1987a; 2001)
The requirement that the self-deceiver holds contradictory beliefs raises the static
paradox, since it seems to pose an impossible state of mind, namely, consciously
believing p and ~p at the same time. As the deceiver, she must believe ~p, and, as the
deceived, she must believe p. Accordingly, the self-deceiver consciously believes p and
~p. But if believing both a proposition and its negation in full awareness is an
impossible state of mind, then self-deception, so understood, seems to be impossible.
2. Intentionalist Approaches
The chief problem facing intentional models of self-deception is the dynamic paradox,
namely, that it seems impossible to form an intention to get oneself to believe what one
currently disbelieves or believes is false. For one to carry out an intention to deceive
oneself, one must know what one is doing; yet to succeed, one must be ignorant of this same
fact. Intentionalists agree on the proposition that self-deception is intentional, but divide
over whether it requires the holding of contradictory beliefs, and thus over the specific
content of the alleged intention involved in self-deception. Insofar as even the bare
intention to acquire the belief that p for reasons not having to do with one's evidence
for p seems unlikely to succeed if directly known, most intentionalists introduce
temporal or psychological divisions that serve to insulate self-deceivers from the
awareness of their deceptive strategy. When self-deceivers are not consciously aware of
their beliefs to the contrary or their deceptive intentions, no paradox seems to be
involved in deceiving oneself. Many approaches utilize some combination of
psychological and temporal division (e.g., Bermúdez 2000).
2.2 Psychological Partitioning
Psychological partitioning approaches vary in the strength of the division they posit, ranging from Pears' view that the deceiving subsystem constitutes a separate center of
agency (Pears 1984, 1986, 1991), to the relatively modest division of Davidson, where
there need only be a boundary between conflicting attitudes (1982, 1985). Such
divisions are prompted in large part by the acceptance of the contradictory belief
requirement. It isn't simply that self-deceivers hold contradictory beliefs, which, though
strange, isn't impossible. One can believe p and believe ~p without believing p & ~p,
which would be impossible. The problem such theorists face stems from the appearance
that the belief that ~p motivates and thus forms a part of the intention to bring it about
that one acquire and maintain the false belief p (Davidson 1985). So, for example, a
Nazi official's recognition that his actions implicate him in serious evil motivates him to
implement a strategy to deceive himself into believing he is not so involved; he can't
intend to bring it about that he holds such a false belief if he doesn't recognize it is false,
and he wouldn't want to bring such a belief about if he didn't recognize the evidence to
the contrary. So long as this is the case, the deceptive subsystem, whether it constitutes
a separate center of agency or something less robust, must be hidden from the conscious
self being deceived if the self-deceptive intention is to succeed. While these
psychological partitioning approaches seem to resolve the static and dynamic puzzles,
they do so by introducing a picture of the mind that raises many puzzles of its own. On
this point, there appears to be consensus even among intentionalists that self-deception
can and should be accounted for without invoking divisions not already used to explain
non-self-deceptive behavior, what Talbott (1995) calls 'innocent' divisions.
Some intentionalists reject the requirement that self-deceivers hold contradictory beliefs
(Talbott 1995; Bermúdez 2000). According to such theorists, the only thing necessary
for self-deception is the intention to bring it about that one believe p, where lacking such
an intention one would not have acquired that belief. The self-deceiver thus need not
believe ~p. She might have no views at all regarding p, possessing no evidence either
for or against p; or she might believe p is merely possible, possessing evidence for or
against p too weak to warrant belief that p or ~p (Bermúdez 2000). Self-deceivers in
this minimal sense intentionally acquire the belief that p, despite recognizing at the
outset that they do not possess enough evidence to warrant this belief, by selectively
gathering evidence supporting p or otherwise manipulating the belief-formation process
to favor belief that p. Even on this minimal account, such intentions will often be
unconscious, since a strategy to acquire a belief in violation of one's normal evidential
standards seems unlikely to succeed if one is aware of it.
3. Non-Intentionalist Approaches
A number of philosophers have moved away from modeling self-deception on
intentional interpersonal deception, opting instead to treat it as a species of
motivationally biased belief. These non-intentionalists allow that phenomena answering
to the various intentionalist models available may be possible, but everyday or garden-variety self-deception can be explained without adverting to subagents, or unconscious
beliefs and intentions, which, even if they resolve the static and dynamic puzzles of
self-deception, raise many puzzles of their own. If such non-exotic explanations are
available, intentionalist explanations seem unwarranted.
The main paradoxes of self-deception seem to arise from modeling self-deception too
closely on intentional interpersonal deception. Accordingly, non-intentionalists suggest
the intentional model be jettisoned in favor of one that takes being deceived to be
nothing more than believing falsely or being mistaken in believing (Johnston 1988; Mele
2001). For instance, Sam mishears that it will be a sunny day and relays this
misinformation to Joan with the result that she believes it will be a sunny day. Joan is
deceived in believing it will be sunny and Sam has deceived her, albeit unintentionally.
Initially, such a model may not appear promising for self-deception, since simply being
mistaken about p or accidentally causing oneself to be mistaken about p doesn't seem to
be self-deception at all but some sort of innocent error: Sam doesn't seem self-deceived, just deceived. Non-intentionalists, however, argue that in cases of self-deception the false belief is not accidental but motivated by desire (Mele 2001), anxiety
(Johnston 1988, Barnes 1997) or some other emotion regarding p or related to p. So, for
instance, when Allison believes against the preponderance of evidence available to her
that her daughter is not having learning difficulties, the non-intentionalist will explain
the various ways she misreads the evidence by pointing to such things as her desire that
her daughter not have learning difficulties, her fear that she has such difficulties, or
anxiety over this possibility. In such cases, Allison's self-deceptive belief that her
daughter is not having learning difficulties fulfills her desire, quells her fear, or reduces
her anxiety, and it is this function (not an intention) that explains why her belief
formation process is biased. Allison's false belief is not an innocent mistake, but a
consequence of her motivational states.
Some non-intentionalists suppose that self-deceivers recognize at some level that their
self-deceptive belief p is false, contending that self-deception essentially involves an
ongoing effort to resist the thought of this unwelcome truth or is driven by anxiety
prompted by this recognition (Bach 1981; Johnston 1988). So, in Allison's case, her
belief that her daughter is having learning difficulties along with her desire that it not be
the case motivates her to employ means to avoid this thought and to believe the opposite.
Others, however, argue the needed motivation can as easily be supplied by uncertainty
or ignorance whether p, or suspicion that ~p (Mele 2001, Barnes 1997). Thus, Allison
need not hold any opinion regarding her daughter's having learning difficulties for her
false belief that she is not experiencing difficulties to count as self-deception, since it is
her regarding evidence in a motivationally biased way in the face of evidence to the
contrary, not her recognition of this evidence, that makes her belief self-deceptive.
Accordingly, Allison need not intend to deceive herself nor believe at any point that her
daughter is in fact having learning difficulties. If we think someone like Allison is self-deceived, then neither an intention to deceive nor recognition of the contrary evidence is necessary for self-deception.
3.1 Intentionalist Objections
Intentionalists object that deflationary accounts face a selectivity problem: they do not explain why desire biases belief in some circumstances and not others (Bermúdez 2000). Mele (2001) responds that the relative subjective costs of error account for this selectivity. Suppose Josh self-deceptively believes that his favorite chocolate is not produced through exploitative labor, since for him the cost of falsely believing this is low. Now
imagine Josh having the same strong desire that his chocolate not be tainted by
exploitation and yet assessing the cost of falsely believing it is not tainted differently.
Say, for example, he works for an organization promoting fair trade and non-exploitive
labor practices among chocolate producers and believes he has an obligation to
accurately represent the labor practices of the producer of his favorite chocolate and
would, furthermore, lose credibility if the chocolate he himself consumes is tainted by
exploitation. In these circumstances, Josh is more sensitive to evidence that his favorite
chocolate is tainted, despite his desire that it not be, since the subjective cost of being
wrong is higher for him than it was before. It is the relative subjective costs of falsely
believing p and ~p that explain why desire or other motivation biases belief in some
circumstances and not others. Challenging this solution, Bermúdez (2000) suggests that
the selectivity problem may reemerge, since it isn't clear why in cases where there is a
relatively low cost for holding a self-deceptive belief favored by our motivations we
frequently do not become self-deceived. Mele (2001), however, points out that
intentional strategies have their own 'selectivity problem', since it isn't clear why some
intentions to acquire a self-deceptive belief succeed while others do not.
4. Twisted Self-Deception
Self-deception that involves the acquisition of an unwanted belief, termed 'twisted' self-deception by Mele (1999, 2001), has generated a small but growing literature of its
own; see, most recently, Barnes (1997), Mele (1999, 2001), and Scott-Kakures (2000, 2001). A
typical example of such self-deception is the jealous husband who believes on weak
evidence that his wife is having an affair, something he doesn't want to be the case. In
this case, the husband apparently comes to have this false belief in the face of strong
evidence to the contrary in ways similar to those ordinary self-deceivers come to
believe something they want to be true.
One question philosophers have sought to answer is how a single unified account of
self-deception can explain both welcome and unwelcome beliefs. If a unified account is
sought, then it seems self-deception cannot require that the self-deceptive belief itself
be desired. Pears (1984) has argued that unwelcome belief might be driven by fear or
jealousy. My fear of my house burning down might motivate my false belief that I have
left the stove burner on. This unwelcome belief serves to ensure that I avoid what I fear,
since it leads me to confirm that the burner is off. Barnes (1997) argues that the
unwelcome belief must serve to reduce some relevant anxiety; in this case my anxiety
that my house is burning. Scott-Kakures (2000; 2001) argues, however, that since the
unwelcome belief itself does not in many cases serve to reduce but rather to increase
anxiety or fear, their reduction cannot be the purpose of that belief. Instead, he contends
that we think of the belief as serving to make the satisfaction of the agent's goals and interests more
likely, in my case, the preservation of my house. My testing and confirming an
unwelcome belief may be explained by the costs I associate with being in error, which
are determined in view of my relevant aims and interests. If I falsely believe that I have
left the burner on, the cost is relatively low: I am inconvenienced by confirming that it
is off. If I falsely believe that I have not left the burner on, the cost is extremely high:
my house being destroyed by fire. The asymmetry between these relative costs alone
may account for my manipulation of evidence confirming the false belief that I have left
the burner on. Drawing upon recent empirical research, both Mele (2001) and Scott-Kakures (2000) advocate a model of this sort, since it helps to account for the roles
desires and emotions apparently play in cases of twisted self-deception. Nelkin (2002)
argues that the motivation for self-deceptive belief formation should be restricted to a desire to
believe p. She points out that the phrase 'unwelcome belief' is ambiguous, since a
belief itself might be desirable even if its being true is not. I might want to hold the
belief that I have left the burner on, but not want it to be the case that I have left it on.
The belief is desirable in this instance, because holding it ensures that it will not be true.
In Nelkin's view, then, what unifies cases of self-deception, both twisted and straight,
is that the self-deceptive belief is motivated by a desire to believe p; what
distinguishes them is that twisted self-deceivers do not want p to be the case, while
straight self-deceivers do. Restricting the motivating desire to a desire to believe p,
according to Nelkin, makes clear what twisted and straight self-deception have in
common as well as why other forms of motivated belief formation are not cases of self-deception. Though non-intentional models of twisted self-deception dominate the
landscape, whether desire, emotion or some combination of these attitudes plays the
dominant role in such self-deception and whether their influence merely triggers the
process or continues to guide it throughout remain matters of controversy.
5. Morality and Self-Deception
Self-deception would remain a serious concern for the moral agent even if she could not
be taxed for entering into it. To be ignorant of one's moral self, as Socrates saw, may
represent a great obstacle to a life well lived whether or not one is at fault for such
ignorance.
5.1 Moral Responsibility for Self-Deception
Levy (2004) argues that self-deceivers typically cannot be held morally
responsible, since it is rarely the case that self-deceivers possess the requisite awareness
of the biasing mechanisms operating to produce their self-deceptive belief. Lacking
such awareness, self-deceivers do not appear to know when or on which beliefs such
mechanisms operate, rendering them unable to curb the effects of these mechanisms,
even when they operate to form false beliefs about morally significant matters. Levy
also argues that if self-deceivers typically lack the control necessary for moral
responsibility in individual episodes of self-deception, they also lack control over being
the sort of person disposed to self-deception. Non-intentionalists may respond by
claiming that self-deceivers often are aware of the potentially biasing effects their
desires and emotions might have and can exercise control over them. They might also
challenge the idea that self-deceivers must be aware in the ways Levy suggests. One
well-known account of control, employed by Levy, holds that a person is responsible
just in case she acts on a mechanism that is moderately responsive to reasons (including
moral reasons), such that were she to possess such reasons this same mechanism would
act upon those reasons in at least one possible world (Fischer and Ravizza 1999).
Guidance control, in this sense, requires that the mechanism in question be capable of
recognizing and responding to moral and non-moral reasons sufficient for acting
otherwise. In cases of self-deception, deflationary views may suggest that the biasing
mechanism, while sensitive and responsive to motivation, is too simple to itself be
responsive to reasons. However, the question isn't whether the biasing mechanism itself
is reasons responsive but whether the mechanism governing its operation is, that is,
whether self-deceivers typically could recognize and respond to moral and non-moral
reasons to resist the influence of their desires and emotions and instead exercise special
scrutiny of the belief in question. At the very least, it isn't obvious that they could not.
Moreover, that some people overcome their self-deception seems to indicate such a capacity,
and thus at least some control over ceasing to be self-deceived.
Insofar as it seems plausible that in some cases self-deceivers are apt targets for
censure, what prompts this attitude? Take the case of a mother who deceives herself into
believing her husband is not abusing their daughter because she can't bear the thought
that he is a moral monster (Barnes 1997). Why do we blame her? Here we confront the
nexus between moral responsibility for self-deception and the morality of self-deception. Understanding what obligations may be involved and breached in cases of
this sort will help to clarify the circumstances in which ascriptions of responsibility are
appropriate.
5.2 The Morality of Self-Deception
Philosophers have found self-deception morally objectionable on a number of grounds: it facilitates
harm to others (Linehan 1982) and to oneself, undermines autonomy (Darwall 1988;
Baron 1988), corrupts conscience (Butler 1726), violates authenticity (Sartre 1943), and
manifests a vicious lack of courage and self-control that undermines the capacity for
compassionate action (Jenni 2003). Linehan (1982) argues that we have an obligation to
scrutinize the beliefs that guide our actions that is proportionate to the harm to others
such actions might involve. When self-deceivers induce ignorance of moral obligations,
of the particular circumstances, of likely consequences of actions, or of their own
engagements, by means of their self-deceptive beliefs, they are culpable. They are
guilty of negligence with respect to their obligation to know the nature, circumstances,
likely consequences and so forth of their actions (Jenni 2003). Self-deception,
accordingly, undermines or erodes agency by reducing our capacity for self-scrutiny
and change (Baron 1988). If I am self-deceived about actions or practices that harm
others or myself, my ability to take responsibility and to change is also severely
restricted. Joseph Butler, in his well-known sermon 'Upon Self-Deceit', emphasizes the
ways in which self-deception about one's moral character and conduct, self-ignorance
driven by 'inordinate self-love', not only facilitates vicious actions but hinders the
agent's ability to change by obscuring them from view. Such ignorance, claims Butler,
undermines 'the whole principle of good' and corrupts conscience, which is 'the guide
of life' ('Upon Self-Deceit'). Existentialist philosophers such as Kierkegaard and Sartre,
in very different ways, viewed self-deception as a threat to authenticity insofar as self-deceivers fail to take responsibility for themselves and their engagements, past, present
and future. By alienating us from our own principles, self-deception may also threaten
moral integrity (Jenni 2003). Furthermore, self-deception manifests certain
weaknesses of character that dispose us to react to fear, anxiety, or the desire for pleasure
by biasing our belief acquisition and retention in ways that serve these emotions
and desires rather than accuracy. Such epistemic cowardice and lack of self-control may
inhibit the ability of self-deceivers to stand by or apply moral principles they hold by
biasing their beliefs regarding particular circumstances, consequences or engagements,
or by obscuring the principles themselves. In all these ways and a myriad of others,
philosophers have found some self-deception objectionable in itself or for the
consequences it has on our ability to shape our lives.
Those finding self-deception morally objectionable generally assume that self-deception, or at least the character that disposes us to it, is under our control to some
degree. This assumption need not entail that self-deception is intentional, only that it is
avoidable in the sense that self-deceivers could recognize and respond to reasons for
resisting bias by exercising special scrutiny (see section 5.1). It should be noted,
however, that self-deception still poses a serious worry even if one cannot avoid
entering into it, since self-deceivers may nevertheless have an obligation to overcome it.
If exiting self-deception is under the guidance control of self-deceivers, then they might
reasonably be blamed for persisting in their self-deceptive beliefs when those beliefs concern
matters of moral significance.
But even if agents don't bear specific responsibility for their being in that state, self-deception may nevertheless be morally objectionable, destructive and dangerous. If
radically deflationary models of self-deception do turn out to imply that our own desires
and emotions, in collusion with social pressures toward bias, lead us to hold self-deceptive beliefs and cultivate habits of self-deception of which we are unaware and
from which we cannot reasonably be expected to escape on our own, self-deception would
still undermine autonomy, manifest character defects, obscure us from our moral
engagements and the like. For these reasons, Rorty (1994) emphasizes the importance
of the company we keep. Our friends, since they may not share our desires or emotions,
are often in a better position to recognize our self-deception than we are. With the help
of such friends, self-deceivers may, with luck, recognize and correct morally corrosive
self-deception.
Evaluating self-deception and its consequences for ourselves and others is a difficult
task. It requires, among other things: determining the degree of control self-deceivers
have; what the self-deception is about (Is it important morally or otherwise?); what ends
the self-deception serves (Does it serve mental health, or does it provide cover for moral
wrongdoing?); how entrenched it is (Is it episodic or habitual?); and whether it is
escapable (What means of correction are available to the self-deceiver?). In view of the
many potentially devastating moral problems associated with self-deception, these are
questions that demand our continued attention.
6. Collective Self-Deception
Collective self-deception has received scant direct philosophical attention as compared
with its individual counterpart. Collective self-deception might refer simply to a group
of similarly self-deceived individuals or to a group-entity, such as a corporation,
committee, jury or the like, that is self-deceived. These alternatives reflect two basic
perspectives social epistemologists have taken on ascriptions of propositional attitudes
to collectives. On the one hand, such attributions might be taken summatively as simply
an indirect way of attributing those states to members of the collective (Quinton
1975/1976). This summative understanding, then, considers attitudes attributed to
groups to be nothing more than metaphors expressing the sum of the attitudes held by
their members. To say that students think tuition is too high is just a way of saying that
most students think so. On the other hand, such attributions might be understood nonsummatively as applying to collective entities, themselves ontologically distinct from
the members upon which they depend. These so-called plural subjects (Gilbert 1989,
1994, 2005) or social integrates (Pettit 2003), while supervening upon the individuals
comprising them, may well express attitudes that diverge from individual members. For
instance, saying NASA believed the O-rings on the space shuttle's booster rockets to be
safe need not imply that most or all of the members of this organization personally held
this belief, only that the institution itself did. The non-summative understanding, then,
considers collectives to be, like persons, apt targets for attributions of propositional
attitudes, and potentially of moral and epistemic censure as well. Following this
distinction, collective self-deception may be understood in either a summative or non-summative sense.
In the summative sense, collective self-deception refers to self-deceptive belief shared
by a group of individuals, who each come to hold the self-deceptive belief for similar
reasons and by similar means, varying according to the account of self-deception
followed. We might call this self-deception across a collective. In the non-summative
sense, the subject of collective self-deception is the collective itself, not simply the
individuals comprising it. The following sections offer an overview of these forms of
collective self-deception, noting the significant challenges posed by each.
6.1 Summative Collective Self-Deception: Self-Deception Across a Collective
Suppose, for example, that a close friend of mine holds a welcome but false belief about some matter of deep concern to her in the face of strong
evidence to the contrary. Caring for her as I do, I share many of the anxieties, fears and
desires that sustain my friend's self-deceptive belief, and as a consequence I form the
same self-deceptive belief via the same mechanisms. In such a case, I unwittingly
support my friend's self-deceptive belief and she mine; our self-deceptions are
mutually reinforcing. We are collectively or mutually self-deceived, albeit on a very
small scale. Ruddick (1988) calls this 'joint self-deception'.
On a larger scale, sharing common attitudes, large segments of a society might deceive
themselves together. For example, we share a number of self-deceptive beliefs
regarding our consumption patterns. Many of the goods we consume are produced by
people enduring labor conditions we do not find acceptable and in ways that we
recognize are environmentally destructive and likely unsustainable. Despite our being at
least generally aware of these social and environmental ramifications of our
consumptive practices, we hold the overly optimistic beliefs that the world will be fine,
that its peril is overstated, that the suffering caused by the exploitive and ecologically
degrading practices is overblown, that our own consumption habits are unconnected to
these sufferings anyway, and even that our minimal efforts at conscientious consumption are
an adequate remedy (see Goleman 1989). When self-deceptive beliefs such as these
are held collectively, they become entrenched and their consequences, good or bad, are
magnified (Surbey 2004).
The collective entrenches self-deceptive beliefs by providing positive reinforcement from
others sharing the same false belief, as well as protection from evidence that would
destabilize the target belief. There are, however, limits to how entrenched such beliefs
can become and remain self-deceptive. The social support cannot be the sole or primary
cause of the self-deceptive belief, for then the belief would simply be the result of
unwitting interpersonal deception and not the deviant belief formation process that
characterizes self-deception. If the environment becomes so epistemically contaminated
as to make counter-evidence inaccessible to the agent, then we have a case of false
belief, not self-deception. Thus, even within a collective a person is self-deceived just in
case she would not hold her false belief if she did not possess the motivations skewing
her belief formation process. This said, relative to solitary self-deception, the collective
variety does present greater external obstacles to avoiding or escaping self-deception,
and is for this reason more entrenched. If the various proposed psychological
mechanisms of self-deception pose an internal challenge to the self-deceiver's power to
control her belief formation, then these social factors pose an external challenge to the
self-deceiver's control. Determining how superable this challenge is will affect our
assessment of individual responsibility for self-deception as well as the prospects of
unassisted escape from it.
6.2 Non-Summative Collective Self-Deception: Self-Deception of a Collective Entity
Collective self-deception can also be understood from the perspective of the collective
itself in a non-summative sense. Though there are varying accounts of group belief,
generally speaking, a group can be said to believe, desire, value or the like just in case
its members jointly commit to these things as a body (Gilbert 2005). A corporate
board, for instance, might be jointly committed as a body to believe, value and strive for
whatever the CEO recommends. Such commitment need not entail that each individual
board member personally endorses such beliefs, values or goals, only that as members
of the board they do (Gilbert 2005). While philosophically precise accounts of non-summative self-deception remain largely unarticulated, the possibilities mirror those of
individual self-deception. When collectively held attitudes motivate a group to espouse
a false belief despite the group's possession of evidence to the contrary, we can say that
the group is collectively self-deceived in a non-summative sense.
For example, Robert Trivers (2000) suggests that organizational self-deception led to
NASA's failure to represent accurately the risks posed by the space shuttle's O-ring
design, a failure that eventually led to the Challenger disaster. The organization as a
whole, he argues, had strong incentives to represent such risks as small. As a
consequence, NASA's Safety Unit mishandled and misrepresented data it possessed that
suggested that under certain temperature conditions the shuttle's O-rings were not safe.
NASA, as an organization, then, self-deceptively believed the risks posed by O-ring
damage were minimal. Within the institution, however, there were a number of
individuals who did not share this belief, but both they and the evidence supporting
their belief were treated in a biased manner by the decision-makers within the
organization. As Trivers (2000) puts it, this information was 'relegated to portions of
the organization that [were] inaccessible to consciousness (we can think of the people
running NASA as the conscious part of the organization)'. In this case, collectively held
values created a climate within NASA that clouded its vision of the data and led to its
endorsement of a fatally false belief.
Collective self-deceit may also play a significant role in facilitating unethical practices
by corporate entities. For example, a collective commitment by members of a
corporation to maximizing profits might lead members to form false beliefs about the
ethical propriety of the corporation's practices. Gilbert (2005) suggests that such a
commitment might lead executives and other members to simply lose sight of moral
constraints and values they previously held. Similarly, Tenbrunsel and Messick (2004)
argue that self-deceptive mechanisms play a pervasive role in what they call 'ethical
fading', acting as a kind of bleach that renders organizations blind to the ethical
dimensions of their decisions. They argue that such self-deceptive mechanisms must be
recognized and actively resisted at the organizational level if unethical behavior is to be
avoided. More specifically, Gilbert (2005) contends that collectively accepting that
certain moral constraints must rein in the pursuit of corporate profits might shift
corporate culture in such a way that efforts to respect these constraints are recognized as
part of being a good corporate citizen. In view of the ramifications this sort of collective
self-deception has for the way we understand corporate misconduct and responsibility,
understanding its specific nature in greater detail remains an important task.
Collective self-deception understood in either the summative or non-summative sense
raises a number of significant questions, such as whether individuals within collectives
bear responsibility for their self-deception or for the part they play in the collective's
self-deception, and whether collective entities can be held responsible for their epistemic
failures. Finally, collective self-deception prompts us to ask what means are available to
collectives and their members to resist, avoid and escape self-deception. To answer
these and other questions, more precise accounts of these forms of self-deception are
needed. Given the capacity of collective self-deception to entrench false beliefs and to
magnify their consequences, sometimes with disastrous results, collective self-deception is not just a philosophical puzzle; it is a problem that demands attention.
Bibliography
Ames, R.T., and W. Dissanayake, (eds.), 1996, Self and Deception, New York:
State University of New York Press.
Baron, M., 1988, What is Wrong with Self-Deception, in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California
Press.
Bok, S., 1980, The Self Deceived, Social Science Information, 19: 923–935.
Butler, J., 1726, Upon Self-Deceit, in D.E. White (ed.), 2006, The Works of
Bishop Butler, Rochester: Rochester University Press. [Available online]
Chisholm, R. M., and Feehan, T., 1977, The Intent to Deceive, Journal of
Philosophy, 74: 143–159.
Davidson, D., 1985, Deception and Division, in Actions and Events, E. LePore
and B. McLaughlin (eds.), New York: Basil Blackwell.
Bermúdez, J. L., 2000, Self-Deception, Intentions, and Contradictory Beliefs, Analysis, 60(4): 309–319.
Elster, J., (ed.), 1985, The Multiple Self, Cambridge: Cambridge University
Press.
Fairbanks, R., 1995, Knowing More Than We Can Tell, The Southern Journal
of Philosophy, 33: 431–459.
Funkhouser, E., 2005, Do the Self-Deceived Get What They Want?, Pacific
Philosophical Quarterly, 86(3): 295–312.
Hales, S. D., 1994, Self-Deception and Belief Attribution, Synthese, 101: 273–289.
Hernes, C., 2007, Cognitive Peers and Self-Deception, teorema, 26(3): 123–130.
Levy, N., 2004, Self-Deception and Moral Responsibility, Ratio (new series),
17: 294–311.
Nicholson, A., 2007, Cognitive Bias, Intentionality and Self-Deception, teorema, 26(3): 45–58.
Noordhof, P., 2003, Self-Deception, Interpretation and Consciousness, Philosophy and Phenomenological Research, 67: 75–100.
Sorensen, R., 1985, Self-Deception and Scattered Events, Mind, 94: 64–69.
Tenbrunsel, A. E. and D. M. Messick, 2004, Ethical Fading: The Role of Self-Deception in Unethical Behavior, Social Justice Research, 17(2): 223–236.
Van Fraassen, B., 1995, Belief and the Problem of Ulysses and the
Sirens, Philosophical Studies, 77: 7–37.