Cognitive Biases that Interfere with Critical Thinking and Scientific
Reasoning: A Course Module
Hershey H. Friedman, Ph.D.
Professor of Business
Department of Business Management
Murray Koppelman School of Business
Brooklyn College of the City University of New York
Email: x.friedman@att.net
Abstract
It is clear that people do not always behave in a rational manner. Sometimes they are presented
with too much information or they may want to make a quick decision. This may cause them to
rely on cognitive shortcuts known as heuristics (rules of thumb). These cognitive shortcuts often
result in cognitive biases; at least 175 cognitive biases have been identified by researchers. This
paper describes many of these biases starting with actor-observer bias and ending with zero-risk
bias. It also describes how one can overcome them and thereby become a more rational decision
maker.
Keywords: rational man, cognitive biases, heuristics, anchoring bias, focusing illusion,
happiness, availability bias, confirmation bias, neglect of probability bias, overconfidence bias.
Many of the fundamental principles of economic theory have recently been challenged.
Economic theory is largely based on the premise of the “rational economic man.” Rational man
makes decisions based solely on self-interest and wants to maximize his utility. However, the
rational man theory may be dead or rapidly dying. After the Great Recession of
2008, Alan Greenspan, former Chairman of the Federal Reserve, told Congress: “I made a
mistake in presuming that the self-interests of organizations, specifically banks and others, were
such that they were best capable of protecting their own shareholders” (Ignatius, 2009). Nouriel
Roubini, a prominent economist known as Dr. Doom for predicting the housing market collapse
in 2006, stated that "The rational man theory of economics has not worked" (Ignatius, 2009).
Kahneman (2011: 374) avows: “Theories can survive for a long time after conclusive evidence
falsifies them, and the rational-agent model certainly survived the evidence we have seen, and
much other evidence as well.”
Kahneman (2011: 269) describes how he was handed an essay written by the Swiss
economist Bruno Frey that stated: “The agent of economic theory is rational, selfish, and his
tastes do not change.” Kahneman was astonished that economists could believe this given
that it was quite obvious to psychologists that “people are neither fully rational nor
completely selfish, and that their tastes are anything but stable. Our two disciplines seemed to
be studying different species, which the behavioral economist Richard Thaler later dubbed
Econs and Humans.”
Many economists now realize that man does not always behave in a rational manner.
Thaler and Mullainathan (2008) describe how in experiments involving “ultimatum” games,
we see evidence that people do not behave as traditional economic theory predicts they will.
People will act “irrationally” and reject offers they feel are unfair:
In an ultimatum game, the experimenter gives one player, the proposer,
some money, say ten dollars. The proposer then makes an offer of x, equal
or less than ten dollars, to the other player, the responder. If the responder
accepts the offer, he gets x and the proposer gets 10 − x. If the responder
rejects the offer, then both players get nothing. Standard economic theory
predicts that proposers will offer a token amount (say twenty-five cents)
and responders will accept, because twenty-five cents is better than
nothing. But experiments have found that responders typically reject offers
of less than 20 percent (two dollars in this example).
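The payoff rule of the ultimatum game is simple enough to express in a few lines of code. The following sketch (in Python, with a hypothetical helper name not taken from the paper) contrasts the prediction of standard theory with the behavior experiments typically find:

```python
# A minimal sketch of the ultimatum game's payoff rule (hypothetical helper, not from
# the source). Standard theory says a responder should accept any positive offer; in
# experiments, low offers are frequently rejected, leaving both players with nothing.
def ultimatum_payoffs(pot, offer, responder_accepts):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if responder_accepts:
        return pot - offer, offer
    return 0.0, 0.0

pot = 10.00
print(ultimatum_payoffs(pot, 0.25, responder_accepts=True))   # "rational" prediction: (9.75, 0.25)
print(ultimatum_payoffs(pot, 0.25, responder_accepts=False))  # what often happens: (0.0, 0.0)
```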
This is why we must also draw on insights from the discipline of psychology. Ariely
(2008) uses the latest research to demonstrate that people are predictably irrational; they use
heuristics or rules of thumb to make decisions. Heuristics may be seen as “cognitive
shortcuts” that humans use when making a correct decision would require collecting a great
deal of information but time, money, or the desire to do extensive research is limited
(Caputo, 2013). Using rules of thumb may help a person make quick
decisions but might lead to a systematic bias. Smith (2015) lists 67 cognitive biases that
interfere with rational decision making. A cognitive bias is defined as:
a systematic error in thinking that affects the decisions and judgments that
people make. Sometimes these biases are related to memory. The way you
remember an event may be biased for a number of reasons and that in turn
can lead to biased thinking and decision-making. In other instances,
cognitive biases might be related to problems with attention. Since
attention is a limited resource, people have to be selective about what they
pay attention to in the world around them (Chery, 2016).
Wikipedia lists 175 cognitive biases, but there appears to be a great deal of duplication
(Benson, 2016). According to Benson (2016), cognitive biases help us address four different
problems:
Problem 1: Too much information to deal with (information overload) so
our brain uses tricks to select the information we are most likely to use.
Problem 2: Not enough meaning; but we need to make sense out of what
we perceive. To solve this problem, we fill in the gaps.
Problem 3: Need to act fast.
Problem 4: What should we remember?
These are the downsides of cognitive biases according to Benson:
We don’t see everything. Some of the information we filter out is actually
useful and important.
Our search for meaning can conjure illusions. We sometimes imagine
details that were filled in by our assumptions, and construct meaning and
stories that aren’t really there.
Quick decisions can be seriously flawed. Some of the quick reactions and
decisions we jump to are unfair, self-serving, and counter-productive.
Our memory reinforces errors. Some of the stuff we remember for later
just makes all of the above systems more biased, and more damaging to
our thought processes (Benson, 2016).
Duncan Pierce (n.d.) categorizes cognitive biases using the following groupings:
Social and group effects: Social and group related biases are biases
primarily involving relationships with other people. These biases may be
helpful in understanding group interactions in organizations.
Attitude to risk and probability: These biases affect how individual
people make decisions in the presence of uncertainty and risk or with
probabilistic outcomes. They may have an influence on planning and
decision-making activities.
Seeking/recognizing/remembering information: The information we
internalize can be strongly affected by our existing ideas. What stands out
strongly to one person may not be noticed by another. There are several
cognitive biases about attention–how we direct our noticing and
evaluating activities.
Evaluating information: How we evaluate the information we are aware
of can also be strongly affected by our existing ideas and some seemingly
built-in thinking “shortcuts” we apply.
Taking action: Once information is available and has been evaluated
sufficiently to allow action to be taken, other cognitive biases may have an
effect on the actions we take, perhaps delaying or prolonging them.
Memory, retrospection: Once action has been taken, the ways in which
we evaluate the effectiveness of what we did may be biased, influencing
our future decision-making.
Judgement and liking: How we judge others and expect them to judge us
(in terms of liking, moral acceptability etc.) may be influenced by a
number of biases.
Miscellaneous
Researchers from various disciplines have been examining cognitive biases in order
to understand how to improve decision making in their areas. Caputo (2013), who was
concerned with the negotiation process, asserts that “cognitive misperceptions can highly
bias human behavior when making judgments and decisions, and this is true in negotiations.”
The military has been studying cognitive biases in order to improve decision making by the
US Army. It has found that “Because these heuristics generalize situations and
allow people to make quick decisions despite time constraints or imperfect information, they
often result in predictable errors in judgments (cognitive biases)” (Mission Command, 2015).
The Central Intelligence Agency (CIA) devotes several chapters in its manual to cognitive
biases. The following reason is given for studying these biases:
Psychologists have conducted many experiments to identify the
simplifying rules of thumb that people use to make judgments on
incomplete or ambiguous information, and to show--at least in laboratory
situations--how these rules of thumb prejudice judgments and decisions.
The following four chapters discuss cognitive biases that are particularly
pertinent to intelligence analysis because they affect the evaluation of
evidence, perception of cause and effect, estimation of probabilities, and
retrospective evaluation of intelligence reports (Heuer Jr., 2008; see
Chapters 9-12).
McCann (2014) identified 10 cognitive biases that can result in poor decisions by
executives in finance. Cognitive biases have been found to cause patient harm in healthcare
facilities (Joint Commission, 2016). Smith (2015) avers that a good marketer must
understand cognitive biases in order to convert prospects into customers. Dror, McCormack
& Epstein (2015) focused on the importance of understanding how cognitive biases work in
the legal system. They were especially concerned with how these biases affect the
“impartiality” of expert witnesses. They underscore that:
a mere expectation can bias the cognitive and brain mechanisms involved
in perception and judgment. It is very important to note that cognitive
biases work without awareness, so biased experts may think and be
incorrectly convinced that they are objective, and be unjustifiably
confident in their conclusion (Dror, McCormack & Epstein, 2015).
It is clear that individuals who want to make rational, unbiased decisions in all
kinds of situations, not only in negotiations, military intelligence, or healthcare, should attempt to
understand the various cognitive biases that distort clear thinking. In fact, the best way to reduce
or eliminate cognitive biases is to be aware of them.
Some Cognitive Biases that Adversely Affect Rational Decision Making
Actor-Observer Bias
The actor-observer bias refers to a “tendency to attribute one's own actions to external
causes, while attributing other people's behaviors to internal causes” (Chery, 2017). Thus, if
someone else cuts in line it is because he is a jerk. If I cut in line it is because I am late for an
important appointment. Zur (1991) found that cognitive biases may affect how we perceive
the actions of enemies.
Research has repeatedly demonstrated how the enemy's hostile actions are
more likely to be attributed to natural characteristics, while positive,
conciliatory or peaceful actions are more likely to be attributed to
situational factors. In other words when the enemy is acting peacefully, it
is because it is forced to do so by external circumstances and not by its
own choice. When it acts aggressively, it is due to personal choice or
characteristic behavior (Zur, 1991).
Anchoring Bias
Thaler & Sunstein (2008: 23-24) provide an example of how anchoring works: People
who are asked to guess the population of Tallahassee will probably have no idea. Suppose
subjects are randomly assigned to two groups. Group A is first told that Los Angeles has 4
million people and then asked to guess the population of Tallahassee. Subjects in Group B
are first told that the population of Liberty, NY is 9,900. What will happen is that Group A
will make a much higher guess than Group B as to the population of Tallahassee. The reason
is that the first number they are given is used as an anchor and then adjusted. Group A will
adjust the 4 million downward knowing that Tallahassee is much smaller than Los Angeles.
Group B will adjust upward knowing that Tallahassee is larger than Liberty, NY.
Kahneman (2011: 119-123) describes an experiment with a rigged roulette wheel.
The wheel would only stop at the numbers 10 or 65. The wheel was spun, and subjects were
told to write down the number the wheel stopped at (of course, the number was 10 or 65).
Subjects were asked two questions:
Is the percentage of African nations among UN members larger or smaller
than the number you just wrote? [some saw the wheel stop at 10 and some
saw it stop at 65]
What is your best guess of the percentage of African nations in the UN?
There is no reason that the random spin of a roulette wheel should influence the
response to the second question. Yet, the average response of those who saw a 10 was
25%; for those who saw a 65, the average response was 45%.
Kahneman discusses the possible reasons behind the anchoring effect and concludes that
it is sometimes due to an adjustment effect. One starts from the anchoring number and
then adjusts the estimate by moving up or down. However, Kahneman links the
anchoring effect to priming/suggestion and feels this is the better way to explain the
effect. Anchors prime subjects by suggesting a number (Kahneman, 2011: 122-123).
Lawyers use anchoring to establish a number in a lawsuit. The lawyer will ask for $30
million in damages knowing very well that there is no way the jury will award this kind of
number for, say, a libelous story in the paper about the client. However, she might get her
client a few million dollars since the $30 million will be used as an anchor. Retailers might
use phony markdowns (original price $800) to anchor a price and get customers to overpay
for a product.
Thompson (2013) states: “people don't really like making decisions. We have habits,
we like thinking automatically. So sometimes we avoid making choices altogether because it
stresses us out.” Real estate agents understand this and take advantage of buyers by
employing the following technique.
Since buying a house is highly consequential and difficult to reverse,
rational people should look at a great many options and think them
through very carefully. A good agent will show you a few houses that
are expensive and not very nice, and then one at almost the same price
and far nicer. Many buyers will respond by stopping their search and
jumping on this bargain. Our susceptibility to "bargains" is one of the
cognitive devices we use to simplify choice situations, and one that
companies are conscious of when they position their products
(Thompson, 2013).
The following is some practical advice based on the anchoring bias:
We must all be alert to the influence that arbitrary starting values can have
on our estimates, and we must guard against individuals who might try to
sway our judgments by introducing starting values that serve their
interests, not ours. It has been shown, for example, that an opening
proposal in a negotiation often exerts undue influence on the final
settlement, and so we may want to pay considerable attention to how the
opening proposals are made and who makes them. It has also been shown
that the items we buy in the grocery store are powerfully affected by the
anchor values that are put in our heads by advertisers. In one study, for
example, an end-of-the-aisle promotional sign stated either “Snickers
Bars: Buy 18 for your Freezer” or “Snickers Bars: Buy them for your
Freezer.” Customers bought 38% more when the advertisers put the
number 18 in customers’ heads (Iresearchnet, 2017a).
Availability Bias
This refers to the overestimation of risks that are easily available in memory. How
easily things come to mind is a heuristic that makes people overestimate the importance of
certain kinds of information. If something is difficult to remember, one will assume that it is
less likely to occur. Kahneman (2011) defines availability bias as follows:
There are situations in which people assess the frequency of a class or the
probability of an event by the ease with which instances or occurrences
can be brought to mind. For example, one may assess the risk of heart
attack among middle-aged people by recalling such occurrences among
one's acquaintances. Similarly, one may evaluate the probability that a
given business venture will fail by imagining various difficulties it could
encounter. This judgmental heuristic is called availability. Availability is a
useful clue for assessing frequency or probability, because instances of
large classes are usually reached better and faster than instances of less
frequent classes. However, availability is affected by factors other than
frequency and probability (Kahneman, 2011:425).
Availability bias means that there is a tendency to overestimate the risks of accidents
that are easy to recall. Why are people more worried about being killed with a gun than
drowning in a pool? Or, why do we think more people die of homicides than suicides?
According to Thaler & Sunstein (2008: 24-26), people "assess the likelihood of risks by
asking how readily examples come to mind." Therefore, familiar risks (e.g., those that are
reported in the media) are more frightening to people than those that are not familiar.
Thousands of people die each year from injuries resulting from falling in the shower; yet
people are more worried about being killed by a terrorist. The danger of being hurt from
texting while driving (or even walking) is quite significant. According to Thaler & Sunstein
(2008: 26): "easily remembered events may inflate people's probability judgments." This is
also the reason people believe that accidents are responsible for as many deaths as disease. It
works both ways. Events that we cannot easily bring to mind are judged to have lower probabilities of
occurring. Of course, a marketer can make risks familiar by showing them in advertisements.
Two biases that affect availability are recency and salience. Recency refers to the
tendency to give more weight to the latest, most recent information or events rather than
older information or events. Saliency bias refers to the fact that
Big, dramatic events, such as explosions, gun battles, and natural disasters,
stick in our heads and stay there, undermining our ability to think
objectively about things like causation, probabilities, and death rates.
Since September, 2001, motor-vehicle accidents have killed more than
four hundred thousand Americans, but how often do you worry or get
upset about them? (Cassidy, 2013).
The media makes us aware of the threat from terrorist attacks. It is, however, statistically
much more likely that an American will die in a car accident than be hurt in a terrorist attack.
There is one chance in a hundred that a person will die in a car accident over a lifetime, while
the chance of being killed in a terrorist attack is 1 in 20 million
(http://www.lifeinsurancequotes.org/additional-resources/deadly-statistics/).
Availability Cascade
Kuran & Sunstein (2007) define an availability cascade as “a self-reinforcing process
of collective belief formation by which an expressed perception triggers a chain reaction that
gives the perception increasing plausibility through its rising availability in public discourse.”
Basically, if something is repeated often enough, it will gain much more credibility. As the
popular saying goes: “repeat something long enough and it will become true.”
Backfire Effect
One would think that people would change their beliefs and opinions when presented
with facts that contradict them. The truth, however, is that when people’s beliefs, especially
those that are strongly held, are challenged by contradictory evidence, these incorrect beliefs
often get even stronger. It is very difficult to change people’s beliefs with facts.
Certainty and misinformation are extremely powerful and it is difficult for facts to
change people’s minds. In fact, there is evidence that not only do facts not correct
misinformation, but they make it more persistent and potent (Gorman & Gorman, 2017;
Kolbert, 2017; Mercier & Sperber, 2017; Wadley, 2012). Colleen Seifert, a researcher at the
University of Michigan, has the following to say about misinformation:
Misinformation stays in memory and continues to influence our thinking,
even if we correctly recall that it is mistaken. Managing misinformation
requires extra cognitive effort from the individual… If the topic is not very
important to you, or you have other things on your mind, you are more
likely to make use of misinformation. Most importantly, if the information
fits with your prior beliefs, and makes a coherent story, you are more
likely to use it even though you are aware that it's incorrect (Wadley,
2012).
Bandwagon Effect Bias
This bias refers to the tendency of people to adopt a certain behavior, belief, attitude, or
style if a large number of people have also accepted it (Chery, 2015). It is a type of groupthink.
The fact that a large number of people believe something does not make it true. The bandwagon
effect may have an impact on how people vote. People want to vote for winners and may vote for
someone who is perceived (polls may affect this) as being far ahead in the polls. Advertising
may also try to convince us that a product is good simply because millions of people use it.
There is some evidence that opinion polls may contribute to the bandwagon effect by influencing
undecided voters to go along with the majority (Obermaier, Koch & Baden, 2013).
Bias Blind Spot
People tend to have a bias blind spot, meaning that they are more likely to rate
themselves as being less susceptible to biases (this includes cognitive biases) than others.
We are also more able to detect biases in others than in ourselves. According to one
researcher:
People seem to have no idea how biased they are. Whether a good
decision-maker or a bad one, everyone thinks that they are less biased than
their peers …This susceptibility to the bias blind spot appears to be
pervasive, and is unrelated to people’s intelligence, self-esteem, and actual
ability to make unbiased judgments and decisions (Reo, 2015).
Thus, physicians believe that gifts from pharmaceutical companies are very likely to
unconsciously bias decisions made by other doctors, yet will not bias their
own medical decisions (Reo, 2015).
Certainty Bias
See Zero-Risk Bias.
Choice-Supportive Bias
Choice-supportive bias is the tendency for people making a decision to remember
their choice as being better than it actually was simply because they made it. Basically, we
overrate options we selected and underrate options that were rejected. Post-purchase
rationalization is also a type of choice-supportive bias. One who does not want to fall into
the trap of choice-supportive bias must constantly check and reevaluate to see whether a
decision was correct; one should not defend flawed decisions. After all, everyone makes
mistakes.
Clustering Illusion Bias
People tend to see patterns in what are essentially random streaks. Gamblers tend to do
this and attempt to “beat the system” by taking advantage of these phantom patterns in
various games of chance such as cards (“hot hand”) or the roulette wheel. People tend to see
patterns in price fluctuations of various stocks. The Gambler’s Fallacy is another cognitive
bias that involves a lack of understanding of random streaks.
Confirmation Bias
Once people form an opinion they “embrace information that confirms that view
while ignoring, or rejecting, information that casts doubt on it … Thus, we may become
prisoners of our assumptions” (Heshmat, 2015). People tend to listen only to information that
supports their preconceptions. They may be able to see flaws in their opponents’ arguments;
however, when it comes to their own opinions, they are blind.
Kahneman speaks of “adversarial collaboration” which means bringing together two
researchers who disagree and having them conduct an experiment jointly (Matzke et al.,
2013; Kahneman, 2012). This is a way to reduce the confirmation bias that arises when a
researcher consciously or unconsciously designs an experiment in such a way so as to
provide support for a particular position (Matzke et al., 2013).
Given the huge amount of research available to scholars, it is not difficult for a
researcher to cherry-pick the literature and only reference studies that provide support for a
particular opinion (confirmation bias) and exclude others (Goldacre, 2011). Even if
individual studies are done correctly, this does not guarantee that a researcher writing a state
of the art review paper will write an accurate, undistorted synthesis of the literature. Indeed,
Cynthia Mulrow demonstrated that many review articles were biased (Goldacre, 2011).
Motivated reasoning bias is the flip side of confirmation bias (Marcus, 2008: 56).
Congruence Bias
Congruence bias is similar to confirmation bias. It is a tendency to test a given
hypothesis (usually our own beliefs) rather than considering alternative hypotheses that
might actually produce better results. In effect, someone guilty of congruence bias is trying
to prove that s/he is right. This is the reason that alternative hypotheses are not considered.
From the quotes below, it is clear that Arthur Conan Doyle, creator of Sherlock Holmes,
understood the importance of being aware of the potential existence of several hypotheses
rather than starting with one. After the facts are collected, a detective or researcher selects the
hypothesis that does the best job of fitting the facts.
The following three quotes from Arthur Conan Doyle’s Sherlock Holmes stories
describe how research should be done (Buxbaum, 2013).
“It is a capital mistake to theorize before one has data. Insensibly one
begins to twist facts to suit theories, instead of theories to suit facts” (“A
Scandal in Bohemia”).
“One should always look for a possible alternative and provide against it.
It is the first rule of criminal investigation" (“Adventure of Black Peter”).
“When you have excluded the impossible, whatever remains, however
improbable, must be the truth” (“Sign of the Four”).
There are some researchers who are convinced that marijuana is a gateway drug that
leads to addictions to harder drugs such as heroin. Indeed, there is evidence that a large
percentage of addicts did start with marijuana as adolescents. However, there is an
alternative hypothesis suggested by the National Institute on Drug Abuse (2017):
An alternative to the gateway-drug hypothesis is that people who are more
vulnerable to drug-taking are simply more likely to start with readily
available substances such as marijuana, tobacco, or alcohol, and their
subsequent social interactions with others who use drugs increases their
chances of trying other drugs. Further research is needed to explore this
question (National Institute on Drug Abuse, 2017).
Conjunction Fallacy
According to probability theory, the probability of a conjunction, the joint probability
of A and B, P(A and B), cannot exceed the probability of either of its two individual
constituents, P(A) or P(B). In other words, P(A and B) ≤ P(A) and P(A and B) ≤ P(B).
For example, the probability of being a man with red hair is less than or equal to the
probability of being a man; the probability of being a man with red hair is less than or equal
to the probability of having red hair.
Despite this, people will make this mistake with the so-called “Linda Problem.”
Linda is 31 years old, single, outspoken, and very bright. She majored in
philosophy. As a student, she was deeply concerned with issues of
discrimination and social injustice, and also participated in antinuclear
demonstrations. Which one of these is more probable?
(a) Linda is a bank teller
(b) Linda is a bank teller and active in the feminist movement
Logically, as noted above, option (b) cannot be more likely than option (a), but
Tversky & Kahneman (1983) found that about 85 percent of respondents claimed that it was.
Even advanced graduate students who had taken several statistics courses made this mistake.
Tversky & Kahneman posit that the reason most people get this wrong is because they use a
heuristic called representativeness. Representativeness (or similarity) refers to the tendency
of people to judge the likelihood of an event occurring by finding something similar and then
assuming (often incorrectly) that the probabilities of the two events must be similar. Option
(b) appears to be more representative and better resembles the behavior of Linda. People do
not think of a bank teller as being a political activist.
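A short numeric sketch (in Python, using made-up probabilities rather than any data from Tversky & Kahneman) shows why option (b) can never be more probable than option (a):

```python
# Hypothetical probabilities chosen only for illustration.
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.95   # even if nearly all such tellers are feminists...

# ...the conjunction is the product, so it can never exceed the single event.
p_teller_and_feminist = p_teller * p_feminist_given_teller

assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)   # 0.05 0.0475
```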
Conservatism Bias
People tend to favor a prior view even when presented with new information or
evidence, i.e., there is a tendency to stick to old information and a reluctance to accept
something new. People do not revise their beliefs sufficiently when presented with new
evidence because of conservatism bias. Conservatism bias is related to status quo bias.
Azzopardi (2010: 88) makes this distinction: “The status quo bias is emotional and causes
people to hold on to how things are. The conservatism bias is cognitive and causes people to
hold on to their previous opinions and idea frames even though facts have changed.”
Decoy Effect
Suppose customers are asked to choose between options A and B. Each option has
some advantage (Option A may offer fewer features but be less expensive than Option B
which offers more features). The decoy effect occurs when a third option (the decoy),
Option C, is introduced that is worse than, say, Option B but causes more people to choose
the higher priced Option B. The decoy is purposely introduced to get customers to select the
higher priced option. The following example shows how this could lead more people to
choose Option B.
Option A – Price: $250, 7 features
Option B – Price: $400, 10 features
Option C (Decoy) – Price: $500, 9 features
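One way to see why Option C functions as a decoy is that it is dominated by Option B on both attributes, while A and B involve a genuine trade-off. The short sketch below (illustrative Python, not from the source) makes the dominance check explicit:

```python
# Illustrative option set from the example above.
options = {
    "A": {"price": 250, "features": 7},
    "B": {"price": 400, "features": 10},
    "C": {"price": 500, "features": 9},   # the decoy
}

def dominates(x, y):
    """True if option x is at least as good as y on both attributes and strictly better on one."""
    return (x["price"] <= y["price"] and x["features"] >= y["features"]
            and (x["price"] < y["price"] or x["features"] > y["features"]))

print(dominates(options["B"], options["C"]))  # True: B beats the decoy on price and features
print(dominates(options["B"], options["A"]))  # False: A vs. B is a genuine trade-off
```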
Déformation Professionnelle Bias
Déformation professionnelle is a cognitive bias that comes from the tendency to view
the world in a narrow way and through the eyes of one’s discipline or profession. People
suffering from this see the world in a distorted way and not as it really is. The quote from
Mark Twain saying that “to a man with a hammer, everything looks like a nail” is
reminiscent of this bias. This is similar to what Friedman & Friedman (2010) refer to as
disciplinary elitism.
Dunning-Kruger Effect
This is the tendency of people who are ignorant or unskilled in an area to
overestimate their abilities and believe that they are much more competent than they really
are. People who have absolutely no knowledge of, say, Egyptology, will not suffer from a
Dunning-Kruger Effect. It is people who have a little bit of knowledge that are likely to have
a great deal of confidence in their capabilities.
Kruger & Dunning (1999) documented this effect in a paper titled, "Unskilled and
Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated
Self-Assessments." They asserted that individuals need a reasonable amount of skill and
knowledge to accurately estimate the actual amount of skill and knowledge they possess. A
little knowledge is indeed dangerous (Poundstone, 2017).
Endowment Effect
There is a tendency for people who own an object to value it more than those who do
not own it. Thus, people demand more to give up or sell an object they own than they would
be willing to pay to acquire it. This relates to the status quo bias and loss aversion and is
inconsistent with economic theory. Based on several experiments, Kahneman, Knetsch &
Thaler (1990) concluded: “The evidence presented in this paper supports what may be called
an instant endowment effect: the value that an individual assigns to such objects as mugs,
pens, binoculars, and chocolate bars appears to increase substantially as soon as that
individual is given the object.”
Escalation of Commitment Bias
This is the tendency for an individual or a group to stick with a failing decision or
action rather than accepting that a mistake was made and altering course. There is a reluctance
to admit that the original decision was wrong even when there is clear evidence that this is
the case. Countries sometimes do this and continue to fight a war that is unwinnable. The
expression “Throwing good money after bad” is reminiscent of this bias. It is
sometimes called the “sunk cost fallacy.”
Expectation Bias
This refers to the tendency for the researcher’s expectations to affect the outcome of a
study. It also refers to the fact that people remember things the way they expected them to
occur; this is why many memories are false. Double-blind studies are needed to
minimize expectation bias. Expectation bias is one of the few cognitive biases that has been
researched in the field of auditing (Pike, Curtis & Chui, 2013).
False Consensus Effect
People have a tendency to overestimate the extent to which other people share their
attitudes, behaviors, beliefs, preferences and opinions. We tend to think that others think the
same way we do.
Focalism (Focusing Illusion)
Focalism (sometimes called the focusing illusion) has been defined as:
the tendency for people to give too much weight to one particular piece of
information when making judgments and predictions. By focusing too
much on one thing (the focal event or hypothesis), people tend to neglect
other important considerations and end up making inaccurate judgments as
a result (Iresearchnet, 2017b).
Some researchers consider it part of the anchoring bias. Kahneman (2011: 402)
describes focalism with one sentence: “Nothing in life is as important as you think it is when
you are thinking about it.” By focusing too much on one particular piece of information (i.e.,
the focal event), and not considering other crucial factors, people make erroneous judgments.
Kahneman (2011: 402-404) uses focalism to explain why Midwesterners and Californians
believe that those living in California are happier overall. Californians enjoy the climate of
their state and Midwesterners loathe their climate. However, climate is not that important
when it comes to judging well-being; factors such as meaningful work and standard of living
are crucial. But when California is mentioned, climate comes to mind and all other factors
tend to be ignored.
Another example of the focusing illusion is as follows: The correlation between the
following two questions when presented in the indicated order is approximately 0.
(A) “How happy are you with your life in general?” and (B) “How many dates did you have
last month?” When the questions were reversed, however, the correlation between the two
questions increased to .66. Asking the dating question first caused subjects to focus on
it when answering the general happiness question. The dating question evidently caused that
aspect of life to become salient and its importance to be exaggerated when the respondents
encountered the more general question about their happiness (Strack, Martin & Schwarz,
1988). In survey research, this is sometimes referred to as priming the respondent and is an
easy way to bias a study.
Kahneman states:
The focusing illusion creates a bias in favor of goods and experiences that
are initially exciting, even if they eventually lose their appeal. Time is
neglected, causing experiences that will retain their attention value in the
long term to be appreciated less than they deserve to be (Kahneman, 2011:
406).
Many countries are interested in measuring the happiness of their citizens, i.e., Gross
Domestic Happiness rather than Gross Domestic Product
(http://worldhappiness.report/ed/2017/). Happiness is a very complex, multi-faceted concept
and there are different ways of examining it. We can measure subjective well-being and/or
experienced well-being; the two concepts are not the same (Kahneman, 2011: 391-407).
Subjective well-being is usually measured by asking respondents the following: “All
things considered, how satisfied are you with your life as a whole these days?” or “Taken all
together, would you say that you are very happy, pretty happy, or not too happy?”
Kahneman, et al. (2006) assert that questions of this type elicit a global evaluation of one’s
life. The focusing illusion can have a strong effect on this measurement. Gallup uses the
Cantril Self-Anchoring Scale which measures one type of well-being. It is supposed to
measure well-being “closer to the end of the continuum representing judgments of life or life
evaluation” (http://www.gallup.com/poll/122453/understanding-gallup-uses-cantril-scale.aspx):
Please imagine a ladder with steps numbered from zero at the bottom to 10
at the top.
The top of the ladder represents the best possible life for you and the
bottom of the ladder represents the worst possible life for you.
On which step of the ladder would you say you personally feel you stand
at this time? (ladder-present)
On which step do you think you will stand about five years from now?
(ladder-future) (http://www.gallup.com/poll/122453/understanding-gallup-uses-cantril-scale.aspx)
A different approach asks respondents to report their feelings in real time, i.e.,
experienced well-being. Kahneman (2011: 392-393) describes one method to measure
experienced well-being, the Day Reconstruction Method (DRM). This requires that subjects relive the previous day and
break it up into episodes which may be pleasant (e.g., socializing) or unpleasant (e.g.,
commuting). With DRM it is possible to measure the percentage of time a person spends in
an unpleasant state, the U-index. There are other ways to measure experienced well-being
and these approaches are now being used in several countries.
According to Kahneman et al., the way subjective well-being is measured is biased by
the focusing illusion since it causes people to focus too much on one factor such as income.
Actually, increases in income do not do much to help increase happiness once a person’s
basic needs are satisfied; what matters more than absolute wealth is relative wealth
(Kahneman, et al., 2006). Marriage is another factor that Kahneman (2011: 398-401) says can
impact a global evaluation of life. Life satisfaction is very high when people get married.
Thus, when the global evaluation question is asked, newly-married individuals will give
salience to marriage since it is on their minds when responding to the question. This will bias
the response to the life satisfaction question and make it higher than it should be. On the
other hand, people who have been married for several years will not be focusing much on
their marriage when responding to the life satisfaction question.
People also exaggerate the effect on future well-being of an expensive car and
underestimate the effect of playing cards with friends on a weekly basis. This process of
predicting the effect on future well-being of, say, a particular purchase, marriage, or moving
to another state is known as affective forecasting. “Miswanting” is a term coined by Daniel
Gilbert and Timothy Wilson (2000) to describe poor choices that result from affective
forecasting errors; people often get it wrong when they think about how much they are going
to like something in the future (Kahneman, 2011: 406). The focusing illusion often causes
people to miswant and make blunders such as buying an expensive car or moving to Hawaii.
Framing Bias
Tversky & Kahneman (1981) were among the first to identify this cognitive bias
known as framing. People respond differently to choices/preferences depending on how they are
presented to them. In particular, there will be different responses depending on whether the
choices are presented as a gain or loss (see loss aversion). Thus, doctors are more likely to
prescribe a procedure when it is described as having a 93% survival rate within five years
than if it is presented as having a 7% mortality rate within five years (McNeil, Pauker, &
Tversky, 1988). Similarly, 9 out of 10 students will rate condoms as effective if they are
informed that they have a “95 percent success rate” in stopping HIV transmission; if,
however, they are told that it has a “5 percent failure rate,” then only 4 out of 10 students rate
condoms as being effective (Linville, Fischer, & Fischhoff, 1992). This is why it is more
important for a marketer to emphasize what a prospective customer loses by not making a
purchase than what he or she gains by making the purchase (Flynn, 2013).
Patel provides the following examples of framing:
We are more likely to enjoy meat labeled 75 percent lean meat as opposed
to 25 percent fat. 93 percent of Ph.D. students registered early when the
framing was in terms of a penalty fee for late registration, with only 67
percent registering early when the framing was in terms of a discount for
earlier registration. More people will support an economic policy if the
employment rate is emphasized than when the associated unemployment
rate is highlighted (Patel, 2015).
Kahneman (2011: 373) explains the different results from opt-out and opt-in systems
as a framing effect. This is especially important when it comes to getting people to donate
organs such as kidneys. Countries that use an opt-out system – where the default is that you
are an organ donor and you have to check a box if you do not want to be one – have
significantly more organ donors than countries where an opt-in system is used (i.e., the
individual must explicitly state that s/he is willing to be an organ donor). The differences in
one study showed that the organ donation rate was almost 100% in Austria with an opt-out
system versus 12% in Germany with an opt-in system.
Another example given by Kahneman (2011: 372-373) is the following:
Consider two car owners who seek to reduce their costs. Adam switches
from a gas-guzzler of 12 mpg to a slightly less voracious guzzler that
runs at 14 mpg. The environmentally virtuous Beth switches from a 30
mpg car to one that runs at 40 mpg. Suppose both drivers travel equal
distances over a year. Who will save more gas by switching? (Kahneman,
2011: 372).
The answer is counter-intuitive. If they both drive 10,000 miles, Adam saves
about 119 gallons (from about 833 gallons to 714 gallons) and Beth saves approximately
83 gallons (from 333 gallons to 250 gallons). The problem has to do with framing. If the
information is in gallons per mile (gpm) rather than mpg, the savings become clearer.
Adam switches from a car that consumes .0833 gpm to one that consumes .0714 gpm —
saving .0119 gpm. Beth switches from a car that consumes .0333 gpm to .0250 gpm —
saving .0083 gpm.
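A few lines of arithmetic (Python, assuming the 10,000-mile annual figure used above) confirm the numbers and show how the gallons-per-mile framing makes the comparison transparent:

```python
miles = 10_000  # annual mileage assumed in the example above

def gallons_used(mpg, miles):
    return miles / mpg

adam_saved = gallons_used(12, miles) - gallons_used(14, miles)   # ~119 gallons
beth_saved = gallons_used(30, miles) - gallons_used(40, miles)   # ~83 gallons
print(round(adam_saved, 1), round(beth_saved, 1))                # 119.0 83.3

# The same comparison expressed in gallons per mile:
print(round(1/12 - 1/14, 4), round(1/30 - 1/40, 4))              # 0.0119 0.0083
```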
Fundamental Attribution Error
The fundamental attribution error refers to the tendency of a person observing another
person’s behavior to attribute it to internal factors or personality and to underestimate the
effect of situational causes (i.e., external influences). In other words, we believe that others
do what they do because of their internal disposition. Thus, if you see someone fighting with
another person, you will probably attribute it to the fact that the person has a violent temper.
Of course, it is quite possible that he is the victim of a mugging attempt and is trying to
defend himself. Sherman (2014) provides the following example of the fundamental
attribution error:
A classic example is the person who doesn’t return your call. You could
go the usual route and think, “He is an inconsiderate slob and
my parents were right years ago when they said I should have dropped
him as a friend.” But the fundamental attribution error would remind you
that there might very well be other reasons why this person hasn’t called
you back. Maybe he is going through major issues in his life. Maybe he is
traveling for work. Maybe he honestly forgot (Sherman, 2014).
Gambler’s Fallacy (also known as Monte Carlo Fallacy)
Gambler’s fallacy is a cognitive bias in which a person mistakenly believes that even
with a random process, past outcomes will have an effect on future outcomes. For example, if
you flip a coin 5 times and get 5 heads, one guilty of this bias will expect a tail on the next
toss. Of course, since each toss is an independent event, the probability is a constant 50%.
People somehow incorrectly believe that random processes are self-correcting. The following
example is related to this fallacy and is known as the “misconception of chance.”
A coin is to be tossed 6 times. Which sequence is more likely?
Sequence 1: H T H T T H
Sequence 2: H H H T T T
Of course, both are equally likely. However, people will think sequence 1 is more
likely than sequence 2 because it appears to be more random (Tversky & Kahneman, 1974).
This bias was found to influence decision-makers such as refugee asylum judges, loan
officers, and baseball umpires. They also made the same mistake in underestimating the
probabilities of sequential streaks such as five baseball strikes in a row or approving asylum
for, say, six refugees in a row. Thus, “misperceptions of what constitutes a fair process can
perversely lead to unfair decisions” (Chen, Moskowitz & Shue, 2016).
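A short sketch (Python, fair coin assumed) confirms that any particular sequence of six independent tosses has exactly the same probability, however "streaky" it looks:

```python
def sequence_probability(seq, p_heads=0.5):
    """Probability of one specific sequence of independent tosses."""
    prob = 1.0
    for toss in seq:
        prob *= p_heads if toss == "H" else (1 - p_heads)
    return prob

print(sequence_probability("HTHTTH"))  # 0.015625, i.e., (1/2)**6
print(sequence_probability("HHHTTT"))  # 0.015625, i.e., (1/2)**6
```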
Halo Effect
Halo effect is a cognitive bias that occurs in impression formation in which a person
assumes that because someone possesses positive characteristic A, then she will also possess
positive characteristics B, C, D, E, and F. It also occurs with negative characteristics. If a
person possesses negative characteristic A, then he will also possess negative characteristics
B, C, D, E, and F.
Kahneman (2011: 206-208) feels that the halo effect together with outcome bias helps
explain the popularity of various books dealing with leadership. These books focus on
successful firms and then attribute their success to leadership style. Actually, in most cases it is simply
luck. Chance often explains the success of certain firms and the failures of others, not the
competence of leadership. Indeed, with the passage of time, the situation often reverses itself
and the successful firms become unsuccessful and vice versa. Kahneman claims that the
message of Built to Last, a leadership book by Jim Collins and Jerry I. Porras, is that “good
managerial practices can be identified and that good practices will be rewarded by good
results.” Kahneman (2011: 207) asserts: “In the presence of randomness, regular patterns
can only be mirages.” According to Fitza (2013), chance or luck often has a greater effect on
firm performance (it may account for 70%) than the actual abilities of the CEO.
Hindsight Bias
This is sometimes referred to as the “I knew it all along” effect. It is the tendency to see
past events as being more predictable than they were before they occurred. After an event occurs
(e.g., the election of Donald Trump), people believe that they knew he would win before the election
took place. Boyd (2015) says that “Hindsight bias can make you overconfident. Because you
think you predicted past events, you’re inclined to think you can see future events coming. You
bet too much on the outcome being higher and you make decisions, often poor ones, based on
this faulty level of confidence.”
Hyperbolic Discounting
Hyperbolic discounting is a cognitive bias that explains many supposedly irrational
behaviors such as addictions, health choices, and personal financial decisions. McCann
(2014) lists it as a key bias that adversely affects corporate finance decisions. Hyperbolic
discounting refers to the tendency of people to have a preference for a reward that arrives
sooner rather than wait longer for a larger reward in the future. People discount the value of
the award that arrives later in the future. A rational person would use a constant discount rate
to discount the value of a future reward (this is known as exponential discounting and has
been used in economic theory); this means the discount rate should not change across
different wait times. In reality, however, people use a time-inconsistent discounting model:
the further out in the future the reward, the lower the discount rate we apply to it (Kinari, Ohtake & Tsutsui,
2009; Frederick, Loewenstein & O’Donoghue, 2002).
Thus, one may prefer receiving $5000 now to $5200 in 3 months. However, if the
choice is $5000 in two years or $5200 in two years and 3 months, most people would opt for
the $5200. People do not mind waiting the three months if the wait occurs in two years. What
this indicates is that the discount rate used by people is not constant or rational: As the length
of the delay increases, the time discount rate decreases.
Try this experiment on your friends: Show them a $100 bill and ask: “Would you
rather have this $100 bill now or wait two weeks and get $109?” What you find is that people
are not that rational and want things now. Most will take the $100. Of course, a rational
person should wait the two weeks for the $109—this is equivalent to earning a 9% return ($9
/ $100) for two weeks of waiting. Does anyone know of a bank that offers 9% interest for
two weeks?
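The preference reversal can be illustrated with a small sketch (Python; the discount rate and the hyperbolic parameter are made-up values chosen only to show the pattern, not estimates from the cited studies):

```python
def exponential_value(amount, delay_years, annual_rate=0.10):
    """Constant (time-consistent) discounting: the standard 'rational' model."""
    return amount / (1 + annual_rate) ** delay_years

def hyperbolic_value(amount, delay_years, k=0.2):
    """Hyperbolic discounting: value falls off as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay_years)

for model in (exponential_value, hyperbolic_value):
    for base_delay in (0.0, 2.0):   # the choice offered now, and the same choice two years out
        soon = model(5000, base_delay)          # $5000 after base_delay years
        later = model(5200, base_delay + 0.25)  # $5200 three months after that
        choice = "$5000 sooner" if soon > later else "$5200 later"
        print(f"{model.__name__}, delay {base_delay} yrs: prefers {choice}")

# The exponential chooser picks the same option at both delays; the hyperbolic
# chooser takes the $5000 now but is willing to wait the extra three months
# when the whole choice is pushed two years into the future.
```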
Identifiable Victim Effect
People have a tendency to respond more strongly and to be willing to offer greater
assistance to a single identifiable victim or person at risk than to a large group of anonymous
people at risk. This may be the reason talking about a single case of a victim of a disease may
be more effective in raising money than describing millions of people who are victims. Lee
& Feeley (2016) conducted a meta-analysis of this effect.
IKEA Effect
There is a tendency for people to overvalue and overrate objects they made or assembled
by themselves such as IKEA furniture, regardless of the actual quality of the finished product.
However, this effect seems to only exist when the labor resulted in the successful completion of
the project. If subjects did not complete the task, the IKEA effect disappeared (Norton, Mochon,
& Ariely, 2012).
Illusion of Control
People tend to overestimate how much control they have over external factors such as
prices, costs, demand, and the stock market. In some cases, people actually believe that they can
control the outcome of something that is totally random such as the toss of dice.
Information Overload Bias
People make the mistake of believing that more information means better decisions.
Actually, too much information often results in poorer decisions since people cannot handle
all the information available to them. There is only a limited amount of information the brain
can process. Information overload can cause increased stress and what has been referred to as
information fatigue. Behavioral economists disagree with neoclassical economists and posit
that too many choices lead to poorer decisions (Pollitt & Shaorshadze, 2011).
Ariely (2008: 152-153) demonstrates how having too many options often results in
the failure to make any decision. For example, someone trying to purchase a laptop might
spend several months trying to buy the best laptop and not consider the “consequence of not
deciding.” The difference among the laptops might be very small but the time spent dwelling
on trivial differences, as well as the lost opportunities from not having a laptop, is not taken into
account. We often waste far too much time on making a trivial decision when we would be
better off flipping a coin to make the choice. To learn more about the problem of offering too
many choices, read Barry Schwartz’s book, The Paradox of Choice: Why More Is Less, or view his TED lecture at
http://www.ted.com/talks/barry_schwartz_on_the_paradox_of_choice.html
Insensitivity to Prior Probability of Outcomes
Tversky & Kahneman (1974) found that “When no specific evidence is given, prior
probabilities are properly utilized; when worthless evidence is given, prior probabilities are
ignored.” This is related to the representativeness bias. This example from Tversky &
Kahneman (1974) illustrates how this bias works. The following description of Dick is
provided to subjects.
Dick is a 30 year old man. He is married with no children. A man of high
ability and high motivation, he promises to be quite successful in his field.
He is well liked by his colleagues.
This description was intended to convey no information relevant to the
question of whether Dick is an engineer or a lawyer. Consequently, the
probability that Dick is an engineer should equal the proportion of
engineers in the group, as if no description had been given. The subjects,
however, judged the probability of Dick being an engineer to be .5
regardless of whether the stated proportion of engineers in the group was
.7 or .3. Evidently, people respond differently when given no evidence and
when given worthless evidence.
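A brief sketch (Python; the "worthless evidence" is modeled as a description equally likely for engineers and lawyers, an assumption made only for illustration) shows why the rational answer is the base rate, not .5:

```python
def posterior_engineer(prior_engineer, p_desc_given_engineer, p_desc_given_lawyer):
    """Bayes' rule for P(engineer | description) in a two-category world."""
    prior_lawyer = 1 - prior_engineer
    numerator = p_desc_given_engineer * prior_engineer
    denominator = numerator + p_desc_given_lawyer * prior_lawyer
    return numerator / denominator

# An uninformative description is equally likely under either profession,
# so the posterior simply equals the prior (the base rate).
for base_rate in (0.7, 0.3):
    print(base_rate, posterior_engineer(base_rate, 0.5, 0.5))
# -> 0.7 0.7 and 0.3 0.3, not the .5 that subjects reported.
```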
Insensitivity to Sample Size (Law of Small Numbers)
This cognitive bias is the tendency of people to underestimate the amount of variation
that occurs in small samples. There is considerably more variation in small samples than
large samples and people do not take this into account when estimating probabilities. The
problem below, known as the “Hospital problem,” was used by Tversky & Kahneman (1974)
to illustrate the insensitivity to sample size problem.
A certain town is served by two hospitals. In the larger hospital about 45
babies are born each day, and in the smaller hospital about 15 babies are
born each day. As you know, about 50 percent of all babies are boys.
However, the exact percentage varies from day to day. Sometimes it may
be higher than 50 percent, sometimes lower. For a period of 1 year, each
hospital recorded the days on which more than 60 percent of the babies
born were boys. Which hospital do you think recorded more such days?
The larger hospital (22.1%)
The smaller hospital (22.1 %)
About the same (55.8%)
Most subjects judged the probability of obtaining more than 60 percent
boys to be the same in the small and in the large hospital, presumably
because these events are described by the same statistic and are therefore
equally representative of the general population. In contrast, sampling
theory entails that the expected number of days on which more than 60
percent of the babies are boys is much greater in the small hospital than in
the large one, because a large sample is less likely to stray from 50
percent. This fundamental notion of statistics is evidently not part of
people's repertoire of intuitions (Tversky & Kahneman, 1974).
It is clear from probability theory that the smaller hospital is much more likely to
deviate a great deal from the expected probability of 50%. Thus, a person tossing a coin three
times might well get three tails (a 12.5% chance). But if the coin is tossed 100 times, it is
highly unlikely to deviate much from the 50% probability of getting a tail, i.e., 50 tails. For
more on the Hospital Problem, see Noll & Sharma (2014).
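The exact probabilities can be computed directly from the binomial distribution. The sketch below (Python, assuming independent births with P(boy) = 0.5, as in the textbook version of the problem) shows how much more often the small hospital records such days:

```python
from math import comb

def prob_more_than_60_percent_boys(n_births, p_boy=0.5):
    """P(strictly more than 60% of n_births babies are boys) under a binomial model."""
    return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
               for k in range(n_births + 1)
               if 10 * k > 6 * n_births)   # k/n > 0.6, checked with integer arithmetic

print(round(prob_more_than_60_percent_boys(15), 3))  # smaller hospital: about 0.151
print(round(prob_more_than_60_percent_boys(45), 3))  # larger hospital: about 0.068
```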
Kahneman (2011: 112-113) discusses the problem of selecting samples and indicates
that a large number of studies use samples that are too small to confirm their true hypotheses.
What this means is that there is insufficient power to reject the claim about a population (i.e.,
the null hypothesis) even when it is false. There is a way to ensure that a sample is large
enough to have sufficient power, but most researchers rely on intuition rather than formulas.
As Kahneman (2011: 114) points out: “The strong bias toward believing that small samples
closely resemble the population from which they are drawn is also part of a larger story: we
are prone to exaggerate the consistency and coherency of what we see.”
Kahneman (2011: 117-118) cites a study that concluded that small schools were more
successful than large schools. This was based on the fact that 6 of the top 50 schools in
Pennsylvania were small (an overrepresentation by a factor of four). This resulted in huge
amounts of money invested by the Gates Foundation in creating small schools. In actuality,
inferior schools also tend to be smaller than the average school. The truth is that small
schools are not better than large schools but have more variability. In fact, the evidence
suggests that large schools may be better overall because they provide more curricular
options.
The bottom line is that it is important to realize that many things that happen,
including “hot hands” and winning streaks, are often due to chance. People should be careful before
attributing streaks to some causal effect (e.g., he is a great manager).
Intergroup (In-Group) Bias
Intergroup bias is the tendency to evaluate members of the in-group more favorably
than members of the out-group. This bias can be expressed in various ways including the
allocation of resources, evaluation of peers, behaviors such as discrimination, and attitudes
such as prejudice. If a person believes that another individual belongs to the same group as
herself, she will rate that person more positively and show favoritism.
Loss Aversion
The pain of losing something we own outweighs the joy of an equivalent gain by as much as
two to one. Thus, for example, the pain of losing $1000 that you currently have is about
double the intensity of the joy you would experience from getting $1000. Emel (2013), citing the
work of Dan Ariely, makes the following point:
Loss aversion means that our emotional reaction to a loss is about twice as
intense as our joy at a comparable gain: Finding $100 feels pretty good,
whereas losing $100 is absolutely miserable. People are more motivated
by avoiding loss than acquiring similar gain. If the same choice is framed
as a loss, rather than a gain, different decisions will be made.
The following example cited by Emel (2013) demonstrates the principle of loss
aversion.
Participants were told the US is preparing for an outbreak of an unusual
disease which is expected to kill 600 people. They could pick one of two
scenarios to address the problem:
• 200 people will be saved.
• 1/3 chance 600 people will be saved; 2/3 chance that no people will be saved.
72% of participants chose option 1, while only 28% of participants
chose option 2.
The same group of people were given two more scenarios:
• 400 people will die.
• 1/3 chance no one will die; 2/3 chance 600 people will die.
22% of participants chose option 1, and 78% of participants chose
option 2. People picked the polar opposite answer of their original
choice and the only difference was how the options were framed.
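What makes the reversal striking is that, in expected-value terms, the two framings are identical; a quick check of the numbers quoted above (a trivial Python illustration, not part of the original study):

# Expected outcomes of the 600-person problem under both framings.
saved_certain = 200
saved_gamble = (1/3) * 600 + (2/3) * 0    # also 200 expected to be saved
deaths_certain = 400
deaths_gamble = (1/3) * 0 + (2/3) * 600   # also 400 expected deaths
print(saved_certain, saved_gamble, deaths_certain, deaths_gamble)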
Mere Exposure Effect
This refers to the tendency to prefer and like things merely because we are more familiar
with them. It suggests that repeated exposure to a philosophy or idea will make it more
acceptable to others.
Motivated Blindness
Motivated blindness provides a psychological reason that many people engage in
unethical behavior. It refers to the psychological tendency individuals have to overlook
unethical behaviors when it is in their interest to remain ignorant. Once people have a vested
interest in something, they no longer can be objective. This is why conflicts of interest are
such a problem; it is almost impossible to behave ethically when a conflict of interest is
present. Bazerman & Tenbrunsel (2011a) demonstrate how motivated blindness caused many
ethical failures including the Great Recession of 2008.
It’s well documented that people see what they want to see and easily miss
contradictory information when it’s in their interest to remain ignorant—a
psychological phenomenon known as motivated blindness. This bias
applies dramatically with respect to unethical behavior.
As noted above, “People tend to have a bias blind spot, meaning that they are more
likely to rate themselves as being less susceptible to biases (this includes cognitive biases)
than others.” Bazerman & Tenbrunsel (2011b: 37) observe that “Most of us dramatically
underestimate the degree to which our behavior is affected by incentives and other situational
factors.” On the other hand, we overestimate how others will be influenced by incentives
(e.g., paying people to donate blood).
Motivated Reasoning
As noted above, motivated reasoning is related to confirmation bias. Marcus (2008:
56) defines motivated reasoning as “our tendency to accept what we wish to believe (what
we are motivated to believe) with much less scrutiny than what we don’t want to believe.”
Marcus makes the following distinction between motivated reasoning and confirmation bias:
“Whereas confirmation bias is an automatic tendency to notice data that fit with our beliefs,
motivated reasoning is the complementary tendency to scrutinize ideas more carefully if we
don’t like them than if we do.” Needless to say, people’s reluctance to scrutinize and analyze
contrary ideas makes it difficult for them to change their beliefs. This may contribute to
status quo bias.
Neglect of Probability Bias
The neglect of probability is the tendency to completely ignore probabilities when
making decisions under uncertainty. People often focus on the adverse outcome rather than
on the probability that it will occur. The car ride to the airport is much more dangerous
than the flight itself, yet people are more apprehensive about flying. Likewise, the danger of being
killed in a terrorist attack is extremely low, yet many people who fear terrorism are not afraid
to text while driving, which has a far higher probability of resulting in an accident.
The following example is used to illustrate neglect of probability when it comes to
lotteries.
Two games of chance: In the first, you can win $10 million, and in the
second, $10,000. Which do you play? … The probability of winning is
one in 100 million in the first game, and one in 10,000 in the second game.
So which do you choose? (Meaning Ring, 2016).
The correct answer is the second lottery since it has an expected monetary value that
is ten times greater than the first lottery. Most people, however, would choose the first
lottery.
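The arithmetic behind that claim is simple expected value; a minimal check of the figures quoted above:

# Expected monetary value of each lottery described above.
ev_first = 10_000_000 * (1 / 100_000_000)   # $0.10
ev_second = 10_000 * (1 / 10_000)           # $1.00
print(ev_first, ev_second)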
Omission bias
Omission bias is the tendency to judge commissions – active, harmful actions that
hurt others – as being worse and more immoral than otherwise equivalent omissions (e.g.,
allowing others to die). We think it is worse to directly and actively harm others than cause
harm passively by not doing something, even when the same number of people are hurt. The
famous "Runaway Trolley" case illustrates this bias. Approximately 90% of subjects
are willing to pull a lever that diverts the runaway trolley and kills one person but saves the
lives of five people. On the other hand, very few people would be willing to throw a fat man
off a bridge to stop the runaway trolley and thereby save five people (known as ‘would you
kill the fat man?’). In both cases, the math is the same: one person dies in order to save five
(Bakewell, 2013).
Optimism Bias
This refers to the tendency to be overly optimistic about favorable outcomes. People
do not believe that bad things will happen to them. Evatt (2010) asserts: “Most people expect
they have a better-than-average chance of living long, healthy lives; being successfully
employed and happily married; and avoiding a variety of unwanted experiences such as
being robbed and assaulted, injured in an automobile accident, or experiencing health
problems.”
Outcome Bias
Outcome bias is a cognitive bias which refers to the tendency to judge the quality of a
decision by focusing on the eventual outcome rather than examining the factors that existed
when the decision was made. For example, a doctor might make a correct decision and go ahead
with, say, doing a C-section. If the baby dies, people (and juries) are more likely to believe that
the doctor made a poor decision. People have this inclination to overemphasize outcomes rather
than the factors and issues that were present when the decision was made. A general might do
something foolhardy. However, if he wins the battle, people will think he is a brilliant strategist.
Outcome bias should not be confused with hindsight bias. With hindsight bias, there has
been memory distortion and the past has not been accurately recalled. The person actually
believes that s/he predicted that an event would occur, even though this was not the case. With
outcome bias, on the other hand, the past is not misremembered; it is ignored or devalued. This is
due to the tendency to minimize the uncertainties that existed at the time the decision was made
and to mainly focus on the outcome.
Overconfidence Bias
People tend to overestimate their abilities and are overconfident. This is an even
greater problem with experts. This overconfidence often results in people taking greater risks
than they should. Kolbert (2017) highlights the fact that “People believe that they know way
more than they actually do.” Sloman & Fernbach (2017) also speak of the “knowledge
illusion"; we simply do not understand how little we actually know. With certain kinds of
questions, answers that people feel are "99% certain to be correct" turn out to be incorrect
40% of the time (Kasanoff, 2017).
Several books have been written about expert predictions which usually turn out to be
wrong. Experts do only slightly better than random chance. Kahneman (2011: 218-219) cites
research conducted by Tetlock (2005) that demonstrates how poorly experts who make a
living “commenting or offering advice on political and economic trends” actually perform.
They do not do better than monkeys throwing darts on a board displaying the various
possible outcomes (Kahneman 2011: 219).
This is what can be said about expert predictions:
When they’re wrong, they’re rarely held accountable, and they rarely
admit it, either. They insist that they were just off on timing, or blindsided
by an improbable event, or almost right, or wrong for the right reasons.
They have the same repertoire of self-justifications that everyone has, and
are no more inclined than anyone else to revise their beliefs about the way
the world works, or ought to work, just because they made a mistake.
Extensive research in a wide range of fields shows that many people not
only fail to become outstandingly good at what they do, no matter how
many years they spend doing it, they frequently don’t even get any better
than they were when they started. In field after field, when it came to
centrally important skills—stockbrokers recommending stocks, parole
officers predicting recidivism, college admissions officials judging
applicants—people with lots of experience were no better at their jobs
than those with very little experience (Eveleth, 2012).
Kahneman (2011: 222-233) believes that algorithms often do a better job at
predictions than experts. He describes several situations where one should rely on a simple
checklist consisting of, say, six relevant characteristics rather than relying on an expert. In
fact, Kahneman discusses a simple algorithm developed by Dr. Virginia Apgar in 1953 to
determine whether a newborn infant was in distress. Her method is superior to the expert
judgment of obstetricians since it focuses on several cues. Kahneman does point out the
hostility towards using algorithms. Incidentally, Apgar's algorithm, still in use, has saved
thousands of lives. Kahneman (2011: 226) cites the work of Dawes (1979) and claims that a
simple formula that uses predictors (i.e., independent variables) with equal weights is often
superior to multiple regression models that use complex statistics to assign different weights
to each of the predictor variables. Multiple regression models are often affected by "accidents
of sampling." Of course, some common sense is needed to select the independent variables
that are most likely to accurately predict the dependent variable. Dawes claims that the
simple algorithm of "frequency of lovemaking minus frequency of quarrels" does an
excellent job of predicting marital stability (Kahneman, 2011: 226). The bottom line is that we
should not be overly impressed with the judgment of experts.
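Dawes's point about "improper" equal-weight models can be illustrated with a small simulation. The sketch below uses made-up data (not Dawes's datasets): it fits regression weights on a small training sample and compares them with a simple equal-weight composite of standardized predictors on a holdout sample.

import numpy as np

rng = np.random.default_rng(0)

# Invented data: six noisy predictors of an outcome, 30 training and 30 holdout cases.
n, k = 60, 6
X = rng.normal(size=(n, k))
y = X @ np.array([0.5, 0.4, 0.3, 0.3, 0.2, 0.2]) + rng.normal(scale=1.5, size=n)
train, test = slice(0, 30), slice(30, 60)

# Multiple regression: weights estimated from the small training sample.
beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
pred_regression = X[test] @ beta

# "Improper" linear model: every standardized predictor gets the same weight.
Xz = (X - X[train].mean(axis=0)) / X[train].std(axis=0)
pred_equal_weights = Xz[test].sum(axis=1)

print(np.corrcoef(pred_regression, y[test])[0, 1])     # fitted weights
print(np.corrcoef(pred_equal_weights, y[test])[0, 1])  # equal weights, often about as good

In small samples the fitted weights chase "accidents of sampling," which is why the equal-weight composite frequently predicts the holdout cases about as well.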
It does, however, sometimes pay to be overconfident. There is evidence that
individuals who are overconfident and certain of their abilities are overrated by others;
individuals who are underconfident are rated by others as being worse than they
actually are (Lamba & Nityananda, 2014). This may explain why politicians have
no problem with being so sure of themselves and overpromising (Hutson, 2014).
Overconfidence has also been used to explain the gender gap in the corporate world: men
tend to be more egotistical than women, which makes them appear more capable (Hutson,
2014). Kahneman (2011) believes that one has to be very careful with
people who are overconfident and assertive. Before believing that they know what they are
talking about, one has to have some way of measuring this empirically. He concludes that
“overconfident professionals sincerely believe they have expertise, act as experts and look
like experts. You will have to struggle to remind yourself that they may be in the grip of an
illusion.”
Peak–End Rule
The peak-end rule is a cognitive bias that deals with how people judge experiences,
both pleasant (e.g., vacations) and unpleasant (e.g., sticking a hand in ice-cold water). People
tend to judge an experience mainly by how it felt at its peak and at its end. The peak is
the most intense part of the experience and might be positive or negative. What is interesting
is that people do not average out the entire experience to arrive at an overall rating.
Whitbourne (2012) provides the following examples to demonstrate the peak-end
rule.
participants exposed to 60 seconds of 14 degree ice water (very cold!)
rated the experience as more painful than participants exposed to 90
seconds of exposure: 60 seconds of 14 degree ice water plus 30
additional seconds of 15 degree ice water. In other words, participants
found the 90 seconds of ice water exposure less painful than those
exposed to 60 seconds of nearly equally cold water because the 90
seconds ended with exposure to a “warmer” stimulus. We will rate an
experience as less painful, then, if it ends in a slightly less painful way.
The “peak end” in this case was a one degree difference in water
temperature.
People will prefer and even choose exposing themselves to more pain
(objectively determined) if the situation ends with them feeling less pain…
If you are having a tooth drilled, you’d find it was less painful if the
dentist ends the procedure with some lightening of the drill’s intensity,
even if the procedure is longer than it would otherwise be.
We approach not only our experiences of pleasure and pain in this way,
but also our acquisition of objects that we’re given as gifts… participants
given free DVD’s were more pleased with the gifts if they received the
more popular ones after the less popular ones, than if they received the
exact same DVD’s in the opposite order. When it comes to pleasure, it’s
all about the ending.
Pessimism Bias
This refers to the tendency of some individuals to be overly pessimistic and to exaggerate
the likelihood that negative events will occur. People with this kind of outlook believe that
negative things will keep happening to them and that they will not succeed at all kinds of tasks.
Individuals who suffer from depression are very likely to have a pessimism bias (Alloy &
Ahrens, 1987). It is the opposite of optimism bias.
Planning Fallacy
This bias is related to overconfidence and illusion of control (McCann, 2014). People
tend to underestimate the time and cost it will take to complete a project or task. What often
happens when completing a task is that something unforeseen happens. McCann (2014) lists this
bias together with illusion of control and overconfidence as special problems in the area of
corporate finance. Kahneman (2011: 249-251) cites a 2002 survey of American
homeowners who remodeled their kitchens. They expected the job to cost around
$18,658 on average but ended up spending an average of $38,769.
This happens all the time when governments estimate the cost of new weapons systems
or buildings. Kahneman (2011: 251) provides a simple solution known as "reference class
forecasting." Basically, the forecaster should gather information about the time and cost of
similar ventures undertaken by outsiders and use this information to come up with a baseline
prediction. The forecaster should then consider whether the case at hand justifies adjusting
the baseline prediction up or down.
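One way to operationalize reference class forecasting is to scale the insider's estimate by the overruns observed in similar past projects. The numbers below are hypothetical and only sketch the logic.

# Hypothetical reference class: budgeted vs. actual costs of similar past projects.
past_estimates = [20_000, 15_000, 30_000, 25_000]
past_actuals = [41_000, 27_000, 55_000, 46_000]
overrun_ratios = [a / e for a, e in zip(past_actuals, past_estimates)]
typical_overrun = sorted(overrun_ratios)[len(overrun_ratios) // 2]  # median ratio

inside_view_estimate = 18_658                                 # the planner's own estimate
baseline_prediction = inside_view_estimate * typical_overrun  # outside-view baseline
print(round(baseline_prediction))

The forecaster then adjusts this baseline up or down only if there is specific information suggesting the current project differs from the reference class.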
Projection Bias
Projection bias refers to the tendency of people to misperceive how their future tastes and
preferences will differ from current tastes and preferences. In fact, people have a tendency to
exaggerate to themselves the degree to which their future preferences, values, and tastes will be
the same as their current preferences, values, and tastes (Loewenstein, O’Donoghue, & Rabin,
2003). Projection bias leads to all kinds of poor choices, including becoming addicted to, say,
cigarettes, buying too many impulse items when shopping while hungry, and ordering too much
restaurant food when one is still hungry at the start of the meal. People who plan a summer
vacation during a very cold winter will tend, because of projection bias, to choose places that
are too hot; they assume that they will still want a very warm place. Of course, once the winter
is over, preferences change.
Reactance
Human beings value their freedom and their ability to choose. If someone tries to
restrict that choice and people feel that they are being forced into a certain behavior, they will
resent the diminution in freedom and act in a manner that restores their autonomy. In other
words, they will often do the opposite of what the authority figure tells them to do.
Reactive Devaluation
Reactive devaluation is a cognitive bias that results when people reject or downgrade
ideas merely because they originated from an opponent, competitor, or some other
antagonist. One way to overcome this is by not revealing the source of the idea or pretending
that it came from someone the other party likes.
Regression Toward the Mean (also known as regression to the mean) Bias
Regression toward the mean bias was first documented by Sir Francis Galton (1886)
who was examining the relationship between the height of parents and height of their
children. He found that, in general, parents who are taller than average tend to have children
who are taller than average, and parents who are shorter than average tend to have children who
are shorter than average. However, in instances where the average height of the parents was
greater than the average for the population (e.g., suppose the father is 6'8" and the mother is
5'11"), the children tended to be shorter than the parents. Similarly, when the average height
of the parents was less than the average for the population (e.g., suppose the father is 5'1"
and the mother is 4'10"), the children tended to be taller than the parents.
Regression to the mean is a widespread statistical phenomenon and has many
implications. Thus, if you play a slot machine and have a “hot hand” and win several times in
a row (this is due to chance), you might conclude that you have a winning streak and keep
playing. However, regression toward the mean indicates that if you keep playing, your luck
will run out and you will start losing. The same is true in sports. An athlete that has a
phenomenal year and hits, say, 60 home runs will probably not do as well the next year.
Smith (2016) discusses the so-called “Sports Illustrated Cover Jinx.” There is no curse
associated with being on the cover of Sports Illustrated. The reason players tend to have a
poor year after being on the cover of Sports Illustrated is not a curse but due to the regression
to the mean. Smith (2016) demonstrates that the five baseball players with the highest batting
averages in 2014 (an average of .328 for the five) did worse in 2015 (the average dropped to
.294).
Regression to the mean will not happen if two perfectly correlated variables are
measured (there is no random effect). If two variables do not have such a strong correlation
[Kahneman (2011: 181) provides the following examples: the correlation between SAT
scores and college GPA is .60; and the correlation between income and education level is
.40], there will be a stronger regression-to-the-mean effect. The weaker the correlation, the
greater the role of randomness. The batting average of baseball players during one season
correlates with the batting average of a subsequent season but the correlation is not perfect.
Also, if a measurement is far from the population mean, there will be a stronger
regression-to-the-mean effect since the amount of room to regress is much larger than if the
measurement is close to the population mean.
Regression towards the mean can result in serious mistakes by researchers and
decision makers. They may believe something is due to an experimental factor when it is
simply due to chance. Suppose, for instance, that you take a sample of 200 ADHD children
who score very high on aggressiveness and feed them borscht three times a day. If you examine
their aggressiveness scores 60 days later, the scores should be lower because of regression
toward the mean, not because of the borscht. In fact, this is true of any measurement. If you examine
scores of subjects that are either much higher or much lower than average, and then take a
second set of measurements from the same people, the second set of scores should be closer
to the population average.
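A simple simulation makes this concrete. The scores below are invented (they are not real ADHD data): each child's observed score is a stable "true" level plus day-to-day noise, the most extreme scorers are selected, and they are then measured a second time with no treatment at all.

import random
from statistics import mean

random.seed(1)

true_levels = [random.gauss(50, 10) for _ in range(200)]
first_test = [t + random.gauss(0, 10) for t in true_levels]
second_test = [t + random.gauss(0, 10) for t in true_levels]

# Select the 20 children with the highest scores on the first test.
top = sorted(range(200), key=lambda i: first_test[i], reverse=True)[:20]
print(mean(first_test[i] for i in top))   # well above the population average of about 50
print(mean(second_test[i] for i in top))  # noticeably closer to 50 on the retest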
Morton & Torgerson (2003) feel that all healthcare professionals should be aware of
regression to the mean if they want to make correct decisions.
Clinicians use diagnostic tests to target and monitor treatment. Regression
to the mean can confound this strategy. The preliminary test has a high
probability of giving an abnormal result through chance, and initial
treatment may be unnecessary. Because of this chance effect, there is a
high probability that subsequent measurements will spontaneously regress
towards the mean value. This misleads clinicians and patients into
thinking that treatment has been effective when the treatment was either
not required or ineffective…
Public health interventions are often aimed at sudden increases in disease
and thus vulnerable to the effects of regression to the mean (Morton &
Torgerson, 2003).
Kahneman (2011: 175-176) describes the kind of mistake made by flight instructors. They
believed that praising trainee pilots for an excellent landing often resulted in a subsequent
poor landing, contrary to theories that claim that good performance should be rewarded so
that subjects become conditioned to do well. The true explanation was regression toward the
mean.
Kahneman (2011: 181-182) underscores the point that a statement such as “Highly
intelligent women tend to marry men who are less intelligent than they are” will result in
many interesting theories involving causality. For example, some people will feel that this is
due to the fact that very intelligent women do not want to compete with their husbands. In
actuality, regression to the mean provides a simpler explanation.
Selective Perception Bias
People tend to allow their expectations or beliefs to influence how they perceive the
world. Thus, information that contradicts existing beliefs will tend to be overlooked and/or
forgotten; information in agreement with their expectations will be noticed and retained
(selective retention).
Self-Serving Bias
There is no question that “People have a need to see themselves positively” (Heine, et
al., 1999; Wang, et al., 2015). Self-serving bias, a type of attributional bias, enables people to
see themselves in a positive light. It is a type of cognitive bias that involves attributing one’s
successes to internal, personal characteristics (internal attributions) and blaming one’s
failures on outside forces beyond one’s control (external attributions). In other words, we
take personal credit when we succeed (e.g., get an A+ in a course), but if something does not
work out (e.g., getting a D in a course), we tend to deny responsibility and blame outside
factors such as a poor teacher or an unfair test. One thing self-serving bias accomplishes is
that it improves one’s self-esteem and strengthens the ego. However, it makes it difficult for
a person to desire to improve if s/he believes that all failures are due to outside forces.
Semmelweis Reflex
The Semmelweis Reflex refers to the tendency to reject new ideas because they
contradict established beliefs and paradigms. This was named after Dr. Ignaz Semmelweis
who could not convince doctors to wash their hands before delivering babies (see story at
http://www.exp-platform.com/Pages/SemmelweisReflex.aspx).
Status Quo Bias
Status quo bias is a cognitive bias that occurs when people favor the familiar and prefer
that things remain the same rather than opting for change. People seem to prefer inaction to
making decisions. It also manifests itself when inertia results in people continuing with a
previously made decision rather than trying something new. People are more upset by the
negative consequences that may result from making a new decision than by the consequences
of not making any decision (Kahneman & Tversky, 1982). Choosing by default (the default may
be a historical precedent or a choice that has been made noticeable), an automated choice
heuristic, is related to status quo bias.
Stereotyping Bias
Stereotyping is a mental shortcut used by people when making decisions about strangers.
When stereotyping we have certain expectations about the qualities and attributes members of a
group (e.g., women, blacks, Jews, homosexuals, Hispanics, Asians, Moslems, etc.) possess.
Benson (2016) notes that people tend to “prefer generalizations over specifics because they take
up less space.” Some stereotypes may have validity. One might make certain assumptions about
a person who identifies himself as a liberal Democrat or conservative Republican. Many
stereotypes, however, are incorrect and based on incorrect beliefs about certain groups. In any
case, there is a great deal of variability among individuals that comprise a group.
Survivor Bias
This refers to the tendency to focus on the people or objects that survived or
succeeded. We tend to ignore the non-survivors and might completely overlook them
because they have become invisible. Unfortunately, in many cases, the non-survivors or
failures can provide us with a great deal of information. However, since they are not around,
we may not even be aware that there is a great deal of missing information.
Shermer (2014) provides the following interesting example of survivor bias citing
Gary Smith author of the book, Standard Deviations (Smith, 2014):
Smith illustrates the effect with a playing card hand of three of clubs, eight
of clubs, eight of diamonds, queen of hearts and ace of spades. The odds
of that particular configuration are about three million to one, but Smith
says, “After I look at the cards, the probability of having these five cards is
1, not 1 in 3 million.”
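The "three million to one" figure comes from counting the number of distinct five-card hands; a quick check:

from math import comb

hands = comb(52, 5)   # number of distinct five-card hands from a 52-card deck
print(hands)          # 2,598,960 -- roughly the "3 million" Smith refers to
print(1 / hands)      # the prior probability of being dealt any one particular hand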
Survivor bias is also known as sampling or selecting on the dependent variable. This
is where the researcher selects cases where some measure or phenomenon of interest has
been observed while excluding the cases where the measure or phenomenon of interest has
not been observed. Conclusions about the measure or phenomenon of interest are then drawn
from the selected cases alone. For example, suppose a researcher looks only at successful firms as measured by
annual returns. She concludes that these firms were headed by leaders who had humility and
concludes that humility on the part of the CEO will make a company great. This finding may
or may not be true. The flaw in the researcher’s reasoning is that she did not also examine
unsuccessful firms. It is quite possible that unsuccessful firms are also headed by humble
CEOs.
In Search of Excellence by Tom Peters and Robert H. Waterman (1982) is one of the
most popular business books. The authors studied 43 of America’s best run companies in
order to determine what made them successful and came up with eight basic principles of
management. In other words, they sampled based on the dependent variable of “excellent
firms in 1982.” The question is what happened to those firms. Eckel (2013) says that “two
thirds of them underperformed the S&P 500 over a decade. Some faltered badly, and some
even went out of business.” Kodak, K Mart, and Wang Labs are three examples of firms on
Peters and Waterman's (1982) list that went bankrupt. Amdahl, also on the list, was successful
until the early 1990s and then started losing money and was eventually taken over. Baum and
Smith (2015) also found that the stock performance of these companies did not stand the test
of time.
Von Restorff Effect
This bias is named after the German psychologist Hedwig von Restorff (1906–1962).
She found that things that are radically different and distinctive are more likely to stand out
in memory than ordinary items. This is the logic behind highlighting terms that we want to
remember.
Heuer, Jr. (2010, Chapter 10) suggests other ways to make information stand out:
Specifically, information that is vivid, concrete, and personal has a greater
impact on our thinking than pallid, abstract information that may actually
have substantially greater value as evidence. For example:
*Information that people perceive directly, that they hear with their own
ears or see with their own eyes, is likely to have greater impact than
information received secondhand that may have greater evidential value.
*Case histories and anecdotes will have greater impact than more
informative but abstract aggregate or statistical data (Heuer, Jr., 2010,
Chapter 10).
Zeigarnik Effect
People tend to find it easier to remember a task that has been left incomplete than one which
has been completed. This probably has to do with the way short-term memory works. The
effect is named after the Russian psychologist Bluma Zeigarnik, who first wrote about it
(Zeigarnik, 1927).
Zero-Risk Bias (also known as “Certainty Bias” and “Certainty Effect”)
Studies show that people have a preference for options that result in reducing a small
risk to zero over a greater reduction in a much larger risk. In other words, we tend to have a
preference for the absolute certainty of a smaller benefit (i.e., complete elimination of risk) to
the lesser certainty of receiving a larger benefit. The risk of having an autistic child is much
smaller than the risk of a child dying from infectious diseases. Yet many parents try to reduce
the risk of autism by not vaccinating their children (actually, there is no evidence linking
autism to vaccines) and take on the much higher risk associated with infectious diseases such
as measles, rubella, and mumps.
Duff (2014) uses the following example to explain zero-risk bias:
For example, suppose one had the following two scenarios: 1) I give you $50,
then flip a coin. If it’s heads, I take back the $50. 2) I flip a coin. If it's tails, I
give you $50. Which one would you pick?
Majority of people decide to pick option 2, even though the odds are exactly
the same: 50 per cent of getting $50. People are loss averse and they avoid the
idea of “losing”: There’s a chance of “losing” in option 1, thus people avoid
that.
Duff (2014) provides another version of this problem:
When asked to choose between two versions of risk reduction: reduce risk
subtype A from 5 per cent to 0 per cent, or reduce risk subtype B from 50 per
cent to 25 per cent, when they cost the same, people would tend to pick the 5
per cent to 0 per cent, even though arguably the 50 per cent to 25 per cent
would do far more good. This latter choice is due to the zero-risk bias, where
people try to reduce the risk of one of the options to 0.
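The arithmetic behind "far more good" is worth spelling out. Assuming, purely for illustration, that each risk applies to the same population of 1,000 people and the two programs cost the same:

population = 1_000
prevented_a = population * (0.05 - 0.00)   # 5% -> 0%: 50 cases prevented
prevented_b = population * (0.50 - 0.25)   # 50% -> 25%: 250 cases prevented
print(prevented_a, prevented_b)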
Kahneman (2011: 312-314) discusses the Allais paradox to demonstrate how even the
greatest statisticians were susceptible to a certainty effect.
In problems A and B, which would you choose?
A. 61% chance to win $520,000 OR 63% chance to win $500,000
B. 98% chance to win $520,000 OR 100% chance to win $500,000
(Kahneman, 2011: 313).
Most people prefer the left-hand option in problem A and the right-hand option
(certainty) in problem B. This pattern of choice makes no logical sense and violates
utility theory. Allais demonstrated that “the leading decision theorists in the world had
preferences that were inconsistent with their own view of rationality!” Kahneman
explains this using the certainty effect.
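A quick look at the expected values helps show what the certainty effect is doing: the $520,000 gamble has the higher expected value in both problems, yet most people abandon it only when the alternative becomes a sure thing.

# Expected values of the four options in the Allais-style problems quoted above.
print(0.61 * 520_000, 0.63 * 500_000)   # Problem A: 317,200 vs. 315,000
print(0.98 * 520_000, 1.00 * 500_000)   # Problem B: 509,600 vs. 500,000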
People are willing to pay a great deal to completely eliminate a risk. This results
in laws that focus on attempting to reduce a risk to zero regardless of the actual benefits.
This may be the reason the public prefers the “perfect” cleanup of a Superfund site or
totally outlawing cancer-causing additives from foods. The cost and effort required to
reduce the risk to zero may not be worth it given the limited resources available to
government (Duff, 2014). The same can be said of all the tests done by the healthcare
system. The costs involved in zero-risk healthcare are enormous and it may make more
sense to spend the money on preventive medicine and/or healthcare for the indigent.
One of the most powerful words in advertising is "free." This may relate to the zero-risk bias. When something is free, there is no risk attached to acquiring it. There should be no
difference between purchasing two bottles of champagne at $40 each for a total of $80 or
paying $80 for one bottle and getting the second one free; either way, the consumer gets two
bottles of champagne for $80. However, the word “free” changes everything.
For example, in one study where people were offered a choice of a
fancy Lindt truffle for 15 cents and a Hershey’s kiss for a penny, a
large majority (73%) chose the truffle. But when we offered the same
chocolates for one penny less each—the truffle for 14 cents and the
kiss for nothing—only 31% of participants selected it. The word
“free,” we discovered, is an immensely strong lure, one that can even
turn us away from a better deal and toward the “free” one (Ariely,
2009).
Conclusion
Taylor (2013) highlights the fact that cognitive biases are bad for business because
they often result in poor decisions. He notes that there are several ways to reduce these
biases. First, he posits that one must be aware of the different types of biases. By studying
cognitive biases and understanding them, one can reduce their impact. Second, he asserts that
collaboration is probably the most powerful tool for minimizing cognitive biases. This is why
it is important to have diverse groups (groupthink is also a bias) that can work together to
make a decision. He highlights recommendations made by Daniel Kahneman, who suggests
that the following questions be asked in order to minimize cognitive bias:
Is there any reason to suspect the people making the recommendation of
biases based on self-interest, overconfidence, or attachment to past
experiences? Realistically speaking, it is almost impossible for people to
not have these three influence their decisions.
Was there groupthink or were there dissenting opinions within the
decision-making team? This question can be mitigated before the
decision-making process begins by collecting a team of people who will
proactively offer opposing viewpoints and challenge the conventional
wisdom of the group (Taylor, 2013).
Soll, Milkman & Payne (2015) provide suggestions on how to outsmart some
cognitive biases. They discuss three tools that can be used to prevent what they call
“misweighting,” i.e., placing too much weight on the wrong information: blinding,
checklists, and algorithms. Blinding is one way to eliminate the effects of such factors as
stereotyping. One orchestra had job candidates audition behind a screen in order to prevent
gender bias. This resulted in a huge increase (from 5% to 40%) in female players. The use of
checklists helps place the focus on what is truly relevant and helps reduce cognitive biases that
may result in poor choices. This has helped venture capitalists and HR people make better
selections. Algorithms are far from perfect since they are created by people but are still
considerably better than relying solely on human judgment (Soll, Milkman & Payne, 2015).
How many decisions a day does the average person make? This is a very difficult
question to answer. A number that is cited often on the internet is 35,000 (Hoomans, 2015).
Of course, most of these decisions are as trivial as when to get out of bed. Many decisions,
however, are quite serious and a poor choice can cause immense harm. Indeed, wars are often
the result of cognitive biases when it comes to understanding the enemy (Zur, 1991).
Certainly, politicians, business people, military leaders, negotiators, and investors have to
strive to improve their decision-making abilities. This means doing everything possible to
minimize cognitive biases. Without understanding cognitive biases and knowing how to deal
with them, one cannot be a critical thinker.
References
Alloy, L. B. & Ahrens, A. H. (1987). Depression and pessimism for the future: Biased use of
statistically relevant information in predictions for self versus others. Journal of
Personality and Social Psychology, 52(2), 366-378.
Ariely, D. (2009). The end of rational economics. Harvard Business Review, July. Retrieved
from https://hbr.org/2009/07/the-end-of-rational-economics
Ariely, D. (2008). Predictably irrational. New York: HarperCollins Publishers.
Azzopardi, P. V. (2010). Behavioural technical analysis: An introduction to behavioural
finance and its role in technical analysis. Hampshire, Great Britain: Harriman
House.
Bakewell, S. (2013). Clang went the trolley: ‘Would you kill the fat man?’ and ‘the trolley
problem.’ New York Times Book Review. Retrieved from
http://www.nytimes.com/2013/11/24/books/review/would-you-kill-the-fat-man-and-the-trolley-problem.html
Baum, G. & Smith, G. (2015). Great companies: Looking for success secrets in all the wrong
places. Journal of Investing, Fall, 61-72. Available at: http://economics-files.pomona.edu/GarySmith/SuccessSecrets.pdf
Bazerman, M. H. & Tenbrunsel, A. E. (2011a). Ethical breakdowns. Harvard Business
Review, April. Retrieved from https://hbr.org/2011/04/ethical-breakdowns
Bazerman, M. H. & Tenbrunsel, A. E. (2011b). Blind spots: Why we fail to do what's right
and what to do about it. Princeton, NJ: Princeton University Press.
Benson, B. (2016). Cognitive bias cheat sheet. Better Humans. Retrieved from
https://betterhumans.coach.me/cognitive-bias-cheat-sheet-55a472476b18
Boyd, D. (2015, August 30). Innovators beware the hindsight bias. Psychology Today.
Retrieved from https://www.psychologytoday.com/blog/inside-the-box/201508/innovators-beware-the-hindsight-bias
Buxbaum, R. E. (2013). The scientific method isn't the method of scientists.
Rebresearch.com. Blog. Retrieved from http://www.rebresearch.com/blog/the-scientific-method-isnt-the-method-of-scientists/
Caputo, A. (2013). A literature review of cognitive biases in negotiation processes.
International Journal of Conflict Management, 24(4), 374-398.
Cassidy J. (2013, September 11). The saliency bias and 9/11: Is America recovering? New
Yorker. Retrieved from http://www.newyorker.com/news/john-cassidy/the-saliency-bias-and-911-is-america-recovering
Chen, D., Moskowitz, T. J. & Shue, K. (2016). Decision-making under the gambler’s fallacy:
Evidence from asylum judges, loan officers, and baseball umpires. Quarterly Journal
of Economics, 131(3), March, 1-60. DOI: 10.1093/qje/qjw017
Chery, K. (2017). What is the actor-observer bias? Verywell.com. Retrieved from
https://www.verywell.com/what-is-the-actor-observer-bias-2794813
Chery, K. (2016). What is a cognitive bias: Definition and examples. Verywell.com.
Retrieved from https://www.verywell.com/what-is-a-cognitive-bias-2794963
Chery, K. (2015, November 9). What is the bandwagon effect. Verywell.com. Retrieved from
https://www.verywell.com/what-is-the-bandwagon-effect-2795895
Dawes, R. (1979). The robust beauty of improper linear models in decision making.
American Psychologist, 34(7), July, 571-582.
Dror, I. E., McCormack, B. M. & Epstein, J. (2015). Cognitive bias and its impact on expert
witnesses and the court. Judges' Journal, 54(4). Retrieved from
http://www.americanbar.org/publications/judges_journal/2015/fall/cognitive_bias_and_its_impact_on_expert_witnesses_and_the_court.html#6
Duff, V. (2014, December 5). Riskfactor: Zero-risk bias. Financial Chronicle. Retrieved
from http://www.mydigitalfc.com/news/briskfactorb-zero-risk-bias-308
Duncan Pierce (n.d.). But that’s crazy! Cognitive biases in decision making.
Duncanpierce.org. Retrieved from http://duncanpierce.org/cognitive_bias_workshop
Eckel, B. (2013, November). Fake science. Reinventing Business. Retrieved from
http://www.reinventing-business.com/2013/10/fake-science.html
Emel, M. (2013, August 15). The economy of decisions. Retrieved from
http://www.metia.com/seattle/mandy-emel/2013/08/the-economy-of-decisions/
Evatt, C. (2010). Brain biases. Retrieved from
http://brainshortcuts.blogspot.com/2010/11/optimism-bias.html
Eveleth, R. (2012, July 31). Why experts are almost always wrong. Smithsonian.com.
Retrieved from http://www.smithsonianmag.com/smart-news/why-experts-are-almost-always-wrong-9997024/
Fitza, M. A. (2013). The use of variance decomposition in the investigation of CEO effects:
How large must the CEO effect be to rule out chance? Strategic Management
Journal, 35(12), December, 1839-1852.
Flynn, S. (2013, May 8). Behavioural economics: part three – understanding purchasing
pains. PowerRetail. Retrieved from:
http://www.powerretail.com.au/marketing/behavioural-economics-part-three-understanding-purchasing-pains/
Frederick, S., Loewenstein, G. & O’Donoghue, T. (2002). Time discounting and time
preference: A critical review. Journal of Economic Literature, 40, June, 351-401.
Friedman, H. H. & Friedman, L. W. (2009, May 3). Bigotry in academe: Disciplinary elitism.
SSRN.com. Retrieved from
SSRN: https://ssrn.com/abstract=1398505 or http://dx.doi.org/10.2139/ssrn.1398505
Galton, F. (1886). Regression towards mediocrity in hereditary stature. Journal of the
Anthropological Institute, 15, 246-263.
Gilbert, D. T., & Wilson, T. D. (2000). Miswanting: Some problems in the forecasting
of future affective states. In Thinking and feeling: The role of affect in social
cognition, edited by Joseph P. Forgas, 178-197. Cambridge: Cambridge University
Press.
Goldacre, B. (2011). The dangers of cherry-picking evidence. Guardian. Retrieved from
https://www.theguardian.com/commentisfree/2011/sep/23/bad-science-ben-goldacre
Gorman, S. E. & Gorman, J. M. (2017). Denying to the grave: Why we ignore the facts that
will save us. New York: Oxford University Press.
Heine S. J., Lehman D. R., Markus H. R., Kitayama S. (1999). Is there a universal need for
positive self-regard? Psychological Review, 106, 766–794.
Heshmat, S. (2015, April 23). What is confirmation bias? Psychology Today. Retrieved from
https://www.psychologytoday.com/blog/science-choice/201504/what-is-confirmation-bias
Heuer, Jr., R. J. (2008). Psychology of intelligence analysis. CIA's Center for the Study of
Intelligence. Available at https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art3.html
Hoomans, J. (2015, March 20). 35,000 decisions: The great choices of strategic leaders.
Leading Edge Journal. Retrieved from http://go.roberts.edu/leadingedge/the-great-choices-of-strategic-leaders
Hutson, M. (2014). It pays to be overconfident, even when you have no idea what you’re
doing. New York Magazine. Retrieved from
http://nymag.com/scienceofus/2014/05/pays-to-be-overconfident.html
Ignatius, D. (February 8, 2009). The death of rational man. Washington Post. Retrieved from
http://articles.washingtonpost.com/2009-02-08/opinions/36876289_1_nouriel-roubini-behavioral-economics-irrational-psychological-factors
Iresearchnet (2017a). Anchoring and adjustment heuristic definition. Iresearchnet.com.
Retrieved from https://psychology.iresearchnet.com/social-psychology/social-cognition/anchoring-and-adjustment-heuristic/
Iresearchnet (2017b). Focalism definition. Iresearchnet.com. Retrieved from
https://psychology.iresearchnet.com/social-psychology/social-cognition/focalism/
Joint Commission (2016). Cognitive biases in health care. Quick Safety, Issue 28, October.
Retrieved from
https://www.jointcommission.org/assets/1/23/Quick_Safety_Issue_28_Oct_2016.pdf
Kahneman, D. (2012). The human side of decision making: Thinking things through with
Daniel Kahneman. Journal of Investment Consulting, 13(1), 5-14.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kahneman, D., & Tversky, A. (1982). The psychology of preference. Scientific American,
246, 160-173.
Kahneman, D., Knetsch, J. L. & Thaler, R. H. (1990). Experimental tests of the endowment
effect and the Coase theorem. Journal of Political Economy, 98(6), December, 1325-1348.
Kahneman, D., Krueger, A. B., Schkade, D., Schwarz, N., and Stone, A. (2006). Would you
be happier if you were richer? A focusing illusion. Science, 312 (5782), 1908-1910.
Kasanoff, B. (2017, March 29). 175 reasons why you don’t think clearly. Forbes. Retrieved
from https://www.forbes.com/sites/brucekasanoff/2017/03/29/sorry-you-cant-make-a-logical-data-driven-decision-without-intuition/#1e6bbf847f60
Kinari, Y., Ohtake, F. & Tsutsui, Y. (2009). Time discounting: Declining impatience and
interval effect. Journal of Risk and Uncertainty, 39(1), 87-112. doi:10.1007/s11166-009-9073-1
Kolbert, E. (2017, February 27). Why facts don’t change our minds. New Yorker. Retrieved
from http://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds
Kruger, J. & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in
recognizing one's own incompetence lead to inflated self-assessments. Journal of
Personality and Social Psychology, 77(6), 1121-1134. doi: 10.1037/0022-3514.77.6.1121
Kuran, T. & Sunstein, C. (2007). Availability cascades and risk regulation. University of
Chicago Public Law & Legal Theory Working Paper, No. 181. Retrieved from
http://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1036&context=public_law_and_legal_theory
Lamba S. & Nityananda, V (2014). Self-deceived individuals are better at deceiving others.
PLoS ONE, August, 9(8): e104562. doi:10.1371/journal.pone.0104562
Lee, S. & Feeley, T. H. (2016). The identifiable victim effect: A meta-analytic review.
Social Influence, 11(3), 199-215. http://dx.doi.org/10.1080/15534510.2016.1216891
Linville, P. W., Fischer, G. W., & Fischhoff, B. (1992). AIDS risk perceptions and decision
biases, in J. B. Pryor and G. D. Reeder (Eds.), The social psychology of HIV
infection. Hillsdale, NJ: Erlbaum.
Loewenstein, G., O’Donoghue, T., & Rabin, M. (2003). Projection bias in predicting future
utility. Quarterly Journal of Economics, 118 (4), 1209–1248.
Marcus, G. (2008). Kluge: The haphazard evolution of the human mind. New York:
Houghton Mifflin Company.
Matzke, D., Nieuwenhuis, S., van Rijn, H., Slagter, H. A, van der Molen, M. W. &
Wagenmakers, E. J. (2013). Two birds with one stone: A preregistered adversarial
collaboration on horizontal eye movements in free recall. Retrieved from
http://dora.erbe-matzke.com/papers/DMatzke_EyeMovements.pdf
McCann, D. (2014, May 22). 10 cognitive biases that can trip up finance. CFO. Retrieved
from http://ww2.cfo.com/forecasting/2014/05/10-cognitive-biases-can-trip-finance/
McNeil, B. J., Pauker, S. G., &. Tversky, A. (1988). On the framing of medical decisions, in
D. E. Bell, H. Raiffa, and A. Tversky (Eds.), Decision making: Descriptive,
normative, and prescriptive interactions. Cambridge, England: Cambridge University
Press.
Meaning Ring (2016, March 28). Why You’ll Soon Be Playing Mega Trillions.
Retrieved from http://meaningring.com/2016/03/28/neglect-of-probability-by-rolf-dobelli/
Mercier, H. & Sperber, D. (2017). The enigma of reason. Cambridge, MA: Harvard
University Press.
Mission Command (2015, January 9). Cognitive biases and decision making: A literature
review and discussion of implications for the US army. White Paper. Mission
Command Center of Excellence. Retrieved from
http://usacac.army.mil/sites/default/files/publications/HDCDTF_WhitePaper_Cognitive%20Biases%20and%20Decision%20Making_Final_2015_01_09_0.pdf
Morton, V. & Torgerson, D. J. (2003, May 17). Effect of regression to the mean on decision
making in health care. BMJ, 326(7398), 1083-1084. doi: 10.1136/bmj.326.7398.1083
Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125994/
National Institute on Drug Abuse (2017, January). Is marijuana a gateway drug? Retrieved
from https://www.drugabuse.gov/publications/research-reports/marijuana/marijuana-gateway-drug
Noll, J. & Sharma, S. (2014). Qualitative meta-analysis on the hospital task: Implications for
research. Journal of Statistics Education, 22(2). Available at
www.amstat.org/publications/jse/v22n2/noll.pdf
Norton, M. I., Mochon, D. & Ariely, D. (2012). The IKEA effect: When labor leads to love.
Journal of Consumer Psychology, 22(3), July, 453–460.
Obermaier, M., Koch, T. & Baden, C. (2015). Everybody follows the crowd?
Effects of opinion polls and past election results on electoral preferences. Journal of
Media Psychology, DOI: http://dx.doi.org/10.1027/1864-1105/a000160
Patel, N. (2015, May 18). 5 psychological hacks that will make your pricing page
irresistible. Marketing Land. Retrieved from http://marketingland.com/5-psychological-hacks-will-make-pricing-page-irresistible-121535
Peters, T. & Waterman, R. H. (1982). In search of excellence. New York: Harper & Row.
Pike, B., Curtis, M. B. & Chui, L. (2013). How does an initial expectation bias influence
auditors' application and performance of analytical procedures? Accounting Review,
July, 88(4), 1413-1431.
Pollitt, M. G. & Shaorshadze, I. (2011). The role of behavioural economics in energy and
climate policy. Cambridge Working Papers in Economics (CWPE) No. 1165.
University of Cambridge. Retrieved from
http://www.econ.cam.ac.uk/dae/repec/cam/pdf/cwpe1165.p
Poundstone, W. (2017, January 21). The Dunning-Kruger president. Psychology Today.
Retrieved from https://www.psychologytoday.com/blog/head-in-the-cloud/201701/the-dunning-kruger-president
Reo, S. (2015, June 8). Researchers find everyone has a bias blind spot. Carnegie Mellon
University News. Retrieved from
https://www.cmu.edu/news/stories/archives/2015/june/bias-blind-spot.html
Sherman, M. (2014, June 20). Why we don’t give each other a break. Psychology Today.
Retrieved from https://www.psychologytoday.com/blog/real-men-dont-write-blogs/201406/why-we-dont-give-each-other-break
Shermer, M. (2014, September 1). How the survivor bias distorts reality. Scientific American.
Retrieved from https://www.scientificamerican.com/article/how-the-survivor-bias-distorts-reality/
Sloman, S. & Fernbach, P. (2017). The knowledge illusion: Why we never think alone. New
York: Riverhead Books.
Smith, G. (2016, October 12). The Sports Illustrated cover jinx: Is success a curse?
Psychology Today. Retrieved from https://www.psychologytoday.com/blog/what-the-luck/201610/the-sports-illustrated-cover-jinx
Smith, G. (2014). Standard deviations: Flawed assumptions, tortured data, and other ways
to lie with statistics. New York: Overlook Press.
Smith, J. (2015). 67 ways to increase conversion with cognitive biases. Neuromarketing.
Retrieved from http://www.neurosciencemarketing.com/blog/articles/cognitive-biases-cro.htm#
Soll, J. B., Milkman, K. L. & Payne, J. W. (2015). Outsmart your own biases. Harvard
Business Review, May. Retrieved from https://hbr.org/2015/05/outsmart-your-own-biases
Strack, F., Martin, L. & Schwarz, N. (1988). Priming and communication: Social
determinants of information use in judgments of life satisfaction. European Journal
of Social Psychology, 18(5), 429-442.
Taylor, J. (2013, May 20). Cognitive biases are bad for business. Psychology Today.
Retrieved from https://www.psychologytoday.com/blog/the-power-prime/201305/cognitive-biases-are-bad-business
Tetlock, P. (2005). Expert political judgment: How good is it? How can we know?
Princeton, New Jersey: Princeton University Press.
Thaler, R. H. and Mullainathan, S. (2008). How behavioral economics differs from
traditional economics. The Concise Encyclopedia of Economics. Retrieved from
http://www.econlib.org/library/Enc/BehavioralEconomics.html
Thaler, R. H. & Sunstein, C. R. (2008). Nudge. New Haven, CT: Yale University Press.
Thompson, D. (2013, January 16). The irrational consumer: Why economics is dead wrong
about how we make choices. Atlantic.com. Retrieved from
http://www.theatlantic.com/business/archive/2013/01/the-irrational-consumer-why-economics-is-dead-wrong-about-how-we-make-choices/267255/
Tversky, A. & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction
fallacy in probability judgment. Psychological Review, 90 (4), October, 293–315.
doi:10.1037/0033-295X.90.4.293.
Tversky, A. & Kahneman, D. (1981). The framing of decisions and the psychology of
choice. Science, 211, 453–458.
Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases.
Science, New Series, 185(4157), 1124-1131. Retrieved from
http://www.jstor.org/stable/1738360?seq=1#page_scan_tab_contents
Von Restorff, H. (1933). Über die Wirkung von Bereichsbildungen im Spurenfeld.
Psychologische Forschung, 18, 299-342. doi:10.1007/BF02409636
Wadley, J. (2012, September 20). New study analyzes why people are resistant to correcting
misinformation, offers solutions. Michigan News. Retrieved from
http://ns.umich.edu/new/releases/20768-new-study-analyzes-why-people-are-resistant-to-correcting-misinformation-offers-solutions
Wang, X., Zheng, L., Cheng, X., Li, L., Sun, L., Wang, Q. & Guo, X. (2015). Actor-recipient
role affects neural responses to self in emotional situations. Frontiers in
Behavioral Neuroscience, 9:83. Published online 2015 Apr 15.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4397920/ doi: 10.3389/fnbeh.2015.00083
Whitbourne, S. K. (2012, September 8). Happiness: It’s all about the ending. Psychology
Today. Retrieved from https://www.psychologytoday.com/blog/fulfillment-any-age/201209/happiness-it-s-all-about-the-ending
Zeigarnik, B. (1927). Über das Behalten von erledigten und unerledigten Handlungen.
Psychologische Forschung, 9, 1-85.
Zur, O. (1991). The love of hating: The psychology of enmity. History of European Ideas,
13(4), 345-369.