Relevance and Risk:
How the Relevant Alternatives Framework
Models the Epistemology of Risk
Georgi Gardiner
University of Tennessee
This essay appears in Synthese
Abstract. The epistemology of risk examines how risks bear on epistemic
properties. A common framework for examining the epistemology of risk holds
that strength of evidential support is best modelled as numerical probability given
the available evidence. In this essay I develop and motivate a rival ‘relevant
alternatives’ framework for theorising about the epistemology of risk.
I describe three loci for thinking about the epistemology of risk. The first
locus concerns consequences of relying on a belief for action, where those
consequences are significant if the belief is false. The second locus concerns
whether beliefs themselves—regardless of action—can be risky, costly, or harmful.
The third locus concerns epistemic risks we confront as social epistemic agents.
I aim to motivate the relevant alternatives framework as a fruitful approach
to the epistemology of risk. I first articulate a ‘relevant alternatives’ model of the
relationship between stakes, evidence, and action. I then employ the relevant
alternatives framework to undermine the motivation for moral encroachment.
Finally, I argue the relevant alternatives framework illuminates epistemic
phenomena such as gaslighting, conspiracy theories, and crying wolf, and I draw on
the framework to diagnose the undue skepticism endemic to rape accusations.
Keywords. Relevant alternatives; moral encroachment; risk; stakes; epistemic
injustice; rape accusations; gaslighting; conspiracy theories.
1. Introduction
Judgements have consequences. Our confidence that the bus leaves at noon can mean the difference
between attending and missing the appointment. A parent's belief that his child is dreadful at music
can be self-fulfilling when he fails to invest in music lessons and when the child absorbs that self-conception. A false belief can spawn other false beliefs when relied on in inference. The
epistemology of risk examines the epistemological contours that arise from consequences of
judgement; central questions include whether risks, such as high cost of error, affect epistemic
properties like knowledge, and how we should understand distinctively epistemic kinds of harm,
such as epistemic injustice and epistemic wronging. This essay employs a relevant alternatives
framework to illuminate the epistemology of risk.
A common—indeed ubiquitous—framework for investigating the epistemology of risk holds that
strength of evidential support for a claim, p, is best modelled as the numerical probability of p
conditional on the total available evidence.[1] On this model, which I call the 'quantifiable balance'
conception, evidential support is quantifiable and reflects the balance of available evidence.
Theorists typically endorse this picture and debate, for instance, whether the numerical threshold
[1] Note 'evidential support' can be interpreted broadly, to encompass epistemic competence and similar truth-conducive features. This is sometimes called 'epistemic support', but this terminology can mislead because—as I describe in section seven—some theorists claim practical and moral factors affect epistemic position by affecting whether S's belief is epistemically justified or known.
for justified belief increases with costs of error. Perhaps 90% probability given the evidence
suffices for justified belief when stakes are low, for instance, but not when stakes are high. Others
endorse the basic quantifiable balance framework, but argue that other conditions, such as
sensitivity of evidence, are also required for justified belief. Such approaches augment, but do not
reject, the broad contours of the quantifiable balance framework.
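The threshold view just described can be stated schematically. The following is an illustrative rendering of the quantifiable balance conception, not a formula any particular theorist is committed to; the threshold function t and the sample values are my own notational shorthand.

```latex
% Schematic statement of the stakes-sensitive threshold view.
% E is S's total evidence; c is the cost of error; t is a hypothetical
% threshold function, assumed non-decreasing in c.
\[
  S \text{ is justified in believing } p
  \quad\text{iff}\quad
  \Pr(p \mid E) \,\ge\, t(c),
  \qquad \text{e.g. } t(c_{\text{low}}) = 0.9 < t(c_{\text{high}}).
\]
```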
In this essay I develop and motivate a rival framework for theorising about the epistemology of
risk. A relevant alternatives framework illuminates how epistemic properties are affected by risk
whilst avoiding problems plaguing the quantifiable balance conception of evidential support. At
the very least, a relevant alternatives framework provides a welcome alternative to the dominant
quantifiable balance conception; I hope to motivate that a relevant alternatives framework for the
epistemology of risk is worth taking seriously.[2]
In section two I introduce three central loci for thinking about the epistemology of risk. The first
locus concerns consequences of relying on belief for action, where those consequences are
significant if the belief is false. The second locus concerns whether beliefs themselves—regardless
of action—can be risky, costly, or harmful. The third locus concerns risks we face as epistemic
agents, particularly focusing on vulnerabilities from the social-situatedness of our epistemic
agency. The three loci do not jointly exhaust questions about the epistemic consequences of risk,
but they are three paramount arenas.
In section three I describe the relevant alternatives condition on knowledge, and use this to
introduce the overall relevant alternatives framework. In section four I set aside the conditions of
knowledge and instead apply a relevant alternatives framework to the epistemology of risk. I first
examine how costs of error affect whether evidence suffices for action. I argue that rising error
costs mean increasingly remote error possibilities must be addressed. In section five I describe
what determines an error possibility’s remoteness. In sections six and seven I examine the second
locus—whether beliefs themselves can be costly or risky, severed from resulting action or
inference. I first explain the challenge from moral encroachment, which asks whether purist
epistemology can explain apparent conflicts between epistemic and moral norms. In section six I
argue a relevant alternatives framework addresses the challenge from moral encroachment; I
thereby argue against moral encroachment by undermining its motivation. In section seven I argue
a relevant alternatives framework can fruitfully model moral encroachment. In sections eight and
nine I turn to the third locus: Risks we encounter as social epistemic agents. This includes risks of
epistemic harm and injustice. I argue that a relevant alternatives framework highlights and
systematises kinds of social epistemic harm and injustice. These features are not well-modelled by
the simpler quantifiable balance conception of evidential support.
2. Three Loci
A classic locus of the epistemology of risk concerns the consequences of relying on a belief when
that belief is false. Rowing across the Atlantic is serious business; you must be confident the boat
is sturdy. Transferring money can be risky: do you know the website is secure? If your belief is
[2] A note on the scope of this essay: I here focus on articulating the relevant alternatives framework and explaining
how it illuminates three loci of the epistemology of risk. Another key undertaking to rejuvenate the relevant
alternatives framework is criticising rival frameworks, such as the quantifiable balance approach. I lack space to
do justice to this second aspect here, and I instead focus on directly motivating the relevant alternatives framework.
I articulate two objections to the quantifiable balance model in footnote 38 and its corresponding main text. For
further objections to quantifiable balance theories of epistemic support, see Cohen (1977), Nelkin (2000),
Achinstein (2003), Ho (2008; 2015), Nelson (2002), Littlejohn (2012), Buchak (2013; 2014), Haack (2014), Staffel
(2016), Nance (2016), Smith (2016), Leitgeb (2017), and Jackson (2018). For surveys, see Ho (2015), Di Bello
(2013), and Gardiner (2019a).
false, the costs could be significant. Consider the following pair of vignettes, adapted from Keith
DeRose (1992).
Bank Low Stakes. Driving home from work on Friday, Loren plans to stop at the bank to
deposit her paycheque. She sees the long queue, and so decides to return on Saturday.
Depositing her paycheque is not urgent. She must deposit it before her rent is due, but that is
not for two weeks. ‘The bank will be open tomorrow’, Loren thinks to herself, ‘I’ll return then’.
Loren drives past the bank.
Bank High Stakes. Driving home from work on Friday, Hiram plans to stop at the bank to
deposit his paycheque. He sees the long queue, and so decides to return on Saturday. But, as
Hiram well knows, his rent is due Saturday. Unless his paycheque is deposited before Saturday
evening, his rent payment will bounce. Since he is already behind on payments, eviction may
result. ‘The bank will be open tomorrow’, Hiram thinks to himself, ‘I’ll return then’. Hiram
drives past the bank.
Call the claim 'the bank will be open tomorrow' claim B. The second vignette involves far higher costs if
B is false; relying on judgement B is riskier. And this means, many theorists hold, more evidence
is required to legitimate action. There is an amount (or kind) of evidence for claim B, that can
render Loren uncriticisable yet renders Hiram criticisable. Suppose Loren’s and Hiram’s evidence
is the same. They recall driving past the bank on a Saturday six months ago and seeing customers.
Given this evidence, plausibly Hiram makes a practical mistake, and Loren does not. Hiram should
not rely on his belief, given the stakes and his evidence. Since their evidence is the same, this
difference stems from practical features.
This first locus focuses on the risks of relying on a belief for action. It asks whether and how
practical stakes can affect the amount of evidence needed to properly rely on a judgement in action,
where those stakes concern the costs of p being false.[3] This question does not directly ask whether
a belief qualifies as justified or known. Some theorists leverage the stakes sensitivity of whether
evidence suffices for action—combined with knowledge-action links—to motivate impurism
about knowledge.[4] Impurism holds that whether S's true belief is knowledge depends not only on
truth-conducive factors, such as evidence and epistemic competence, but also on practical features
such as the costs of p being false. Purism, by contrast, holds that whether a true belief is knowledge
depends solely on truth-conducive factors, such as evidence and epistemic competence.
A recent strand of theorising within the epistemology of risk focuses not on the consequences of
relying on a belief for action, but instead on the harms, risks, and wrongs of the belief itself. This
is the second locus of the epistemology of risk. Here is an illustrative example.[5]
[3] Costs relevant to locus one accrue were p false. But proponents of pragmatic encroachment hold that even when
p is true, and so the bad consequences do not obtain, there can be something wrong with relying on p with
insufficient evidence given the high costs involved were p false. It is too risky.
[4] Knowledge-action links are claims like 'A person can properly use p as a reason for action iff she knows p'. Cf.
Fantl and McGrath (2002, 2007, 2009), Hawthorne (2004), Stanley (2005), Hawthorne and Stanley (2008), DeRose
(2009), Anderson (2015), Brown (2008, 2014), Worsnip (2015, forthcoming), Ichikawa (2017), Fritz (2017, ms),
Kim (2017).
[5] This wording is from Gardiner (2018a). See Franklin (2005: 4; 340). Gendler (2011) employs this example to
exemplify a putative tension between epistemic and moral demands. Basu (2018, forthcoming-a), Schroeder
(2018a), Basu and Schroeder (2019), and Bolinger (forthcoming), have since invoked Franklin’s experience to
motivate moral encroachment. See also Basu’s (2018) ‘Mexican Restaurant’ example. For discussion, see Gardiner
(2018a), Worsnip (forthcoming), Bolinger (ms-b), and Fritz and Jackson (ms). Fritz and Jackson (ms) also
distinguishes impurism based on the consequences of actions (locus one) from impurism based on the moral value
of beliefs themselves, independent from action or choice (locus two).
The Cosmos Club. Historian John Hope Franklin hosts a party at his social club, The Cosmos
Club. As Franklin reports, ‘It was during our stroll through the club that a white woman called
me out, presented me with her coat check, and ordered me to bring her coat. I patiently told
her that if she would present her coat to a uniformed attendant, “and all of the club attendants
were in uniform,” perhaps she could get her coat’. Almost every attendant is black and few club
members are black. This demographic distribution almost certainly led to the woman’s false
belief that Franklin is staff.
Theorists argue the woman’s belief wrongs Franklin and, furthermore, that it does so
independently from any actions that rely on her belief. Consider the following case, adapted from
Basu (2018).
Tipping Prediction. Spencer the waiter sensed that white diners tipped more than black
diners. He subsequently researched the trend online, and discovered that black diners tip on
average substantially lower than white diners. A black diner, Jamal, enters Spencer’s restaurant
and dines in a booth outside of Spencer’s area. Spencer predicts Jamal will tip lower than
average, and later discovers his prediction was correct.
Spencer does not rely on his belief in action. But, theorists argue, his belief itself is nonetheless
costly, risky, harmful, or morally wrong. Theorists distinguish between costly and risky beliefs.
Costs are negatives stemming from the belief, even if the belief is true. Risks are negatives accrued
only if the belief is false. If S’s belief is harmful regardless of whether it is true, the belief is costly.
If it is harmful only if false, the belief is risky. Whilst the first locus of the epistemology of risk
focuses on risky beliefs, the second locus makes room for thinking about costly beliefs.[6]
The stakeholders relevant to the first locus of the epistemology of risk are the believer, those
affected by her actions, and—where knowledge attributions are made—the attributor or assessor
of that knowledge claim. This second locus highlights new sets of stakeholders: the person the
belief is about, members of his social group, and—since we all have a stake in non-discrimination
and morality—society at large.[7] Central questions in the second locus ask whether and how risks
accruing to these new stakeholders affect epistemic justification, and how to understand risks of
beliefs as such, severed from the risks of action.
A third locus for thinking about the epistemology of risk concerns ways we are vulnerable as
epistemic agents, particularly focusing on vulnerabilities stemming from social features of
epistemic agency. A simple vulnerability we face as epistemic agents is the risk of false belief. We
are at risk of deception, manipulation, misunderstanding, misremembering, misperceiving, and so
on. Even true beliefs can lead to false beliefs downstream, if they fuel prejudice or breed
overconfidence. We can epistemically mistreat others, such as by lying or regarding people with
undue skepticism. We risk suffering and authoring epistemic harms and injustices. In this third
locus, I highlight epistemic risks stemming from the social embeddedness of our epistemic agency.
[6] This distinction is from Moss (2018a). Cf. Worsnip (forthcoming). Moss (2018a) holds that risks, but not costs,
underwrite moral encroachment. Moss’s gloss on the distinction focuses on acting on the belief, restricting it to
locus one. But the distinction can also apply to the second locus. That is, we can distinguish between costly and
risky beliefs even when discussing the disvalues of holding the beliefs themselves, rather than acting on the beliefs.
Note a belief might be wrong in virtue of being risky, even in cases where that belief is true. If the believer lacks
conclusive evidence, the belief might be censurably risky, in virtue of harms risked were the belief false. Some
theorists, such as Basu, claim that holding a risky belief about a person with insufficient evidence qualifies as a
harm, because it is reckless, racist, or inconsiderate. Thus on Basu’s view, all risky beliefs are thereby also somewhat
costly.
[7] A belief's subject and believer can, of course, be the same. This is exemplified by, for example, internalised racism.
I argue that a relevant alternatives framework illuminates the contours of the three loci: the risks
of relying on a belief with insufficient evidence given the gravity of the practical circumstances,
the putative harms of beliefs themselves, and risks we confront as social epistemic agents—the
vulnerabilities in our social epistemic lives.
3. The Relevant Alternatives Framework
Bertie is an experienced Appalachian birdwatcher.[8] He sees a hawk soaring above and believes it
is a red-shouldered hawk. Does Bertie know it is a red-shouldered hawk? He can see its black and
white tail, and so he can tell it is not a red-tailed hawk. And he can see the tail has three bands, so
he can rule out broad-winged hawk. He knows broad-winged hawks have only one white tail band.
Bertie studies birds—he knows that no other hawks are naturally found in Appalachia. Plausibly
Bertie knows it is a red-shouldered hawk.
Compare Bertie to Newla. Newla is new to birdwatching. She sees the hawk soaring above and
believes it is a red-shouldered hawk. Newla can see its black and white tail, and so can distinguish
it from a red-tailed hawk. But she never learnt to count the tail bands to distinguish red-shouldered
hawks from broad-winged hawks. Accordingly Newla lacks knowledge because, for all she can tell,
the bird could be a broad-winged hawk. She cannot rule this out.
Suppose someone approaches Bertie with a challenge from skepticism. They might say ‘You don’t
know the bird is a red-shouldered hawk. It could be a sophisticated robot, a red-tailed hawk with
disguised colouring, or a falcon with misshapen wings and tail. Perhaps you are hallucinating or
someone is tricking you. Perhaps it is a songbird, and you suffer a drug-fuelled illusion. Perhaps
your birding guides have long been replaced by misleading trickster books.’ In normal
circumstances, Bertie can plausibly retort ‘Don’t be silly’, or simply ignore the interruption
altogether. He can jot ‘red-shouldered hawk’ in his personal birding record and happily educate
his son, ‘This is what a red-shouldered hawk looks like. See how the tail has three narrow white
bands, and its wings have broad tips.’ Newla, by contrast, cannot blithely disregard the earlier,
more mundane challenge. 'You don't know the bird is a red-shouldered hawk. It might be a broad-winged hawk.'
The relevant alternatives condition for knowledge can explain Bertie’s knowledge and Newla’s lack
of knowledge.[9]
Relevant Alternatives Condition on Knowledge. S knows that p only if S can rule out
relevant alternatives to p.
The condition provides resources to explain why some challenges—such as the possibility the bird
is a robot—can be disregarded, whilst other challenges—its being a broad-winged hawk—cannot.
In order to know the soaring buteo is a red-shouldered hawk, ordinarily Appalachian birdwatchers
must be able to rule out its being an owl, kestrel, red-tailed hawk, or broad-winged hawk. These
are relevant alternatives. But normally they need not rule out robot, disguised bird, and so on.
These possibilities are farfetched and so need not be taken seriously. Under some unusual
[8] I am grateful to Joe Pyle for help developing this example.
[9] Dretske (1970), Stine (1976), Goldman (1976), Lewis (1996), Pritchard (2002), Rysiew (2006), McKinnon (2013),
Amaya (2015, esp. 525–531), Gerken (2017), Moss (2018a, 2018b), and Bolinger (forthcoming). Gardiner (2019b)
and Moss (2021) develop relevant alternatives accounts of legal standards of proof. Gardiner (forthcoming-a)
harnesses the relevant alternatives framework to diagnose the undue doubt endemic to rape accusations.
circumstances—such as a lifelike robot convention—birdwatchers might need to rule out its being
a lifelike robot. But normally such possibilities are remote enough to properly ignore.
The relevant alternatives framework posits some key structural features. These are alternatives, a
threshold of relevance, ruling out alternatives, and addressing alternatives. I sketch these four
features in the following four paragraphs.
An alternative, also known as an error possibility, is a possibility inconsistent with p. Where p is
'that bird is a red-shouldered hawk', error possibilities include that it is a robin, robot, eagle, broad-winged hawk, illusion, and so on. Some error possibilities, such as the bird's being a broad-winged
hawk, are ordinary, nearby, preponderant. They are error possibilities virtuous thinkers would
readily think to rule out. They are normal, mundane not-p possibilities. Some possibilities, such as
robot possibilities, are more farfetched and outlandish.
The relevant alternatives condition on knowledge posits a threshold of relevance. Those
possibilities within the threshold are relevant to knowledge and must be ruled out. Those beyond
the threshold can be properly ignored. They are irrelevant. The threshold marking relevance to
knowledge is only one possible threshold. We can posit thresholds for various legal standards of
proof, for example, and disregardability thresholds for relying on beliefs given various practical or
conversational contexts. In sections five and seven I discuss what determines the remoteness of
an error possibility and the location of the disregardability threshold.
Ruling out an alternative requires being able to discriminate p from the alternative. Bertie can rule
out its being a red-tailed hawk because he sees the black and white tail. He can rule out its being a
broad-winged hawk because he sees multiple white tail bands. Ruling out alternatives is a normal,
automatic activity. We rule out songbird error possibilities simply by seeing the hawk’s shape. We
need not consider error possibilities explicitly, or think of ourselves as eliminating possibilities.
Bertie rules out ‘red-tailed hawk’ by seeing the black and white tail. Note, though, it is possible the
bird is a red-tailed hawk with painted or artificial feathers. Bertie’s evidence does not rule out these
(remote) sub-possibilities. These remaining sub-alternatives illustrate that alternatives are always
further dividable. The ‘red-tailed hawk’ possibility divides into sub-possibilities: Normal red-tailed
hawk, abnormal red-tailed hawk, disguised red-tailed hawk, and so on. Disguised red-tailed hawk
further cleaves into well-disguised red-tailed hawk, poorly disguised red-tailed hawk, and so on.
Bertie can eliminate some sub-possibilities, but not others. We can say an error possibility is
addressed by the evidence when the possibility is rendered into branched sub-alternatives, and each
sub-alternative is either eliminated by the evidence or lies beyond the disregardability threshold.
Strictly speaking, then, error possibilities are typically addressed rather than eliminated, since in
almost every case some remote sub-possibilities remain uneliminated. This inevitable remainder
fuels the skeptical challenge.[10]
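The branching structure of 'addressing' just described can be sketched computationally. This is a toy model, not the author's formalism: the tree structure, the numeric remoteness scores, and the threshold value are all invented for illustration.

```python
# Toy model of 'addressing' an error possibility: a possibility is addressed
# when each of its branches is either eliminated by the evidence or lies
# beyond the disregardability threshold. (Illustrative sketch only; the
# names and numbers are assumptions, not the essay's.)

from dataclasses import dataclass, field

@dataclass
class Alternative:
    name: str
    remoteness: float                         # higher = more farfetched
    subs: list = field(default_factory=list)  # sub-possibilities, if divided

def addressed(alt, eliminated, threshold):
    """True iff every branch is eliminated or beyond the threshold."""
    if alt.remoteness > threshold:   # remote enough to properly ignore
        return True
    if alt.name in eliminated:       # the evidence discriminates against it
        return True
    if alt.subs:                     # divide, and address each sub-branch
        return all(addressed(s, eliminated, threshold) for s in alt.subs)
    return False                     # relevant, uneliminated, undivided

# Bertie's 'red-tailed hawk' alternative divides into sub-possibilities.
red_tailed = Alternative("red-tailed hawk", remoteness=0.2, subs=[
    Alternative("normal red-tailed hawk", 0.2),
    Alternative("disguised red-tailed hawk", 0.9),  # beyond the threshold
])

# Seeing the black-and-white tail eliminates the normal sub-possibility,
# so the whole alternative counts as addressed, not strictly eliminated.
print(addressed(red_tailed, eliminated={"normal red-tailed hawk"}, threshold=0.7))
```

The uneliminated but disregardable 'disguised' branch is the remainder that, on the essay's diagnosis, fuels the skeptical challenge.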
Typically evidence that eliminates relevant error possibilities also makes the belief more
numerically probable. That is, increasing quantifiable balance and eliminating error possibilities
normally coincide. But they can diverge. A person's evidence can eliminate myriad error sub-possibilities, yet the truth or falsity of p remains well-balanced. Consider a crime mystery before
the culprit is revealed. Ideally many error sub-possibilities are addressed by available evidence, yet
the remaining rival hypotheses about the crime have similar probabilities. Conversely, evidence
can render a claim numerically highly probable, but leave relevant alternatives unaddressed.
Consider believing that, given the odds, a fair D20 die will score between one and nineteen. This
[10] Uneliminated error sub-possibilities inevitably remain, except for when p is the cogito; the skeptical challenge cannot root there.
is very probably true, but the evidence fails to address a relevant error possibility, namely that the
die lands twenty.[11]
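The D20 divergence can be made concrete with a small sketch. The probability calculation follows the example in the text; the balance threshold, remoteness score, and disregardability threshold are invented toy numbers, and neither framework is reducible to them.

```python
# Toy contrast between the two frameworks on the fair D20 case.
from fractions import Fraction

# Quantifiable balance: probability the die scores between one and nineteen.
p = Fraction(19, 20)                              # 0.95
meets_balance_threshold = p >= Fraction(9, 10)    # clears a 0.9 threshold

# Relevant alternatives: the lone error possibility ('the die lands twenty')
# is a normal, nearby outcome, and the odds alone do not eliminate it.
die_lands_twenty = {"remoteness": 0.05, "eliminated": False}
disregardability_threshold = 0.7
alternative_addressed = (die_lands_twenty["eliminated"]
                         or die_lands_twenty["remoteness"] > disregardability_threshold)

# The frameworks diverge: high probability, yet an unaddressed relevant alternative.
print(meets_balance_threshold, alternative_addressed)
```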
We can set aside the question of whether the relevant alternatives condition holds about
knowledge. We can instead apply the relevant alternatives framework to central questions about
the epistemology of risk: risk levels affect the location of the disregardability threshold. And, as I
explain in section seven, perhaps risk also affects the relative remoteness of error possibilities.
4. Action, Stakes, and Evidence
The first locus of the epistemology of risk asks whether and how the gravity of the practical
circumstances affects whether an amount of evidential support suffices for relying on a belief for
action. This locus focuses on the consequences of acting were the belief false. Plausibly if these
consequences are significant, the person’s evidence must be better than if the consequences are
less significant. You can properly purchase a pencil on the evidence that a non-expert mentions it
is good value for money. You cannot properly—under normal circumstances—purchase a car on
this meagre evidence. The relevant alternatives approach says that if it matters significantly whether
the belief is true, the sphere of relevant alternatives accordingly increases; when stakes are higher,
the disregardability threshold is more distant.
The stakes sensitivity of whether evidence suffices for action stems from two distinct kinds of
error cost. Firstly, it can matter that your belief is true, rather than false. The mattering is rooted
in having a true belief (rather than whether p). Secondly, the mattering can be rooted in whether
p, given your deliberative context. These can come apart, illustrating two distinct sources of the
stakes-sensitivity of whether evidence suffices for action. To illustrate the first source, it does not
matter whether your job interviewer’s name is Jill or Jane, but it is paramount your belief is correct;
it matters not what your passport number is; it only matters that you transcribe it accurately on
visa applications. In such cases, it matters that S’s belief is true, rather than false, but it does not
matter much whether p rather than not p. For examples of the second kind—cases where it matters
whether p, and this mattering underwrites why it matters that S tracks whether p—recall Hiram’s
high stakes. It makes a considerable difference to Hiram whether the bank is open, and this
difference means Hiram must track whether p. The substantial costs of not p within Hiram’s
deliberative context explain why he needs excellent evidence before relying on p.[12]
There are two related kinds of practical stakes concerning relying on beliefs for action, but they
do not underwrite stakes sensitivity about whether evidence suffices for action. Firstly, it can
matter for action whether S possesses a belief, but it matters not whether the belief is true. Suppose
S will not apply for a job unless he first acquires the confidence-inspiring belief ‘I am tall’, for
[11] Different relevant alternative theories offer various explanations for why this error possibility cannot be ignored.
Explanations include that it could easily happen (cf. Pritchard 2005) or is a normal outcome (cf. Smith 2010, 2016),
reasonable action and assertion demand attending to the possibility and a reasonable person would attend to it
(Lawlor 2013: 164), the possibility is not destabilising (McKinnon 2013), or is salient (Jackson 2018), the result
actually obtains or is relevantly similar to the actual outcome (Lewis 1996: 557), and that social convention renders
the possibility relevant (Heller 1989).
[12] This distinction differs from Worsnip's (2015, forthcoming) gloss on A-stakes and W-stakes. On Worsnip's
account, the passport number has high W-stakes, for example. This is because Worsnip glosses high W-stakes as
holding fixed a particular attitude (such as believing the number is 75797825), it matters a great deal what the
world is like (whether the number is 75797825). Here I find fault with Worsnip’s distinction. Insofar as there are
two sources of stakes-sensitivity—viz, (i.) what matters is that S has a true belief and (ii.) what matters is that p
obtains, given S’s practical context—Worsnip’s result indicates his gloss does not aptly capture this distinction.
The passport example isolates the first source of stakes, but on his view qualifies as high W-stakes. Indeed, I aver
Worsnip’s account cannot allow for isolating A-stakes-only cases. I am grateful to Alex Worsnip for helpful
conversations.
example. But whether S is tall does not matter, and whether his belief is true does not matter. It
only matters whether S possesses the belief. Secondly it can matter greatly whether p, and S’s
deliberative context hinges on whether p, yet S’s action is unimportant and so it matters little
whether S possesses a true belief about whether p. Suppose S bases an insignificant and irrelevant
decision on whether a tidal wave hits a distant coastal town. Whether p is true is extremely
important, but whether S possesses a true belief is not. Neither of these two kinds of practical
stake underwrites the stakes-sensitivity of whether evidence suffices for acting on a belief, since in
neither case does the practical significance concern whether the belief is true. In the first case it only
matters whether p is believed; it matters not whether p or whether S has a true belief. In the latter
case it only matters whether p is true; it matters little what S believes or whether their belief is true.
To illustrate the relevant alternatives model of how stakes affect whether evidence suffices for
action, recall low-stakes Loren. It does not matter whether her cheque is deposited before Saturday
evening. Consider claim B, 'the bank will be open tomorrow'. Loren's evidence can eliminate the most
ordinary error possibilities for B, such as 'the bank only operates Monday to Friday'. She can address
this mundane possibility because she recollects Saturday customers six months ago.
We can imagine an intermediate case, in which it matters somewhat whether the cheque is
deposited. The protagonist—let’s call her Middy—is in middle stakes. If her paycheque is not
deposited before Saturday evening, she will incur an $80 overdraft fee. In this intermediate case,
more distant error possibilities become relevant—caution demands that the sphere of relevance
expands. The disregardability threshold is more distant. Consider the error possibility that the bank
reduced its operating hours in the last few months, or that she was mistaken about seeing it open
on a Saturday. Perhaps it was not a Saturday or was not open. Such error possibilities might be
disregardable in low stakes contexts, such as Loren’s, but are relevant in higher stakes. Given these
higher stakes, to appropriately rely on her belief, Middy must address these possibilities. One way
she could address them is reading the opening hours on the sign as she passes on Friday. Gathering
this additional evidence for B addresses these more distant error possibilities.
Even if she duly reads the sign, some error sub-possibilities remain. (Recall that uneliminated sub-possibilities almost always inevitably remain.) Perhaps the bank recently changed its hours and the
sign is outdated, for example, or perhaps she is mistaken that it is Friday and it is already Saturday.
But plausibly she can disregard these more remote error sub-possibilities. They are rather distant
and farfetched. Now suppose it is crucial the paycheque is deposited before Saturday evening.
Hiram risks eviction. Given these higher stakes, more remote error possibilities are relevant and
must be addressed before Hiram can rely on his belief.
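The expanding sphere of relevance across Loren, Middy, and Hiram can be sketched as follows. This is a toy model under invented assumptions: the remoteness scores and threshold values are mine, chosen only so that the verdicts match the vignettes before any extra evidence is gathered.

```python
# Sketch of a stakes-sensitive disregardability threshold for the bank
# vignettes. (Illustrative numbers only; not the essay's formalism.)

# Error possibilities for B ('the bank will be open tomorrow'), with
# invented remoteness scores: higher = more farfetched.
error_possibilities = {
    "only operates Monday to Friday": 0.1,        # mundane possibility
    "recently reduced Saturday hours": 0.4,
    "sign outdated / it is already Saturday": 0.8,
}
# Recollecting Saturday customers eliminates only the mundane possibility.
eliminated = {"only operates Monday to Friday"}

def threshold(stakes):
    """Higher stakes push the disregardability threshold further out."""
    return {"low": 0.3, "middle": 0.6, "high": 0.9}[stakes]

def can_rely(stakes):
    """Reliance is proper iff every possibility within the threshold is eliminated."""
    t = threshold(stakes)
    return all(name in eliminated
               for name, remoteness in error_possibilities.items()
               if remoteness <= t)

for person, stakes in [("Loren", "low"), ("Middy", "middle"), ("Hiram", "high")]:
    print(person, can_rely(stakes))
```

On these toy numbers, only Loren may rely on B unaided; Middy and Hiram must first gather evidence addressing the more distant possibilities, such as reading the sign.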
The relevant alternatives framework helps model risk aversion. People with high risk tolerance see
the disregardability threshold as relatively nearby. They are happy to ignore closer error possibilities
than risk averse people. They treat as disregardable the possibility the bank changed its hours, for
example, even when risking fees or eviction. People with low risk tolerance see the disregardability
threshold as more distant. They want to address relatively distant error possibilities even with low
error costs. In addition to differences in absolute risk aversion, people differ in sensitivity to
variations in costs when deciding whether they can rely on a belief. One person might always seek
to address distant error possibilities, regardless of error costs. Another might disregard relatively
mundane possibilities, even in high stakes. These people differ significantly in risk aversion. The
second has far higher risk tolerance. But they share the feature of relative insensitivity to risk
magnitude; their caution level is not appropriately attuned to stakes.
In short, the relevant alternatives framework for modelling the epistemology of risk holds that
when more is at stake—when error is costlier—increasingly distant error possibilities must be
eliminated before a person can rely on the claim. The relevant alternatives condition for knowledge
posits the ‘relevant to knowledge’ threshold. The relevant alternatives framework applied to locus
one—the relationship between evidence and action and how this relationship is affected by
practical risks—posits the ‘relevant given the practical context’ threshold. This threshold is
sensitive to error costs.13
Gardiner (2019b) adapts the relevant alternatives framework to model three legal standards of
proof. These are preponderance of evidence, clear and convincing evidence, and beyond
reasonable doubt. I argue that these three standards correspond to three concentric rings of
disregardability. This project is located within the first locus of the epistemology of risk. As I
describe in section seven, perhaps relative remoteness of error possibilities is also affected by error
costs.
It is worth emphasising one can endorse the relevant alternatives framework for the epistemology
of risk, with its stakes-sensitive disregardability threshold, but reject a relevant alternatives
condition on knowledge. Or one can endorse a stakes-sensitive disregardability threshold for
action, and endorse the relevant alternatives condition on knowledge, yet maintain the
disregardability threshold for knowledge is stakes-insensitive. That is, the proposed framework is
consistent with purism about knowledge.14
5. Remoteness and Relevance
The overall relevant alternatives framework is a skeletal structure that can be combined with
various accounts of what determines remoteness of possibilities.15 These include whether the
alternative is true, normal, or statistically probable. Plausibly alternatives are relevant if they spring
readily to mind or if virtuous inquirers would prioritise them. On some relevant alternative
accounts, such as that proposed by David Lewis (1996), an alternative is relevant if it saliently
resembles an alternative that is relevant.16 I advance a schema: I motivate a framework to rival the
quantifiable balance conception of evidential support. Theorists can couple the schematic
framework with competing claims about what determines remoteness to yield different relevant
alternatives accounts. In this essay I focus on five features that might contribute to the remoteness
of an alternative.
Firstly, I explore, but do not endorse, the idea that the actual can never be properly ignored. On
this view, if a proposition is true it is thereby non-remote. If the bank is actually a Monday-to-Friday-only bank, for example, this source of error is thereby relevant. This idea enjoys plausibility:
if a proposition is true, the person should not instead believe a false incompatible claim. Note,
though, that if alternatives are rendered relevant in virtue of actually obtaining, then ruling out all
13. Cf. Lewis (1996: n.12), Moss (2021), and Bolinger (ms-b).
14. Rysiew (2006) and Bradley (2014) argue for the universal appeal of the relevant alternatives framework. See also Hannon (2015). The relevant alternatives framework proposed here says that whether a person has sufficient evidence to rely on a proposition is sensitive to practical factors. When coupled with some knowledge-action links, impurism about knowledge results. Theorists who endorse the proposed relevant alternatives framework—where the disregardability threshold depends on deliberative context—but endorse purism about knowledge, should deny the knowledge-action links. Cf. Fantl and McGrath (2002, 2007, 2009), Stanley (2005), Brown (2008, 2014), Reed (2010), Gerken (2017).
15. Cf. Lewis (1996), Dretske (1970, 1971), Stine (1976), Goldman (1976), Gerken (2017), Lawlor (2013), McKinnon (2013), Ho (2008). See also the summary in footnote 11. Some factors might explain an alternative’s remoteness, whilst others are hallmarks of remoteness.
16. Lewis (1996: 556) describes constraints on the rule of resemblance.
relevant alternatives to p is factive. If p is false then a relevant alternative—the truth—remains
uneliminated.17
Secondly, an error possibility is nearby to the extent it is a normal source of error. When people
believe p given their evidence, an alternative proposition is non-remote to the extent that it is
typically true. If the error possibility is abnormal given the evidence, it is remote. If broad-winged
hawk is a normal source of error when people believe they see a red-shouldered hawk, then ‘broad-winged hawk’ is thereby a close error possibility; caution and diligence demand birdwatchers be
sensitive to this common pitfall.
Thirdly, if the evidence suggests an alternative obtains, this can render the alternative less remote.
If the bird looks like a broad-winged hawk, for instance, this can make the alternative relevant.
Notice evidence might render an error possibility relevant even if it is not a common or normal
source of error. Suppose you believe the bank will be open tomorrow, but your passengers insist
that the branch is closed on Saturdays. This evidence can, if your passengers seem competent,
render the error possibility relevant even if no high street banks in your country are standardly
closed on Saturdays. Good evidence for an error possibility can make it irresponsible to ignore
that possibility, even if the evidence is ultimately misleading.18 Widespread belief in a claim can, under some
circumstances, be evidence for that claim. Accordingly, if an error possibility is widely believed,
this can render it relevant. It is, under some circumstances, dogmatically stubborn to ignore
uneliminated error possibilities that many other competent people take seriously.
Fourthly, advocates of moral encroachment can posit that moral factors affect relative remoteness
of error possibilities. If p has morally significant consequences, or could be morally harmful, for
example, then perhaps alternatives to p are relevant in virtue of being less morally risky. On this
view, particular individual error possibilities should be disregarded or addressed because of moral
features. I explore this view in section seven, but I do not endorse it.
Finally, convention can help determine which error possibilities a thinker should eliminate.19 She
need not herself independently determine which possibilities are within relevance thresholds.
People are not epistemic islands. She can defer to others by emulating what others tend to address,
take seriously, mention, and disregard. Convention might help determine both how distant the
disregardability threshold is and the relative remoteness of individual error possibilities. Social
epistemology typically highlights ways we gain information, evidence, and skills from others. The
relevant alternatives framework highlights another kind of epistemic dependence: we absorb from
others a sense of which error possibilities are properly disregardable. I return to this in section
eight. Convention can be domain-specific. Perhaps professional standards regard some alternatives
as relevant, for example, that other professions can disregard. If p is ‘the child died from a
trampoline accident’, a grief counsellor can readily disregard filicide error possibilities. But the
police investigator cannot.
17. Lewis (1996). A reviewer asks ‘Why won’t this idea make all kinds of errors collapse into errors of misplaced relevance threshold?’ To clarify: The error is failing to realise that a particular error possibility is non-remote. The believer takes the possibility to be ignorably remote when it is not because, unbeknownst to them, it actually obtains. They thus misjudge the relative remoteness of a specific possibility. The believer might be correct about the location of the disregardability threshold, but be mistaken about the remoteness of a specific error possibility.
18. Suppose p is ‘the classroom is safe’ and error possibility T is ‘Ted plans a shooting today’. If evidence, such as an anonymous tip, indicates T, then T cannot be disregarded even if false and modally distant. That is, even if Ted is not planning a shooting and would never do so.
19. Cf. Lewis’s (1996: 559) rule of conservation; Heller (1989).
One can adopt the relevant alternatives framework but disagree about which factors determine
remoteness of error possibilities. Some theorists might hold remoteness is determined by the
possibility’s resemblance to the actual world, for instance, and deny remoteness can depend on
social convention or what inquirers tend to consider. Others might posit the relative
disregardability of error possibilities is determined in part by social convention, and so error
possibilities can be closer in virtue of what possibilities people tend to think of addressing. This
paper motivates a framework for illuminating the epistemology of risk, and thus its ultimate aim is
relatively ecumenical about which features determine remoteness. I will note, though, a restriction
and a reservation.
First, the restriction. According to a relevant alternatives account of the epistemology of risk,
whether an error possibility is disregardable is unaffected by fleeting features of conversation or
thought, such as whether the (otherwise wholly farfetched and irrelevant) possibility is mentioned
or is currently being considered. Allowing fleeting features to affect relevance would mean that
whether a person’s evidential position concerning p is sufficient to rely on her belief is hostage to
fugacious features of the moment. If it does not matter whether the paycheque is deposited
promptly, merely raising the possibility the bank changed its hours does not undermine Loren’s
acting appropriately. And even if stakes are high, Hiram’s merely considering an evil demon does
not render that possibility relevant.
Secondly, readers might wonder whether remoteness of error possibilities can be determined wholly
by similarity to the actual world, so that possibilities are nearby just to the extent they resemble the
actual world. The resulting view resembles Duncan Pritchard’s modal account of risk, which itself
descends from his safety account of knowledge.20 I have reservations, which I discuss in depth
elsewhere and summarise here. Firstly, the resulting view is committed to the factivity of
eliminating relevant alternatives. On a Pritchardian view, the actual is always relevant, and so one
cannot eliminate all relevant alternatives to a false claim. The factivity of safety does not impede
its use in an account of knowledge, since knowledge is factive. But factivity is less anodyne in an
account of when a person may rely on belief in her deliberative context, when belief is justified,
how to negotiate morally risky beliefs, and the epistemology of legal standards of proof such as
‘preponderance of evidence’. The relevant alternatives framework offers flexibility: theorists can
choose between factive and non-factive accounts. Secondly, a Pritchardian relevant alternatives
account is unappealing to theorists skeptical about global similarity comparisons. The relevant
alternatives framework, by contrast, does not hinge on the viability of global similarity
comparisons.21
Further objections centre around how Pritchardian remoteness, and resulting appropriateness of
belief and action, are determined by external facts concerning how the world actually is, and do
not adequately reflect the agent’s perspective and evidence. Gardiner (2020) argues that Pritchard’s
view cannot explain the epistemic flaw with profiling. Suppose a theft occurs and only two people,
Jake and Barbara, could access the building. Suppose that S, based only on the statistical evidence
that men commit theft at far higher rates, believes Jake rather than Barbara committed the theft.22
Pritchard argues his account explains the flaw in such beliefs: they are unsafe. I dispute this; in
many such cases, the resulting belief is safe. This is because if Jake committed the crime he, not
Barbara, did so in nearby worlds.
20. Pritchard (2005; 2015; 2017), Heller (1989).
21. Some people harbour reservations about the metaphysical underpinnings of Pritchard’s account, such as possible worlds. These worries might be set aside because arguably one can recast Pritchard’s account with a different metaphysical substrate. But global similarity comparisons are integral to Pritchard’s account.
22. This example first appears in Buchak (2014).
Similarly, safety arguably legitimates ignoring base rate evidence when you ought not. Suppose
Frank is tested for a rare genetic disease. He knows the testing method generates some false
positives. The result is positive, and he forms the belief that he carries the disease. His belief is
true, and he was genetically determined to carry the disease. For at least some versions of the
example, Frank nonetheless should not form the belief; he commits the base rate fallacy. Yet his
belief is safe. Not easily could his belief have been wrong, because it is a stable feature of Frank
that he carries the disease. A Pritchardian version of the relevant alternatives framework, I claim,
confronts problems explaining the error of Frank’s belief.
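Frank’s fallacy can be made vivid with a quick Bayesian calculation, using illustrative numbers that are not part of the original vignette: suppose the disease afflicts 1 in 10,000 people, the test detects every carrier, and it returns a false positive 5% of the time. Then:

\[
P(D \mid +) \;=\; \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} \;=\; \frac{1 \times 0.0001}{1 \times 0.0001 + 0.05 \times 0.9999} \;\approx\; 0.002.
\]

Even given the positive result, the probability that Frank carries the disease is roughly 0.2%, so outright belief commits the base rate fallacy. Yet because Frank in fact stably carries the disease, his belief could not easily have been false, which is why safety-based accounts struggle with such cases.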
Plausibly an account of remoteness based solely on similarity to the actual world cannot explain
the epistemic potency of misleading evidence. Suppose misleading evidence frames Ted as
planning a school shooting. In fact, he does not and would not in nearby worlds. Plausibly the
belief ‘there will not be a school shooting’ is safe on Pritchard’s view. But police cannot disregard
the misleading evidence. They must investigate. The relevant alternatives framework offers a ready
explanation: evidence, even if misleading, renders error possibilities relevant. Finally, an account
in which remoteness is determined solely by similarity to the actual world lacks resources to model
the second way moral considerations might affect relevance, introduced in section seven, namely
affecting relative remoteness of particular possibilities.23
For these reasons I am skeptical that remoteness is determined solely by similarity to the actual
world. That said, there is room for reasonable disagreement, which is why I characterise it as a
‘reservation’, and not a ‘restriction’.
6. Moral Encroachment and Relevant Alternatives
I turn now to the second locus of the epistemology of risk—the putative risks, costs, and harms
of the beliefs themselves, regardless of any actions that rely on the belief. Recall the earlier
examples, Cosmos Club and Tipping Prediction. The woman and Spencer form beliefs about
individuals based on membership of a racial group.
Advocates of moral encroachment argue that, in at least some cases, such beliefs are morally
wrong. The beliefs stereotype and pigeonhole individuals based on group-level features, fail to
recognise people as individuals, and systematically judge people as occupying lower social status.
Many advocates of moral encroachment also argue that such beliefs are impeccable by the lights
of orthodox, purist epistemology. The beliefs are well-supported by the evidence, reliably formed
and, in some cases, true. Advocates of moral encroachment argue that if beliefs can be morally
wrong yet epistemically impeccable, then we can be epistemically permitted or even required to do
something morally impermissible.24
As Mark Schroeder (2018a: 13) writes,
Gendler argues that in cases like [the Cosmos Club] there is a conflict between epistemic
rationality and avoiding implicit bias—given underlying statistical regularities in the world,
many of which are directly or indirectly caused by past injustice, perfect respect for the evidence
23. Gardiner (2017) argues that safety cannot account for the epistemic value of knowledge because safety is always determined by other properties—such as good reasons and evidence—and those other properties, not safety, confer epistemic value. I am grateful to Duncan Pritchard for many helpful conversations on these topics.
24. Cf. Bolinger (forthcoming), Basu and Schroeder (2019), Basu (2018, forthcoming-a), Moss (2018a, 2018b), Schroeder (2018b), Gendler (2011). For critical discussion, see Gardiner (2018a), Toole (ms), Bolinger (ms-b), Fritz (ms), Fritz and Jackson (ms).
will require sometimes forming beliefs like the woman in the club. But the belief that the woman
forms is racist. And I hold out hope that epistemic rationality does not require racism. If it does
not, then the costs of [the woman’s] belief must play a role in explaining why the evidential standards are higher,
for believing that a black man at a club in Washington, DC is staff. And I believe that they
are—a false belief that a black man is staff not only diminishes him, but diminishes him in a
way that aggravates an accumulated store of past injustice. [Italics added.]
Advocates of moral encroachment claim that epistemic demands are partially shaped by moral
features of beliefs. The vignettes do not exhibit conflict between moral and epistemic demands,
they argue, because moral features affect whether the belief is epistemically justified. Given the
moral significance of such beliefs, moral encroachment holds, more evidence is required for the
belief to qualify as epistemically justified.
Moral Encroachment. The epistemic justification of a belief can depend on its moral features.
In the above quote, Schroeder contends that purist epistemology cannot explain the apparent
conflict between epistemic and moral demands in Cosmos Club and Tipping Prediction. The
challenge he sets is explaining the moral and epistemic normativity in such cases within a purist
framework without concluding that epistemic normativity requires morally impermissible beliefs.25
One strategy for responding to Schroeder’s challenge holds that such beliefs violate epistemic
demands of purist epistemology. The cases do not threaten purist epistemology because purist
epistemology would not condone such beliefs.26 The relevant alternatives framework bolsters this
strategy. The relevant alternatives condition on knowledge, recall, holds:
Relevant Alternatives Condition on Knowledge. S knows that p only if S can rule out
relevant alternatives to p.
The relevant alternatives condition suggests a purist reply to Schroeder’s challenge. In each
vignette that putatively motivates moral encroachment, the believer fails to eliminate a relevant
alternative. This precludes knowledge-level epistemic standing, and so the belief is epistemically
faulty according to purist epistemology.
Advocates of moral encroachment claim the beliefs in cases like Cosmos Club and Tipping
Prediction are, according to purism, ‘epistemically impeccable’ and exhibit ‘perfect respect for the
evidence’.27 The relevant alternatives framework suggests this claim is false. Crucially, this purist
response to Schroeder’s challenge holds the belief violates purist epistemic demands. The relevance
of the uneliminated relevant alternative is independent from moral features of the case.
25. Versions of this challenge are articulated in Gendler (2011), Basu (2018; forthcoming-a), and Gardiner (2018a). See also the ‘problem of coordination’ in Basu and Schroeder (2019). Note that advocates of moral encroachment must explain the status of testimonial beliefs in such cases. That is, if the woman tells a third-party that Franklin is staff, is the hearer’s belief epistemically justified? This question interrogates whether the relevant moral stakes are transferred by testimony. If they are not transferred, this suggests a form of knowledge laundering (cf. MacFarlane (2005)). If they are, the moral encroachment account should explain why the stakes remain high, given that the hearer’s belief is based on testimony rather than profiling. Accounts that locate the moral stakes in the social effects of such beliefs, rather than the believer’s moral conduct, will likely fare better at this. I am grateful to the Moral Encroachment Reading Group at Oxford University for insightful discussion on these topics.
26. Gardiner (2018a) defends purism by arguing such beliefs are not well supported by the evidence. Cf. Munton (2019), Toole (ms).
27. Schroeder (2018a; 2018b), Basu and Schroeder (2019).
Consider the claim that John Hope Franklin is a club member. This is an error possibility for the
woman’s belief that Franklin is an attendant. Relevant alternative theorists who hold that error
possibilities are relevant in virtue of actually obtaining have a simple explanation. On such views,
the proposition’s truth—Franklin is a member—renders it relevant. The woman does not rule out
this possibility, and so her belief is not knowledge. Those who deny that truth renders alternatives relevant can
appeal to other resources: The error possibility is relevant because it is a common source of error.
People often mistake Franklin, and black members generally, for staff. Or it is relevant because
suggested by available evidence, such as Franklin’s lack of uniform. The error possibility is a
normal circumstance and is one an epistemically virtuous inquirer would think to address. The
relevant alternatives framework can thereby respond to the challenge from moral encroachment
by giving a purist account of the central motivating cases: The believer fails to rule out a relevant
error possibility, and the possibility is relevant for purist reasons. This response to Schroeder’s
challenge undermines the motivation for endorsing moral encroachment.
Some theorists hold that moral encroachment only affects the epistemic status of false beliefs.28
On such views, Cosmos Club illustrates moral encroachment, but Tipping Prediction does not.
The Cosmos Club belief is false, and false racial profiling constitutes distinctive wrongs, and so
the belief needs more evidential support to qualify as epistemically justified. Many purist relevant
alternative accounts readily explain purist epistemic error with such beliefs. The belief is false, and
so (according to prominent relevant alternative theories) there is at least one uneliminated relevant
error possibility, namely, the truth.29 For accounts that deny that truth renders an alternative
relevant, note that since the alternative is true, typically other features obtain that render the
alternative relevant. It is normal and mundane, modally nearby, widely believed, a common source
of error, and so on. In most cases there will be evidence or overlooked clues indicating the true
error possibility obtains. Thus, where p is false, typically there is at least one uneliminated relevant
alternative. And, I aver, for vignettes in which there is no uneliminated relevant alternative, we
accordingly lack the intuition that the belief exhibits any moral (or epistemic) fault to explain.
Most advocates of moral encroachment hold that moral features can also affect the epistemic
status of true beliefs.30 They claim moral features of Spencer’s belief can increase the amount of
evidence needed to justify his belief, even though his belief is true and—by hypothesis—well
supported by evidence.31 A purist relevant alternatives account plausibly highlights epistemic flaws
in such cases—cases where the target belief is true—without invoking moral encroachment.
Plausibly in such cases, p is true—Spencer will tip lower than average—but relevant alternatives
remain uneliminated. A relevant alternatives account can thereby explain such cases while retaining
purism. To see how, consider a related example, from Moss (2018b).
Administrative Assistant. A consultant visits an office. He knows few people visit the office
who are not employees and that almost every woman employee is an administrative assistant.
The consultant sees a woman and forms the belief ‘she is an administrative assistant’. His belief
is true.
Although his belief is true and (putatively) well-supported by the evidence, the consultant’s
evidence fails to eliminate relevant error possibilities. Alternatives include that the woman is a
28. One must not conflate (i.) whether moral features affect the epistemic justification of true beliefs or only affect false beliefs with (ii.) whether moral encroachment arises from costly beliefs or only from risky beliefs. Several theorists, such as Moss (2018a) and Schroeder (2018b), hold that only riskiness—not costliness—gives rise to moral encroachment, but maintain that moral features can affect the epistemic status of true beliefs.
29. Cf. Lewis’s (1996: 554) rule of actuality.
30. Cf. Moss (2018a: 19), Basu (2018), Schroeder (2018b), Bolinger (forthcoming).
31. Gardiner (2018a) describes purist epistemic flaws with Spencer’s belief.
visitor, interviewee, researcher, manager, custodian, caterer, and so on. These alternatives are not
eliminated by the consultant’s evidence. (Some sub-possibilities may be eliminated, such as her
being a caterer in catering uniform.) And plausibly at least some of these alternatives are relevant.
They might be relevant because the possibilities are normal, everyday, or suggested by his evidence.
Perhaps there are several woman researchers at the office, for example, and these women render
relevant the possibility that the observed woman is also a researcher. Perhaps her being a manager
is rendered relevant because mistaking a woman boss for an administrative assistant is a common
error, or underestimating professional status on the basis of gender is a common source of error.
These are purist epistemic flaws with the consultant’s belief. His evidence does not eliminate
relevant error possibilities, and the error possibilities are relevant for purist epistemic reasons.
In general, statistical demographic evidence—when used to support outright beliefs about
individuals—leaves relevant error possibilities uneliminated. That is, bare statistical evidence
cannot, in most cases, eliminate all relevant alternatives.32 This is because, if evidence is purely
statistical, possibilities in which an individual does not conform to their reference class are not
evidentially distinguished from possibilities in which they do conform. Given only demographic
evidence, the error possibilities are not farfetched compared to the target claim p. The alternatives
are commensurate along important epistemic dimensions, including those that determine
farfetchedness.33 To illustrate: Men tend to have shorter hair than women, and an arbitrarily
selected woman probably has longer hair than the average adult hair length. Suppose now we
arbitrarily select a woman and, using only this evidence, form the outright belief that her hair is
longer than average. Given all we know is her gender and the general demographic statistic, the
possibility her hair length is longer than average is just as preponderant as the possibility that it is
shorter than average. Nothing about the case makes one claim more preponderant, relevant,
salient, or normal than the other. The case is epistemically similar to—perhaps identical with—a
lottery case. Since at least one possibility is relevant, and nothing distinguishes the possibilities’
remoteness, the error possibilities are also relevant. Thus, we cannot rule out a relevant alternative
and so lack knowledge.34
The relevant alternatives account can thereby explain epistemic errors exhibited by cases that
motivate moral encroachment, and can do so within a purist framework. It can answer Schroeder’s
challenge, and undermine the motivation for moral encroachment. It also makes a prediction: For
vignettes in which there is no uneliminated relevant alternative, we accordingly lack the intuition
that the belief has any moral (or epistemic) fault to explain. The reason we judge there is something
amiss with the consultant’s belief, for example, is that he believes despite uneliminated relevant
error possibilities.
32. Lewis (1996: 557) outlines a relevant alternatives explanation of lottery beliefs. Cf. Jackson (2018), Smith (2010; 2016), Moss (2018a; 2018b), and Gardiner (forthcoming-b).
33. Given bare statistical evidence, error possibilities might be statistically improbable, but they can be normal outcomes, modally nearby, true, typical sources of error, outcomes to which we are insensitive, and so on. Gardiner (2020) evaluates which epistemic values are secured—and, crucially, not secured—by bare statistical evidence. Note most social judgements are not based on statistical evidence. People draw on appearances, interpretation, stereotypes, associations, and so on. Note also social statistics usually have low magnitudes. A lottery example might underwrite an extremely low probability of error, but realistic social base rates do not. An arbitrary woman likely has longer-than-average hair, for example, but the probability magnitude is relatively slim. On many relevant alternatives accounts, to the extent the error possibility is likely, it is accordingly less remote.
34. We can find ways to discriminate the two possibilities, of course. Suppose someone tells us the person’s hair is long. The only uneliminated sub-possibilities in which her hair is shorter than average are ones where the informant has outdated information, a poor understanding of hair length, or is mistaken about the woman’s identity. These remaining possibilities are more distant.
7. A Model for Moral Encroachment
In section six I argued against moral encroachment. Specifically I articulated epistemic flaws
exhibited within the central cases marshalled to motivate moral encroachment, but did so within
a purist relevant alternatives framework. I deny moral encroachment, but the relevant alternatives
framework can instead serve to develop the view.
Recall the central vignettes that motivate moral encroachment, such as Cosmos Club, Tipping
Prediction, and Administrative Assistant. Advocates of moral encroachment could argue the
central moral and epistemic flaw exemplified by such cases is the believer fails to eliminate a
relevant alternative and—crucially for moral encroachment—moral features underwrite why the
alternative is relevant. On this view, it is because the belief is morally wrong, risky, or costly, that
uneliminated alternatives are preponderant. The view qualifies as moral encroachment because
moral features affect the belief’s epistemic status.
To test for moral encroachment, we can contrast the vignette with one where the evidence is
equivalent, but the belief is—by the lights of moral encroachment—morally neutral. If the
epistemic status of the belief differs, this indicates moral encroachment. Advocates of moral
encroachment might argue, for example, that Spencer’s belief denigrates people on the basis of
race. And this moral fact makes particular error possibilities relevant, namely error possibilities
that do not denigrate people based on race. A relevant alternatives approach to moral
encroachment holds, for example, that before an individual can justifiably believe p, where p is
racist, they must first eliminate various particular error possibilities. And those error possibilities
are rendered relevant by the fact that p is racist, and the particular error possibilities are non-racist
interpretations of the evidence.
Sarah Moss (2018b: 221) employs a relevant alternative condition to explain cases like
Administrative Assistant. She argues for a moral rule of consideration, which holds:
[I]n many situations where you are forming beliefs about a person, you morally should keep in
mind the possibility that they might be an exception to statistical generalizations. To give an
intuitive example, as you form beliefs about a woman in an office building, you should keep in
mind the possibility that she is unlike the average woman in the building with respect to whether
she is probably an administrative assistant. That is, you should keep in mind the possibility that
she probably has some other position.
Moss argues this rule, if followed, renders salient an error possibility. By following the rule, the
consultant thereby considers the possibility, which makes it relevant. Since his evidence cannot
eliminate this possibility, his belief fails to qualify as knowledge. Moss (2018a: 191) explains, ‘if [a
person] abides by the moral rule of consideration as she forms an opinion by racial profiling, her
opinion may fail to be knowledge, since it will be inconsistent with salient possibilities that she
cannot rule out.’
But Moss’s relevant alternatives approach to moral encroachment faces a problem. The moral rule
of consideration cannot explain an epistemic fault in those cases where people simply fail to abide
by the moral norm. If the consultant violates the norm, by failing to ‘keep in mind the possibility
that [the woman] might be an exception to statistical generalizations’, this error possibility remains
irrelevant on Moss’s view; the norm does not generate an uneliminated error possibility. This is
because, on Moss’s view, the error possibility is relevant only because considered.
Moss (2018a: 192) notes this problem and suggests other features of moral encroachment explain
cases where the believer violates the moral norm. But if these other features explain cases where
the moral norm is violated, they might equally explain cases where the norm is followed. This can
leave the moral rule of consideration explanatorily redundant. This is because if Moss’s moral
norm obtains—S should bear the error possibility in mind—it obtains only because the error
possibility is rendered epistemically relevant by moral facts. Salience drops out of the picture.
Note too that linking relevance to what is being considered, as Moss does, violates the restriction
articulated in section five. On Moss’s view, whether relevant alternatives are eliminated can depend
on fleeting features of context.
I propose that a better relevant alternatives account is available to advocates of moral
encroachment. Advocates of moral encroachment should posit that moral facts themselves
determine whether error possibilities are relevant. Whether non-racist error possibilities are
relevant is not hostage to whether Spencer follows the moral norm of consideration. On this
proposal, relevance is not mediated only by what is considered. Relevance is directly affected by
moral facts.
Moral facts can, on this proposal, affect which error possibilities are relevant in two ways. The first
is that high moral stakes mean that additional alternatives are relevant; increasingly farfetched error
possibilities must be eliminated for the belief to qualify as epistemically justified. Given the high
moral stakes, in other words, the disregardability threshold expands. This mechanism is familiar
from section four.
The second potential role is more controversial. It holds that moral considerations can render
particular individual error possibilities relevant or irrelevant, precisely because they are morally
differentiated. This second role does not simply move the disregardability threshold, expanding or
contracting the set of relevant alternatives; it affects the relative disregardability of particular
alternatives.35 To illustrate, recall the consultant’s belief that the woman is an administrative
assistant. Suppose ‘researcher’ and ‘custodian’ error possibilities are equally evidentially supported.
Some advocates of moral encroachment claim the possibility that the woman is a researcher is
rendered relevant by moral considerations, while the custodian possibility is not. Advocates of
moral encroachment might posit this is because researcher and custodian alternatives exhibit
different moral features.36 According to this second moral encroachment role for moral
considerations, the possibility the woman is a researcher can be relevant, undermining the
consultant’s knowledge, whilst the possibility the woman is a custodian remains irrelevant.
This exemplifies particular alternatives being rendered relevant by moral features. Particular
alternatives can also be rendered irrelevant by moral features. Perhaps in some cases the error
possibility that a speaker is lying about mistreatment, for example, is treated as remote, and so
disregardable, for moral reasons. Suppose a woman says that she left her doctoral programme
because her senior male advisor plagiarised her research, for example. Consider the error
possibility that instead she was not sufficiently motivated or organised to complete the programme.
Some versions of moral encroachment hold this error possibility is disregardable—it should be
treated as remote—because of moral contours of the case. From an impartial perspective, it is
plausibly an ordinary error possibility. Doctorates are difficult to earn, students leave because ill-suited, and people frequently underplay or confabulate causes of perceived failures. Advisors' work
can innocuously resemble students’. In some contexts, such as designing university pastoral care,
35
I am grateful to Kyle Scott, Dominic Alford-Duguid, and the Moral Encroachment Reading Group at Oxford
University for helpful discussions.
36
This is not something I endorse, but presumably some advocates of moral encroachment do, since employment
examples are frequently used to motivate moral encroachment. Perhaps some such employment examples merely
illustrate judging individuals based on race, regardless of social status. But usually relative social status plays some
dialectical role.
we must acknowledge the ordinariness of such possibilities. Many theorists endorse the idea that,
for at least some purposes, we can disregard these possibilities and believe the woman for moral
reasons. Some versions of moral encroachment endorse such beliefs as epistemically justified
because of these moral contours.37
According to this second role, moral factors do not simply affect the location of the
disregardability threshold; moral factors also affect the relative preponderance of possibilities. To
be clear, I do not endorse moral encroachment. I instead think the relevant alternatives framework
helps undermine the motivations for moral encroachment by responding to Schroeder’s challenge.
But it is a virtue of the relevant alternatives framework that it can distinguish and model these
features of moral encroachment views.
Most advocates of moral encroachment endorse the simple ‘quantifiable balance’ conception of
the relationship between stakes, evidence, and justification.38 On this model, stakes affect the
quantifiable probability of truth required for a belief to qualify as justified. If stakes are high,
evidential probability must accordingly be higher. But this simple quantifiable balance model
cannot capture the second potential role for moral considerations. The relevant alternatives
framework, by contrast, can distinguish and model these two distinct potential roles.
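The contrast can be put schematically. The following gloss is mine, offered only as an illustrative sketch rather than a formalism the text itself proposes; here E is the available evidence, r(A) measures the remoteness of an alternative A, and θ is the disregardability threshold:

```latex
% Quantifiable balance: justification turns on a single
% stakes-indexed probability threshold.
\mathrm{Justified}(p, E) \iff \Pr(p \mid E) \ge t(\mathrm{stakes})

% Relevant alternatives: every alternative nearer than the
% disregardability threshold must be eliminated by the evidence.
\mathrm{Justified}(p, E) \iff
  \forall A \,\bigl[\, r(A) < \theta \;\rightarrow\; \mathrm{Eliminated}(A, E) \,\bigr]
```

On this gloss, the first role for moral considerations raises θ, expanding the set of relevant alternatives wholesale; the second adjusts r(A) for particular, morally differentiated alternatives. The quantifiable balance model, with its single threshold t, has no analogue of the second adjustment.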
Similarly, the relevant alternatives framework also provides resources to model faith, such as faith
in God’s existence or a partner’s romantic fidelity. Having faith in claim p can be understood as
treating nearby error possibilities as if they are distant—that is, as more distant than impartial
assessment would permit or more distant than the relevant disregardability threshold. This can occur
in either of the two ways outlined above: the overall disregardability threshold is shifted, or the
assessed remoteness of particular individual possibilities is altered.
Thus the relevant alternatives framework can illuminate the second locus for the epistemology of
risk by either undermining the motivation for moral encroachment or, alternatively, by providing
resources to model the nuances of moral encroachment.39 As I develop below, the relevant
alternatives framework also illuminates the third locus of the epistemology of risk, the
vulnerabilities and risks we face as socially-situated epistemic agents.
37
Such moral or political reasoning might underwrite some general calls to ‘believe women’ about rape accusations.
Ferzan (ms) and Bolinger (ms-a) discuss the epistemology of #BelieveWomen. Gardiner (ms) argues that rape
accusations are highly likely to be true, which provides purist reasons for belief. Note the above error possibility
includes that the woman is lying or mistaken about her advisor. Lying about mistreatment is (plausibly) relatively
unusual, which provides purist reason for treating the error possibility as somewhat remote—it is somewhat
remote. Gardiner (forthcoming-a) employs the relevant alternatives framework to illuminate the epistemology of
rape accusations.
38
Cf. Worsnip’s (forthcoming) discussion of the ‘beaker model’ and Bolinger’s (ms-b) distinction between threshold-raising and sphere-expanding variants of moral encroachment. A further objection to the rival quantifiable balance
model for moral encroachment is that for many cases, simply having stronger evidence of the same kind does not
solve the problem. If the belief is based on profiling, for example, the problem is not solved simply by having
stronger (non-extremal) statistical support. The quantifiable balance model cannot explain why the problem
remains, since according to the quantifiable balance model, the only available solution is increasing evidential
probability. A relevant alternatives account, by contrast, offers a remedy: evidence must address particular error
possibilities.
39
One virtue of the relevant alternatives framework for modelling epistemic risk is its superiority over rival accounts
at explaining the central vignettes used to explore the epistemology and morality of profiling. These vignettes
include the iPhone, taxi, and prisoner cases, and the three vignettes above. Cf. Nesson (1979), Thomson (1986),
Buchak (2014), and Enoch, Spectre, and Fisher (2012). Elsewhere I argue that rival views—specifically, safety,
sensitivity, causal relation, normic support, and quantifiable balance accounts—cannot explain why judgements
exhibited in these vignettes are epistemically flawed. See Gardiner (2018a, 2018b, 2020, forthcoming-b,
forthcoming-c). Rather than criticising rival accounts, this essay instead focuses on developing and motivating the
relevant alternatives framework.
8. Social Epistemic Risks: Testilying, Crying Wolf, Conspiracy Theories, Gaslighting
The relevant alternatives framework highlights and systematises two kinds of error. Firstly, a
person can err by treating as remote an error possibility that is relatively ordinary. Suppose S treats
the possibility that a priest or police officer lies about their conduct, for example, as farfetched and
so disregardable. If such error possibilities are not remote, this inaccurate background assumption
breeds ignorance. Suppose S considers claim p, and disregards an error possibility because it
involves a priest’s lying. Ignorance results because either (i.) p is falsely believed or (ii.) p is true
and believed but unknown because of uneliminated relevant error possibilities.
Secondly, the converse error treats a farfetched error possibility as ordinary and so relevant.
Suppose children rarely lie about sexual molestation by religious leaders, for example, and that
multiple children have almost never conspired to falsely accuse. Now suppose eight children each
accuse a particular priest of molestation, but the village elders take very seriously the error
possibility that the children conspired against the priest. This potential source of doubt is remote,
but the elders deem it relevant. The children cannot provide evidence they did not conspire, and
so the elders do not believe the children. The elders err by treating a remote source of doubt as
relevant.
These two kinds of error are, in some domains, committed systemically. That is, the errors are not
randomly distributed. Misestimates of remoteness of certain kinds of error possibility exhibit
patterns. This systemic misestimation, in at least some cases, comprises systemic social epistemic
injustice.40
These kinds of error—and concomitant social epistemic injustice—are pernicious. The envisioned
remoteness of an error possibility is typically implicit and not easily raised to explicit attention. It
is fuelled by prejudice, emotion, upbringing, and social context, and so is likely resistant to
counterevidence. Misplacements of error possibilities might be difficult to discuss, diagnose, and
correct. It can be relatively easy to tell whether someone lacks evidence. It can be harder to
communicate about relative perceived remoteness of error possibilities.
This discussion highlights an epistemic danger we face, not as believers but as testifiers. The
relevant alternatives framework illuminates and systematises how a person’s conduct—and the
conduct of other members of their social group—can undermine their credibility. Suppose a police
officer lies under oath. She might conceive of this as benign. She aims only to secure conviction
in this one trial. But in so doing she thereby helps make relevant the error possibility that police
officers commit perjury. This error possibility can thus become relevant to each legal proceeding
involving police testimony, making it harder to prove claims beyond a reasonable doubt. Indeed,
the term ‘testilying’ has been coined to describe police officers committing perjury. Instances of
police perjury infect police witness statements in other trials.
Similarly suppose a wife lies to her partner about her health. She might do this only occasionally,
to protect her partner’s feelings. But in so doing she renders the error possibility that she is lying
relevant when making other similar assertions. The possibility is no longer remote. Thus the
relevant alternatives framework helps model the phenomenon of undermining one’s own
credibility. It similarly models the epistemological force of ‘crying wolf’.
Social epistemology often focuses on the evidence we possess, and our abilities and resources for
accessing further evidence. A community can enfeeble an individual’s epistemic position by failing
40
Gardiner (forthcoming-a) describes systematic misestimates concerning rape accusations.
to supply adequate educational and evidential opportunities. The relevant alternatives framework
highlights a further epistemic vulnerability we face as socially-situated epistemic agents: Our
community helps determine our relevant alternatives.41 A community thus has the ability to
undermine or enfeeble an individual’s epistemic position by raising error possibilities that are
difficult to eliminate. This is illustrated by conspiracy theories. A community might take a particular
conspiracy theory seriously, and thereby raise it to relevance. This is because, as noted in section
five, the very fact that many people take seriously the possibility can constitute evidence it is true or
is a serious possibility. Conspiracy theories can be difficult to eliminate; once live, they remain
uneliminated relevant alternatives. Individuals in the community thereby face additional challenges
in gaining associated knowledge.42
Compare two teenagers. One lives in a wealthy, privileged community. His friends and family do
not entertain broad anti-government conspiracies. The idea is simply never raised. The second
lives in a marginalised, poor community. Anti-government conspiracy theories are frequently
raised, discussed, taken seriously, or presumed. Many friends and relatives treat them as true, or at
least well-supported. Consider the claim that human-caused increases in CO2 emissions contribute
substantially to climate change. Given some considerable evidence, such as a textbook, the first
teen can know this claim with relative ease. The second teen faces more challenges. Plausibly his
knowledge is threatened by the many live error possibilities his community forces upon him. He
must eliminate more relevant alternatives—some of which, like conspiracy theories, are
notoriously difficult to disprove. It is harder for him to gain knowledge.43
A related phenomenon occurs when an individual hijacks a person’s epistemic context by raising
error possibilities to relevance, or at least attempting to. A husband might ventilate error
possibilities and so render it harder for his wife to possess knowledge and conviction. Suppose the
wife initially believes p, ‘My sister visits frequently because of sisterly affection’. Her husband might
continually raise error possibilities, such as ‘Your sister only contacts you when she wants
something’, ‘She is stealing money during visits’, ‘She only likes to spend time with her nephew
and cares little for you.’ By repeatedly raising such possibilities, the husband can eventually cause
them to become relevant where previously they were not. This is because the husband’s
intimations can provide evidence for error possibilities such as q, ‘The visits are financially
motivated’. Assertions like ‘q’ are typically evidence to believe their content, and the husband’s
various other assertions, such as ‘Your sister is in debt again’, can build a case for q. He draws
attention to reasons to believe q rather than p, and thereby supplies his wife with evidence for q.
If his campaign generates sufficient evidence, he no longer raises irrelevant possibilities; he himself
has rendered them relevant. Given the wife’s evidence—which includes sustained interactions with
her husband—these rival explanations for her sister’s visits are no longer remote possibilities.
41
Cf. Lewis’s (1996: 559) rule of conservation. See also Blake-Turner (2020). Which error possibilities are relevant
is partially determined by what the community considers disregardably farfetched. This socially-embedded aspect
of the relevant alternatives approach might illuminate the Jamesian distinction between ‘live’ and ‘dead’ hypotheses
(James, 1896). I am grateful to Kenny Easwaran for illuminating discussion.
42
Gardiner (forthcoming-a) asks whether society’s treating an otherwise remote error possibility as relevant can
render it relevant. Suppose a culture continually draws attention to the possibility that rape accusers are lying. I
explore whether these error possibilities are thereby rendered relevant, perhaps because they are salient, spring
readily to mind, or are taken seriously by others. If so, this can be understood as society-wide gaslighting;
possibilities that should be deemed disregardably remote are inflicted upon evaluators as legitimate sources of
doubt; evaluators are burdened with undue relevant alternatives.
43
It is worth emphasising that social marginalisation also engenders epistemic advantages, including evidence and
heightened awareness of structural features of society, such as discrimination and barriers to social equity.
People are not epistemic islands, and it can be irresponsible to wholly ignore others and retain
conviction despite others’ doubts.44
In addition to constituting evidence for the error possibilities, gaslighting can render the
possibilities relevant through other mechanisms. Perhaps she now begins to suspect, take seriously,
or even believe the error possibilities. On some relevant alternatives accounts, this renders them
relevant. And arguably trusting the motives of kin despite monitions from spouses is a common
source of error. His gaslighting exploits the fact that, as a social pattern, his warning has good
pedigree which should not be disregarded.
On some views, the husband cannot affect which error possibilities are relevant simply by
ventilating them. Relevance is instead determined by external features of the world, such as the
sister’s actual motives. According to these views, the husband only ever raises irrelevant
possibilities. This can still lead to the wife’s losing knowledge. She can lose conviction, and so no
longer believe her sister’s visits are motivated by affection, and she can mistakenly think the error
possibilities are relevant—led astray by her husband—and so no longer believe responsibly.
The husband can ventilate error possibilities, furthermore, without once uttering a falsehood. He
can instead simply ask questions, assert true but suggestive observations, or draw attention to
epistemic possibilities. Examples include: ‘Is money missing from your purse?’ ‘Did you think your
sister was behaving coldly today?’ ‘Was that your sister on the phone again? What does she want
this time?’ ‘Family members can take advantage of elderly relatives.’ ‘Your sister’s husband has a
lot less money than I do.’ ‘Maybe your sister is in debt again.’ If the husband avoids asserting
falsehoods, his gaslighting thereby appears harder to rebut and responsibly disregard. The relevant
alternatives framework explains why articulating questions and misleading truths are particularly
insidious tools for gaslighting. They can raise error possibilities and obscure the appropriate
response. Emphasising the epistemic significance of the distinction between relevant and irrelevant
alternatives can be ameliorative.
This hijacking of a person’s error possibilities is particularly effective and pernicious when
executed by a legitimate epistemic authority figure, such as a teacher or parent. This is because not
only does the victim tend to consider these error possibilities, they often should consider them.
Typically a person makes an epistemic error when they fail to address error possibilities raised by
epistemic authorities. Normally an epistemic authority attending to an error possibility constitutes
evidence the proposition is true or should be taken seriously. In short, the relevant alternatives
framework can model key epistemic features of gaslighting: Gaslighting forces error possibilities
onto a person. It burdens them with undue relevant alternatives.45
9. Unwitting Substitution
Social epistemology has drawn attention to distinctly epistemic kinds of injustice. One central kind,
a credibility deficit, occurs when a person’s assertion is perceived as less credible than it warrants. This
is commonly characterised as the hearer’s underestimating the speaker’s abilities, epistemic
44
In some marriages, granted, the wife can responsibly wholly ignore her husband and be confident he is wrong.
But gaslighting is insidious and effective when she has some reason to trust or respect her partner, and it is these
cases I have in mind.
45
Another illustration of gaslighting is when an individual is upset about someone’s conduct. Claim p might be, for
example, ‘His actions were racist’. Interlocutors raise error possibilities: He didn’t mean it, it wasn’t racist, you’re
too sensitive, you misunderstood his comment. Cf. McKinnon (2017), Abramson (2014). My discussion of
gaslighting benefitted greatly from conversations with Mark Alfano, Dominic Alford-Duguid, Renee Bolinger,
Michael Ebling, and Jessie Munton.
credentials, insight, reliability, knowledge, or evidence. Crewe and Ichikawa (forthcoming) propose
an alternative—compatible—mechanism of testimonial injustice.46 They suggest a speaker can
suffer testimonial injustice when hearers unduly raise the epistemic standards when she speaks.
After introducing epistemic contextualism and describing how rape accusations provoke undue
levels of doubt, Crewe and Ichikawa (forthcoming: 20, n. 41) write:
Testimonial injustice is a ‘credibility deficit owing to identity prejudice in the hearer’ (Fricker
2007, p. 28). A credibility deficit is naturally thought of as a lower-than-deserved confidence in
one’s credibility. An alternate proposal… might characterize the inappropriate invocation of
high epistemic standards as a kind of credibility deficit. Perhaps one can commit an epistemic
injustice, not by having too low an opinion of someone’s credibility, but by setting too high a
bar for accepting their word.
Crewe and Ichikawa thereby invoke the apparatus of epistemic contextualism to illuminate the
epistemic injustice of the disproportionate skepticism endemic to rape accusations.
Disproportionate, that is, because they are typically true, yet are treated with suspicion. Note this
epistemic injustice centres on assertion kinds, rather than individuals or social identities.
One potential worry for Crewe and Ichikawa’s proposal arises if rape accusations do in fact raise the
conversational epistemic stakes. Some theorists might hold that rape accusations are the kind of
assertion that, by their nature, bring gravity to a discussion. If so, treating the stakes as raised is
not epistemic injustice in the sense of an undue or disproportional reaction to an assertion. The
speaker is not wronged by a hearer’s mistake because, according to this view, the hearer does not
commit a mistake. The hearer responds appropriately to the epistemic context. On this view, the
scourge is not that people unfairly treat the stakes as raised; the scourge is that such assertions do
in fact raise the stakes.47
The relevant alternatives framework suggests a novel species of epistemic injustice. Before I
explain it, I will briefly recap a different species of epistemic injustice well-illuminated by the
relevant alternatives framework. Hearers systematically overestimate the range of alternatives that
qualify as relevant. This can happen if a hearer thinks of the disregardability threshold as distant when it is
not. (Suppose the threshold varies by practical context, and hearers overestimate the disvalue of
falsely believing rape accusations.48) Or a hearer might overestimate the ordinariness of error
46
Ichikawa (forthcoming) develops this proposal.
47
Important clarifications: Firstly, even if the stakes are raised, hearers can err by overestimating this raise. Secondly,
hearers can commit multiple simultaneous kinds of epistemic injustice. Various diagnoses are mutually compatible.
Thirdly, even if such assertions characteristically raise the stakes because of the interests of the accused, the accuser
also has interests, and these are largely overlooked or underplayed. Given the costs for the accuser of being
disbelieved, Basu (2018) argues rape accusations illustrate that moral stakes can not only raise the evidential
threshold, but also lower it. Finally, Gerken (forthcoming) and Dotson (2018) express a related worry, namely that
according to pragmatic encroachment, it is proper to treat marginalised individuals as accordingly less
knowledgeable; such hearers simply track that high stakes undermine knowledge.
48
A doubter, D, might opine ‘the stakes are serious; the accused could go to prison’. Even in legal contexts, this is
unlikely. But, regardless, that is not a consequence of D’s belief; D is an ordinary member of society. In the public
imagination, rape accusations are strongly associated with legal contexts, which drives up the perceived stakes.
This association is specious. The conversational and deliberative context is almost always interpersonal and only
rarely has legal, or even professional, consequences. An interlocutor said, to explain the high threshold for
believing rape accusations, ‘the usual context for such accusations is law courts’ (and so, the thought goes, those
high standards bleed into other contexts). This claim is false, and the mistake is common. The usual context for
believing and asserting rape accusations has relatively low practical consequences, but this is not widely noted.
Gardiner (forthcoming-a) employs epistemological tools—especially the relevant alternatives framework and the
epistemic effects of stakes—to diagnose the undue doubt endemic to rape accusations.
possibilities. They think of error possibilities like ‘the accuser is lying for financial gain’ as nearby
when such error possibilities are remote. Either way, the accuser faces undue skepticism.
I turn now to the novel species of epistemic injustice. This species of injustice can occur with other
kinds of assertion, but I illustrate with assertions about rape. A person articulates claim p, ‘I was
raped’. Hearers should treat ‘I was raped’ as the central claim, and need only consider error
possibilities relevant to p. Normally assertions that p are treated as eliminating most or all relevant
nearby not-p possibilities. The only remaining uneliminated error possibilities include things like
‘she is lying about whether p’ and ‘she is mistaken about whether p’. Such error possibilities are—
for most ordinary claims—treated as relatively remote possibilities. For most purposes and most
assertions, these error possibilities can be disregarded.49 That is, when people assert p, they are
typically believed without needing to also present evidence addressing the error possibilities that
they are lying or mistaken.
The epistemic injustice I delineate is that on hearing a rape accusation, hearers accidentally treat
‘she is telling the truth about p’ as the central claim, rather than ‘p’. The error possibilities that
should seem distant—ones in which she is lying or mistaken—now seem relevant. This is because
they are relevant to the mistakenly substituted claim.
A less charged example will help illustrate. Imagine evaluating either (i) ‘Bill broke his leg’ or (ii)
‘Bill is not faking, he broke his leg’. Typically error possibilities in which Bill borrows someone’s
crutches, plays a gag, lies to evade work, your third-party informant is misled, and so on, are
irrelevantly farfetched for claim (i). You would not even think of them. But at least some such
‘faking’ possibilities are relevant to claim (ii). Which ones are relevant depends on, for example,
whether Bill is present. When we consider claim (ii), sources of doubt spring to mind. And they
should.
Return now to rape accusations. Given she asserted p then—given certain background conditions,
such as that rape accusations are typically true—possibilities in which she is lying or mistaken are
distant. If instead of ‘p’ you mistakenly evaluate ‘she is telling the truth about p’, the mistakenly
substituted claim similarly brings with it the relevance of possibilities in which she is lying or
mistaken. Possibilities in which she is lying or mistaken are thus preponderant error possibilities
for this illicitly substituted claim. I propose this is a common unwitting error that hearers make
when they hear rape accusations. This conflation can help diagnose the disproportionate doubt
such claims receive and explain hearers’ tendency to think so readily of ‘she is lying’ or ‘she is
mistaken’ error possibilities.
This substitution is pernicious in part because it is considerably harder to eliminate relevant
alternatives to ‘I am telling the truth that p’, especially where p is a claim, such as ‘I was raped’,
where there is often little corroborative evidence. Hearers might not recognise this conflation in
part because it is so common. It constitutes testimonial injustice because the speaker is held to a
more demanding epistemic standard. To be believed, her evidence must rule out error possibilities
that ought to be treated as remote.
The preceding discussion focuses on when an accuser says ‘p’ and the hearer erroneously evaluates
a different claim, such as ‘I’m not mistaken that p’ or ‘I’m not lying that p’. Sometimes the accuser
49. This is because assertions are normally sensitive to p. S would not have asserted p unless p, so assertions eliminate nearby error possibilities. Remaining error possibilities are those where S asserts p despite not-p, and for most assertions these are relatively unusual situations. For some assertions, error possibilities in which the speaker is lying or mistaken are nearby and preponderant. This includes assertions where the topic is commonly lied about, for example.
herself makes formerly irrelevant alternatives relevant. Suppose Sally says ‘I was raped’. An
interlocutor asks ‘Are you sure; maybe you misremember?’ Sally replies ‘I am not mistaken that I
was raped.’ On the view proposed here, Sally’s second assertion is importantly different from her
first. Under normal circumstances, many error possibilities that are irrelevant to her first assertion
are relevant to her second. These include various possibilities in which Sally is mistaken. The
interlocutor’s question, even if well-meaning, thereby induces Sally to assert a claim for which it is
considerably more demanding to rule out relevant alternatives. The relevant alternatives
framework thus illuminates how interlocutors impair accusers in unacknowledged ways: In normal
cases, Sally’s being mistaken is irrelevant to her first assertion. The possibility is farfetched. Yet it
is relevant to her second assertion. This mechanism is more conspicuous, and accordingly less
pernicious, in the broken leg example.50
10. Conclusion
I have explored three loci of the epistemology of risk. First, I argued the relevant alternatives framework can model the effect of stakes on whether evidence suffices for action. Secondly, it can respond to Schroeder’s challenge, and thereby undermine the motivation for moral encroachment.
Alternatively, it provides a fruitful way to model the nuances of moral encroachment. As I describe
in sections eight and nine, drawing on the rich apparatus of the relevant alternatives framework
can help illuminate various risks and vulnerabilities that stem from the social-situatedness of our
epistemic agency. The framework explains and systematises several kinds of epistemic injustice
and harm. This includes patterns of over- and under-estimation of the remoteness of error
possibilities, the danger of our community enfeebling our epistemic position by rendering error
possibilities relevant, and the epistemological mechanisms of crying wolf, conspiracy theories, and
gaslighting. I also suggested a novel form of testimonial injustice, underlying the undue skepticism rape accusations provoke, namely an illicit substitution of ‘p’ with ‘she’s telling the truth about p’ when assessing accusations.
I articulated features that contribute to the relative remoteness of an error possibility, including
especially whether the possibility is a typical source of error and whether it is suggested by the
evidence. I also considered other features that might determine relative relevance, such as whether
the possibility in fact obtains or is morally differentiated. Crucially, the overall relevant alternatives
structure is schematic and can be combined with competing claims about which features determine
remoteness. A theorist might hold that remoteness is entirely determined by modal closeness of possible worlds, for instance, or solely by convention. The basic relevant alternatives framework
remains neutral on these questions. I have argued that the framework is worth taking seriously as a rival to the dominant ‘quantifiable balance’ model of evidential support.
I close by drawing attention to three virtues of the framework. Epistemologists often focus on
what is known, what the available evidence supports, what claims we are warranted in believing,
and how confident we can be in those claims. The rival ‘quantifiable balance’ framework
emphasises exclusively the balance of available evidence. The emphasis, then, is on what we possess
epistemically. The relevant alternatives framework, by contrast, is structured around error
possibilities and thereby draws attention to what is unknown, ways the evidence is lacking, and
what should be considered. It highlights what our evidence fails to address, and is thus a helpful
framework for thinking about the effect of risk on epistemic properties. When risks abound, the
epistemological effects of the unknown, absent, unconsidered, or unappreciated are paramount.
50. I am grateful to Heather Battaly, Catherine Elgin, Jon Garthoff, Hilary Kornblith, Declan Smithies, and an anonymous reviewer for helpful discussions about these ideas.
Secondly, the framework does not rely on quantification of evidential support, and thereby avoids problems afflicting quantificational approaches. Thirdly, it provides a richer structure than simple numerical probabilities afford. It describes increasing remoteness of alternatives
and a threshold of disregardability. This additional structure allows us to perceive and model more
features of our epistemic lives, such as the epistemic injustices described in sections eight and nine,
that are obscured by the simple quantifiable balance conception. I believe it is a framework worth
adopting or, at least, not disregarding.
Acknowledgements
Many thanks to Mark Alfano, Dominic Alford-Duguid, Rima Basu, Heather Battaly, Claire
Becerra, Grace Boey, Renee Bolinger, Rebecca Brown, Bruce Chapman, Marcello Di Bello, Julien
Dutant, Kenny Easwaran, Michael Ebling, Catherine Elgin, Iskra Fileva, Branden Fitelson, Jamie
Fritz, Jon Garthoff, Mikkel Gerken, Hilary Kornblith, Seth Lazar, Clayton Littlejohn, Sebastian
Liu, Linh Mac, Silvia Milano, Sarah Moss, Beau Madison Mount, Jessie Munton, Maura Priest,
Duncan Pritchard, Joe Pyle, Paul Roberts, Sherri Roush, Kyle Scott, Paul Silva, Martin Smith,
Declan Smithies, Alex Walen, William Wells, Alex Worsnip, and anonymous referees for
invaluable comments. I am grateful for helpful discussions at Sherri Roush’s graduate
epistemology seminars at UCLA, the Moral Encroachment Reading Group at Oxford University,
and the Between Ethics and Belief conference at Cologne University. Finally, many thanks to
audience members at ANU and to my autumn 2019 students at the University of Tennessee for
fruitful conversations on these topics.
Bibliography
Abramson, Kate (2014) ‘Turning Up the Lights on Gaslighting’ Philosophical Perspectives 28:1–30.
Achinstein, Peter (2003) The Book of Evidence Oxford University Press.
Amaya, Amalia (2015) Tapestry of Reason Hart.
Anderson, Charity (2015) ‘On the Intimate Relationship of Knowledge and Action’ Episteme
12(3):343–353.
Basu, Rima (2018) Beliefs That Wrong Doctoral Thesis.
_______ (2019) ‘What We Epistemically Owe to Each Other’ Philosophical Studies 176(4):915–
931.
_______ (forthcoming-a) ‘Radical Moral Encroachment: The Moral Stakes of Racist Beliefs’
Philosophical Issues.
_______ (forthcoming-b) ‘The Wrongs of Racist Beliefs’ Philosophical Studies.
Basu, Rima and Mark Schroeder (2019) ‘Doxastic Wronging’ Pragmatic Encroachment in Epistemology
Kim and McGrath (eds.) Routledge, 181–205.
Blake-Turner, Christopher (2020) ‘Fake News, Relevant Alternatives, and the Degradation of
Our Epistemic Environment’ Inquiry.
Bolinger, Renee (forthcoming) ‘The Rational Impermissibility of Accepting (Some) Racial
Generalizations’ Synthese.
_______ (ms-a) ‘#BelieveWomen and the Ethics of Belief’.
_______ (ms-b) ‘Varieties of Moral Encroachment’.
Bradley, Darren (2014) ‘A Relevant Alternatives Solution to the Bootstrapping and Self-Knowledge Problems’ Journal of Philosophy 111(7):379–393.
Brown, Jessica (2008) ‘Subject-Sensitive Invariantism and the Knowledge Norm for Practical Reasoning’ Noûs 42(2):167–189.
_______ (2014) ‘Impurism, Practical Reasoning, and the Threshold Problem’ Noûs 48(1):179–
192.
Buchak, Lara (2013) Risk and Rationality Oxford University Press.
_______ (2014) ‘Belief, Credence, and Norms’ Philosophical Studies 169(2):285–311.
Cohen, Jonathan (1977) The Probable and the Provable Oxford University Press.
Cohen, Stewart (1999) ‘Contextualism, Skepticism, and the Structure of Reasons’ Philosophical Perspectives 13:57–89.
Crewe, Bianca and Jonathan Jenkins Ichikawa (forthcoming) ‘Rape Culture and Epistemology’
Applied Epistemology Lackey (ed.) Oxford University Press.
DeRose, Keith (1992) ‘Contextualism and Knowledge Attributions’ Philosophy and Phenomenological
Research 52(4):913–29.
_______ (2009) The Case for Contextualism Oxford University Press.
Di Bello, Marcello (2013) Statistics and Probability in Criminal Trials: The Good, the Bad and the Ugly
Doctoral Dissertation, Stanford University.
Dotson, Kristie (2018) ‘Distinguishing Knowledge Possession and Knowledge Attribution: The
Difference Metaphilosophy Makes’ Philosophical and Phenomenological Research 96:475–482.
Dretske, Fred (1970) ‘Epistemic Operators’ Journal of Philosophy 67(24):1007–23.
_______ (1971) ‘Conclusive Reasons’ Australasian Journal of Philosophy 49:1–22.
Dutant, Julien (2016) ‘How to Be an Infallibilist’ Philosophical Issues 26(1):148–71.
Enoch, David, Levi Spectre, and Talia Fisher (2012) ‘Statistical Evidence, Sensitivity, and the Legal Value of Knowledge’ Philosophy and Public Affairs 40(3):197–224.
Fantl, Jeremy and Matthew McGrath (2002) ‘Evidence, Pragmatics, and Justification’ Philosophical
Review 111(1):67–94.
_______ (2007) ‘On Pragmatic Encroachment in Epistemology’ Philosophy and Phenomenological
Research 75(3):558–89.
_______ (2009) Knowledge in an Uncertain World Oxford University Press.
Ferzan, Kimberly Kessler (ms) ‘#BelieveWomen and the Presumption of Innocence: Clarifying
the Questions for Law and Life’.
Franklin, John Hope (2005) Mirror to America: The Autobiography of John Hope Franklin New York,
NY: Farrar, Straus and Giroux.
Fricker, Miranda (2007) Epistemic Injustice Oxford University Press.
Fritz, James (2017) ‘From Pragmatic Encroachment to Moral Encroachment’ Pacific Philosophical
Quarterly 98(1):643–661.
_______ (ms) ‘Moral Encroachment and Reasons of the Wrong Kind’.
Fritz, James and Elizabeth Jackson (ms) ‘Belief, Credence, and Moral Encroachment’.
Gardiner, Georgi (2017) ‘Safety’s Swamp: Against the Value of Modal Stability’ American
Philosophical Quarterly 54(2):119–129.
_______ (2018a) ‘Evidentialism and Moral Encroachment’ Believing in Accordance with the Evidence: New Essays on Evidentialism, ed. Kevin McCain. Springer, 169–195.
_______ (2018b) ‘Legal Burdens of Proof and Statistical Evidence’ The Routledge Handbook of Applied Epistemology, eds. David Coady and James Chase. Routledge, 171–195.
_______ (2019a) ‘Legal Epistemology’ Oxford Bibliographies: Philosophy, ed. Duncan Pritchard.
Oxford University Press.
_______ (2019b) ‘The Reasonable and the Relevant: Legal Standards of Proof’ Philosophy &
Public Affairs 47(3):288–318.
_______ (2020) ‘Profiling and Proof: Are Statistics Safe?’ Philosophy 95(2).
_______ (forthcoming-a) ‘Doubt and Disagreement in the #MeToo Era’ Feminist Philosophers and
#MeToo ed. Yolonda Wilson. Routledge.
_______ (forthcoming-b) ‘Legal Evidence and Knowledge’ The Routledge Handbook of the
Philosophy of Evidence, eds. Maria Lasonen-Aarnio and Clayton Littlejohn. Routledge.
_______ (forthcoming-c) ‘The “She Said, He Said” Paradox and the Proof Paradox’ Truth and
Trials: Dilemmas at the Intersection of Epistemology and Philosophy of Law, eds. Zachary Hoskins and
Jon Robson. Routledge.
_______ (ms) ‘She Said, He Said: Rape Accusations and the Preponderance of Evidence’.
Gendler, Tamar (2011) ‘On the Epistemic Costs of Implicit Bias’ Philosophical Studies 156:33–63.
Gerken, Mikkel (2017) On Folk Epistemology Oxford University Press.
_______ (forthcoming) ‘Pragmatic Encroachment and the Challenge from Epistemic Injustice’
Philosophers’ Imprint.
Goldman, Alvin (1976) ‘Discrimination and Perceptual Knowledge’ Journal of Philosophy 73:771–
791.
Haack, Susan (2014) Evidence Matters Cambridge University Press.
Hannon, Michael (2015) ‘The Universal Core of Knowledge’ Synthese 192(3):769–786.
Hawthorne, John (2004) Knowledge and Lotteries Oxford University Press.
Hawthorne, John and Jason Stanley (2008) ‘Knowledge and Action’ Journal of Philosophy 105(10):571–90.
Heller, Mark (1989) ‘Relevant Alternatives’ Philosophical Studies 55(1):23–40.
Ho, Hock Lai (2008) A Philosophy of Evidence Law Oxford University Press.
_______ (2015) ‘The Legal Concept of Evidence’ Stanford Encyclopedia of Philosophy Edward
Zalta (ed).
Ichikawa, Jonathan Jenkins (2017) Contextualising Knowledge Oxford University Press.
_______ (forthcoming) ‘Contextual Injustice’ Kennedy Institute of Ethics Journal.
Jackson, Elizabeth (2018) ‘Belief, Credence, and Evidence’ Synthese.
James, William (1896) ‘The Will to Believe’ The New World 5:327–347.
Kim, Brian (2017) ‘Pragmatic Encroachment in Epistemology’ Philosophy Compass 12(5):1–14.
Lawlor, Krista (2013) Assurance Oxford University Press.
Leitgeb, Hannes (2017) The Stability of Belief Oxford University Press.
Lewis, David (1996) ‘Elusive Knowledge’ Australasian Journal of Philosophy 74:549–567.
Littlejohn, Clayton (2012) Justification and the Truth-Connection Cambridge University Press.
_______ (2018) ‘Truth, Knowledge, and the Standard of Proof in Criminal Law’ Synthese.
MacFarlane, John (2005) ‘Knowledge Laundering: Testimony and Sensitive Invariantism’
Analysis 65(2):132–138.
McKinnon, Rachel (2013) ‘Lotteries, Knowledge, and Irrelevant Alternatives’ Dialogue 52(3):523–
549.
_______ (2017) ‘Allies Behaving Badly: Gaslighting as Epistemic Injustice’ Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus Jr., New York: Routledge, 167–175.
Moss, Sarah (2018a) ‘Moral Encroachment’ Proceedings of the Aristotelian Society 118(2):177–205.
_______ (2018b) Probabilistic Knowledge Oxford University Press.
_______ (2021) ‘Knowledge and Legal Proof’ Oxford Studies in Epistemology 7, Oxford University Press.
Nelson, Mark (2002) ‘What Justification Could Not Be’ International Journal of Philosophical Studies
10(3):265–281.
Munton, Jessie (2019) ‘Beyond Accuracy: Epistemic Flaws with Statistical Generalizations’ Philosophical Issues 29:228–240.
Nesson, Charles (1979) ‘Reasonable Doubt and Permissive Inferences: The Value of Complexity’ Harvard Law Review 92(6):1187–1225.
Nance, Dale (2016) The Burdens of Proof Cambridge University Press.
Pritchard, Duncan (2002) ‘Recent Work on Radical Skepticism’ American Philosophical Quarterly 39:215–57.
_______ (2005) Epistemic Luck Oxford University Press.
_______ (2015) ‘Risk’ Metaphilosophy 46:436–61.
_______ (2017) ‘Legal Risk, Legal Evidence and the Arithmetic of Criminal Justice’ Jurisprudence
9(1):108–119.
Reed, Baron (2010) ‘A Defense of Stable Invariantism’ Noûs 44(2):224–244.
Ross, Jacob and Mark Schroeder (2014) ‘Belief, Credence, and Pragmatic Encroachment’
Philosophy and Phenomenological Research 88(2):259–288.
Rysiew, Patrick (2006) ‘Motivating the Relevant Alternatives Approach’ Canadian Journal of Philosophy 36(2):259–279.
Schroeder, Mark (2018a) ‘Rational Stability under Pragmatic Encroachment’ Episteme 15(3):297–
312.
_______ (2018b) ‘When Beliefs Wrong’ Philosophical Topics 46(1).
Smith, Martin (2010) ‘What Else Justification Could Be?’ Noûs 44(1):10–31.
_______ (2016) Between Probability and Certainty: What Justifies Belief Oxford University Press.
Staffel, Julia (2016) ‘Beliefs, Buses and Lotteries: Why Rational Belief Can’t Be Stably High
Credence’ Philosophical Studies 173(7):1721–1734.
Stanley, Jason (2005) Knowledge and Practical Interests Oxford University Press.
Stine, Gail (1976) ‘Skepticism, Relevant Alternatives, and Deductive Closure’ Philosophical Studies
29:249–61.
Thomson, Judith Jarvis (1986) ‘Liability and Individualized Evidence’ Law and Contemporary
Problems 49(3):199–219.
Toole, Briana (ms) ‘The Not-So-Rational Racist: Articulating An (Unspoken) Epistemic Duty’.
Worsnip, Alex (2015) ‘Two Kinds of Stakes’ Pacific Philosophical Quarterly 96:307–324.
_______ (forthcoming) ‘Can Pragmatists Be Moderate?’ Philosophical and Phenomenological Research.