
Synthese (2018) 195:3571–3596

https://doi.org/10.1007/s11229-017-1388-x

A pluralistic account of epistemic rationality

Matthew Kopec1,2

Received: 8 August 2016 / Accepted: 23 March 2017 / Published online: 5 April 2017
© Springer Science+Business Media Dordrecht 2017

Abstract In this essay, I motivate and defend a pluralistic view of epistemic rationality.
The core of the view is the notion that epistemic rationality is essentially a species of
(teleological) practical rationality that is put in the service of various epistemic goals.
First, I sketch some closely related views that have appeared in the literature. Second,
I present my preferred, pluralistic version of the view, and I sketch some of its benefits.
Third, I defend the view against a prominent objection recently offered against a class
of closely related views by Selim Berker. Last, I raise some distinct, lingering worries,
and I sketch some possible ways one might address them.

Keywords Epistemic rationality · Epistemic teleology · Epistemic instrumentalism · Epistemic consequentialism · Pluralism

1 Introduction

Contemporary epistemology has been overrun with debates over which specific fea-
tures of our epistemic world make beliefs or credences rational. And, for many folks,
there is a lot at stake. Long and illustrious careers have been built upon defending
reliabilism against evidentialism, internalism against externalism, or what have you.
Meanwhile, many scholars working on rationality in the sciences and in social episte-
mology have gone about their business without much concern for any of these debates.
It might even seem that the latter have made more progress in their examinations of
rationality than the former have.

Correspondence: Matthew Kopec, matthewckopec@gmail.com

1 School of Humanities and Social Sciences, Charles Sturt University, Canberra, ACT, Australia
2 School of Philosophy, Australian National University, Canberra, ACT, Australia


How could this be? Here’s a hunch: the scientists and social epistemologists
studying rationality can get along just fine without engaging in the contemporary
debates because those debates, in large part, are not genuine conflicts. Perhaps
“epistemic rationality” is not a solitary feature of our world but rather a normative
realm with many distinct, equally legitimate sub-realms. If that hunch is correct, then
it could be that most of the views in contemporary epistemology ultimately con-
tain some truth—they are just, unbeknownst to their creators, answers to different
questions.
In this essay, I aim to examine exactly how far we could press this hunch about
epistemic rationality before the latter starts to look very unlike what epistemologists
take themselves to be studying. In the process, I’ll develop a modified and elaborated
version of the view sketched by Foley (1993). According to Foley, rationality is always
and everywhere a goal-oriented notion. While this view is rather well accepted in
the realm of practical rationality, where teleological views are mainstream, many
epistemologists have been hesitant to accept that epistemic rationality operates in a
similar way. Tradition will tell you that practical rationality and epistemic rationality
are very different kinds of things. In particular, what agents ought to do, practically
speaking, is often relative to the goals and preferences they have, meaning that the kinds
of norms at issue are always hypothetical in nature. Epistemic norms intuitively are not
like this—the goals of particular agents are irrelevant to what they ought to believe.
And there is surely a worry, lingering in the background, that allowing epistemic
norms to depend on the goals of agents will yield an “anything goes” epistemology.
For example, agents will be able to change their goals to avoid negative assessments,
and agents with bizarre goals will be able to believe bizarre propositions, all while
staying in rationality’s good graces.
I agree that these are worrisome consequences and that other views might better
capture some of these strong intuitions. But ultimately I believe we can devise a goal-
oriented view of epistemic rationality that lacks the worrisome consequences, and
I also believe that many of the strong intuitions sketched above are mistaken. That
said, the defense of my view must take a slightly circuitous route. I will not be able
to tackle the intuitions head on by simply cooking up my own hypothetical cases
and hoping the reader’s intuitions will agree. Instead, I will need to spell out what
I take to be the wide range of benefits of the view, in the hopes that this will make
the reader more comfortable when it comes time to leave some of those intuitions
behind.
The specific plan of the essay is as follows. In Sect. 2, I offer a taxonomy of some
closely related goal-oriented views of epistemic rationality, ending with a sketch
of the pluralistic view I prefer. In Sect. 3, I lay out the main benefits of taking a
pluralistic, goal-oriented perspective on epistemic rationality. In Sect. 4, I present
a critique by Berker (2013a, b) of a class of closely related views (which he calls
varieties of ‘epistemic consequentialism’), and I show why this critique fails. In
Sect. 5, I first raise the worry that my view might make a tractable decision the-
ory impossible, I then return to the worry that a goal-oriented approach leads to an
anything-goes epistemology, and I finish by sketching possible routes to address such
worries.


2 A taxonomy of teleological views in epistemology

Goal-oriented (or teleological) views of epistemic rationality have attracted a great deal of popularity, especially in the late 1980s and early 1990s. Around that time,
such views attracted the interest of traditional epistemologists like Richard Foley,
Alvin Goldman, Hilary Kornblith, and Robert Nozick; philosophers of science like
Ron Giere, Larry Laudan, Philip Kitcher, and David Papineau; and some philosophers
like Isaac Levi and Stephen Stich who are tough to classify. More recently, closely
related views bearing such labels as ‘epistemic consequentialism’, ‘epistemic utility
theory’, and ‘epistemic decision theory’ have found proponents. There has been a
tendency to lump the earlier views under a single label, namely ‘epistemic instrumen-
talism’, as in Kelly (2003) and Lockard (2013). Epistemic utility theory and epistemic
decision theory have also been lumped together, along with many of the aforemen-
tioned, under the single label ‘epistemic consequentialism’. To add to the confusion,
in his recent, much discussed attack on epistemic consequentialism, Selim Berker
refers to such views as versions of ‘epistemic teleology’, but he insists that he’s not
attacking the views often referred to as instrumentalist views in epistemology (Berker
2013a, p. 362 fn34). There is unfortunately much room for confusion here. To be
clear, I will refer to all of these views as teleological views in epistemology, thus
lifting Berker’s restriction of the term to a mere subset of all of the goal-oriented views possible.
It is important to notice, from the outset, that different teleological views in episte-
mology might be aimed at answering fundamentally different questions. One question
of interest is what generates or grounds the normativity of epistemic norms. We can
think of this as a meta-epistemological question, since the parallel question of what
grounds the normativity of moral norms belongs to the field of metaethics. As I see
it, philosophers who argue that truth is a constitutive aim of belief, e.g., Hieronymi
(2006), Steglich-Petersen (2009), Velleman (2000), Wedgwood (2002), and Williams
(1970), tend to be aiming at meta-epistemological questions.
But there is a distinct question of interest, one concerned about which features of
beliefs or credences would make them rational or irrational. Just as questions about
the right and wrong-making features of an action belong to normative ethics, we might
consider this second question as part of normative epistemology.1 There are a number
of ways one might answer this normative question. One might think that what makes
a belief rational is simply whether it is the appropriate response to the agent’s total
body of evidence, as an evidentialist would argue (Conee and Feldman 2004). Or it
might be whether the belief is formed through a generally reliable cognitive process,
as a reliabilist would argue (Goldman 1986). One’s credences might be rational just
in case they abide by the standard probability axioms and are updated by Bayesian
conditionalization, as an extreme subjective Bayesian would argue (Earman 1992;
Howson and Urbach 1993). And so on.
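The two subjective Bayesian requirements just mentioned can be stated compactly. The following is a standard formulation, not quoted from any of the works cited here:

```latex
% Probabilism: an agent's credence function c obeys the probability axioms
c(\top) = 1, \qquad c(A) \ge 0, \qquad
c(A \vee B) = c(A) + c(B) \quad \text{whenever $A$ and $B$ are incompatible}.

% Conditionalization: on learning exactly the evidence E (with c_{old}(E) > 0),
c_{new}(A) \;=\; c_{old}(A \mid E) \;=\; \frac{c_{old}(A \wedge E)}{c_{old}(E)}.
```

The extreme subjective Bayesian holds that satisfying these two norms is all that epistemic rationality requires of credences; stricter views add further constraints on the priors.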

1 This terminology is perhaps not ideal, due to some confused early scholars who used the term ‘normative’
when applied to epistemology to contrast it with ‘naturalistic’ epistemology, on the false assumption that
naturalistic accounts in philosophy are necessarily devoid of proper oughts. All the same, the usage is
starting to take hold. Berker (2013a) is one prominent example.


Some of these normative accounts will be teleological, but others may not be. The
teleologist, as I’m conceiving of her, thinks that the rational status of a belief (or
credence, or belief-forming process, etc.) is wholly dependent on whether it is an
effective means to pursuing some relevant set of goals. Which goals are the relevant
ones will then be a further topic for teleologists to argue about amongst themselves, and
similarly for the issue of whether the set contains a solitary goal or rather a multitude.
I will discuss such disagreements shortly.
At times, meta-epistemological considerations ultimately motivate the teleologist
to hold onto her particular normative epistemological views. Take Kornblith (1993,
2002) for example. On his view, epistemic norms have normative force because when
we follow these norms we are more likely to achieve our practical goals. Since we
ought to care about achieving our practical goals, we ought to follow the norms of
epistemic rationality. This teleological aspect of his view is meta-epistemological,
since it is an account of where the normativity of epistemic rationality stems from.
And his meta-epistemological commitment that epistemic norms be grounded in the
value of achieving our practical goals actually leads him to a form of ‘veritism’, as
Goldman (1999) calls it, since he assumes that believing truthfully is the best way
to achieve one’s practical goals. (Veritists hold, roughly speaking, that the relevant
goal for assessing beliefs is whether those beliefs are true.) So, this is a case where
a teleological answer to the meta-level question motivates a teleological answer to
the normative-level question. But a view need not be teleological at both levels to
be viable. Simply as a matter of logic, there are various ways one might combine
the meta and normative views at issue. Cowie (2014) argues that one can combine
meta-epistemological instrumentalism (a quintessential teleological view) with evi-
dentialism (a quintessential non-teleological view).2 The idea is that one might hold
that beliefs are rational when they are formed in accord with the evidence, while also
holding that the value of following such an evidentialist norm is that doing so will
help the agent attain various ends she finds worthwhile. Such a view would parallel
Kornblith’s view at the meta-level, while jettisoning his veritism for an evidentialist
alternative. And there are various other viable combinations.
In the remainder of the paper, I will be solely concerned with normative epistemic
teleology. I will eventually argue that the best version of this view, a radically pluralistic
one, will actually yield wide swaths of rationality assessments that are identical with
many of its competitors. In fact, the norms of the competitors are, in a sense, special
cases of the view I advocate. But in the present paper, I won’t be concerned at all with
the source or grounding of the normativity that generates these requirements.

2.1 Varieties of (normative) epistemic teleology

So far, I’ve focused on the common feature amongst all of the varieties of normative
epistemic teleology, namely, that the rational status of a doxastic attitude is determined

2 Cf. Brössel et al. (2013). Cowie doesn’t specify that he means meta-epistemological instrumentalism,
since he doesn’t focus on this distinction. But I think it is clear from his discussion that this is what he
would intend.


by whether forming that attitude would foster some relevant goal, or set of goals. But,
this rough specification leaves several questions unanswered.3 First, what are the
relevant goals? Do we determine which attitudes are epistemically rational by fixing
upon whatever goals an agent happens to hold, even if these goals are not especially
epistemic or cognitive in nature? Or is there some preferred set of goals that we ought
to use to judge an agent’s doxastic attitudes, regardless of which goals she happens to
personally hold? Second, which perspective should we use to judge the effectiveness
of a particular means toward achieving these goals? Should we assess effectiveness
from the perspective of the agent, that is, according to how she believes the world
to be? Or should we, instead, assess her according to how the world actually is? Each
combination of answers to these questions represents a possible teleological view,
many of which are represented in the literature. In this section, I’ll situate some of the
dominant views according to how they address these questions.
I’ll start with the matter of which goals are relevant to judgments of epistemic ratio-
nality. The least restrictive accounts hold that there simply are not any restrictions—any
goals the agent might have would be relevant. Such views have a long history, from
Blaise Pascal’s famous wager (Pascal 1670), to William James’ argument against
William Clifford over the ethics of belief (James 1896). It has also found contempo-
rary support in the work of Stich (1990, 1993) and more recently Rinard (2017). On
such so-called ‘pragmatist’ views, merely practical goals like being healthy, perform-
ing well at a job interview, or going to Heaven (on the off chance that a Christian
God exists), might be important enough that an agent could rationally believe certain
propositions that her evidence speaks strongly against. Take the goal of being healthy,
for example. Positive thinking has been shown to have a positive effect on the likeli-
hood and speed of recovery from several diseases, some of which have a rather low
rate of recovery (Scheier and Carver 1992). Take an agent with one of these highly
virulent illnesses, and suppose she is well aware that her recovery is very improbable.
The pragmatist would claim that if this agent’s goal of recovering to health is her dom-
inant goal, then she ought to go ahead and believe she will recover,4 even though her
evidence speaks against it. Pragmatists don’t merely think that such beliefs are prac-
tically rational to hold, which pretty much all teleologists could accept. Importantly,
this agent’s belief that she will recover is epistemically rational on this account.
Taking this kind of pragmatist line does carry some benefits. For example, it yields
a very parsimonious account of rationality, since we do not need to posit a special
realm of epistemic norms in addition to all the practical norms already out there.
But this supposed benefit also comes at a high cost. The ability to distinguish between
the practical rationality and the epistemic rationality of holding onto a belief can often
do a great deal of philosophical work. Humans certainly seem to exhibit a great deal of
epistemic irrationality. Just as some examples, individuals who are extremely bad at

3 I should say at the outset that my inspiration for classifying the various views in the way that follows
was, in large part, the excellent discussion of many of these issues in Foley (1993). Folks familiar with that
work will see much in common.
4 That is, if she’s capable of forming such a belief. This is a somewhat important hedge, since some have
tried to use it to make pragmatist views seem less absurd, as in Bishop’s (2009) attempt to bolster Stich’s
view. But since Stich (2009) himself doesn’t accept Bishop’s help here, I’ll set the worry aside.


certain epistemic tasks believe themselves to be highly capable (Kruger and Dunning
1999), most people believe their chances of getting ill are lower than average (Gilovich
1993), and we all tend to ignore evidence that conflicts with our politically charged
views (Taber and Lodge 2006). One very promising explanation of these kinds of
phenomena is that we all tend to get caught in the grips of conflicting norms. In these
cases, it may well be practically rational for us to hold onto these beliefs, even though
it is epistemically irrational for us to do so. And the conflict can occasionally work
in reverse. For example, we can explain why some cancer patients don’t believe they
will recover after they are told the recovery rates—they are caught in the grips of
epistemic rationality, to the possible detriment of their practical rationality. We lose
these explanations if we don’t restrict the kinds of goals that can factor into epistemic
rationality.
A more promising teleological line restricts the kinds of goals that can factor in our
assessments of epistemic rationality to those with an epistemic or cognitive character,
since such a restriction salvages the kind of explanation discussed above. Here is an
example to show why this is. Let us say we restrict our teleology so that the only
goal that is relevant to assessing beliefs for rationality is the goal of having as wide a range of true beliefs as possible about the propositions in question,5 i.e., a sort of veritism. On
such an account, beliefs are rational just in case they are formed through an effective
means toward achieving a broad set of true beliefs. Obviously, allowing oneself to fall
victim to wishful thinking is not an effective means to achieving this goal. The world
usually is not as we want it to be! Believing in line with survival rate data is surely a
better means toward achieving this goal. So, our patient using wishful thinking from a
couple paragraphs back would qualify as epistemically irrational on this account. But
she still counts as practically rational, so long as her goal of recovery dominates her
preference set, and wishful thinking does, in fact, promote this goal. Such a teleologist
can explain that the patient is in the grips of a conflict of norms.
For the remainder of the paper, I’ll limit myself to these kinds of “intellectualist”
views, as Lockard (2013) calls them, i.e., those that restrict the relevant goal set to
epistemic or cognitive goals.6 But what exactly makes a goal epistemic or cognitive in
character? There’s a fair bit of disagreement about this matter in the literature. Some
teleologists take a very restrictive line. For example, Foley (1987, pp. 6–8) argues that
epistemic rationality depends upon whether an agent’s epistemic behavior promotes
the goal of now believing true propositions and not now believing false propositions.7
Kitcher (1992), on the other hand, argues that the relevant goal is to hold beliefs that
promote a full understanding of nature—regardless of their truth-value—a view he is
seemingly driven to because scientists regularly believe “useful fictions” in the process

5 It will soon become clear that this formulation is not quite specific enough to be useful. But I hope the
reader will accept the intuitive notion for the time being.
6 Lockard takes himself to be talking solely about ‘instrumentalist’ views, but there is no harm in using his apt label here to discuss a broader range of teleological views.
7 In his later book (1993), Foley moved to calling this kind of goal a ‘purely epistemic goal’, and modified
it slightly to be the goal of having as accurate and comprehensive a belief system as possible. I leave this
out of the discussion above, because Foley also leans toward a kind of pluralism in that later work, and I’ll
discuss that move in depth below.


of inquiry.8 As a philosopher of science, Kitcher wouldn’t be terribly satisfied with a view that diagnosed most scientific activity as irrational.9
More recently, several scholars have proposed accounts of epistemic rationality
that require agents to assign credences so as to maximize the expected epistemic
utility of these credences (Joyce 1998, 2009; Greaves and Wallace 2006; Leitgeb and
Pettigrew 2010a, b; Pettigrew 2013), which has given rise to a new area of research
called epistemic utility theory, or epistemic decision theory [as Greaves (2013) refers
to it]. On these accounts, credences are measured for their epistemic utility according
to some particular measure (or possibly a set of measures), and agents epistemically
ought to do their best to pursue higher scores on this measure (or these measures),
thus making these views teleological. Recent discussion has also focused on a closely
related class of views, often referred to under the label ‘epistemic consequentialism’
(see e.g., Berker 2013a, b; Talbot 2014; Ahlstrom-Vij and Dunn 2014; Dunn nd.).
These views have a similar structure except that they tend to deal in coarse-grained
doxastic states, i.e., beliefs, instead of fine-grained credences. They tend to measure
epistemic utility in a similarly veritistic way, except they must do so essentially by
counting and comparing true beliefs and false beliefs, as opposed to employing a
measure of credal accuracy, given they prefer to deal in coarse-grained terms. I will
call these views ‘idealistic’ versions of epistemic teleology, since they all insist that
what is relevant to epistemic rationality is either a single noble goal (like accuracy
or promoting understanding), or some small set of noble goals (like maximizing true
beliefs and minimizing false beliefs or having as accurate and comprehensive a set of
beliefs as possible). Importantly, in the idealistic accounts sketched above, the actual
goals of the agent are not relevant to whether she is rational, since the goal or goals on which we base our assessment are held fixed.
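To make the expected-epistemic-utility idea concrete, here is a minimal illustration of my own, not drawn from the paper or from any particular epistemic utility theorist. It uses the Brier score, a standard accuracy measure in this literature: for a single proposition, the expected Brier penalty of adopting credence r, computed by an agent whose own credence is p, is minimized exactly at r = p. This "propriety" of the measure is why Brier-based norms tell agents to report their actual credences rather than to hedge toward extremes or toward 0.5.

```python
def expected_brier_penalty(report: float, credence: float) -> float:
    """Expected squared-error (Brier) penalty of adopting credence `report`
    toward a proposition, as evaluated by an agent whose own credence that
    the proposition is true is `credence`. Lower is epistemically better."""
    # If the proposition is true (which the agent gives probability `credence`),
    # the penalty is (1 - report)^2; if false, the penalty is report^2.
    return credence * (1 - report) ** 2 + (1 - credence) * report ** 2

# An agent who is 0.7 confident does best, by her own lights, to adopt 0.7:
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda r: expected_brier_penalty(r, 0.7))
```

Running this search over a grid of candidate credences returns 0.7 itself as the expected-penalty minimizer, which mirrors the internalist flavor of epistemic utility theory noted below: the expectation is taken from the agent's own perspective.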
Other teleologists have taken a different line, where instead of fixing some noble
set of goals as the relevant ones, we judge epistemic rationality by assessing how well
the actual epistemic or cognitive goals of the agents are promoted. Such views have
gained some popularity among philosophers of science like Laudan (1990a, b) and
Giere (1988, 1989). Scientists often differ on what they take to be the most important
aspects of a theory. For example, some scientists care a great deal about the simplicity or
elegance of a theory, while others care more about its predictive accuracy. Since these
are cognitive or epistemic goals, we could assess the scientist based upon whether
her epistemic behavior is an effective means to the end of settling upon the theory
with just the right balance of these aims. And since different scientists can balance
these virtues differently, this opens the door for a possible explanation of why two
scientists might hold conflicting views even if they have the same evidence. Take, for
example, a Copernican astronomer and a Ptolemaic astronomer during the era before
heliocentrism took a dominant hold. While these two astronomers might have exactly
the same evidence, the Copernican might be more attracted to the greater simplicity

8 As he puts it, “Cognitive value derives from the project of trying to understand nature. Some truths are
worthless because they play no role in that project. Some falsehoods are valuable because they do play such
a role” (104).
9 Even Kuhn, whom many consider the father of relativistic-leaning sociology of science with his (1962/1996), backtracked after scholars started taking his earlier work in this direction (1970).


of a heliocentric theory, while the Ptolemaist might be more attracted to the higher
predictive accuracy of the geocentric theory.10 On this account, it is possible that both
these scientists are epistemically rational. Since these views rationally permit agents
to weigh their epistemic or cognitive goals differently, I’ll call these “liberalistic”
teleological views.
It is worth noting that idealistic and liberalistic accounts will often deliver con-
flicting assessments. Here is an example, which should help to bring out the contours
of each grouping of views. First, let us consider Descartes, early in his Meditations,
sitting in his study and doubting pretty much everything (Descartes 1641). Descartes,
in this story, is extremely risk averse when it comes to his beliefs—his predominant
epistemic goal is to suspend judgment on any proposition he is not absolutely certain
is true. Consider how different a liberalistic assessment and an idealistic assessment
of Descartes might go. For the sake of the example, assume our idealistic assessment
is veritistic (i.e., based upon how well his attitudes track the truth). The liberalistic
assessment would likely judge that Descartes’ behavior is in line with epistemic ratio-
nality, since at this point in the story he really does only believe those propositions
he is certain about. But the idealistic assessment in question is going to judge him as
being severely irrational. In the process of suspending judgment on most of his beliefs,
Descartes inadvertently lost an enormous number of true beliefs. Thus, the process of
giving up all of those beliefs, even those for which he had extremely good evidence,
fails to promote the truth-tracking goal. Since liberalistic and idealistic assessments
can come into conflict, it seems like a teleologist needs to decide which she thinks
is more important to epistemic rationality, the actual cognitive goals of the agent, or
some other laudable set of goals (regardless of whether the agent being assessed cares
about them).
As Foley (1993) rightly points out, this is not the only decision that faces the
teleologist—she must also choose which perspective is relevant to epistemic ratio-
nality. In other words, should we assess the agent according to whether her behavior
promotes the attainment of the relevant goals, given how the world seems to her,
perhaps on reflection?11 Or should we assess her according to whether her behavior
promotes those goals, given the way the world really is?12 In keeping with common
usage in the epistemology literature, I’ll call the former perspective the ‘internalist’

10 In case the reader wasn’t aware, the geocentric theory was, in fact, the more predictively accurate theory
for many decades after the scientific community had rejected it as false (cf. Forster and Sober 1994).
11 For Foley, when we assess an agent based upon facts about her own perspective, we shouldn’t simply base
it upon the agent’s current state of mind, but rather upon what perspective she would have after achieving
some manner of reflective stability (1993, pp. 94–101). I’m not yet convinced that this modification is
required, at least outside of cases where an agent is so badly incoherent that it makes little sense to talk of
her perspective on things in the first place. So, I won’t make such a restriction in what follows. I thank an
anonymous referee for reminding me of this aspect of Foley’s view.
12 Foley (1993) adds a third category, namely, how the world seems to the agent’s community, which he
calls the sociocentric perspective. Since I’m concerned with what makes agents rational, not what makes
agents seem rational to their community, I’ll drop this category from the account. But I do not mean to
minimize the possible importance of those kinds of questions. For example, they will be very important if
we wish to retrospectively judge exactly how revolutionary a particular historical scientist’s thinking was.


                             Liberalistic (Desire-based)   Idealistic (Value-based)
Internalist (Egocentric)     Foley (1987)                  epistemic utility theory
Externalist (Objectivist)    Giere, Laudan                 Kitcher, reliabilists

Fig. 1 Taxonomy of intellectualist (i.e., non-pragmatist) varieties of epistemic teleology

perspective, and the latter the ‘externalist’ perspective.13 Philosophers of science, like Giere, Laudan, and Kitcher, have tended to take the externalist line, since what
concerns them is what ways are in fact the best ways for scientists to achieve the
relevant epistemic goals. Goldman’s process reliabilism is another classic example of
an externalist view (1986). Foley (1987) is one traditional defender of the internalist
line in teleological epistemology. I would argue that many of the formal epistemolo-
gists working on epistemic utility theory would likely also fall into this camp, since
rationality requires that agents maximize the expected utility of their credence func-
tions, and such expectations are to be understood from an internalist perspective. Like
before, the assessments of a teleological internalist and a teleological externalist will
often come into conflict. Figure 1 above gives a representation of the taxonomy I’ve
presented in this section, and lists some proponents of the various choices.

2.2 Pluralistic epistemic teleology

So, which of the choices above would yield the most promising kind of epistemic
teleology? I actually think the most promising version is one that refuses to choose.
I’ve been inspired by the view of rational belief that Foley presents in his book Working
without a Net (1993), which is in many ways a pluralistic view of epistemic rationality.
In this section, I’ll lay out the rough contours of Foley’s pluralism before presenting
mine, which I take to be preferable even though it is a more radical departure from
tradition.
Foley starts from the assumption that rationality is always and everywhere a goal-
oriented notion, which he argues is just as true in the case of belief as it is in the case
of action. Epistemic rationality, on such a view, becomes a special case of practical
rationality, and Foley has a pluralistic take on what constitutes practical rationality.
The practical rationality literature exhibits a somewhat parallel taxonomy of stake-
holders to the one I’ve sketched above. For example, the dominant view concerning
instrumental rationality, sometimes referred to as ‘prudential’ rationality, holds that
an action’s instrumental rationality is determined by whether the agent’s action is

13 The distinction has appeared under different names. For example, Foley (1993) refers to this as the
egocentric/objective distinction. In the literature on instrumental or practical rationality, it is sometimes
referred to as the subjective/objective distinction. I think these terms are likely to invite confusion. For
example, a liberalist may well think there are objective facts of the matter concerning what an agent ought
to do from her subjective point of view. The epistemologist’s terminology is just awkward enough to clearly
signal that we are talking in technical terms.


the most effective means to achieving her actual desires (Kolodny and Brunero nd).
Some scholars then make a further step and argue that practical rationality just is
instrumental rationality (cf. Hubin 1999, 2001).14 These purely ‘desire-based’ views
correspond to the staunch liberalists in the epistemic case. On the other hand, some
hold ‘value-based’ views of practical rationality, which determine what an agent ought
to do according to whether her actions are the most effective means to achieving some
state of affairs of genuine value, independent of whether the agent actually desires that
state of affairs to obtain.15 These correspond to the staunch idealists. Similarly, there
are staunch internalists (often called ‘subjectivists’) as well as staunch externalists
(often called ‘objectivists’) about practical rationality. Foley seems to think that each
of these combinations isolates a legitimate notion of practical rationality, and I think
rightly so. When we ask a question like, “Was that action rational?” there may not be
a single answer, because the question is ambiguous.16
Foley thinks the same kind of ambiguity holds for rational belief. First, let us take
the internalist/externalist distinction. When we ask whether a particular agent’s belief
is rational, we need to first fix which perspective is relevant to the question at hand. Just
so we have a concrete example, say we are inquiring about whether George W. Bush
was rational to believe that Saddam Hussein was manufacturing weapons of mass
destruction before the second U.S. invasion of Iraq.17 In this case, we are looking for
a retrospective assessment, and it would not really make sense for us to assess him
according to the way the world really was. What is relevant to such an assessment is
how the world seemed to him at the time, even if his perspective was badly flawed due
to the selective filtering of crucial evidence by his staff.18 Foley believes that when

14 This position is often associated with Hume (1739). Whether Hume was himself a ‘Humean’, in this
sense, is up for debate.
15 One example would be a view of practical rationality based on an objective-list theory of well-being,
such as the view defended in Nussbaum and Sen (1993). Maguire (2016) is a more recent defender of the
value-based approach, although he seems to equate value-based assessments with moral assessments. (I
prefer not to equate the two.)
16 One sort of pluralism is already a well-established position in the practical rationality literature, that is,
the pluralism between external and internal practical reasons. (It’s usually referred to as pluralism between
the “subjective” and “objective” notions of practical rationality, although I feel that terminology invites
confusion, since those terms have also been used to refer to the desire-based and value-based distinction.)
One recent example of a pluralist of this sort is Schroeder (2010). Foley (1993) seems to be the closest
thing to an explicit desire-based/value-based pluralist about practical rationality at the non-morally loaded
normative level. Those who take a so-called ‘hybrid’ approach to the grounding of practical normativity,
like Ross (1930), Chang (2013), and Behrends (2015), think that both desires and values can generate
practical reasons, but this leaves open whether these distinct domains give rise to a plurality of normative
requirements. Some authors hold so-called “dualist” views (cf. Crisp 1996; Dorsey 2013), where prudential
reasons and moral reasons generate incommensurable normative demands, e.g., Copp (1997). Such views
would amount to a desire-based/value-based pluralism of the kind I’m after, so long as the moral status of
any action is determined by facts about egoistic value promotion (which is obviously dubious). And those
who deny that there is an ‘all-things-considered ought’ or an ‘ought simpliciter’ in the practical domain,
such as Tiffany (2007) with his ‘deflationary normative pluralism’, seem to hold closely related pluralist
views. See Baker (2017) for a survey of some of the various views one can take on the nature, and possible
incommensurability, of different normative domains. I thank two anonymous referees for suggesting that I
spell out more clearly the relationship between my view here and those in the practical rationality literature.
17 Of course, it’s possible that he might not, in fact, have held such a belief. Let us just assume that he did.
18 Perhaps after we add the further specifications I mention in fn11.


assessing an agent’s beliefs in future events, the opposite is the case. For example, we
might inquire about whether Donald Trump should believe, on some specified future
date, that North Korea will deploy a nuclear weapon against a U.S. ally before the end
of his presidency. Our interest in this context, Foley insists, is in what Trump ought
to believe given the way the world actually is. It would be best if he believes the truth
about this possibly dangerous state of affairs, regardless of how the world looks from
his internal perspective.19
Second, take the question of which goals are relevant to assessing the rationality of
a belief. On this matter, Foley is somewhat restrictive. He agrees that merely practical
goals cannot factor into the kind of rationality that interests epistemologists, so he
is a solid intellectualist, in Lockard’s (2013) terminology. He also makes a further
distinction between cognitive goals, per se, and what he calls the ‘purely epistemic’
goals of having an accurate and complete belief system. But even with this restriction,
Foley is somewhat liberal. Agents can decide for themselves exactly how important
it is to have an accurate belief system, and how important it is to have a complete
belief system, and these two goals can often come into conflict. Since different agents
can rationally weigh these goals differently, some beliefs that are rational for one
agent might not be rational for another agent. That said, Foley doesn’t think just any
way of weighing purely epistemic goals should count. Take the example of Descartes
mentioned above. Foley admits that someone might be able to devise a notion of
Cartesian rationality, which is preposterously risk averse when it comes to the accuracy
of a belief system. But for Foley, such a notion of rationality would not be of much
interest to epistemologists.
While I find many aspects of Foley’s view attractive, I find a full-blown pluralistic
view even more appealing. Notice that there are several aspects of Foley’s view that
serve to moderate the pluralism. First, while Foley agrees that questions of rationality
are ambiguous concerning which perspective is the correct mode of assessment, he
follows this with an account of how the context serves to disambiguate, i.e., to specify
which kind of assessment is most relevant. Second, instead of a full-blown liberalism
that would assess an agent according to whichever cognitive or epistemic goals she
might hold, Foley retreats to a restrictive set of “purely epistemic” goals, and he doesn’t
even allow determinations of epistemic rationality to accept all possible weights of
these already restricted goals as an input.
The view I advocate keeps the parallel between epistemic rationality and a pluralis-
tic understanding of practical rationality (teleologically understood), but lifts most of
these further restrictions that Foley places upon epistemic rationality. I see no reason,
in principle, to limit our epistemic assessments of agents by fixing only on the “purely
epistemic” goals of having an accurate and comprehensive belief system. In particu-
lar, when making liberalist assessments of rationality, we ought to include whichever
cognitive or epistemic goals an agent happens to hold.20 For example, say an agent’s
dominant epistemic goal is to have as coherent a system of beliefs as possible. Fine.
Or, instead, say her dominant goal is to believe in accord with the evidence, while

19 Of course, one might find Foley’s insistence implausible. Let us take it for granted for now, but more
on this below.
20 I’ll back off from this a bit later in the essay. But for now, I’ll be as liberal as possible.


caring little about the truth of the propositions she would be led to believe in pursuit
of this goal. Also fine. And when she does have the goals of having an accurate and
complete belief system, let us let her be as risk averse, or risk seeking, as she likes.
If epistemologists take little interest in the resulting norms of such assessments, so
be it. (After all, it is already common for epistemologists to ignore the assessments
that their rivals make.) And I would also take an ecumenical approach to idealis-
tic assessments. Sometimes we might be interested in whether an agent’s cognitive
behavior promotes her understanding of nature and, at others, whether it is truth, accu-
racy, or coherence promoting, and perhaps even whether it accords with the evidence.
These are all ideals that hold some epistemic value, and, I feel, there is no need to
choose.21
In a similar vein, why should we insist that the context of a question will dictate the
kind of rationality assessment that is of most interest? Why not allow the epistemologist
to make these decisions for herself? Surely there are some instances where it would be
valuable to assess a belief retrospectively from an externalist perspective, and, perhaps
more obviously, cases where it is worth offering an internalist rationality assessment
for a belief about future states of affairs. Everyone who has played a strategic game
involving chance has surely made many such evaluations in practical terms,22 and I
see no reason to banish such assessments from epistemology.
Given these modifications, we are left with a thoroughly pluralistic understanding
of epistemic rationality. When we are asked whether an agent’s belief is rational, this
question does not necessarily have a single answer. Many other factors need to be
specified. Are we interested in whether that belief is formed as an effective means
to promoting the epistemic goals the agent happens to have? Or are we, instead,
interested in whether it promotes some other valuable goal, regardless of the agent’s
own cognitive desires? If so, which valuable goals are we interested in? Is it something
like Goldman’s veritistic value (1999)? Or epistemic utility as measured by a proper
scoring rule? Or correspondence with an objective evidential favoring relation?23 Or
coherence, and so on? And after we specify all this, we are still left with another question: are
we interested in assessing the agent from her own perspective, or, instead, according
to how the world really is? It is not until we specify all of this that we start to get
some answers about whether the agent is epistemically rational in holding the belief
in question. And, I would argue, each such specification is equally apt to be called a
form of epistemic rationality. Epistemic rationality, in short, is not a singular thing—it
is many things, each of which exerts a normative pull on us.
At this point, I think it’s worth being upfront about one aspect of my view that
many epistemologists will surely find discomfiting. On my account, the question of

21 In other words, I’m assuming that epistemic value monism is wrong-headed, although I won’t present
a full argument against that view here. For a defense of the monist position, see Ahlstrom-Vij (2013).
22 For example, this kind of assessment often occurs when we judge whether a poker player will be rational
in calling a raise, given the cards she has in her hand and the cards she can see on the table. Since we can
conclude, very naturally, that she would be right to call even if we (watching on the television) know that
her opponent already has her beat, we must be making internalist assessments in such contexts.
23 I’m doubtful that such an objective evidential favoring relation exists, for the reasons sketched in
Titelbaum and Kopec (ms). But I could be wrong.


whether a belief is epistemically rational is not, on closer inspection, a singular question. And, as I pointed out earlier, the different assessments can conflict. One possible
response to the possibility of such conflicts would be to insist that one variety of assess-
ment is the most important, and thus that variety trumps all others when they conflict.
Another possible response would be to insist that there is some higher-order epistemic
assessment that takes all the various epistemic assessments into account and spits out
the determination of what the agent truly ought to believe.24 My preferred response is
different. I prefer to view epistemic rationality as a collection of incommensurable nor-
mative sub-domains, any of which can genuinely pull the agent in different directions
depending on the specifics of the case. So, much like some authors deny the existence
of an ought simpliciter when it comes to practical rationality, e.g., Tiffany (2007)
and Baker (ms),25 I would deny that there is an ought simpliciter in the epistemic
case. Perhaps we can have reasons to think that one assessment is better than another
in a certain case of conflict. But I doubt it will be epistemic facts that provide those
reasons.26
One might then wonder whether epistemic rationality, on such an understanding,
really has any normative pull at all. Typically, when we ask questions about what an
agent ought to believe, we do so because we want to tap into a set of norms that could
offer the agent guidance. If many answers to such questions end up being “Well, that
depends,” this motivation of ours seems frustrated and, in turn, this seems to speak
against the view.27 What the epistemologist wants is something like an all-things-
considered judgment that can serve as the final word. While I understand this impulse,
I think it is an impulse we ought to ignore. In the next section, I show that a pluralistic
view like mine promises to offer a great deal of guidance on how we can improve our
epistemic behavior, even though the view does not deal in the kinds of final judgments
many epistemologists desire.

3 The benefits of pluralism

In this section, I’ll present what I take to be the benefits of the pluralistic approach to
epistemic rationality of the kind I sketched in the previous section. In slightly sloga-
nized form, we could list them off as follows: the view is potentially more promising
if we are interested in offering useful guidance to individuals; most of the assessments
the view yields are easy to justify to those assessed; the view is able to capture a wide
range of intuitions and even explain why some philosophical debates have seemed so

24 We could think of this position as an epistemic analogue of a view like Ross’s (1930) in the practical
domain. I thank an anonymous referee for suggesting this parallel.
25 I should mention that I am only sympathetic to the view that there is no all-things-considered ought in
the practical case if we are setting aside all moral assessments. I am much less comfortable denying such
an ought simpliciter, once we allow moral normativity to also weigh in the judgement.
26 I thank an anonymous referee for pressing me to make my position on this matter clear.
27 Just to be clear, this kind of situation will not always occur. In any case where an agent’s own epistemic
goals align with the epistemic ends that we think are of independent value, and the world looks to her as
the world actually is, all the various assessments will issue the same verdict. But, admittedly, we will not
usually be so lucky.


intractable; the view actually yields a wide range of widely held views in normative
epistemology as special cases; and the view yields a highly unified account of both
traditional and social epistemology.
Let us start where the last section left off, on the issue of guidance. I feel that many
epistemologists who prefer views that promise all-things-considered final judgments
of rationality have not appreciated that many of the dominant views in the literature
offer less useful guidance than it may at first seem. Take evidentialism, for
example. This view, in its pure form, holds that an agent ought to believe a proposition
if and only if that proposition is supported by the agent’s total evidence, and this bicon-
ditional is supposed to hold even if the agent is not aware of the evidential relations
at issue (Conee and Feldman 2004). On some versions of the view, it is supposed to
hold even if she justifiably believes false things about the evidential relations.28 Sim-
ilar things could be said about a pure coherentist view (like Quine and Ullian 1970).
When an agent’s belief set does not cohere, she might know she ought to drop one
of her beliefs, but there might be multiple beliefs that could be dropped to recover
the coherence. Rationality does not dictate which one to drop (Harman 1986). A pure
reliabilist (like Goldman 1986) is caught in the somewhat awkward position of insist-
ing that agents use reliable belief forming processes, even if they do not recognize, or
even cannot recognize, which processes are reliable. A pure subjective Bayesian (like
Howson and Urbach 1993) performs somewhat better on this measure, since an agent
could at least know that she ought to always update her credences using Bayesian con-
ditionalization. But if she discovers that her credences fail to abide by the probability
axioms, she is forced into a similar situation as the agent in the coherentist dilemma
above.
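To make the subjective Bayesian's recommendation concrete, the update rule at issue can be sketched in a few lines. (This is my own schematic illustration; the function name and the numbers are mine, not anything drawn from Howson and Urbach.)

```python
# Bayesian conditionalization: upon learning evidence E with certainty,
# the agent's new credence in a hypothesis H should equal her old
# conditional credence P(H | E) = P(H & E) / P(E).

def conditionalize(prior_h_and_e, prior_e):
    """Posterior credence in H after learning E.

    prior_h_and_e: the agent's old credence in (H and E)
    prior_e: the agent's old credence in E (must be nonzero)
    """
    if prior_e == 0:
        raise ValueError("cannot conditionalize on evidence of credence 0")
    return prior_h_and_e / prior_e

# Example: with old credences P(H & E) = 0.3 and P(E) = 0.5,
# learning E yields a new credence of 0.6 in H.
posterior = conditionalize(0.3, 0.5)
```

The rule is guidance-friendly in the sense just noted: an agent can apply it using only her own prior credences. The trouble arises when those priors themselves violate the probability axioms, since the rule itself does not say how to repair them.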
The moral here is that even though epistemological views that seem to offer final, all-
things-considered verdicts might carry the feel of offering robust normative guidance,
on closer inspection, the story is more complicated. Some of the rival views out there
also fail to give a final answer in various cases. Others tend to deliver final verdicts
that, from the perspective of the individual agent, would be hard to follow. Put another
way, it doesn’t follow from the fact that a view offers final normative judgements in
some cases that it will do so in all cases. And it doesn’t follow from the fact that a
view offers final normative judgements in all cases that the normative suggestions are
implementable in the way an epistemologist should want.29 This kind of realization has

28 I should admit that the possibility of having justified false beliefs about the evidence is not entirely uncon-
troversial. For example, Titelbaum (2015) argues, roughly, that mistakes about what rationality requires
are, themselves, mistakes of rationality. If he is right, and evidentialism happens to be true, then the kind of
case I’m sketching here isn’t really a possibility.
29 To borrow an example from Jackson and Smith (2016), someone might tell you how to escape a labyrinth
by saying something like, “Walk in a path that doesn’t cross any hedges until you reach the end.” While
this may still count as advice in some sense of the word, it is certainly not
implementable.


motivated some epistemologists (e.g., Bishop and Trout 2005) to seek an alternative
to all these views.30
The pluralist, on the other hand, always has a mode of assessment that promises to
be normatively guiding. According to the pluralist, one notion of epistemic rationality
dictates what an agent ought to believe by determining whether the belief at issue is
the most effective means to achieving her actual epistemic goals according to how
the world seems to her, i.e., the liberalist internalist assessment. Thus, the agent has
access to all factors used in the assessment. Some idealistic internalist assessments
might also give quality normative guidance to the agent, for example by assessing an
agent’s epistemic behavior according to whether it fosters an accurate belief system
from the agent’s perspective. On the other hand, some idealistic assessments, and some
externalist assessments, will inherit many of the problems of other views like those
sketched above. But this should not be seen as a fault of the view, since it is also no
worse off than the rival views at issue. And in those cases where the other views fail
to offer an assessment that could be normatively guiding from the agent’s perspective,
the pluralist can often switch to another mode of assessment that actually is.
A closely related benefit of my pluralistic view is that many of the assessments
that follow from the view can be easily justified to the agent being assessed. To make
the point concrete, say we are dealing with a religious fundamentalist, and we are
assessing her belief that humans and dinosaurs coexisted. If it turns out that this belief
is irrational from all modes of a pluralist’s assessment, there is an easy way to justify
this negative assessment to the fundamentalist herself. We could point out that based
upon the actual epistemic goals she has, forming this belief is a bad way to promote
those goals. To put it another way, if she were to retort, “Well, who cares?” after we
tell her about our negative assessment of her epistemic behavior, we have an easy
retort: “You do!” It is her own goals that are being poorly promoted by her behavior,
even if we hold fixed the way the world looks from her internal perspective. Things
admittedly get more complicated in cases where the different modes of assessment
disagree. For example, it is possible that our fundamentalist actually does not care
about the accuracy of her beliefs. In this case, the idealistic assessment might suggest
she is irrational while a liberalist assessment might say she is rational. Similarly, it
may come out that when assessed from her own perspective, she is rational in her belief
that dinosaurs roamed the Earth with humans, but from an externalist perspective she
is not. These negative assessments would, indeed, be more difficult to justify to the
agent. Sometimes we might be able to justify the assessment by, for example, pointing
to the fact that accurate beliefs generally help an agent promote her other practical
goals. But, again, these difficulties should not be seen as a detriment to the view. The
cases where it is difficult to offer the subject a satisfying justification of our assessment
are exactly those cases where the competing, monistic views have similar difficulties.

30 I do not mean to suggest that these rival views don’t offer agents any useful guidance for how they can
improve their cognitive behavior. They surely do, since they give the agent something to aim for, be it true
beliefs, coherence, beliefs that accord with the evidence, or what have you. I only intend to point out that
these views might promise less guidance than is often thought, so that the pluralist view doesn’t seem like
an automatic non-starter by comparison. I thank an anonymous referee for encouraging me not to overstate
my point here.


Another benefit of a pluralistic view that allows for both internalist and externalist
assessments, as mine does, is that such a view can capture the kinds of seemingly
competing intuitions that internalist and externalist views were specifically designed
to capture. Both internalists and externalists have spilled a great deal of ink attempting
to establish their side as victorious, often by presenting cases and thought experiments
designed to elicit strong intuitions one way or the other. While the intuitions these
cases and thought experiments elicit might seem to speak for (or against) a purely
internalist (or externalist) view, they do not speak against a pluralistic view, since the
latter can respect both kinds of intuitions. When a pluralist encounters a case that elicits
internalist intuitions, she can accept the validity of those intuitions from an internalist
perspective. And similarly for cases that elicit externalist intuitions. In other words, the
pluralist can respect both sets of intuitions by noting that those intuitions do hold water,
but only relative to a particular perspective of assessment.31 In fact, a pluralist can even
offer an explanation for why these cases seem to speak so strongly in favor of their
designers’ preferred views: the cases that elicit internalist intuitions are cases where
an internalist assessment is more natural, and vice versa for the externalist intuitions.
In addition, the pluralist can offer an explanation for why the internalist/externalist
debate seems so intractable, namely, the two sides to the debate don’t realize that the
questions they are asking and answering are actually ambiguous. They are, in fact,
perpetually talking past each other.
The kind of pluralism I favor can capture an especially wide range of intuitions in
epistemology, since it yields the very same verdicts as a number of its competitors. In a
sense, we can derive the same verdicts as most other teleological views as special cases
of one broader teleological view. For example, if an agent’s primary epistemic goal is
to have as coherent a set of beliefs as possible, then she ought to follow the standards
of coherentism. If her dominant goal is to have all and only those beliefs that properly
respect the evidence, then she ought to follow the dictates of evidentialism. If her
dominant goal is to maximize the expected accuracy of her credences as measured by
some proper scoring rule (like the Brier score), then she ought to follow the dictates of
subjective Bayesianism.32 The process works similarly for most other views we could
think up.33 And given that the pluralist also has access to idealistic assessments, we
can derive these kinds of assessments in many cases even if the agent does not hold
the relevant goal or set of goals.
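To illustrate the scoring-rule case just mentioned, here is a small sketch of the Brier score and of the "propriety" it enjoys. (The code and numbers are my own toy example, not Leitgeb and Pettigrew's formulation.)

```python
# The Brier score: mean squared distance between an agent's credences and
# the truth-values (1 for true, 0 for false) of the propositions believed.
# Lower scores mean greater accuracy.

def brier_score(credences, truths):
    """Mean squared distance between credences and truth-values."""
    return sum((c - t) ** 2 for c, t in zip(credences, truths)) / len(credences)

def expected_brier(report, credence):
    """Expected Brier penalty on one proposition, by the agent's own lights:
    she takes the proposition to be true with probability `credence`."""
    return credence * (report - 1) ** 2 + (1 - credence) * report ** 2

# Propriety: for an agent whose credence is 0.7, the honest report 0.7
# minimizes her expected penalty among all reports on a fine grid.
best_report = min((i / 100 for i in range(101)),
                  key=lambda r: expected_brier(r, 0.7))
```

The minimization at the end is what "proper" amounts to: an agent who aims to maximize expected accuracy, so measured, does best by reporting the credences she actually has, which is the bridge from that goal to the subjective Bayesian's dictates.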
So far, I have focused predominately on the benefits of my brand of pluralism from
the standpoint of individualistic epistemology, but it also brings a wealth of benefits to
social epistemology. In fact, large swaths of the social epistemology literature already
presuppose a kind of goal-oriented epistemology,34 and many prominent social epis-

31 I thank an anonymous referee for helping me clarify my position here.


32 Assuming Leitgeb and Pettigrew (2010a, b) are right.
33 If views in epistemology can be ‘consequentialized’ as some have suggested is possible in ethics (see
Portmore 2009), then the vast majority of other normative epistemological views would be special cases.
Thanks to Daniel Cohen for pointing me toward the literature on consequentializing deontological views.
34 Here I’m thinking of work in the division of cognitive labor literature (Kitcher 1990; Strevens 2003;
Weisberg and Muldoon 2009; Muldoon 2013) and much of the work on network epistemology (Mayo-
Wilson et al. 2011, 2013; Zollman 2013).


temologists (e.g., Goldman 2010; Coady 2012) insist we ought to be ‘ecumenical’
when making judgments about which things are of epistemic value to various groups.
This might leave the mistaken impression that social epistemology and traditional
epistemology are not really two parts of the same field of inquiry. It might seem like
there is just genuine, traditional epistemology, and then applied philosophy of science,
or some such. My account can explain why these seemings are mistaken. The kind
of ecumenical, goal-oriented notions of rationality that social epistemologists find
appealing are exactly the same notions of rationality that apply to the individual.
And the unification of the two subfields brings a number of benefits beyond mere
parsimony. Here are just a few examples. First, agents who inquire in groups can have
group directed epistemic goals, such as the goal of the group uncovering the truth of
a certain proposition, or perhaps having the most accurate possible set of collective
credences.35 On my account, there is a type of epistemic rationality that applies to the
cognitive behavior of these agents in light of their goals. Importantly, this will be true
even if the groups they are a part of never count as agents in their own right (cf. List
and Pettit 2011). Second, given that the view also has access to idealistic assessments,
we can devise rationality assessments even for agents who lack the kinds of epistemic
goals we think they ought to have, like, for example, the scientist whose epistemic
behavior is driven by greed as opposed to a search for knowledge or understanding (cf.
Kitcher 1990). Idealistic assessments are also helpful when we wish to assess group
agents, like juries, science labs, and corporations, even in cases where they either fail
to have the epistemic goals we feel they ought to, or fail to have any epistemically
directed attitudes at all.36
I hope that all of the benefits that I have sketched have convinced the reader that
this kind of pluralistic, goal-oriented account of epistemic rationality has a great deal
to offer. I’ll now turn to presenting and addressing some criticism facing the view.

4 Berker’s critique of epistemic consequentialism

Selim Berker has recently criticized one component of a pluralistic view like mine,
namely, views that make idealistic teleological assessments (Berker 2013a, b).37
According to such views, which Berker refers to as varieties of ‘epistemic consequen-
tialism’, there is some feature that beliefs can have that is of final epistemic value,
and what it is rational for an agent to believe depends on whether or not that thing of
final epistemic value is promoted. While Berker’s main target seems to be veritism,
including sophisticated versions such as Goldman’s process reliabilism, he takes his
arguments to apply to a much wider range of views. As Berker puts it, “The problem
with truth-conducivism is that it is a form of epistemic-value-conducism, and con-
ducivism of any sort is the wrong way to think about epistemic normativity” (2013b,

35 For some of the possible normative consequences of these kinds of group directed epistemic goals, see
Kopec (2012).
36 I sketch the benefits of my pluralistic account when applied to social epistemology in more detail in
Kopec (ms).
37 As Berker admits, the kind of criticism he presents really goes back at least to Firth (1981). But since
Berker’s version has been the topic of much recent debate, I’ll focus on his as well.


p. 369). Because of this wide net, many of the major figures in contemporary ana-
lytic epistemology appear on Berker’s list of epistemic consequentialists (2013a, pp.
350–357). I will not dispute his diagnosis here, since the idealistic branch of my view
is indeed a form of epistemic-value-conducivism, as he calls it. If his arguments are
good, this will indeed speak against my brand of pluralism.38
Berker argues that epistemic consequentialism and ethical consequentialism share
a fatal flaw: just as ethical consequentialism sometimes sanctions cross-person trade-
offs, so epistemic consequentialism sometimes sanctions cross-proposition tradeoffs.
Consequentialist views in ethics have been widely criticized for implying that an agent
ought to kill one person to save five. Even those ethical consequentialists willing to bite
the bullet here must admit that this consequence of such views is very counterintuitive.
Berker attempts to show that a very similar kind of tradeoff follows from epistemic
consequentialism and, furthermore, that the tradeoff is similarly problematic in the
epistemic realm.
Take the following example, which Berker attributes to Fumerton (2001). An atheist
scientist is pursuing a large grant from a religious funding organization, which, if
successful, will allow her to gain a very large number of true beliefs in her area of
study, none of which she would gain otherwise. But she also knows that her grant
will only be successful if she can convince them that she believes God exists, and
she is such a bad liar that she will not be able to convince them that she is a theist
unless she actually does believe God exists. Berker points out that even if all of this
scientist’s evidence points towards there being no God, and even if there really is no
God, the epistemic consequentialist must say the rational thing for the scientist to do
is to go ahead and believe that God exists. This is because while she will gain one false
belief in the process, she will also gain a wide range of true beliefs she would not gain
otherwise. So on balance, she best promotes the attainment of true beliefs in this case
by forming this one false belief that her evidence speaks strongly against. But this is
absurd. As Berker puts it:

The more general point is this: when determining the epistemic status of a belief
in a given proposition, it is epistemically irrelevant whether or not that belief
conduces (either directly or indirectly) toward the promotion of true belief and
the avoidance of false belief in other propositions beyond the one in question…
When it comes to the evaluation of individual beliefs, it is never epistemically
defensible to sacrifice the furtherance of our epistemic aims with regard to one
proposition in order to benefit our epistemic aims with regard to other proposi-
tions. (2013a, p. 365)

Since epistemic consequentialism entails such indefensible sacrifices, Berker concludes that all versions of the view must be rejected.
The first problem with Berker’s argument is that there are plausible varieties of
epistemic consequentialism that don’t entail that Berker’s scientist ought to make the

38 Others have criticized aspects of Berker’s arguments, e.g., Ahlstrom-Vij and Dunn (2014) and Goldman
(2015), but I won’t rehearse those critiques here. My defense of epistemic teleology will take a rather
different route.


counterintuitive tradeoff.39 Recall that Berker includes all varieties of epistemic-value-conducivism under his label of epistemic consequentialism. Here is one example,
which I will call ‘evidential teleology’. On this view, a doxastic attitude generates
final epistemic value to the extent that it accords with the possessor’s total body of
evidence. Furthermore, the evidential teleologist holds that a doxastic attitude is ratio-
nal to the extent that it promotes the attainment of this kind of epistemic value. As a
second example, we could form a very similar view, which we could call ‘coherence
teleology’, that focuses instead on an attitude’s overall coherence with the agent’s web
of other doxastic attitudes. While both of these views will count as varieties of epis-
temic consequentialism on Berker’s account, neither would yield the counterintuitive
consequences he focuses on.
Take the evidential teleologist, for example. We can assume that Berker’s scientist
starts out rational, since otherwise the intuitions get rather cloudy. On the evidential
teleologist account, the scientist’s being rational means that her current doxastic atti-
tudes, i.e., all of her beliefs, disbeliefs, and suspensions of judgment, accord well with
her total evidence. Now consider the scientist’s epistemic situation after she comes
to believe in God and completes all of her research under the successful grant. At
this point, she has a wide range of true beliefs, many of which, by assumption, she
lacked previously. And we can assume that these new true beliefs also accord with her
evidence. But notice that she also has one belief that accords poorly with her evidence,
namely her belief in God. So whereas she started out with attitudes that all accorded
with her evidence, she now has a set of attitudes that contains a member that fails
to accord with her evidence. The evidential teleologist will see this as a decrease in
epistemic value, and so she can explain why it would be irrational for the scientist to
start believing that God exists. A similar argument could be made in the case of the
coherence teleologist. This shows that Berker’s argument does not work against all
forms of epistemic consequentialism.
Berker anticipates this kind of objection, and he attempts to fend it off by arguing
that the examples could simply be recast to address these concerns (2013a, p. 379).
For example, here is a reworked version of the scientist case meant to dispute what
I am calling coherence teleology. Assume that if the scientist gets her grant, she will
go on to revise her web of beliefs in such a way that they will have a much greater
level of coherence. Of course, the one belief that God exists will not cohere well. But,
all the same, in this new case we are to assume that the total level of coherence of all
of her attitudes will increase. Thus, the coherence teleologist will have to say that the
scientist would be rational to form the belief that God exists, which is, like before,
absurd. The strategy here is much like earlier: build a case where the epistemic value
of one attitude is sacrificed for the greater epistemic good overall and then cite this
counterintuitive result to establish that such tradeoffs are illegitimate.
The problem with Berker’s response here is that, given the lack of detail on how
the modified cases are supposed to work, it is unclear whether the verdicts at issue

39 Berker realizes that there are many versions that will lack the consequence in question. In fact, he
raises many such examples and then cooks up new cases corresponding to each to show they also share in
the counterintuitive consequences under the revised cases. But he ignores one that I discuss here, and his
modified case fails for the other, as I show shortly.


really do clash with our intuition. For example, the coherence teleologist would insist
that the scientist must have started out highly irrational. Berker stipulated that the
scientist had a great deal of evidence against the existence of God, and a coherence
teleologist would likely cash out evidential relations in terms of coherence relations.
Additionally, attitudes about religion tend to have a great deal of centrality within
one’s web of doxastic attitudes. This suggests that revising her attitude toward God’s
existence will greatly harm the overall coherence of her attitudes, on a par, perhaps,
with revising some attitude about basic math. All we then know about the case is that
the scientist’s overall coherence improves after she completes her research. Since her
gain in coherence on this other subject matter must be enough to offset the drastic loss
from revising her attitude about God’s existence, this suggests that her views on the
scientific subject matter must have been a complete mess. So would it be rational for
her to revise her belief just to get the grant money? I honestly have no intuition one
way or the other. She must have started out this new case in such bad epistemic shape
that I can understand why we might think she ought to modify her religious belief. I do
not mean to argue that the intuition clearly speaks in favor of the coherence teleologist
here. Even if the intuition merely gets cloudy once the details are spelled out a bit
better, that speaks against Berker’s reply.
While I think this line of objection has much going for it, I prefer another strategy. I
am happy to simply bite the bullet and accept that it might be rational for the scientist
in Berker’s case to form a belief in God (if she even can), because I believe it is
easy for a pluralist like myself to explain away the intuition to the contrary. When
we assess the scientist according to a veritistic form of epistemic consequentialism,
we are, in my terminology, explicitly making an idealistic assessment. And on an
idealistic assessment, we must completely ignore the epistemic goals of the scientist
herself, since those are irrelevant to this particular kind of assessment. That said, the
intuition of Berker’s readers surely is not so strict. It is easy to slip into a liberalistic
assessment when given cases like these to consider. We imagine that the scientist, as
scientists are wont to do, cares about what attitudes her evidence supports. And given
that the central proposition at issue is about God’s existence, it is easy to focus on this
proposition and be swayed by the fact that her believing this proposition would be an
awful way of pursuing her personal goal of believing in accord with the evidence. We
then judge that she would be irrational to form this belief. And, in a sense, we are right.
But forming that belief is not irrational in the way that is relevant to the task at hand.
This is simply one of those cases where the idealistic assessment and the liberalistic
assessment offer conflicting judgments. Berker is only able to convince his reader that
the idealistic veritistic assessment is problematic by eliciting intuitions driven by a
completely different kind of assessment.
To sum up, if Berker were correct that there is something generally wrong with
epistemic-value-conducivism, this would have spoken against my pluralistic teleo-
logical view. In essence, his arguments dispute whether idealistic assessments are
genuine assessments of rationality. He makes his case by giving examples where
cross-proposition trade-offs seem illegitimate. But his argument is inconclusive at
best. First, his argument would only speak against idealistic assessments that focus on
truth promotion, leaving those that focus instead on having coherent or evidentially
supported attitudes unscathed. His attempted reply to the coherent attitudes version is


inconclusive since it is underspecified, and filling in the details clearly clouds the rel-
evant intuitions. Second, a pluralist like myself can simply bite the bullet and explain
away the intuitions Berker relies upon. A pluralist can accept that the behavior Berker
seeks to condemn is indeed irrational in one sense, while being fully rational in another.
The cases he presents are engineered precisely to elicit the intuitions from the conflict-
ing mode of assessment, which does make them convincing. But it also makes them
ultimately irrelevant.

5 Some lingering worries (and a sketch of some possible solutions)

While Berker’s worries are ultimately unfounded, there are some lingering worries
about the kind of pluralistic approach I advocate, which I will attempt to address in
this section. The first lingering worry could be stated roughly as follows: since my account makes beliefs subject to a kind of practical rationality, it becomes difficult to see how beliefs could play the special role they are typically thought to play in practical reasoning. In standard accounts of decision theory,
we think of beliefs as fixed inputs into the decision equation, much like we do with
preferences, and therefore it is difficult to see how decision theory could work if beliefs
are both inputs and outputs of the system. Here is an example to make the worry more
salient. Say I am thirsty and I need to decide whether I ought to head to the fridge in
the kitchen or instead to the fridge in the garage to retrieve my preferred beverage.
Which choice is rational, in this case, depends not only on my preference ranking over
the various beverages in the house, but also upon my beliefs about which beverages
are in which locations. If I prefer having a soda over all the other alternatives, this
does not by itself dictate where I ought to go. We also need to know where I believe
my household keeps the soda—is it in the kitchen fridge or the one in the garage? So
the worry here is that on my account these beliefs about where we keep the soda are
not held fixed in the right kind of way. They themselves are an output of the kind of
practical reasoning that decision theory attempts to capture.
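The tidy decision-theoretic picture that generates this worry can be made vivid with a toy calculation (the numbers, names, and utilities below are illustrative assumptions of mine, not drawn from the text):

```python
# Toy expected-utility calculation for the kitchen-vs-garage example.
# On the standard picture, beliefs (credences) and preferences (utilities)
# both enter as fixed inputs; the worry in the text is that, on a
# teleological account, the credences are themselves outputs of
# practical reasoning rather than fixed givens.

credence_soda_in_kitchen = 0.8  # hypothetical degree of belief
utility = {"soda": 10, "other_drink": 2}  # hypothetical preference ranking

def expected_utility(p_soda):
    """Expected utility of walking to a fridge that holds soda with
    probability p_soda (and only some other drink otherwise)."""
    return p_soda * utility["soda"] + (1 - p_soda) * utility["other_drink"]

eu_kitchen = expected_utility(credence_soda_in_kitchen)
eu_garage = expected_utility(1 - credence_soda_in_kitchen)

# Standard decision theory: choose the act with the higher expected utility.
best_choice = "kitchen" if eu_kitchen > eu_garage else "garage"
```

On these stipulated numbers the calculation recommends the kitchen; the point at issue is that the credence plugged into it is treated as a fixed given, which is precisely what the pluralist view denies.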
While I agree with the objector here that, on my account, decision theory will not
end up being as clean and tidy as decision theorists might hope, I actually think this is
the correct result. As Hausman (2011) convincingly argues, economists and rational
choice theorists were already wrong to treat preferences as fixed givens to be easily
plugged into our decision theoretic machinery. In fact, preferences are constructed
through a very messy process, a process that itself ought to be a subject of a mature
decision theory.40 I think the same is true of our beliefs. We should not think of beliefs
as fixed givens to be plugged into our decision machinery. Instead, they too are formed
in messy localized processes where various previously formed epistemic preferences
and beliefs guide our reasoning to eventually settle on further beliefs. When I face a

40 Hausman’s apt example involves Jack, a patient who needs to determine his preference for either a
treatment that will leave him deaf or a treatment that will leave him without the use of his legs (2011, pp.
120–123). It should be obvious that Jack, unless he is rather unusual, would not come to such a decision
with a ready-made preference one way or the other. Rather, he would have to go through a somewhat messy
process to form such a preference. The point here is that a mature decision theory ought to have something
to say about which ways of forming such a preference would be rational.


decision problem, like the kitchen versus garage problem above, I need to weigh my
goals and desires to construct both my preferences and my beliefs over various states
of the world. The clean picture of decision theory is good enough for a number of
problems. But like many aspects of the rational choice literature, it is largely a useful
fiction. Things get much more complicated as they get closer to the truth.
The second lingering worry that I will address is that the availability of liberalistic
assessments ultimately leads to an 'anything goes' form of rationality. In particular, agents who seem highly irrational can get themselves off the hook by adopting bizarre
goals. As an example, take our religious fundamentalist from much earlier. Her belief
that humans and dinosaurs contemporaneously roamed the earth seems very irrational.
But notice that, from a liberalistic assessment, we really cannot say from the outset
that this belief is irrational—that will depend on which epistemic goals she happens
to have. If one of her goals is to believe anything she has been told by her funda-
mentalist media sources, or perhaps to believe anything that accords with her deeply
held religious beliefs, then her belief may well count as rational on my account. Her
seemingly problematic belief might in fact be the most effective means to achieving
these epistemic goals. Thus, if we allow every manner of bizarre epistemic goals to
affect our assessments, it looks like we will be able to cook up a situation where any
belief, no matter how bizarre, might turn out rational.
My first response to this kind of worry would be to point out that while this may be
true of liberalistic assessments, we should not lose sight of the fact that this is not the
only type of assessment that the pluralist has at her disposal. While the individual with
the bizarre goals gets off the hook when we judge her based upon her own goals, she
does not get off the hook when we judge her according to the laudable epistemic goals
we feel she really ought to have. If, in forming her beliefs, she is ignoring the evidence,
forming beliefs that do not cohere with other beliefs she holds about the world around
her, or is not using reliable belief-forming processes, then these all might ground valid
judgments of epistemic irrationality. So, while she is off the hook in one sense, she
surely is not in another sense.41 And I think the possibility of such conflicts actually
makes salient a very important insight. If agents can get off the hook on one legitimate
measure of epistemic rationality simply by lacking the kinds of goals we would rather
members of our epistemic community hold, then this gives us a strong motivation
to do what we can to instill the more laudable goals in members of our community.
This in turn has important implications for how we ought to educate members of our
community.
My second response would be to point out that we may still have some resources
to apply rational criticism to the bizarre epistemic goals themselves. It is commonly
thought that desires, goals, or preferences are not the kind of thing that can be rationally

41 The pluralist could offer a similar response to the charges Kelly (2003, 2007) levels against what
he calls ‘epistemic instrumentalism’. Kelly argues that epistemic instrumentalism, which amounts to the
liberalistic mode of assessment in my terminology, can’t be correct, because epistemic norms are intuitively
categorical in nature. Since they are categorical in nature, they cannot be determined by the goals held (or
lacked) by particular agents. A pluralist who accepts the validity of idealistic assessments can account for
these intuitions. In other words, the agents in Kelly’s cases really are irrational, in some sense, i.e., when
assessed according to whether their attitudes accord with the evidence, are truth tracking, etc. But the agents
aren’t irrational when assessed from a liberalistic perspective.


scrutinized, a view often attributed to Hume. But it is important to note that not
everyone agrees. For example, Hausman (2011, pp. 124–132) argues that preferences
ought to be open to rational criticism. There is a sense in which a certain preference
might not cohere well with the rest of an agent’s preferences, as in a case where
satisfying a particular preference would cause the agent to frustrate a large number
of her other preferences. If this is true for standard practical preferences, surely the
same could be true for epistemic preferences. For example, take an agent who has anti-induction preferences: the more often she sees one state of affairs X followed by another state of affairs Y, the less confident her preferences dictate she become that Y will follow X the next time.42 Surely this bizarre preference will wreak havoc
with her attempts to pursue her other preferences, epistemic or otherwise. Thus, it
seems we can claim that there is something wrong with having such a preference, since
the very having of the preference decreases the agent’s overall preference satisfaction.
Or, in other words, the most effective way to pursue her goals may involve giving that
problematic goal up. A goal-oriented approach to epistemic rationality should be able
to slight such an agent for not doing so.
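The anti-inductive preference just described can be contrasted with an ordinary inductive one in a small toy model (the specific updating rules below are stipulations of mine for illustration, not anything defended in the text):

```python
# Toy contrast between an inductive and an anti-inductive updater.
# Both take a record of trials in which state of affairs X occurred and
# report a confidence that Y will follow X next time.

def inductive_confidence(y_followed, trials):
    # Laplace's rule of succession: confidence that Y will follow X
    # grows with the number of observed X-then-Y sequences.
    return (y_followed + 1) / (trials + 2)

def anti_inductive_confidence(y_followed, trials):
    # The bizarre preference from the text: the more often Y has
    # followed X, the LESS confident the agent becomes that it will
    # do so again.
    return 1 - inductive_confidence(y_followed, trials)

# After watching Y follow X in 9 out of 9 trials:
inductive = inductive_confidence(9, 9)            # high confidence
anti_inductive = anti_inductive_confidence(9, 9)  # low confidence
```

An agent who updates this way will systematically bet against well-confirmed regularities, which is why such a preference frustrates the pursuit of her other goals, epistemic or otherwise.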
I conclude this essay by admitting that there is much work left to do. We do not
have a well-articulated account of preference coherence, and one would need such an
account before one could tell which epistemic preferences ought to be ruled out. It is
also worth admitting that the liberalness of my view will decrease with every epistemic
goal that is ruled out. I still believe that there will be a great deal of flexibility in how
agents can rationally weigh their various epistemic goals. But it is possible that a
fully articulated account of epistemic preference coherence will narrow the set of
respectable preferences down to one, or perhaps just a few, of the preferences hailed
by the idealists.

Acknowledgements I would like to thank Sandy Goldberg, Matthew Lockard, Sophie Horowitz, Brian
Talbot, and the audiences at Charles Sturt University-Wagga Wagga, Monash University, Northwestern
University, University of Melbourne, University of North Carolina-Charlotte, University of Sydney, the 2015
meeting of the Central Division of the American Philosophical Association, and the Morris Colloquium on
Cognitive Values at University of Colorado-Boulder for their helpful comments and discussion. I would
especially like to thank Jeff Behrends and James Willoughby for their timely help with various aspects of the
paper, and two anonymous referees for their careful and critical remarks that helped me greatly improve
the paper. My apologies to anyone I’ve accidentally omitted.

References
Ahlstrom-Vij, K. (2013). In defense of veritistic value monism. Pacific Philosophical Quarterly, 94, 19–40.
Ahlstrom-Vij, K., & Dunn, J. (2014). A defence of epistemic consequentialism. The Philosophical Quar-
terly, 64, 541–51.
Baker, D. (2017). The varieties of normativity (chap. 36). In T. McPherson & D. Plunkett (Eds.), The
Routledge handbook of metaethics. New York: Routledge.
Baker, D. (ms). Skepticism about ought simpliciter. Presented at the 2016 Metaethics Workshop.
Behrends, J. (2015). Problems and solutions for a hybrid approach to grounding practical normativity.
Canadian Journal of Philosophy, 45, 159–178.

42 In more technical terms, her preferences dictate that she’ll maximize her expected subjective epistemic
utility every time she commits the gambler’s fallacy for two temporally correlated events.


Berker, S. (2013a). Epistemic teleology and the separateness of propositions. The Philosophical Review,
122, 337–393.
Berker, S. (2013b). The rejection of epistemic consequentialism. Philosophical Issues, 23, 363–87.
Bishop, M. (2009). Reflections on cognitive and epistemic diversity: Can a Stich in time save Quine? In D.
Murphy & M. Bishop (Eds.), Stich and his critics (pp. 113–136). New York: Wiley.
Bishop, M., & Trout, J. D. (2005). Epistemology and the psychology of human judgment. Oxford: Oxford
University Press.
Brössel, P., Eder, A.-M., & Huber, F. (2013). Evidential support and instrumental rationality. Philosophy
and Phenomenological Research, 87, 279–300.
Chang, R. (2013). Grounding practical normativity: Going hybrid. Philosophical Studies, 164, 163–187.
Coady, D. (2012). What to believe now: Applying epistemology to contemporary issues. New York: Wiley-
Blackwell.
Conee, E., & Feldman, R. (2004). Evidentialism: Essays in epistemology. Oxford: Oxford University Press.
Copp, D. (1997). The ring of Gyges: Overridingness and the unity of reason. Social Philosophy and Policy,
14, 86–101.
Cowie, C. (2014). In defence of instrumentalism about epistemic normativity. Synthese, 191, 4003–17.
Crisp, R. (1996). The dualism of practical reason. Proceedings of the Aristotelian Society, 96, 53–73.
Descartes, R. (1641). Meditations on first philosophy. Indianapolis: Hackett Publishing (Printed 1993).
Dorsey, D. (2013). Two dualisms of practical reason. In R. Shafer-Landau (Ed.), Oxford Studies in metaethics
(Vol. 8, pp. 114–139). Oxford: Oxford University Press.
Dunn, J. (nd). Epistemic consequentialism. Internet Encyclopedia of Philosophy. http://www.iep.utm.edu/
epis-con/.
Earman, J. (1992). Bayes or bust? A critical examination of Bayesian confirmation theory. Cambridge:
MIT Press.
Firth, R. (1981). Epistemic merit, intrinsic and instrumental. Proceedings and Addresses of the American
Philosophical Association, 55, 5–23.
Foley, R. (1987). The theory of epistemic rationality. Cambridge: Harvard University Press.
Foley, R. (1993). Working without a net: A study of egocentric epistemology. Oxford: Oxford University
Press.
Forster, M., & Sober, E. (1994). How to tell when simpler, more unified, or less ad hoc theories will provide
more accurate predictions. British Journal for the Philosophy of Science, 45, 1–35.
Fumerton, R. (2001). Epistemic justification and normativity. In M. Steup (Ed.), Knowledge, truth, and
duty: Essays on epistemic justification, responsibility, and virtue (pp. 49–60). Oxford: Blackwell.
Giere, R. (1988). Explaining science: A cognitive approach. Chicago: Chicago University Press.
Giere, R. (1989). Scientific rationality as instrumental rationality. Studies in History and Philosophy of
Science, 20, 377–384.
Gilovich, T. (1993). How we know what isn’t so: The fallibility of human reason in everyday life. New York:
First Free Press.
Goldman, A. (1986). Epistemology and cognition. Cambridge: Harvard University Press.
Goldman, A. (1999). Knowledge in a social world. Oxford: Oxford University Press.
Goldman, A. (2010). Systems-oriented social epistemology. In T. Gendler & J. Hawthorne (Eds.), Oxford
studies in epistemology (Vol. 3, pp. 189–214). Oxford: Oxford University Press.
Goldman, A. (2015). Reliabilism, veritism, and epistemic consequentialism. Episteme, 12, 131–143.
Greaves, H. (2013). Epistemic decision theory. Mind, 122, 915–952.
Greaves, H., & Wallace, D. (2006). Justifying conditionalization: Conditionalization maximizes expected
epistemic utility. Mind, 115, 607–32.
Harman, G. (1986). Change in view. Cambridge: MIT Press.
Hausman, D. (2011). Preference, value, choice, and welfare. Cambridge: Cambridge University Press.
Hieronymi, P. (2006). Controlling attitudes. Pacific Philosophical Quarterly, 87, 45–74.
Howson, C., & Urbach, P. (1993). Scientific reasoning: The Bayesian approach. La Salle: Open Court.
Hubin, D. (1999). What’s special about Humeanism. Noûs, 33, 30–45.
Hubin, D. (2001). The groundless normativity of instrumental reason. The Journal of Philosophy, 98,
445–468.
Hume, D. (1739). Treatise of human nature. Oxford: Oxford University Press (printed 1978).
Jackson, F., & Smith, M. (2016). The implementation problem for deontology. In E. Lord & B. Maguire
(Eds.), Weighing reasons (pp. 279–291). Oxford: Oxford University Press.
James, W. (1896). The will to believe. Mineola: Dover Publishers (Printed 1956).


Joyce, J. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65, 575–603.
Joyce, J. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In F.
Huber & C. Schmidt-Petri (Eds.), Degrees of belief (pp. 263–297). Berlin: Springer.
Kelly, T. (2003). Epistemic rationality as instrumental rationality: A critique. Philosophy and Phenomeno-
logical Research, 66, 612–640.
Kelly, T. (2007). Evidence and normativity: Reply to Leite. Philosophy and Phenomenological Research,
75, 465–474.
Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87, 5–22.
Kitcher, P. (1992). The naturalists return. The Philosophical Review, 101, 53–114.
Kolodny, N., & Brunero, J. (nd). Instrumental rationality. Stanford Encyclopedia of Philosophy. https://
plato.stanford.edu/entries/rationality-instrumental/.
Kopec, M. (2012). We ought to agree: A consequence of repairing Goldman’s group scoring rule. Episteme,
9, 101–114.
Kopec, M. (ms). Unifying group rationality.
Kornblith, H. (1993). Epistemic normativity. Synthese, 94, 357–376.
Kornblith, H. (2002). Knowledge and its place in nature. Oxford: Oxford University Press.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own
incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77,
1121–1134.
Kuhn, T. (1962/1996). The structure of scientific revolutions (3rd edn). Chicago: University of Chicago
Press.
Kuhn, T. (1970). Reflections on my critics. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth
of knowledge. Cambridge: Cambridge University Press.
Laudan, L. (1990a). Normative naturalism. Philosophy of Science, 57, 44–59.
Laudan, L. (1990b). Aimless epistemology? Studies in History and Philosophy of Science, 21, 315–322.
Leitgeb, H., & Pettigrew, R. (2010a). An objective justification of Bayesianism I: Measuring inaccuracy.
Philosophy of Science, 77, 201–35.
Leitgeb, H., & Pettigrew, R. (2010b). An objective justification of Bayesianism II: The consequences of
minimizing inaccuracy. Philosophy of Science, 77, 236–72.
List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford:
Oxford University Press.
Lockard, M. (2013). Epistemic instrumentalism. Synthese, 190, 1701–18.
Mayo-Wilson, C., Zollman, K., & Danks, D. (2011). The independence thesis: When individual and social
epistemology diverge. Philosophy of Science, 78, 653–77.
Mayo-Wilson, C., Zollman, K., & Danks, D. (2013). Wisdom of crowds versus groupthink: Learning in
groups and in isolation. International Journal of Game Theory, 42, 695–723.
Maguire, B. (2016). The value-based theory of reasons. Ergo, 3, 233–262.
Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass, 8, 117–25.
Nussbaum, M., & Sen, A. (1993). The quality of life. Oxford: Oxford University Press.
Pascal, B. (1670). Pensées. London: Penguin Classics (Printed 1995).
Pettigrew, R. (2013). Epistemic utility and norms for credences. Philosophy Compass, 8, 897–908.
Portmore, D. (2009). Consequentializing. Philosophy Compass, 4, 329–347.
Quine, W. V. O., & Ullian, J. S. (1970). The web of belief. New York: Random House.
Ross, W. D. (1930). The right and the good. Oxford: Oxford University Press.
Rinard, S. (2017). No exception for belief. Philosophy and Phenomenological Research, 94, 121–143.
Scheier, M., & Carver, C. (1992). Effects of optimism on psychological and physical well-being: Theoretical
overview and empirical update. Cognitive Therapy and Research, 16, 201–228.
Schroeder, M. (2010). Slaves of the passions (1st ed.). Oxford: Oxford University Press.
Steglich-Petersen, A. (2009). Weighing the aim of belief. Philosophical Studies, 145, 395–405.
Stich, S. (1990). The fragmentation of reason: Preface to a pragmatic theory of cognitive evaluation.
Cambridge: Bradford Books.
Stich, S. (1993). Naturalizing epistemology: Quine, Simon and the prospects for Pragmatism. Royal Institute
of Philosophy Supplements, 34, 1–17.
Stich, S. (2009). Replies. In D. Murphy & M. Bishop (Eds.), Stich and his critics (pp. 190–252). New York:
Wiley.
Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100, 55–79.


Taber, C., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal
of Political Science, 50, 755–769.
Talbot, B. (2014). Truth promoting non-evidential reasons for belief. Philosophical Studies, 168, 599–618.
Tiffany, E. (2007). Deflationary normative pluralism. Canadian Journal of Philosophy, 37, 231–262.
Titelbaum, M. (2015). Rationality’s fixed point (or: In defense of right reason). Oxford Studies in Episte-
mology, 5, 253–94.
Titelbaum, M., & Kopec, M. (ms). When rational reasoners reason differently.
Velleman, D. (2000). On the aim of belief. In his The possibility of practical reason (pp. 244–281).
Oxford: Oxford University Press.
Wedgwood, R. (2002). The aim of belief. Philosophical Perspectives, 16, 267–97.
Weisberg, M., & Muldoon, R. (2009). Epistemic landscapes and the division of cognitive labor. Philosophy
of Science, 76, 225–52.
Williams, B. (1970). Deciding to believe. In H. Kiefer & M. Munitz (Eds.), Language, belief, and meta-
physics (pp. 95–111). Albany: SUNY Press.
Zollman, K. (2013). Network epistemology: Communication in epistemic communities. Philosophy Com-
pass, 8, 15–27.
