Accuracy Across Doxastic Attitudes: Recent Work on the Accuracy of Belief

Forthcoming, American Philosophical Quarterly

Robert Weston Siscoe
Florida State University
wsiscoe@fsu.edu

Abstract: James Joyce’s article “A Nonpragmatic Vindication of Probabilism” introduced an approach to arguing for credal norms by appealing to the epistemic value of accuracy. The central thought was that credences ought to accurately represent the world, a guiding thought that has gone on to generate an entire research paradigm on the rationality of credences. Recently, a number of epistemologists have begun to apply this same thought to full beliefs, attempting to explain and argue for norms of belief in terms of epistemic value. This paper examines these recent attempts, showing how they interact with work on the accuracy of credences. It then examines how differing judgments about epistemic value give rise to distinct rational requirements for belief, concluding by considering some of the fundamental questions and issues yet to be fully explored.[1]

Keywords: Accuracy, Veritism, Credence, Belief, Lockean Thesis

[1] For helpful comments on drafts of this paper as well as resources on the latest work on credence and belief, I am grateful to Clayton Littlejohn, Elizabeth Jackson, and Richard Pettigrew.

Introduction

Doxastic attitudes have a mind-to-world direction of fit – they attempt to represent the world as it is, and they should be revised insofar as they do not accomplish this goal. Not all doxastic attitudes, however, achieve this objective in the same way. Beliefs appropriately mirror the world by being true. If it is raining, then believing that it is raining reflects this state of affairs. If it is not raining, however, that belief misrepresents the actual state of the world. Credences, on the other hand, can reflect how things are by being accurate. Depending on how well they accomplish this task, they are said to be more or less accurate. The thought that credences should be accurate has recently led to a number of exciting results. Leaning on veritism, the thought that accuracy is the only fundamental epistemic value generating rational norms,[2] formal epistemologists have constructed novel arguments for probabilism, conditionalization, the principal principle, and the principle of indifference.

The fecundity of the accuracy research program raises the question of whether a veritist approach could also be applied to belief. Up to this point, those working within the accuracy paradigm have been especially focused on the rationality of credences, but this does not rule out the possibility that an analogous methodology might extend to other doxastic attitudes as well. Richard Pettigrew ends his 2016 book, Accuracy and the Laws of Credence, by entertaining just this sort of possibility: “It is natural to think that, for any doxastic state, veritism holds and the sole fundamental source of epistemic value is their accuracy. If that’s the case, the strategy of this book... should be applicable to other doxastic states” (p. 226).

In keeping with Pettigrew’s thought that veritism holds true for other doxastic states, a number of authors have recently begun to extend this paradigm to belief. In this paper, I will explore these efforts, outlining the advances that have been made thus far as well as indicating possible directions for future research. In section 1, I further detail the accuracy approach to epistemology, outlining briefly how this project has proceeded in the case of credences.
In section 2, I consider some early efforts to characterize the norms of belief in terms of accuracy, projects that took credences to be more normatively fundamental than beliefs. I then turn in section 3 to some recent frameworks that attempt to apply the veritist paradigm by starting with beliefs, taking epistemic value to be properly characterized in terms of true belief instead of just accurate credences. Section 4 then examines the results of this framework for a number of epistemic issues, while section 5 considers several modifications of this veritist project. I then conclude by considering some areas for further research, pointing out a few avenues for additional applications along with some possible challenges to the veritist paradigm of rational belief.

[2] See Pettigrew (2016a), p. 6.

A quick note before we get started. One potential worry when extending the veritist program to belief is whether or not it makes sense to characterize beliefs in terms of accuracy. As we have introduced things here, beliefs are evaluated as true or false while credences are evaluated as accurate or inaccurate, so it might be strange to think that an emphasis on accuracy could have much to tell us about belief. Fortunately for our purposes, nothing much hangs on whether beliefs can be properly described as accurate or inaccurate. What matters is whether there is a plausible way to characterize the epistemic value of true belief, and whether maximizing that value leads to a feasible list of rational norms. It is this project that I have in mind when asking whether the accuracy paradigm can be extended to belief, a project that can move forward regardless of whether it is appropriate to describe beliefs as accurate or not.

1 Accuracy Epistemology

The accuracy-based approach to epistemology got its start as a way of understanding the rationality of credences. Up until James Joyce’s 1998 article “A Nonpragmatic Vindication of Probabilism,” defenses of credal norms were often mired in concerns about the distinctively pragmatic flavor of Dutch Book-style arguments.[3] In contrast to the pragmatic character of those arguments, Joyce’s account attempted to defend probabilism solely in terms of the epistemic value of accuracy, making his paper the first to use Veritism as a core commitment of a defense of probabilistic credal norms:[4]

Veritism – Accuracy is the sole value used to generate rational norms

Since Joyce’s article, Veritism has spawned an entire approach to arguing for rational requirements on credences. In Accuracy and the Laws of Credence, Richard Pettigrew begins by assuming that accuracy is the “only fundamental epistemic virtue: all other epistemic virtues derive their goodness from their ability to promote accuracy” (p. 6). Veritism is now so ubiquitous within the accuracy approach to credences that it often goes unacknowledged. As Dorst (2019) puts it, Veritism “is rarely stated, but it’s implicit in the way epistemic utility theorists set up their frameworks” (p. 180).

The primary commitment of the accuracy program is thus Veritism, the thought that all rational requirements are ultimately based on considerations of accuracy. By itself, Veritism does not tell us anything about which credences are most accurate. It simply instructs us to derive rational requirements from accuracy considerations.
Veritism arguments thus must be supplemented with a more filled-out view of how to score doxastic attitudes, a Definition of Epistemic Value:

Definition of Epistemic Value – assigns a score to doxastic states based on how accurately they reflect the world

[3] For a summary of the pragmatic criticism of Dutch Book Arguments, see Christensen (2004), pp. 109-115.

[4] Joyce’s work in this paper depends on theorems first advanced by de Finetti (1974) and Savage (1971), though it was Joyce who first employs these theorems in service of a completely epistemic defense of probabilism.

Let’s look at how this might work in the case of credences. If we assign the truth value of the proposition p at the actual world to either one or zero, the accuracy score of an agent’s credence that p can be determined by measuring the distance between the value of p and the agent’s credence. If we suppose that p is false in the actual world, we get the accuracy of cr(p) by measuring the distance from cr(p) to zero:

[Figure omitted: a number line from 0 to 1, illustrating the accuracy of cr(p) as the distance from cr(p) to the truth value of p – here 0, since p is false.]

This score, of course, only gives us the accuracy of cr(p) at the actual world. Given that, from the agent’s point of view, there are more possible worlds under consideration than just the actual world, we might also be interested in how cr(p) would score across a number of worlds. We can determine this by summing over the accuracy of cr(p) relative to all the worlds under consideration, a sum weighted by the credence that the agent assigns to each of the worlds:[5]

Expected Accuracy of cr(p) = Σ_{w ∈ W} cr(w) · Accuracy of cr(p) at w

[5] Of course, scoring the accuracy of a credence in a particular proposition should not be confused with scoring the accuracy of an agent’s entire credence function, but this bare outline will be sufficient for the purposes of our discussion. For how scoring the accuracy of an entire credence function differs from scoring the accuracy of a credence in a single proposition, see Leitgeb and Pettigrew’s (2010) distinction between local and global inaccuracy measures (pp. 205-207).

By using this Definition of Epistemic Value, we can then compare the accuracy scores of various credences regarding p, supplementing Veritism with a strategy for scoring credences.[6]

[6] This is, of course, just a brief overview of the considerations that go into scoring the accuracy of credences. For discussions of other types of constraints that scoring rules for accuracy must satisfy, see Greaves and Wallace (2006), section 3.1, Hajek (2008), pp. 814-815, Joyce (1998), section 4, Joyce (2009), p. 279, Leitgeb and Pettigrew (2010), Pettigrew (2016a), chapter 4, Predd et al. (2009), and Selten (1998).
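To make the scoring idea concrete, here is a minimal Python sketch of the definitions above. It is only an illustration under my own assumptions: the `worlds` structure is hypothetical, and I treat accuracy as one minus the distance to the truth value so that higher numbers mean more accurate. Actual accuracy arguments typically rely on strictly proper scoring rules such as the Brier score rather than this simple linear measure (see footnote [6]).

```python
def accuracy_at_world(credence: float, p_true: bool) -> float:
    """Accuracy of cr(p) at a world: one minus the distance from cr(p) to p's truth value there."""
    truth_value = 1.0 if p_true else 0.0
    return 1.0 - abs(credence - truth_value)

def expected_accuracy(credence: float, worlds: dict) -> float:
    """Expected accuracy of cr(p): sum over worlds of cr(w) * accuracy of cr(p) at w.

    `worlds` maps each world to a pair (credence assigned to that world, whether p is true there).
    """
    return sum(cr_w * accuracy_at_world(credence, p_true)
               for cr_w, p_true in worlds.values())

# Example: two worlds under consideration, with 0.7 credence in the p-world.
worlds = {"w1": (0.7, True), "w2": (0.3, False)}
print(expected_accuracy(0.7, worlds))  # 0.7*0.7 + 0.3*0.3 = 0.58
```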
The final step of the accuracy program is generating rational norms by comparing the accuracy of doxastic attitudes. Using Veritism and a Definition of Epistemic Value, formal epistemologists have argued for a range of credal norms, including versions of probabilism (Joyce, 1998, and Pettigrew, 2016a), conditionalization (Briggs and Pettigrew, 2020; Easwaran, 2013; Greaves and Wallace, 2006; and Schoenfield, 2017), the principal principle (Pettigrew, 2012, and Pettigrew, 2013), and the principle of indifference (Pettigrew, 2014, and Konek, 2016). Other authors have provided accuracy arguments in favor of particular responses to peer disagreement (Lam, 2013; Levinstein, 2015; Moss, 2011; Steel, 2018), higher-order evidence (Schoenfield, 2018), and the uniqueness/permissivism debate (Horowitz, 2014, and Schoenfield, 2019), putting the accuracy program at the forefront of discussions of rational requirements on credences.

2 Reducing Beliefs to Credences

Since its inception, proponents of accuracy-centered epistemology have focused mostly on the rationality of credences. This preoccupation is in part explained by the fact that many took credences to be more normatively fundamental than beliefs, making all of the rational requirements on belief explainable in terms of the accuracy of credences.[7] Recent attention, however, has turned to offering veritist accounts of rational norms for belief. In this section, we will begin by considering a view of rational belief that attempts to do both, adopting the early assumption that credences are more normatively fundamental than beliefs while also giving an accuracy-based view of rational norms on belief.[8]

[7] There are multiple ways in which credences might be more fundamental than beliefs. It might be that, metaphysically speaking, beliefs just are certain sorts of credences. For discussions of views along these lines, see Christensen (2004), Clarke (2013), Greco (2015), Leitgeb (2013), Levi (1991), Pettigrew (2016a), Sturgeon (2020), van Fraassen (1995), Weatherson (2016), and Wedgwood (2012). It might also be that, regardless of whether beliefs are identical to a certain type of credence, the norms for belief are reducible to norms for credences. It is this latter thesis that we will be focused on here, though taking credences to be more descriptively fundamental than beliefs may also lend itself to taking credences to be more normatively fundamental. For possible metaphysical and normative connections between beliefs and credences, see Genin (2019), Jackson (2020b), and Jackson and Moon (2020).

[8] A number of those who think that credences should take center stage in epistemology have also entertained the position that, despite all of our talk of what we do and do not believe, there is no such thing as belief. All of the work supposedly done by belief in describing our mental lives can instead be done by credences; see Jeffrey (1970), Kaplan (1996), Maher (1993), Pettigrew (2016a), and Stich (1996). If this is the case, however, then the question of the accuracy of beliefs is a non-starter, so I will not be considering this view here.

The most prominent view on which credences are more normatively fundamental than beliefs is the Lockean thesis, the thought that there is a threshold for rational credence that dictates what is rational to believe:

Lockean Thesis – It is rational for S to believe that p iff it is rational for S to have a credence in p that is greater than or equal to threshold t [9]

[9] For defenses of the Lockean thesis, see Christensen (2004), Foley (2009), Shear and Fitelson (2019), and Sturgeon (2008), and for critiques, see Buchak (2014), Friedman (2013), Jackson (2020), and Staffel (2015).

If we fill in t with a high credence, the Lockean Thesis is intuitive on a number of levels. We are often very confident of the things that we believe, and it seems strange to think that someone should be highly confident of a proposition they should not also believe.
Nevertheless, there are outstanding issues for the Lockean Thesis that call into question this simple picture of the relationship between rational credence and rational belief. The challenge for the Lockean Thesis is two-fold. From one side, the lottery paradox seems to indicate that any particular credal threshold does not suffice for rational belief. Consider a fair lottery with one thousand tickets – the winning ticket has been picked and will be announced shortly. The probability that each ticket loses is .999, and thus it is rational to assign a credence of .999 to each ticket losing. If the Lockean Thesis is true of a credence short of one, then it should be rational to believe that each ticket will lose as well. Thus, if we let Wn be the proposition that ticket n wins, then B(¬W1) is rational, B(¬W2) is rational, and so on for each n. It is also rational to believe that a ticket has won – the winning ticket has already been chosen and is about to be revealed. So we can also say that B(W1 ∨ W2 ∨ W3 ∨ ... ∨ W1000) is rational. But this belief set is ultimately logically inconsistent. Because B(¬Wn) is rational for all n, deductive closure requires a commitment to the conjunction that all the tickets will lose, conflicting with the belief that one of the tickets will win. The lottery paradox thus appears to show that even very high rational credences are not sufficient to generate rational beliefs.

While the lottery paradox challenges whether a high rational credence is sufficient for rational belief, the preface paradox questions whether a high rational credence is even necessary for rational belief. Suppose an author acknowledges in the preface of their book that, although they believe and have thoroughly researched every claim made therein, there are inevitably some errors in the book’s pages that they have failed to eliminate.[10] With this preface, the author seems to indicate two things. First, they imply that they rationally believe every claim made in the book, that B(C1) is rational, that B(C2) is rational, and so on. By deductive closure, this produces the result that it is rational to believe the conjunction of all the claims in the book, B(C1 & C2 & C3 & ... & Cn). But they also indicate that they have a very high credence that they made an error at some juncture, as it is very unlikely that they have eliminated all mistakes from the manuscript. If that is correct, though, then it is both rational for them to believe the conjunction of all of the claims in the book while at the same time assigning that conjunction a very low credence, making the case that a high rational credence is not even necessary for rational belief.

[10] For the original preface paradox, see Makinson (1965).

One way to avoid these difficulties is to make the threshold in the Lockean Thesis a credence of one. This resolves the lottery paradox by forbidding believing that any of the individual tickets will lose. Because only a .999 credence that a given ticket will lose is rational, B(¬Wn) for any n is irrational, preventing a conflict between believing that all of the tickets will lose and that one of them will win. The credence one view can also block the preface paradox. Even though the author has done a great deal of research, there are probably very few claims in their book that they should be completely certain about. This then forbids the author from believing the conjunction of all the claims in their book.[11]
[11] For advocates of this solution, see Clarke (2013), Gardenfors (1986), and Greco (2015).

Of course, there are independent problems for the credence one account. Just like the author, there are very few things that we are absolutely certain about, but it seems that we are rational in believing far more propositions than merely the ones of which we are completely certain. Similarly, this view cannot explain why we should be more confident in some of our beliefs than others, simply ruling out the rationality of all beliefs that fall below a credence of one.

3 Starting With Belief

Recently, a number of epistemologists have attempted to take the epistemic value of belief on its own terms. Instead of presupposing that credences are more normatively fundamental, they have started their theorizing by only considering beliefs. William James famously gives two principles for how we should manage our beliefs – “Believe Truth! Shun Error!”[12] – advice which in recent years has given way to the slogan that “belief aims at truth.”[13] James’s guidance and the aim of belief slogan, though, are both still very general. How do they translate into advice about how to honor Veritism with our beliefs? Should we believe every proposition so that we believe as many truths as possible, or should we always suspend judgment so that we never fall into error? Before we are in a position to heed James’s advice, we will need to provide a more precise Definition of Epistemic Value in order to balance these considerations.

[12] See James (1897), p. 18.

[13] This slogan was coined by Bernard Williams (1973) and has been defended by a number of theorists, including Boghossian (2008), Gibbard (2005), Millar (2009), Shah and Velleman (2005), Shah (2003), Velleman (2000), Wedgwood (2002), and Whiting (2010, 2013a, and 2013b).

Those who extend the Veritism picture to full beliefs start out by assigning values to each of the tripartite doxastic attitudes: belief, disbelief, and suspension of judgment.[14] We’ll begin here with the most popular account, given by Dorst (2019), Easwaran (2016), Pettigrew (2016b) and (2017a), and, to a lesser extent, Fitelson and Easwaran (2015). When a belief or disbelief gets things right – someone believes that p when p is true or disbelieves q when q is false – we’ll say that it receives score R. On the other hand, if a belief or disbelief gets things wrong, then it will receive the negative score W. Suspension of judgment will always receive a score of 0 because it does not attempt to accurately reflect the world.

[14] Easwaran (2016) notes that many of the same results surveyed here can also be acquired without treating the suspension of judgment as a distinct attitude (pp. 27-28).

What do we know about the values of R and W? Does the positive value of R equal the negative value of W? (I.e., does |R| = |W|?) Probably not. If that were the case, then believing both p and ¬p would be rationally on par with suspending judgment, but believing contradictory propositions is clearly worse than suspending judgment. Along the same lines, it also cannot be that the positive value of R exceeds the negative value of W – that is, it cannot be that |R| > |W|. If this were the case, then believing both p and ¬p would be a better rational strategy than suspending judgment.[15] So when assigning values to R and W, the negative value of W should be greater than the positive value of R: |W| > |R|. If we are only considering our doxastic attitudes towards a single proposition, the epistemic value of those attitudes would be either R, W, or 0.
But we typically hold beliefs concerning a wide range of propositions simultaneously, requiring that we consider not only the value of one particular belief, but a whole set of beliefs. Suppose that we have doxastic attitudes concerning both p and q. In order to find the total epistemic value of our set of beliefs, we would have to add up the values of our attitudes towards both p and q. If we believe p but disbelieve q and both propositions are true, then our belief set has a value of R + W. If we are correct on both counts, then our belief set has a value of R + R. Adding beliefs in more propositions keeps this general strategy intact. To find the total epistemic value of a belief set, we can simply add up the values of each individual doxastic attitude. If we represent this a bit more formally, we get roughly the same definition of single-world accuracy as Pettigrew (2017a), p. 461:

Accuracy of belief set B at w = Σ_{p ∈ B} Value(w(p), B(p))

[15] For this point, see Dorst (2019), p. 185.

If we are trying to maximize the accuracy score of our beliefs, a seemingly easy solution would just be to believe all true propositions and disbelieve all false propositions. The problem, of course, is that we don’t always know which world is the actual world. From our point of view, a number of worlds are possible, with some more likely than others. So when deciding what to believe, we must choose the beliefs that will fare best given the range of possible worlds. To this end, we will introduce a probability function that assigns probabilities to worlds. In order to find the expected value of our belief set B in terms of the probability function P, we sum the values of the belief set at each world weighted by the likelihoods assigned to those worlds:

Expected Accuracy of B on P = Σ_{w ∈ W} P(w) · Value of B at w

Amongst the authors who provide a similar expected accuracy measure for belief, there are a few different ways that this probability function is understood. Fitelson and Easwaran (2015) and Pettigrew (2017a) are non-committal as to the ontological status of the probability function, simply using it as a tool for demonstrating when a belief set maximizes expected accuracy. Dorst (2019), on the other hand, plugs in an agent’s pre-existing credences, while Easwaran (2016) argues that the probability function is just a numerical approach to representing a set of beliefs, two views that we’ll discuss further in the next section.
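As a concrete illustration of the two formulas above, here is a minimal Python sketch. The particular scores R = 1 and W = −3, the toy worlds, and the dictionary representation of a belief set are hypothetical choices of mine for illustration; the function `attitude_value` simply plays the role of Value(w(p), B(p)).

```python
R, W = 1.0, -3.0  # hypothetical conservative scores: |W| > |R|

def attitude_value(attitude: str, p_true: bool) -> float:
    """Value(w(p), B(p)): R if the attitude gets p right at the world, W if wrong, 0 for suspension."""
    if attitude == "suspend":
        return 0.0
    correct = (attitude == "believe" and p_true) or (attitude == "disbelieve" and not p_true)
    return R if correct else W

def accuracy_at_world(belief_set: dict, world: dict) -> float:
    """Single-world accuracy: sum of Value(w(p), B(p)) over the propositions in B."""
    return sum(attitude_value(att, world[p]) for p, att in belief_set.items())

def expected_accuracy(belief_set: dict, prob: dict, worlds: dict) -> float:
    """Expected accuracy of B on P: sum over worlds of P(w) times B's accuracy at w."""
    return sum(prob[w] * accuracy_at_world(belief_set, world) for w, world in worlds.items())

# Two worlds that disagree only on q; the agent assigns 0.6 to w1 and 0.4 to w2.
worlds = {"w1": {"p": True, "q": True}, "w2": {"p": True, "q": False}}
prob = {"w1": 0.6, "w2": 0.4}
print(expected_accuracy({"p": "believe", "q": "suspend"}, prob, worlds))  # 1.0
print(expected_accuracy({"p": "believe", "q": "believe"}, prob, worlds))  # 0.6*2 + 0.4*(-2) = 0.4
```

With these particular scores, believing q at credence 0.6 lowers expected accuracy relative to suspending, which anticipates the threshold result discussed in the next section.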
Before we move on, though, it will be helpful to flag a few underlying assumptions. To begin with, by assuming that the negative value of W is greater than the positive value of R, this account has taken a stance that Pettigrew (2017a) describes as epistemically conservative. It is also possible to score beliefs in ways that are epistemically centrist (|R| = |W|) and epistemically radical (|R| > |W|), possibilities that Pettigrew (2017a), pp. 464-467, explores and options that we will consider again in section 5. Our framework also scores all propositions equally – all true beliefs receive a score of R even if those beliefs are of little or no practical or epistemic significance. Even though we will maintain this assumption here, it seems plausible that there are some beliefs that it is more important to get right. For those who consider how taking this thought on board alters the framework given here, see Dorst (2019), pp. 187-192, Easwaran (2016), pp. 33-36, and Fitelson and Easwaran (2015), pp. 82-83. Finally, I will only be discussing applications of this framework that hold within classical logic, though Pettigrew (2017a), pp. 471-477, considers how to measure the accuracy of belief while Williams (2012a, 2012b) extends the Joycean framework for credences to non-classical logics.

4 Results: Lockeanism, the Lottery, and the Preface

We purposely designed our framework to always make suspending judgment preferable to believing both p and ¬p, but Veritism combined with our Definition of Epistemic Value can also give rise to other plausible rational norms. Consider, for instance, the combination of both believing p and disbelieving one of its classical entailments q. Because we are limiting ourselves to worlds that abide by classical logic, one of these beliefs will always be mistaken, making the value of this belief combination at best R + W. Because the negative value of W is greater than the positive value of R, suspending judgment about p and q yields more epistemic value regardless of the world in which we find ourselves. So if our goal is to maximize the epistemic value of our beliefs, we should always opt for suspending judgment over believing the conjunction of a proposition and the negation of one of its entailments.

Along with generating some plausible belief norms, our framework also produces some interesting results in relation to the Lockean Thesis, the lottery paradox, and the preface paradox. Even though we began by focusing on the epistemic value strictly of belief, by using the expected value of B and treating the probability function P as a set of credences, we get the following Belief-Credence Threshold:

Belief-Credence Threshold – A set of beliefs maximizes expected accuracy iff S believes every proposition p for which S’s credence in p is greater than or equal to −W / (R − W)

Starting with our definition of expected epistemic value, the proof for the Belief-Credence Threshold proceeds as follows. From left to right,[16] we begin by supposing that the belief set B maximizes expected accuracy. Now take a belief set B* that (i) does not assign an attitude to just one of the propositions p in B, giving B* an accuracy score of 0 for that proposition, and (ii) assigns the same attitude as B to all the other propositions in B. Because B maximizes expected accuracy, this guarantees that the expected value of B(p) must be greater than or equal to 0:

Expected Accuracy of B(p) ≥ 0                  (B maximizes expected accuracy)
P(w_p) · R + P(w_¬p) · W ≥ 0                   (expected accuracy definition)
P(w_p) · R + (1 − P(w_p)) · W ≥ 0              (probability complement rule)
P(w_p) ≥ −W / (R − W)                          (simplification)

[16] In order to avoid cluttering this overview too much with the technical details, I will direct those interested in the right-to-left direction to Dorst (2019), pp. 203-204, and Easwaran (2016), section 3.2.
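The derivation above can be checked numerically. The sketch below is only an illustration under hypothetical scores of my choosing: it compares the expected accuracy of believing p against the 0 score of suspending, and the comparison flips exactly at the −W / (R − W) threshold.

```python
R, W = 1.0, -3.0            # hypothetical conservative scores; threshold = -W / (R - W) = 0.75
threshold = -W / (R - W)

def expected_value_of_believing(credence_p: float) -> float:
    """Expected accuracy contribution of believing p, given the credence assigned to p."""
    return credence_p * R + (1.0 - credence_p) * W

for cr in [0.5, 0.74, 0.75, 0.9]:
    beats_suspending = expected_value_of_believing(cr) >= 0  # suspending always scores 0
    print(cr, beats_suspending, cr >= threshold)             # the two tests always agree
```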
If we interpret our probability function as a set of credences, then this has some implications for how we view the relationship between beliefs and credences. Dorst (2019) utilizes the Belief-Credence Threshold to argue that credences are normatively and ontologically fundamental. On Dorst’s view, beliefs are a particular sort of epistemic bet – you are betting that the epistemic value of guessing that p is more valuable than guessing that ¬p. These guesses then play a particular functional role, serving to rationalize things like saying “probably p” or taking bets with particular odds. If this is all that belief is and our Belief-Credence Threshold is correct, we can fully describe these guesses in terms of credences, making credences more normatively and metaphysically fundamental than belief. Dorst thus starts with our Belief-Credence Threshold and argues that this can be used to vindicate the previous assumption that credences are more normatively fundamental than beliefs.

Easwaran (2016) pursues the opposite strategy, arguing that our Belief-Credence Threshold makes possible a view on which credences can be reduced to beliefs. As we have already seen, Fitelson and Easwaran (2015) and Pettigrew (2017a) simply use the probability function as a tool to demonstrate when a belief set maximizes expected accuracy. If we start off by taking beliefs to be the only real sort of doxastic attitude and use the probability function in this way, then credences, or the values assigned by the probability function, are just a mathematical device for representing belief sets that maximize expected accuracy. Thus, Easwaran’s view takes “belief as the only fundamental doxastic attitude,” arguing that this combined with the Belief-Credence Threshold then makes it possible to characterize credences entirely in terms of beliefs.

Now that we have a definitive Belief-Credence Threshold, how does that impact the lottery and preface paradoxes? If our Belief-Credence Threshold falls below a credence of one, then our accuracy framework recommends adopting logically inconsistent beliefs in response to both paradoxes, rejecting deductive closure requirements on full belief. If our assignments of R and W result in a credal threshold of 0.9, then in the thousand-ticket lottery, we should both believe of each ticket B(¬Wn) that it will lose and also believe that one ticket will win B(W1 ∨ W2 ∨ W3 ∨ ... ∨ W1000). We should not, however, believe the conjunction that all the tickets will lose B(¬W1 & ¬W2 & ¬W3 & ... & ¬W1000), for at some point the rational credence in the conjunction that all the tickets will lose will fall below 0.9. Similarly, even though the author in the preface paradox should believe each of the individual claims in their book, there will come a point where they should not also believe the conjunction of those claims, removing the conflict with the belief that they made a mistake at some point.

Because their framework recommends abandoning deductive closure requirements, Easwaran (2016), pp. 14-15, Fitelson and Easwaran (2015), pp. 83-84, and Pettigrew (2017a), p. 471, all reject that belief sets must be logically consistent, albeit with some important limitations. Inconsistent beliefs might maximize expected accuracy, but this will always depend on the size of the conflicting set of beliefs. In cases with just two contradictory beliefs, it will always be more accurate to hold just one of those beliefs, making it irrational to believe that all of the tickets will lose and that one of the tickets will win or to believe that all of the claims in the book are correct while also believing that there is a mistake somewhere. As the inconsistent set of beliefs grows, however, consistent beliefs are not always guaranteed to be the most accurate. That is why the lottery and the preface, two paradoxes that are generated using very large belief sets, can make it rational to be logically inconsistent.
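To see how a 0.9 threshold plays out in the lottery, here is a small Python sketch with hypothetical scores R = 1 and W = −9, chosen by me so that the threshold comes out at exactly 0.9. It shows that each single-ticket belief and the belief that some ticket wins clear the threshold, while the credence in the conjunction that the first k tickets all lose drops below the threshold once k exceeds 100.

```python
R, W = 1.0, -9.0           # hypothetical scores chosen so the threshold is 0.9
threshold = -W / (R - W)   # 0.9

tickets = 1000
cr_single_loss = 1 - 1 / tickets          # credence that a given ticket loses: 0.999
cr_some_winner = 1.0                      # a winning ticket has definitely been drawn

print(cr_single_loss >= threshold)        # True: believe of each ticket that it loses
print(cr_some_winner >= threshold)        # True: believe that some ticket wins

# In a single-winner lottery, the credence that the first k tickets all lose is (tickets - k) / tickets.
for k in [50, 100, 101, 1000]:
    cr_first_k_lose = (tickets - k) / tickets
    print(k, cr_first_k_lose, cr_first_k_lose >= threshold)
# The conjunction stops clearing the threshold once k exceeds 100, so the full conjunction
# that every ticket loses (credence 0) is never recommended.
```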
5 Modifying Veritism

Thus far, we have surveyed accounts of belief that assume that accuracy is the sole consideration that gives rise to rational requirements. However, recent work has challenged whether everything in these frameworks can really be justified by Veritism. Recall, for example, that we have assumed epistemic conservatism, that the negative value of W should be greater than the positive value of R: |W| > |R|. Dorst (2019) adopted this position because it is clear that we should never believe an outright contradiction like p and ¬p. However, if we are epistemic radicals and take |W| to be less than |R|, then believing both p and ¬p would be more rational than believing neither. Similarly, if we are epistemic centrists and hold that |W| = |R|, then believing both p and ¬p would be rationally on par with suspending judgment. But can accuracy considerations themselves actually tell us that |W| should be greater than |R|?

Despite the intuitive pull of epistemic conservatism, Steinberger (2019) and Skipper (2021) both argue that Veritism alone cannot explain why |W| should be greater than |R|. In his original case that |W| > |R|, Dorst (2019) says that, when faced with contradictory propositions, “believing both is not as accurate as believing neither” (p. 185). Steinberger (2019) challenges this, however, noting that Dorst never says why this should be the case (p. 663). Instead, it seems that R and W receive the assignments that they do because it seems obviously irrational to believe a contradiction. But this calls into question the veritist methodology itself, for the idea was that we can derive rational norms for belief from accuracy, not decide which beliefs are most accurate based on which seem most rational. If we are trying to lean on accuracy scores to tell us what is rational, it seems that we cannot also determine which belief combinations score the highest by appealing to their rationality, so perhaps Veritism itself cannot explain why we should be epistemic conservatives.

In order to evade this worry, Hewson (2020) attempts to shore up the veritist program by limiting the number of worlds under consideration. On Hewson’s view, in order for a belief set to be permissible, it must maximize accuracy at all “live” worlds, the worlds the belief set leaves open as actually possible. Since contradictions are not true at any world, inconsistent belief sets rule out all possible worlds, preventing them from maximizing accuracy at any world. Thus, from the outset, Hewson rules out beliefs in inconsistencies as candidates for maximizing accuracy. This move, however, comes with a cost. Hewson’s solution commits the accuracy theorist to deductive closure, blocking the solutions to the lottery and the preface paradox we considered in section 4.

Apart from the challenge of justifying all rational norms using accuracy, other recent work contends that Veritism does not capture everything we care about when it comes to belief. Staffel (2019), for example, argues that belief plays a crucial role in our epistemic lives, simplifying our reasoning by allowing us to reason while also ruling out small error possibilities.[17] In order for beliefs to play this simplifying role, however, they must sometimes be less than maximally accurate. Staffel (2019) puts the point as follows: “We should expect tradeoffs between simplicity and coherence... Incoherence is generally taken to be problematic, since it leads to suboptimal accuracy and Dutch book vulnerability.
Yet, since the kinds of feasible strategies thinkers might use to manage their outright beliefs and credences are likely to introduce only a fairly minimal amount of incoherence, the tradeoff can be beneficial for the thinker” (p. 957). Staffel thus holds that there is sometimes a conflict between accuracy and simplifying our reasoning, providing a pluralistic picture of the values that govern the norms of belief.

[17] See also Jackson (2019), Tang (2015), and Weisberg (2020).

Another critique of the account we considered in section 3 comes in terms of its Definition of Epistemic Value. Following Plato’s Meno, epistemologists have long thought that knowledge is more valuable than true belief.[18] Thus, right alongside those who have thought that truth is the aim of belief, there has also been a sizable contingent that takes knowledge to be the aim of belief.[19] Even William James, an oft-cited motivation for our previous account, gestures at knowledge in his original statement of epistemic value: “We must know the truth; and we must avoid error – these are our first and great commandments as would-be knowers.”[20] Perhaps instead of veritism, we should really be interested in gnosticism, taking knowledge to be the most fundamental epistemic good.[21]

[18] For proposals that hope to explain why the value of knowledge surpasses that of mere true belief, see Goldman and Olsson (2009), Jones (1997), Kvanvig (1992 and 1998), Sosa (1985), and Williamson (2000).

[19] For representative views, see Adler (2002), Bach (2008), Bird (2007 and 2019), Engel (2005), Huemer (2007), Littlejohn (2018), McHugh (2011), Pritchard (2007), Sutton (2007), and Williamson (2000).

[20] See James (1897), p. 18.

[21] The term ‘gnosticism’ is coined by Littlejohn (2018 and 2020).

There are further reasons to think that knowledge, rather than simply true belief, plays a role in what we should believe. We have already seen that our Definition of Epistemic Value, combined with Veritism, would endorse believing that a given ticket has not won the lottery. However, a large number of epistemologists also take it that we have strong a priori evidence that we do not know this ticket has lost.[22] But if this is right, then the accuracy program would also support believing that we do not know that the lottery ticket has lost, ultimately recommending believing the Moorean paradoxical conjunction I do not know the ticket has lost but the ticket has lost. But as Littlejohn (2015) points out, believing Moorean paradoxes seems to be paradigmatically irrational, casting doubt that a theory based entirely on true belief can really generate the right kinds of epistemic norms.

[22] See, for example, Foley (1979), Klein (1985), Hawthorne (2004), and Vogel (1990).

How does the picture change if we include knowledge in our Definition of Epistemic Value? Fortunately for our purposes, Dutant and Fitelson (manuscript) have explored this question in detail. Instead of assigning true belief a value of R and false belief a value of W, Dutant and Fitelson instead take believing while in a position to know to have a value of R and believing while not in a position to know to have a value of W. These reassignments then give rise to a new account of the threshold between rational credence and rational belief:

Belief-Credence Threshold* – A set of beliefs maximizes expected accuracy iff S believes every proposition p for which S’s credence that S is in a position to know that p on basis b is greater than or equal to −W / (R − W)
The first thing that differs in our Belief-Credence Threshold* is that we are not interested in S’s credence in p but rather in S’s credence that they are in a position to know that p. This knowledge is possible via a particular basis b, the evidence or method that S uses in coming to their belief. So if S’s credence that they are in a position to know p via b is greater than or equal to −W / (R − W), then they ought to believe p, and if their credence is less than −W / (R − W), then they ought to refrain from believing p.

Like the previous threshold, our Belief-Credence Threshold* always forbids believing contradictions. I cannot be in a position to know both p and ¬p, as one is guaranteed to be false, so my credence that I am in a position to know a contradiction should be zero. The new threshold then solves the issue with Moorean paradoxical sets of beliefs surrounding the lottery. I am not in a position to know ¬W1, so I should not believe that ticket will lose, removing the conflict with believing that I do not know the ticket has lost. This difference from our earlier account then carries over to the lottery paradox. The view we previously considered advised believing that each ticket had lost and also believing that a single ticket had won, whereas the current account only directs us to believe that a single ticket has won. This contrasts with how both views handle the preface paradox. Like before, our new Definition of Epistemic Value recommends believing each claim in the book, since it is likely that the author is in a position to know each one, while also recommending the belief that there is a mistake somewhere in the manuscript. So the knowledge-centered account gives the same results when it comes to contradictions and the preface paradox, but then goes on to give different verdicts for Moorean paradoxical sentences and the lottery paradox.[23]

[23] Dutant and Fitelson (manuscript) also consider a number of modifications to the knowledge-centered view, including variations in which a mere true belief that p ranks between knowing that p and not believing that p and mere true belief that p ranks below not believing that p.
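A minimal sketch of how the knowledge-centered threshold differs in practice, with hypothetical numbers of my own: the credence figures for the lottery ticket and the book claim are illustrative assumptions, not values defended by Dutant and Fitelson.

```python
R, W = 1.0, -9.0           # same hypothetical scores as before; threshold = 0.9
threshold = -W / (R - W)

def should_believe(cr_in_position_to_know: float) -> bool:
    """Knowledge-centered rule: believe p iff the credence that you are in a position
    to know p (on your basis) meets the threshold."""
    return cr_in_position_to_know >= threshold

# Purely statistical evidence arguably never puts you in a position to know that your
# ticket lost, while careful research plausibly does put the author in a position to
# know each claim in the book.
print(should_believe(0.01))   # lottery ticket: do not believe that it lost
print(should_believe(0.95))   # a well-researched claim: believe it
```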
Conclusion: Questions for Further Research

Joyce’s work on probabilism launched a revolution within epistemology. Over twenty years later, the project inspired by Joyce is still bearing fruit – formal epistemologists have recently applied accuracy frameworks to questions ranging from the proper response to peer disagreement and higher-order evidence to the debate between uniqueness and permissivism. We have seen that Joyce’s strategy need not be limited to work on credences, as veritism also holds promise for studying and clarifying the norms of belief. As Fitelson and Easwaran (2015) point out, the “general approach (which was inspired by Joycean arguments for probabilism) can be applied to many types of judgment — including both full belief and partial belief” (pp. 62-63). Just as it did with credences, the veritist paradigm can also produce original theoretical results for belief.

In this paper, I have provided a summary of recent veritist accounts of belief, showing how this research is connected with previous results on the accuracy of credences. Much of the early formal work on the norms of belief still assumed that credences were more normatively fundamental than belief, but more recent research has proceeded without this assumption, treating the value of belief independently from how it might ultimately relate to credences. This work has led to a number of results on the Lockean Thesis, the lottery paradox, and the preface paradox, and has called into question how precisely belief norms relate to epistemic value. Because much of this work is still very new, published within the past five years, there remain a number of questions that call for further research.

Foundational Questions: Along with the discussions in this paper of the interactions between beliefs and credences and how we should be measuring epistemic value, a number of foundational issues still warrant further attention:

• How do veritist frameworks for belief interact with credences to produce an overall value for an agent’s total doxastic state? Pettigrew (2015) shows that, if we include both beliefs and credences in the agent’s overall doxastic state, then there are non-probabilistic credences that are not accuracy dominated by a total doxastic state. Staffel (2017) attempts to solve this issue by isolating credences and beliefs to different reasoning contexts, a possibility that Pettigrew (2017b) explores further.

• How should we weigh different epistemic values? If knowledge is more valuable than true belief, what should we say about true beliefs that do not rise to the level of knowledge? Are they more valuable than false beliefs? Less valuable? Littlejohn (2018) argues that without knowledge, true beliefs do not have value, and Dutant and Fitelson (manuscript) develop some preliminary results for gnostic theories that assign either positive or negative value to knowledgeless true beliefs.

Further Applications: Beyond the results we have seen here for the Lockean Thesis, the lottery paradox, and the preface paradox, can the accuracy approach to belief provide novel insights regarding other issues within epistemology?

• Shear and Fitelson (2019) rely on Dorst’s (2019) veritist framework to generate a number of results for norms on belief revision, comparing this with extant accounts of conditionalization, while Briggs et al. (2014) apply the framework of Fitelson and Easwaran (2015) to the doctrinal paradox of group judgment aggregation.

• Applications of gnosticism have already been made, with Littlejohn (2020) applying the gnostic view to the standard of proof in criminal law and Littlejohn and Dutant (Forthcoming) using gnosticism to defend a novel account of epistemic defeat.

Future Challenges: Along with these promising areas for further applications of veritism, there are also a number of potential challenges.

• Talbot (2019) and Pettigrew (2018) have recently argued that veritists are committed to the epistemic repugnant conclusion, the thought that a large number of minimally accurate credences can be more valuable than a small number of very accurate credences. Talbot also thinks that these conclusions can be extended to veritist accounts of belief, noting that “recent work extends some results of accuracy-first epistemology to full beliefs... My arguments can, with some modifications, be applied to these extensions” (p. 541, fn. 4). Does an epistemic version of the repugnant conclusion indict veritism about beliefs in the same way that it does veritism about credences, and if so, are there any modifications that can be made to the framework we sketched in section 3 to respond to these worries?

Bibliography

Adler, Jonathan. 2002. Belief’s Own Ethics. MIT Press. Bach, Kent. 2008. “Applying Pragmatics to Epistemology.” Philosophical Issues 18: pp. 68-88. Bird, Alexander. 2007.
“Justified Judging.” Philosophy and Phenomenological Research 74: pp. 81-110. Bird, Alexander. 2019. “The Aim of Belief and the Aim of Science.” Theoria 34: pp. 171-193. Boghossian, Paul. 2008. Content and Justification. Oxford University Press. Briggs, R.A., Fabrizio Cariani, Kenny Easwaran, and Branden Fitelson. 2014. “Individual Coherence and Group Coherence.” In Essays in Collective Epistemology. Edited by Jennifer Lackey. Oxford University Press. Briggs, R. A., and Richard Pettigrew. 2020. “An Accuracy-Dominance Argument for Conditionalization.” Nous 54: pp. 162-181. Buchak, Lara. 2014. “Belief, Credence, and Norms.” Philosophical Studies 169: pp. 285-311. Christensen, David. 2004. Putting Logic in its Place: Formal Constraints on Rational Belief. Oxford University Press. Clarke, Roger. 2013. “Belief is Credence One (In Context).” Philosopher’s Imprint 13: pp. 1-18. de Finetti, Bruno. 1974. Theory of Probability, Vol. 1. New York: John Wiley and Sons. Dorst, Kevin. 2019. “Lockeans Maximize Expected Accuracy.” Mind 128: pp. 175-211. Dutant, Julien, and Branden Fitelson. Manuscript. “Knowledge-Centered Epistemic utility Theory.” Easwaran, Kenny. 2013. “Expected Accuracy Supports Conditionalization – and Conglomerability and Reflection.” Philosophy of Science 80: pp. 119-142. Easwaran, Kenny. 2016. “Dr. Truthlove or: How I Learned to Stop Worrying and Love Bayesian Probabilities.” Nous 50: 816-853. Engel, Pascal. 2005. “Truth and the Aim of Belief.” In Laws and Models in Science. Edited by Donald Gillies. King’s College Publications). Fitelson, Branden and Kenny Easwaran. 2015. “Accuracy, Coherence and Evidence.” Oxford Studies in Epistemology 5: pp. 61-96. Foley, Richard. 2009. “Beliefs, Degrees of Belief, and the Lockean Thesis.” In Degrees of Belief. Edited by Franz Huber and Christoph Schmidt-Petri. New York, NY: Springer. Friedman, Jane. 2013. “Rational Agnosticism and Degrees of Belief.” In Oxford Studies in Epistemology. Edited by Tamar Szabo Gendler and John Hawthorne. Oxford University Press. Foley, Richard. 1979. “Justified Inconsistent Beliefs.” American Philosophical Quarterly 16: pp. 247-257. Friedman, Jane. 2019. “Inquiry and Belief.” Nous 53: pp. 296-315. Gardenfors, Peter. 1986. “The Dynamics of Belief: Contractions and Revisions of Probability Functions.” Topoi 5: pp. 29-37. Genin, Konstantin. 2019. “Full and Partial Belief.” In The Open Handbook of Formal Epistemology. Edited by Richard Pettigrew and Jonathan Weisberg. The PhilPapers Foundation. Gibbard, Allan. 2005. “Truth and Correct Belief.” Philosophical Issues 15: pp. 338-350. Goldman, Alvin and E. J. Olsson. 2009. “Reliabilism and the Value of Knowledge.”In Epistemic Value. Edited by A. Haddock, A. Millar and D. H. Pritchard. Oxford University Press. Greaves, Hilary and David Wallace. 2006. “Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility.” Mind 115: pp. 607-632. Greco, Daniel. 2015. “How I Learned to Stop Worrying and Love Probability 1.” Philosophical Perspectives 29: pp. 179-201. Hajek, Alan. 2008. “Arguments for-or Against-Probabilism?” British Journal for the Philosophy of Science 59: pp. 793-819. Hawthorne, John. 2004. Knowledge and Lotteries. Oxford University Press. Hewson, Matthew. 2020. “Accuracy Monism and Doxastic Dominance: Reply to Steinberger.” Analysis 80: pp. 450-456. Horowitz, Sophie. 2014. “Immoderately Rational.” Philosophical Studies 167: pp. 41-56. Huemer, Michael. 2007. “Moore’s Paradox and the Norm of Belief.” In Themes from G.E. Moore. Edited by S. 
Nuccetelli and G. Seay. Oxford University Press. Jackson, Elizabeth. 2019. “How Belief-Credence Dualism Explains Away Pragmatic Encroachment.” The Philosophical Quarterly 69: pp. 511-533. Jackson, Elizabeth. 2020a. “Belief, Credence, and Evidence.” Synthese 197: pp. 5073-5092. Jackson, Elizabeth. 2020b. “The Relationship Between Belief and Credence.” Philosophy Compass 15: e12668. Jackson, Elizabeth and Andrew Moon. 2020. “Credence: A Belief-First Approach.” Canadian Journal of Philosophy 50: pp. 652-669. James, William. 1897. “The Will to Believe.” In The Will to Believe and Other Essays in Popular Philosophy. Harvard University Press. Jeffrey, Richard. 1970. “Dracula Meets Wolfman: Acceptance vs. Partial Belief.” In Induction, Acceptance, and Rational Belief. Edited by Marshall Swain. Dordrecht, Netherlands: D. Reidel. Jones, Ward. 1997. “Why Do We Value Knowledge?” American Philosophical Quarterly 34: pp. 423-440. Joyce, James. 1998. “A Nonpragmatic Vindication of Probabilism.” Philosophy of Science 65: pp. 575-603. Joyce, James. 2009. “Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief.” In Degrees of Belief. Edited by Franz Huber and Christoph Schmidt-Petri. Dordrecht: Springer. Kaplan, Mark. 1996. Decision Theory as Philosophy. Cambridge University Press. Klein, Peter. 1985. “The Virtues of Inconsistency.” The Monist 68: pp. 105-135. Konek, Jason. 2016. “Probabilistic Knowledge and Cognitive Ability.” Philosophical Review 125: pp. 509-587. Kvanvig, Jon. 1992. The Intellectual Virtues and the Life of the Mind: On the Place of the Virtues in Contemporary Epistemology. Rowman and Littlefield. Kvanvig, Jon. 1998. “Why Should Inquiring Minds Want to Know? Meno Problems and Epistemological Axiology.” The Monist 81: pp. 426-451. Lam, Barry. 2013. “Calibrated Probabilities and the Epistemology of Disagreement.” Synthese 190: pp. 1079-1098. Leitgeb, Hannes, and Richard Pettigrew. 2010. “An Objective Justification of Bayesianism I: Measuring Inaccuracy.” Philosophy of Science 77: pp. 201-235. Leitgeb, Hannes. 2013. “Reducing Belief Simpliciter to Degrees of Belief.” Annals of Pure and Applied Logic 164: pp. 1338-1389. Levinstein, Ben. 2015. “With All Due Respect: The Macro-Epistemology of Disagreement.” Philosophers’ Imprint 15: pp. 1-20. Levi, Isaac. 1991. The Fixation of Belief and Its Undoing: Changing Beliefs Through Inquiry. Cambridge University Press. Littlejohn, Clayton. 2012. Justification and the Truth-Connection. Cambridge University Press. Littlejohn, Clayton. 2015. “Who Cares What You Accurately Believe?” Philosophical Perspectives 29: pp. 217-248. Littlejohn, Clayton. 2018. “The Right in the Good: A Defence of Teleological Non-Consequentialism.” In Epistemic Consequentialism. Edited by K. Ahlstrom-Vij and J. Dunn. Oxford University Press. Littlejohn, Clayton. 2020. “Truth, Knowledge, and the Standard of Proof in Criminal Law.” Synthese 197: pp. 5253-5286. Littlejohn, Clayton and Julien Dutant. Forthcoming. “Defeaters as Indicators of Ignorance.” In Reasons, Justification, and Defeat. Edited by Mona Simion and Jessica Brown. Oxford University Press. Maher, Patrick. 1993. Betting on Theories. Cambridge University Press. Makinson, D. C. 1965. “The Paradox of the Preface.” Analysis 25: pp. 205-207. McHugh, Conor. 2011. “What Do We Aim at When We Believe?” Dialectica 65: pp. 369-392. Millar, Alan. 2009. “How Reasons for Action Differ from Reasons for Belief.” In Spheres of Reason. Edited by Seth Robertson. Oxford University Press. Moss, Sarah. 2011.
“Scoring Rules and Epistemic Compromise.” Mind 120: pp. 1053-1069. Nelkin, Dana. 2000. “The Lottery Paradox, Knowledge, and Rationality.” Philosophical Review 109: pp. 373-409. 18 Pettigrew, Richard. 2012. “Accuracy, Chance, and the Principal Principle.” Philosophical Review 121: pp. 241-275. Pettigrew, Richard. 2013. “A New Epistemic Utility Argument for the Principal Principle.” Episteme 10: pp. 19-35. Pettigrew, Richard. 2014. “Accuracy, Risk, and the Principle of Indifference.” Philosophy and Phenomenological Research 92: pp. 35-59. Pettigrew, Richard. 2015. “Accuracy and the Credence-Belief Connection.” Philosopher’s Imprint 15: pp. 1-20. Pettigrew, Richard. 2016a. Accuracy and the Laws of Credence. Oxford University Press. Pettigrew, Richard. 2016b. “Jamesian Epistemology Formalized: An Explication of ‘The Will to Believe’.” Episteme 13: pp. 253-268. Pettigrew, Richard. 2017a. “Epistemic Utility and the Normativity of Logic.” Logos and Episteme 8: pp. 455-492. Pettigrew, Richard. 2017b. “Precis and Replies to Contributors for Book Symposium on Accuracy and the Laws of Credence.” Episteme 14: pp. 1-30. Pettigrew, Richard. 2018. “The Population Ethics of Belief: In Search of an Epistemic Theory X.” Nous 52: pp. 336-372. Predd, Joel, Robert Seiringer, Elliott Lieb, Daniel Osherson, H. Vincent Poor, and Sanjeev Kulkarni. 2009. “Probabilistic Coherence and Proper Scoring Rules.” IEEE Transactions on Information Theory 55: pp. 4786-4792. Pritchard, Duncan. 2007. “Recent Work on Epistemic Value.” American Philosophical Quarterly 44: pp. 85-110. Savage, Leonard. 1971. “Elicitation of Personal Probabilities.” Journal of the American Statistical Association 66: pp. 783-801. Schoenfield, Miriam. 2017. “Conditionalization Does Not Maximize Expected Accuracy.” Mind 126: pp. 1155-1187. Schoenfield, Miriam. 2018. “An Accuracy Based Approach to Higher Order Evidence.” Philosophy and Phenomenological Research 96: pp. 690-715. Schoenfield, Miriam. 2019. “Permissivism and the Value of Rationality: A Challenge to the Uniqueness Thesis.” Philosophy and Phenomenological Research 99: pp. 286-297. Selten, Reinhard. 1998. “Axiomatic Characterization of the Quadratic Scoring Rule.” Experimental Economics 1: pp. 43-61. Shah, Nishi. 2003. “How Truth Governs Belief.” Philosophical Review 112: pp. 447-482. Shah, Nishi and David Velleman. 2005. “Doxastic Deliberation.” Philosophical Review 114: 497-534. Shear, Ted and Branden Fitelson. 2019. “Two Approaches to Belief Revision.” Erkenntnis 84: pp. 487-518. Skipper, Mattias. 2021. “Belief Gambles in Epistemic Decision Theory.” Philosophical Studies 178: pp. 407-426. Smith, Michael. 2010. “What Else Justification Could Be.” Nous 44: pp. 10-31. Sosa, Ernest. 1985. “Knowledge and Intellectual Virtue.” The Monist 68: pp. 224-245. Staffel, Julia. 2015. “Beliefs, Buses, and Lotteries: Why Rational Belief Can’t be Stably High Credence.” Philosophical Studies 173: pp. 1721-1734. Staffel, Julia. 2017. “Accuracy for Believers.” Episteme 14: pp. 39-48. Staffel, Julia. 2019. “How Do Beliefs Simplify Reasoning?” Nous 53: pp. 937-962. Steel, Robert. 2018. “Anticipating Failure and Avoiding It.” Philosophers’ Imprint 18: pp. 1-28. Steinberger, Florian. 2019. “Accuracy and Epistemic Conservatism.” Analysis 79: pp. 658-669. Stich, Stephen. 1996. Deconstructing the Mind. Oxford University Press. Sturgeon, Scott. 2008. “Reason and the Grain of Belief. Nous 42: pp. 139-165. Sturgeon, Scott. 2020. The Rational Mind. Oxford University Press. Sutton, Jonathan. 2007. 
“Without Justification.” MIT Press. Tang, Weng Hong. 2015. “Belief and Cognitive Limitations.” Philosophical Studies 172: pp. 249-260. Talbot, Brian. 2019. “Repugnant Accuracy.” Nous 53: pp. 540-563. van Fraassen, Bas. 1995. “Fine-Grained Opinion, Probability, and the Logic of Full Belief.” Journal of Philosophical Logic 24: pp. 349-377. Velleman, David. 2000. “On the Aim of Belief.” In The Possibility of Practical Reason. Oxford University Press. Vogel, Jonathan. 1990. “Are there Counterexamples to the Closure Principle?” In Doubting: Contemporary Perspectives on Skepticism. Edited by Michael David Roth and Glenn Ross. Dordrecht: Kluwer Academic Publishers. Weatherson, Brian. 2016. “Games, Beliefs, and Credences.” Philosophy and Phenomenological Research 92: pp. 209-236. Wedgwood, Ralph. 2012. “Outright Belief.” Dialectica 66: pp. 309-329. Wedgwood, Ralph. 2002. “The Aim of Belief.” Philosophical Perspectives 16: pp. 267-297. Weisberg, Jonathan. 2020. “Belief in Psyontology.” Philosophers’ Imprint 20: pp. 1-27. Whiting, Daniel. 2010. “Should I Believe the Truth?” Dialectica 64: pp. 213-224. Whiting, Daniel. 2013a. “Nothing but the Truth: On the Norms and Aims of Belief.” In The Aim of Belief. Edited by Timothy Chan. Oxford University Press. Whiting, Daniel. 2013b. “Truth: the Aim and Norm of Belief.” Teorema: International Journal of Philosophy 32: pp. 121-136. Williamson, Timothy. 2000. Knowledge and Its Limits. Oxford University Press. Williams, Bernard. 1973. “Deciding to Believe.” In Problems of the Self. Edited by Bernard Williams. Cambridge University Press. Williams, J. Robert G. 2012a. “Generalized Probabilism: Dutch Books and Accuracy Domination.” Journal of Philosophical Logic 41: pp. 811-840. Williams, J. Robert G. 2012b. “Gradational Accuracy and Nonclassical Semantics.” Review of Symbolic Logic 5: pp. 513-537.