Jurisprudence: An International Journal of Legal and Political Thought
https://doi.org/10.1080/20403313.2022.2049077
Published online: 29 March 2022

Standards of proof as competence norms1

Don Loeb (University of Vermont, Burlington, Vermont, USA) and Sebastián Reyes Molina (Uppsala University, Uppsala, Sweden)

ABSTRACT
In discussions of standards of proof, a familiar perspective often emerges. According to what we call specificationism, standards of proof are legal rules that specify the quantum of evidence required to determine that a litigant's claim has been proven. In so doing, they allocate the risk of error among litigants (and potential litigants), minimizing the risk of certain types of error. Specificationism is meant as a description of the way the rules actually function. We argue, however, that its claims are either mistaken or at a minimum deeply misleading, especially when it comes to standard of proof rules (SPRs) that contain indeterminate formulas, as is typical. As against specificationism, we argue that SPRs are best understood as rules that confer competence to decide whether a given standard has been met—according to whatever vague or inchoate interpretation (if any) of the rule in question triers of fact implicitly or explicitly employ. We call this the competence-norm approach.

KEYWORDS: Standards of proof – competence norms – legal power – evidence law – discretion

1. Introduction
In discussions of standards of proof in law, a familiar perspective often emerges. According to what we call specificationism, roughly, standards of proof are legal rules that specify the quantum of evidence triers of fact must find in order to determine that a litigant's claim has been proven. Specificationism is meant to describe the way the rules actually function. We argue, however, that its claims are either mistaken or at a minimum deeply misleading, especially when it comes to typical indeterminate standard of proof rules (SPRs).2 As against specificationism, we argue that SPRs are best understood as rules that confer limited competence to decide whether a given standard has been met—according to whatever vague or inchoate interpretation (if any) of the rule in question triers of fact implicitly or explicitly employ.3 We call this the competence-norm approach, and argue that it better explains how SPRs function.4

CONTACT: Don Loeb, don.loeb@uvm.edu, University of Vermont; Sebastián Reyes Molina, sebastian.reyes@filosofi.uu.se, Uppsala University.
1 In philosophy, the order in which authors' names appear does not customarily reflect their relative contributions. We follow that convention here, having contributed equally. We thank Sebastian Lutz, Jens Johansson, Riccardo Guastini, Michael Giudice, Visa Kurki, Donald Bello, Julieta Rábanos, Alejandro Calzetta, and the anonymous reviewers for helpful comments on previous versions of this paper.
2 The content of an SPR is a contingent matter, so such a rule can in principle be more or less indeterminate. 3 Anglo-American scholarship treats ‘competence’ and ‘power’ as synonymous. For an overview of the concept of competence see Torben Spaak, The Concept of Legal Competence (Dartmouth 1994) 1–25. 4 Although this paper focuses primarily on descriptive issues, we expect to address normative questions about how SPRs ought to function in a later article. On our view, one must understand how the rules actually function if one hopes to © 2022 Informa UK Limited, trading as Taylor & Francis Group 2 D. LOEB AND S. REYES MOLINA We begin by providing a brief account of specificationism and illustrating its prevalence. We go on (in Section 3) to argue that the most familiar SPRs in use around the world are indeterminate and as such are unable to provide relevant quanta for resolving the quaestio facti (the question whether the litigants’ factual claims have been proven). We consider whether SPRs can be given greater specificity, offering an overview of the two most prominent theoretical approaches to decision making under SPRs and showing that neither of them correctly describes how they do or could function. Spelling out why puts us in a good position to articulate our central argument against specificationism, a dilemma. Before turning to our alternative explanatory approach, we confront and respond to predictable objections to the effect that we have mischaracterized or misattributed specificationism. We then (in Section 4) sketch our alternative to specificationism, and argue that it allows for a more straightforward and plausible way of understanding how SPRs actually work. We conclude with a hint about how our understanding of the descriptive issues on which we focus here can be relevant to resolving certain nearby normative questions. 2. Specificationism Although one is unlikely to see specificationism’s claims articulated by its proponents as a neat theoretical framework, they would sound familiar to anyone acquainted with the literature on standards of proof. According to specificationism: SPRs predetermine a uniform quantum of evidence required for decisions about the quaestio facti, which triers of fact employ to make their determinations. In employing them, triers of fact minimize the risk of certain sorts of errors and by that means allocate the risk of error among the parties.5 Expressions of these claims can be found scattered throughout scholarship on SPRs. For example, Stein claims that: Rules and principles allocating the risk of error fall into two general categories. One of these categories accommodates rules and principles that determine the probability thresholds for making factual findings in conditions of uncertainty. These rules and principles include burdens and standards of proof that apply in both civil and criminal trials.6 Similarly, Haack claims that ‘standards of proof specify the degree of level of proof that must be supplied in various kind of cases.’7 Although Haack is reluctant to tell us form a reasonable judgment about how they should function. While we recognize that a full answer to the descriptive question requires empirical investigation, we think traditional philosophical reasoning (including uncontroversial presuppositions about human reasoning capacities and behavior) can help clarify the question and shed light on the contours of a plausible answer. To simplify, we focus here on the three most common SPRs. But there are other SPRs (e.g. 
probable cause, substantial evidence, and reasonable suspicion) and many other contexts in which they play a role (e.g. probable cause determinations by police officers and judges, pre-trial negotiations, consultations with counsel about how to avoid legal trouble, etc). Much of what we say here is also relevant there, but we do not attempt to sort out the complicating factors in this paper. 5 We are aware that that the specificationist claims we point to in this section are subject to more than one interpretation, some considerably more moderate than others. We will have more to say about the more moderate interpretations in § 3.6. 6 Alex Stein, Foundations of Evidence Law (Oxford University Press 2005) 134. Emphasis added. 7 Susan Haack, Evidence Matters: Science, Proof, and Truth in the Law (Cambridge University Press 2014) 50–51. Even though Haack does not explicitly state that she is providing a descriptive account of the functions of SPRs, it is plausible that she is doing so. Although she does provide answers to normative questions about how the quantum of evidence JURISPRUDENCE 3 precisely how much warrant is needed to satisfy any particular SPR, she does speak of the ‘required degree’ of warrant for reaching a decision, and this at least strongly suggests that she has in mind a quantum.8 Likewise, Pardo claims, ‘Proof rules in law specify when a disputed fact has been proven … by specifying a level or standard of proof: “beyond a reasonable doubt,” “by preponderance of evidence,” or “by clear and convincing evidence”.’9 The specifications supplied by SPRs are thought to provide uniformity, in advance of any particular litigation. For example, Ho claims that ‘standards of proof are fixed and predetermined: one standard applies to all cases within the relevant category, and the decisional threshold operating in each category does not vary with the circumstances of individual cases.’10 And Clermont writes, ‘Magisterially, but opaquely, the law tells its fact-finder to apply the standard of proof. The law will have already chosen the appropriate standard.’11 The claimed uniformity and predetermination might be thought to accommodate values often said to be promoted by the rule of law, such as equality (in the case of uniformity) and legal certainty (in the case of predetermination). According to specificationism, employment of the relevant quantum allows triers of fact to make factual determinations in a way that minimizes the risk of certain types of errors (for example wrongful convictions in criminal trials) at the expense of others (failures to convict the guilty) and in doing so allocates that risk among the parties in ways thought appropriate for cases of that sort. For example, according to Laudan, ‘We all understand that standards of proof are vehicles for the distribution of errors, informed by a determination of their respective costs and benefits.’12 Similarly, Pardo says that ‘proof rules . . . allocate the risk of erroneous decisions between the parties,’ and that, ‘by dictating outcomes [under conditions of uncertainty] proof rules [also] serve . . . to minimize total errors or certain types of errors … .’13 The claim that they do so fits naturally with the platitude that minimizing and distributing the risk of error are main functions of the rules of evidence, which include SPRs.14 Specificationism minimizes the role of triers of fact in determining how much evidence is needed and how the risk of error is to be distributed. 
All they must do is assess whether the evidence that has been adduced meets the relevant quantum—to apply the specified standard. Only if they judge that the evidence satisfies the relevant should be understood (as degrees of warrant given the evidence) her starting point is what she believes the standards of proof do in practice (namely, set the quantum of evidence). She attempts to support this description throughout her paper with references to several cases in the U.S. Ibid 47–77. 8 Ibid 56. 9 Michael Pardo, ’Second-Order Proof Rules’ [2009] 61 Florida Law Review, 1084. Emphasis added. Pardo does not claim that the formulas he points to themselves establish the relevant quanta, but suggests that they do so in conjunction with other rules—before triers of fact are called upon to make a decision. Ibid 1084–1113. His current view is more subtle than this picture suggests, however, as we will see in what follows. 10 HL Ho, A Philosophy of Evidence Law: Justice in the Search for Truth (Oxford University Press 2008) 174. Against this view, there are scholars that have proposed floating standards of proof. That is, the quantum of evidence will vary either caseby-case or according to a class of cases to which an SPR is applied. For example, homicide cases might employ a different standard of proof (or might interpret the same standard in a different way) than do sexual abuse cases. See, for example, Erik Lillquist, ’Recasting Reasonable Doubt: Decision Theory and the Virtues of Variability’ [2002] 36(1) University of California Davis Law Review, 85–197; Gustavo Ribeiro, ’The Case for Varying Standards of Proof’ [2019] 56(1) San Diego Law Review 161–220. 11 Kevin Clermont, ’Standards of Proof Revisited’ [2009] 33 Vermont Law Review 469. 12 Larry Laudan, ’Strange Bedfellows: Inference to the Best Explanation and the Criminal Standard of Proof’ [2007] 11(0) The International Journal of Evidence & Proof 304. 13 Pardo (n 9) 1086. 14 Stein (n 6) 1. 4 D. LOEB AND S. REYES MOLINA quantum do they declare a litigant’s claim proven. In what follows, we criticize specificationism’s claims about how SPRs function. We begin with the central claim, that SPRs predetermine uniform quanta of evidence for triers of fact to employ. If specificationism is wrong about that, then it lacks the resources to support its claims about how triers of fact employ SPRs and what they accomplish. 3. Why specificationism is wrong as a description of actual practice 3.1 Setting the quantum Our objection to the claim that SPRs set the quantum of evidence begins with the fact that the standards contained in typical SPRs are formulated in a way that makes them indeterminate. A formula is indeterminate when it is subject to multiple interpretations, as is the case with Beyond a Reasonable Doubt (BARD).15 Such a formula cannot provide the degree of specificity required for identifying a quantum of evidence, and it cannot be correct to describe triers of fact as assessing whether a nonexistent quantum has been met. But the problems for specificationism do not end with indeterminacy, we argue, for even when triers of fact have been supplied with a determinate quantum it is typically implausible to describe them as determining whether that quantum has been reached. In order for the rules to function in the way specificationism claims they do, they would have to serve as implementable guides to decision making, at least typically. Vague standards are insufficient to fulfil this function. 
No doubt even they provide some guidance, and in that sense they are partially implementable. Even so, we can still ask whether or not the standards guide by providing a quantum, and to what degree triers of fact could use such a quantum in the way specificationism suggests. If the guidance is better understood as a rough gesture than as the setting of a quantum, then SPRs are unlikely to produce the vaunted uniformity in the way factual issues are decided. But if the guidance does involve the provision of a precise target, then it is highly unlikely that decisions are best understood as efforts to meet it. Thus, a dilemma argument against specificationism begins to emerge. Either the SPRs do not set determinate quanta, or they set determinate quanta that are unable to function in the way that specificationism imagines. Attempts to make a standard more determinate, we will argue, make it less determinable, to the point where it is not plausible to describe triers of fact as deciding in much more than an impressionistic way whether the relevant quantum (if any) has been reached. 3.2 Indeterminate SPRs To develop this dilemma argument against specificationism, we begin by exploring the first horn, considering three paradigmatic SPRs. Two of these are widely agreed (even by specificationists) to be facially indeterminate, and typical attempts to precicify them often seem grossly inadequate.16 First, the familiar BARD SPR, according to which triers of fact must conclude that criminal charges have been proven beyond a reasonable 15 Sebastián Reyes Molina, ’Judicial Discretion as a Result of Systemic Indeterminacy’ [2020] XXXIII(2) Canadian Journal of Law and Jurisprudence 372. 16 See Sebastian Reyes Molina, ’On Legal Interpretation and Second-order Proof Rules’ [2017] Analisi e Diritto 165–184. JURISPRUDENCE 5 doubt if there is to be conviction, does not tell us how much doubt is reasonable.17 Furthermore, as Laudan points out, several meanings have been ascribed to BARD: moral certainty, ‘the degree of security of belief appropriate to important decisions in one’s life,’ the absence of ‘doubts that would make a prudent person hesitate to act,’ an ‘abiding conviction of guilt,’ no doubts ‘for which a reason could be given,’ and high probability.18 Far from providing a quantum of evidence to be met, each of these attempts at clarifying BARD merely gestures at a vaguely-described way to understand it. Moreover, each of the phrases in question points to a state of mind—doubt, conviction, belief, etc. Interpretations of questions involving such states are likely to vary across interpreters, depending to some degree on their idiosyncratic psychological features. Laudan says that if a decision about the facts is this dependent on the state of mind of the fact-finder, then, ‘we are not in the presence of a standard of proof but an excuse, a weak pretext to convict or acquit.’19 That might be a bit extreme. We should not deny that even the vague standards discussed in this section, along with the accompanying glosses provided in jury instructions and interpretive principles, have a significant impact. Criminal convictions under BARD are much more difficult to secure than civil determinations under a Clear and Convincing Evidence SPR. What BARD does not do, however, is identify a specific amount of evidence that triers of fact must find in order to resolve the factual questions before them in criminal cases. A similar worry arises for the Clear and Convincing Evidence SPR. 
The standard is intended to establish a threshold higher than the one set by the Preponderance of Evidence SPR but lower than the one established by BARD. And, once again it is reasonable to assume that it has an impact, resulting in a higher rate of positive decisions than are produced by BARD. Still, it does not provide a precise quantum, but merely motions in the direction of a vaguely-defined range. That being so, we should not expect the rule to be understood in a uniform way by triers of fact.20 What about the Preponderance of Evidence SPR used in many civil trials? Of the three most prominent SPRs, this one seems best suited to providing a quantum of the sort specificationism needs. A 'preponderance' would seem simply to require that the evidence on one side be judged to be better than the evidence on the other, for example >.5 17 Examples of similar rules include article 340 of the Chilean Código Procesal Penal: más allá de toda duda razonable, article 553 of the Italian Codice di Procedura Penale: ragionevole dubbio, article 353 of the French Code de Procédure Pénale: intime conviction. 18 Larry Laudan, Truth, Error, and Criminal Law: An Essay in Legal Epistemology (Cambridge University Press 2006) 33–44. Five of these interpretations appear as headings on a numbered list at the beginning of the book. (We have substituted lower-case letters for the original capitalized letters in order to improve readability.) Perhaps because most of the formulations Laudan discusses there come from actual cases, some of the headings are phrased awkwardly, especially when compiled as a list. For example, 'BARD as the Sort of Doubt That Would Make a Prudent Person Hesitate to Act' (37) actually seems intended to point to BARD as the absence of such doubt. And while the other four entries on the list articulate conceptions of what it is to lack reasonable doubt, the fourth entry (originally, 'Reasonable Doubt as a Doubt for Which a Reason Could Be Given' (40)) offers a take on reasonable doubt itself. Laudan notes powerful reasons, many of which have been cited by courts, for rejecting each of the listed interpretations. 19 L Laudan, 'Por qué un estándar de prueba subjetivo y ambiguo no es un estándar' [2005] 28 Doxa: Cuadernos de Filosofía del Derecho 106. Translated by Sebastián Reyes Molina. (Original text: 'Lo que observamos aquí no es un EdP sino una excusa o un pretexto débil para condenar o absolver.') 20 Pardo says: 'As an intermediate decision rule between the preponderance and BARD rules, the clear-and-convincing-evidence rule exhibits the difficulties of both.' Pardo (n 9) 1096. He argues that the common understanding of this rule as a 'high probability … directs decision-makers to focus on their subjective beliefs rather than on features of the evidence.' Ibid 1097. 6 D. LOEB AND S. REYES MOLINA probable.21 Thus, perhaps Pardo overstates the case when he says, 'The phrase "preponderance of evidence" is ambiguous. The word "preponderance" refers to a superiority of some kind, but it may refer to a superiority of weight, power, importance, strength, or quantity.'22 Setting aside quantity, it could be that all of these factors should be understood as pointing to something like a more-probable-than-not standard.
If so, we might think Preponderance identifies a kind of quantum, understood not as an amount of evidence, but as a probabilistic threshold (or the like).23 More probable than not, however, does not supply triers of fact with a specific method for evaluating the quality and strength of the evidence, least of all doing it in some uniform way. Testimony, for example, will typically be evaluated in light of any number of other factors including the perceived credibility of the witnesses, the contrary and corroborating testimony of other witnesses, the plausibility of the overall narrative, the physical evidence, etc. But, how these factors are assessed and weighed is left to triers of fact to decide. Imagining that in such cases triers simply determine which side's evidence is better (or which side's picture is more probable or the better explanation) oversimplifies and distorts the decision-making process. Whatever quantum the Preponderance SPR provides is insufficient to support the degree of uniformity in either interpretation or application suggested by the specificationist picture. In sum, two of the most familiar SPRs seem unable to set a quantum of evidence against which to evaluate particular claims of fact. And, although the third might be thought to identify a kind of quantum, it cannot function in the way specificationism imagines. Even familiar attempts to clarify the standards by pointing to analogies and offering supposedly-clarifying language, we will argue, do not provide triers of fact with predetermined standards that can reasonably be expected to be understood and applied uniformly.24
3.3 The probability approach
In this and the following subsection, we explain and assess two competing theories often understood to be concerned with how triers of fact should evaluate whether the relevant quantum has been met in a given case. Although typically offered in answer to such normative questions, the competing theories point up alternative ways that we might understand specificationist descriptive claims as well. As we have intimated, some of the concerns that 21 This seems to be the prevailing view. It is thought that, 'preponderance requires "belief that what is sought to be proved is more likely true than not," and that jurors, "be persuaded that [the facts alleged in the plaintiff's case are] "more probably true than not true"'.' Ibid 1092 (citations to standard jury instructions omitted). By analogy, for the explanatory approach (discussed in Section 3.4, below) the threshold is sometimes thought to be 'better explanation'. Presumably, the same interpretations are also thought to apply when judges act as triers of fact. 22 Ibid 1091. 23 We thank an anonymous referee for this journal for pointing this out. For an alternative to probabilistic formulations, see, Section 3.4, below. 24 The Crown Court Compendium (a manual for conducting criminal trials, used by English judges) now says: 'The standard of proof is to the criminal standard: the prosecution proves its case if the jury, having considered all the evidence relevant to the charge they are considering, are sure that the defendant is guilty,' adding, 'It is unwise to elaborate on the standard of proof.' (CCC, 5.1, emphasis supplied.) It suggests that judges consider abandoning reference to reasonable doubt, cites a case proclaiming that 'sure and beyond reasonable doubt meant the same thing,' and offers jury instructions according to which the 'prosecution must make the jury sure that D is guilty.
Nothing less will do.’ (CCC, 5.2) In our view, however, sure is as indeterminate as BARD. (See https://www.judiciary.uk/publications/crowncourt-compendium-published/.) JURISPRUDENCE 7 have been raised about the alternatives (understood in normative terms) also help to show why neither approach fits actual practice in the way specificationism imagines. The two approaches are not separated by disagreement over where the legal system ought to set the quantum, but about how triers of fact ought to conceive of it in the first place.25 For one group, it should be understood as a numerical value that reflects a probability judgment. For another, it should be sought in the evaluation of narratives offered in explanation of a set of evidence. The availability of two approaches (one, itself subject to substantially different interpretations, as we will see) by itself introduces some ambiguity into the standards. In the absence of further guidance, triers of fact have no choice but to select which interpretation to employ—if they are to employ either and are even aware of the options. That said, further guidance is often available in the form of jury instructions or interpretive policies (typically set by higher courts) for judges. All the same, we argue, the standards, even when combined with such guidance, do not identify a quantum triers of fact can use to resolve the quaestio facti. The probability approach conceives the quantum of evidence as a threshold represented by a numerical value prescribed by the legal system. The main goal of the scholarship on the probability approach is to develop ‘mathematical formulations of such matters as the probative value of courtroom evidence and the burden of persuasion,’ and to ask, ‘which such formulations (if any) best further our understanding of the rules of evidence and how jurors or jurists should apply these rules.’26 For example, Kaplow says that ‘decisionmakers should behave as if following some probabilistic rule’27 and that ‘the legal system must choose an evidence threshold, denoted here by xT, which indicates the value of x above which liability will be assigned and below which there is no liability.’28 For some scholars, probability is thought of as the relative frequency of a type of event.29 This is the frequency interpretation of the probability approach. Thus, Allen writes, ‘When the fact finder determines the probability of some element to be .6, this means that the element will be true six out of 10 times in the set of similar cases.’30 Alternatively, under the Bayesian or subjectivist interpretation, probability is conceived of in terms of credences, degrees of belief, to which Bayes’ theorem should be applied.31 25 For attempts at bridging these two models, see Edward Cheng ‘Reconceptualizing the Burden of Proof’[2013] 122 The Yale Law Journal 1258–1278. DH Kaye, ’What is Bayesianism? A Guide for the Perplexed’ [1988] 28(2) Jurimetrics 161. Evidence-law scholarship’s serious interest in probabilistic reasoning was sparked: ‘in large part by the Californian case People v Collins where an erroneous attempt was made to use statistical reasoning to resolve problems of evidence. … [some Scholars argued] that the mistake in Collins was not the attempt to use mathematical probability to resolve problems of fact but the failure to utilize Bayes’ theorem to do so.’ JD Jackson, ’Analysing the New Evidence Scholarship: Towards a New Conception of the Law of Evidence’ [1996] 16(2) Oxford Journal of Legal Studies 311. 
27 Louis Kaplow, ’Burdens of Proof’ [2012] 121(4) The Yale Law Journal 773, citing Leonard Savage, The Foundations of Statistics (Dover 1954) 6–104. 28 Ibid 738. In civil cases the standard of proof is often conceived to be p >0.5 (for Preponderance of Evidence) and sometimes p >0.75 (for Clear and Convincing Evidence), while in criminal cases it is sometimes thought to be >0.9 (for BARD). See Laudan (n 19) 44; Ho (n 10) 179–181; Pardo (n 9) 1011; Jorge Larroucau, ’Hacia un estándar de prueba civil’ [2012] 39(3) Revista chilena de derecho 800–804, among others. 29 Notice that at face value this conception of probabilities might seem odd (or especially difficult to apply) when it comes to unrepeated events such as those that are dealt with in court. 30 Ronald Allen, ’The Nature of Juridical Proof’ [1991] 13(4) Cardozo Law Review 376. Arguably, the two conceptions can be combined into a single normative view. It could be, for example, that one’s credences with respect to particular events should in some sense reflect the frequency of such events. But it is hard to see a descriptive analogue here. That is, it is hard to imagine that people are actually setting credences by consulting those elusive frequencies. 31 Daniel Shaviro & Jonathan Koheler ’Veridical Verdicts: Increasing Verdict Accuracy through the use of Probabilistic Evidence and Methods’ [1990] Cornell Law Review 253. 26 8 D. LOEB AND S. REYES MOLINA Commonly thought of as a normative proposal about how judges should conceive their roles as fact finders and juries should be instructed, the probability view has been the target of several objections. Among the most prominent of these are two that focus on the extreme difficulty involved in actually implementing a probabilistic calculus in legal decision making.32 First, what we will call the natural way of reasoning objection claims that the probability account does not fit the way people actually reason. Although we sometimes assign vague probabilities to factual claims (as in, ‘It seems very likely that he will be here tonight’) we do not typically engage in mathematical reasoning when evaluating such claims. If the probability view were correct, triers of fact would be expected to use an atypical and unfamiliar reasoning process in deciding the quaestio facti. Understanding why that expectation would be unreasonable puts us in a position to see how criticisms of probabilism as a normative view can be used to undermine it if put forward as a description of actual practice. Consider the frequency interpretation. In the vast majority of legal cases there is no obvious way for triers of fact to gain epistemic access to the relevant frequencies.33 But, even if we were to imagine them to have such access, it seems extremely unlikely that most triers of fact could assess matters mathematically, as the probability view (interpreted as a recommendation) suggests they should. The unfamiliarity of sophisticated probabilistic reasoning makes it unlikely that most people, including most judges, have the relevant mathematical knowledge and skill necessary for even attempting it. Thus, we can object to the recommendation that triers of fact treat SPRs as probability thresholds on the basis of a legal analogue for the principle that ought implies can. Triers of fact have no legal duty to perform mathematical operations that they are (non-culpably) incapable of performing. The descriptive issue is even clearer. 
However plausible the principle that ought implies can, the principle that does implies can is even more so. It cannot be that triers of fact are actually doing what they cannot do. Indeed, even those with relevant mathematical knowledge and ability might not be able to deploy them in the way the probability model suggests. Setting aside the worry about discovering frequencies, the mathematics would often (at a minimum) be time consuming, even for those with the specialized knowledge necessary to attempt it. Moreover, it would be very difficult to communicate one’s reasoning (along with when and why it can be trusted) to those, such as fellow jurors or the litigants themselves, who are less mathematically savvy. This would undoubtedly discourage at least explicit reliance on it. In aiming to identify a precise quantum of evidence for each SPR, the frequentist version represents specificationism in its purest and most ambitious form. Its advocates might think identifying precise quanta necessary (or at least useful) for assuring that like cases are treated alike, arguably one of the most fundamental values in any legal system.34 But the uniformity that frequentist probabilism offers is illusory, for as a standard 32 For other objections to the probability approach see Lawrence Tribe, ’Trial by Mathematics: Precision and Ritual in the Legal Process’ [1971] 4(6) Harvard Law Review, 1334; LJ Cohen, The Probable and the Provable (Oxford University Press 1977) 58–69; LD Ross, ’Recent Work on the Proof Paradox’ [2020] 15(6) Philosophy Compass 1–11; Ronald Allen & Michael Pardo, ‘Relative plausibility and its critics’ [2019] 23 The International Journal of Evidence and Proof 11–14; among others. 33 Allen and Pardo say that ‘such data is simply unavailable for most items of evidence. . . . Given the limitations of such an “objective” approach, it is simply a non-starter.’ Ibid 12. 34 Lon Fuller, The Morality of Law (Revised edn, Yale University Press 1964) 33–41. JURISPRUDENCE 9 becomes more precise it becomes harder to imagine that typical triers of fact could actually apply it. The very precision that is sometimes seen as an advantage for frequentist probabilism is responsible for making it unworkable as a recommendation. And this shows that it is wrong as a description of how triers of fact actually reach their conclusions as well. But the Bayesian approach also appears to be unworkable. By itself, Bayes’ theorem does not set a quantum at all, least of all a precise one for each of the three most familiar SPRs. Rather, it offers a mathematical formula for adjusting one’s confidence in various possibilities, under conditions of uncertainty and given one’s prior probabilities (or priors)—one’s own assignments of probabilistic starting points (however one has arrived at them). Like frequentists, Bayesians can remedy the absence of precise quanta by assigning a specific confidence threshold to each standard. But, even if triers of fact knew how to perform the relevant calculations (and usually they do not) their decisions would depend on their priors in ways that undermine the claim that they are all following the standard—applying a uniform quantum that has been supplied to them. When decisions about whether a given quantum has been reached depend on something as idiosyncratic as personal assignments of probabilities, the quantum, however precise, cannot reasonably be said to be guiding the decision-making process in the way specificationism envisions. 
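To make concrete what the Bayesian picture would demand of a trier of fact, consider a minimal illustration of our own; it is not drawn from any of the authors discussed, and the threshold figures are simply those cited in note 28. Let H be the hypothesis that the burdened party's factual claim is true and E a single item of evidence. Bayes' theorem directs the fact finder to update as

P(H | E) = P(E | H) · P(H) / [ P(E | H) · P(H) + P(E | ¬H) · P(¬H) ],

and a threshold reading of an SPR then directs a finding for the burdened party just in case the resulting posterior exceeds the stipulated value (for example, P(H | E) > 0.5 for Preponderance or > 0.9 for BARD). Even in this simplified form, the trier of fact must supply a prior P(H) and two likelihoods, and must repeat the update, with each posterior serving as the next prior, for every one of the many items of evidence presented at trial.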
Arguably, the problem for the Bayesian approach runs even deeper, because the complicated probabilistic calculations it demands are often quite beyond the computational capabilities of even the most mathematically sophisticated of people. The second objection states that, at least in the legal setting, even mathematically savvy triers of fact cannot typically perform the complex mathematical operations that Bayes' theorem identifies, because humans lack the relevant computational power. We call this the lack of computational capacity objection. As Amaya claims: 'Computing probabilities in the way that Bayes theorem prescribes would require a storage capacity and computational power that goes far beyond our cognitive resources.'35 Here again, an argument, grounded in the principle that ought implies can, against probabilism as a normative proposal supports a corresponding argument, grounded in does implies can, against probabilism as a description.36 For these reasons, SPRs, interpreted as probability thresholds, do not supply triers of fact with usable quanta in the way specificationism imagines. We do not deny that they exert significant influence on the decision-making process. Presumably, they can point triers of fact in the intended direction and communicate some rough indication of how confident one is expected to be if one is to reach a particular decision. When that is true, however, triers of fact can rarely be accurately described as determining whether the identified probability thresholds have been crossed. Insofar as triers of fact even attempt to follow SPRs and accompanying instructions, it is not reasonable to think that they typically do so by assigning probabilities to events and seeing whether those probabilities are high enough to meet the standards. 35 Amalia Amaya, The Tapestry of Reason: An Inquiry into the Nature of Coherence and its Role in Legal Argument (Hart Publishing 2015) 84. 36 As things stand, juries are typically not allowed to have or use calculators or computers when deliberating. While judges might not be forbidden from using them to calculate probabilities, it seems unlikely that many are doing so. They still face a lack of access to frequencies, and most would not know how to do complex probability calculations even with the assistance of such devices. (The system could change, no doubt, but we are attempting to understand how it operates now.)
3.4 An alternative to probabilism: the explanatory approach
For reasons that include those considered above, a number of scholars have recommended an alternative to probabilism, the explanatory or holistic approach. Supporters of this view 'take inference to the best explanation [IBE] to be the basic kind of nondeductive reasoning (if not the only legitimate kind).'37 According to this way of thinking, the explanatory power of a hypothesis is what justifies triers of fact in accepting the facts it envisions as proven. Triers of fact evaluate whether the quantum has been met by assessing the quality of the explanations that have been offered to account for the evidence adduced in trials. Scholars defending the explanatory approach as a descriptive theory claim that IBE does not suffer from the practical difficulties thought to beset the probability approach. In fact, they argue, we all employ IBE-style reasoning on an everyday basis, whether we know it or not.
As Amaya claims: 'It is a main advantage of holistic theories of evidence that, unlike probabilistic models, they do not impose upon jurors a way of reasoning that is alien to them, but that, on the contrary, build upon legal decision-makers' ordinary forms of inference.'38 The explanatory approach assumes that evidence in law is not an arcane parcel of knowledge and that its evaluation does not require specialized mathematical know-how and ability. Rather, it claims that in law triers of fact employ the same reasoning techniques that they employ in everyday affairs. While the details vary, the rough idea is that the parties advance explanatory hypotheses and triers of fact make judgments of comparative plausibility. Proponents of the explanatory view do not believe that comparative plausibility reduces to a form of probabilism. For example, Allen argues that while 'more plausible,' '. . . may mean 'more likely' . . . this does not mean comparative plausibility reduces to probability. Rather, what is 'plausible' is a function of the explanation, its coherence, consistency, coverage, consilience, and how it fits into the background knowledge possessed by the fact finder.'39 We agree that it is more plausible to describe triers of fact as employing a familiar and common form of reasoning like IBE.40 But a methodology is not a standard. If the explanatory approach does not reduce to probabilism, that leaves us with questions about how the relevant quanta are to be identified and how the approach can account for the 37 Ibid 196. See also, Michael Pardo & Ronald Allen, 'Juridical Proof and the Best Explanation' [2008] 23 Law & Philosophy 223–268; (n 33) 5–59. 38 Ibid 103. 39 Ronald Allen, 'Explanationism All the Way Down' [2008] 5(3) Episteme: A Journal of Social Epistemology 325–26. See also, Pardo and Allen (n 37), which does not deny that fact finding aims at conclusions that are in some sense probabilistic. Even so, the authors argue, the IBE reasoning process is comparative in a way that probabilistic reasoning is not. Whether the approach describes and recommends a process is a matter of degree, and we do not wish to quibble about where to draw lines. We simply note that insofar as a process has been identified, it is one whose execution leaves much to the discretion of triers of fact, and this shows a limitation in the explanatory approach's own explanatory power. 40 As Allen & Pardo point out: 'In many areas of life, from hard science to managing one's everyday affairs, explanatory considerations help to guide inference. From the fact that some proposition would explain a given phenomenon we infer that the proposition is true. And when several propositions may explain a given phenomenon we infer the one that best explains it. … Because legal proof falls somewhere between science and managing one's everyday affairs, it should perhaps not be surprising that the juridical proof process involves similar inferential practices.' (n 37) 223–34. JURISPRUDENCE 11 differences among the SPRs. Just as Bayesian probabilists must identify target levels of confidence—quanta, to which triers of fact are expected (and thought) to compare their own confidence levels—proponents of the explanatory approach must tell us how good an explanation must (seem to triers of fact to) be in order for triers to reach a result under a particular SPR. And, insofar as the explanatory approach is meant to be descriptive, they must give us reason to think that the practice of triers of fact conforms to its description.
One well-developed answer is offered by Allen and Pardo.41 On their view, different SPRs identify different ‘explanatory thresholds’ and these set the actual quanta for various classes of cases. For example, they say that when it comes to Preponderance, ‘the party with the burden must provide an explanation of the evidence and events that is better than the alternative(s) in light of the evidence and the cognitive capacity of the fact-finder.’42 In this case, it might appear reasonable to say that, to the best of their ability, triers of fact are expected to seek the best of the available explanations of the evidence placed before them.43 When it comes to the other SPRs, however, the ‘best explanation’ terminology seems misleading. What is sought is not the best overall explanation, but an explanation good enough to meet some as-yet-unspecified standard. Allen and Pardo continue: ‘The BARD Standard is met when there is a plausible explanation consistent with guilt and no plausible explanation consistent with innocence.’ For Clear and Convincing Evidence, ‘parties with the burden of proof need not eliminate any reasonable doubt, but they must do more than prove that the elements are slightly more likely to be true.’44 As a description, the explanatory approach represents an advance over probabilism, in part because it eschews any claim that triers of fact make precise mathematical calculations that seem unavailable to most actual people. And, as before, explanatory thresholds might provide triers of fact with significant guidance.45 But, even Allen and Pardo recognize that no plausible theory can ‘resolve all uncertainty surrounding the standards.’46 Indeed, phrases like ‘no plausible explanation’ and ‘do more than prove’ are once again facially vague. Thus, idiosyncratic interpretations of those standards, whether explicit or implicit, are bound to play a significant role in the way triers of fact understand the task before them. For that matter, terms like ‘coherence,’ ‘consilience,’ and ‘fit’ are also vague and their relative importance unspecified, leaving decisions (if any) about how to interpret and use them in fact finders’ hands, as well. 41 Allen & Pardo (n 32) 15–17 and 27–29. Ibid 27. 43 Although Allen and Pardo purport to be defending the explanatory approach as a descriptive theory, normative considerations sometimes make their way into their reasoning. For example, they say that, ‘the conventional probabilistic account . . . is non-comparative in ways that conflict with how proof proceeds at trial and with the goals underlying the standards of proof.’ (Ibid 14, emphasis added) This need not represent a confusion on their part. As we will see, even statements like this can be given a descriptive interpretation. Moreover, in principle, clues about what the legal system is doing might sometimes be found in evidence of what its participants think it is or should be doing. 44 Ibid 27–28. 45 Under any plausible interpretation of an SPR, some cases are easy to evaluate, as, for example, where almost all the evidence goes one way. In such cases, indeterminacy in the quantum and the epistemic idiosyncrasies of triers of fact become less relevant, because the results of different interpretations and approaches to reasoning converge. But, even there, triers of fact do not assess the evidence by following a ready-made and commonly-understood roadmap to a ready-made and commonly-understood destination—a threshold in the form of a quantum. 46 Allen & Pardo (n 32) 27. 42 12 D. LOEB AND S. 
REYES MOLINA Because the evidence must be evaluated under standards that are themselves vaguely articulated and incompletely understood, with no roadmap for how to proceed in determining whether they have been satisfied, the explanatory approach can only partially explain how decisions are made. Decisions about whether the standards have been met depend significantly on the reasoning abilities and dispositions and personal knowledge and beliefs triers of fact bring to the decision-making process. In criticizing a number of interpretations of BARD that focus on the personal beliefs of triers of fact, Pardo says that since ‘any connection can exist between subjective beliefs of decisionmakers and the truth, these interpretations of the rule cannot perform’ as intended.47 But, the same ‘subjectivity’ besets even the more-nuanced explanatory approach he and Allen now favor, leaving the standard vague and triers of fact without the sort of guidance needed to apply them in the way specificationism imagines. 3.5 Allocating the risk of error With our central argument in place, we can quickly dispense with the claim that in employing quanta set by SPRs, triers of fact allocate the risk of error among the parties, minimizing the risk of certain types of errors thought to be especially undesirable. In the case of BARD, for example, what is thought to be minimized is false convictions in criminal trials, not errors in general, since minimizing the risk of errors of one sort means increasing the risk of complementary errors (in this case, failures to convict the guilty). But it is an exaggeration to say that the standard even aims to minimize false convictions. Convicting no one would diminish that risk to zero, but responsible people agree that the social costs of such an approach would be too high. For this reason, we employ BARD instead of the No Doubt at All standard or doing away with convictions altogether. The one SPR that is often claimed to minimize overall errors is Preponderance of the Evidence. Statements to the effect that it does are widespread in the literature. For example, ‘The common law consciously chose its low standard for civil cases to pursue error minimization.’48 Although Preponderance awards ties to the defendant (and no one thinks that when the evidence is in equipoise defendants are usually in the right) it might get us as close as we can come to overall error minimization. When triers of fact resolve questions in favor of the party whose evidence they judge to be better, the thought goes, errors of fact are largely minimized, at least given certain assumptions, for example that the evidence correctly represents the relevant facts and that triers of fact can evaluate the evidence well.49 Because Preponderance requires interpretation by triers of fact, the assumptions are often false. In practice, the SPR fails to supply them with a determinate and uniform quantum of evidence. Moreover, as before, the difficulty or impossibility of making precise and accurate assessments and the idiosyncratic ways triers of fact apply the SPRs (in part due to their differing abilities) cause their approaches to decision making to vary widely. Thus, even Preponderance falls short of guiding the process in the way specificationists understand it to, and it is an exaggeration to claim that SPRs 47 Pardo (n 9) 1095. Emily Sherwin & Kevin Clermont ’A Comparative View of Standards of Proof’ [2002] 50(2) The American Journal of Comparative Law 258. See also pp 252, 272. 
49 For a brief discussion of some conditions of this sort, see Allen & Pardo (n 32) 10. 48 JURISPRUDENCE 13 allocate the risk of error. It would be more accurate to say that they partially allocate it or that they contribute to its distribution.
3.6 Two predictable objections
Specificationism is likely to be familiar, not just to those who think about SPRs in an academic context, but to practitioners acquainted with the way the standards are discussed in the real world. Yet, practitioners would also recognize the vagueness of the most common SPRs, the variability we can expect in the thinking patterns and abilities of those we call upon to assess factual claims in legal contexts, and the disorder that ensues when our lofty and aspirational glosses meet actual practice. Indeed, we suspect, most practitioners not only recognize both perspectives but feel the pull of each, despite having some sense that they are in tension. We would be surprised if scholars working in the area did not recognize and feel the pull of both perspectives as well. Moreover, as we have seen, some scholars are careful to note the limitations of their own and other frameworks in accounting for the untidiness of real-world conditions and eager to acknowledge the roots of that messy state of affairs in vague SPRs and variability in the thinking patterns and abilities of those we call upon to assess factual claims in legal contexts. This raises the possibility that specificationism as we have described it misrepresents the views of some or all of those to whom we have attributed it. They, along with some of the many others who have voiced similar views, might claim to be defending more modest positions than the one we have described. Interestingly, the more moderate versions of specificationism they might claim to be defending take different and partially opposing paths. The first strategy is to say (we assume sincerely) that one never intended to claim that there are quanta of the sort we have been discussing. Instead, some theorists might well agree with us that SPRs (together with accompanying interpretive guidance) merely point toward vaguely-articulated targets that do not even approach the precise formulations we have identified as quanta. A powerful reason for thinking at least some of those we have called specificationists would answer along these lines can be found in their discussions of vagueness in the standards and the variety of ways triers of fact actually reason. For example, Allen and Pardo are careful to acknowledge that SPRs are 'vague, ambiguous, and often have uncertain applications,'50 sometimes even presenting novel and compelling arguments to that effect. They also recognize and illuminate the significant role played by the doxastic idiosyncrasies of triers of fact and the variability (in interpreting and applying the standards) these factors inject into the decision-making process. These do not sound like concessions. For reasons like these, we think it highly likely that Allen and Pardo would deny ever having accepted, put forward, or defended anything as stark and extreme as specificationism in the form we have offered.51 50 Ibid 27. 51 See also, Larry Laudan, 'Is Reasonable Doubt Reasonable?' [2003] 9(4) Legal Theory 295–331.
Thus, when Pardo says, 'Proof rules in law specify when a disputed fact has been proven … by specifying a level or standard of proof: "beyond a reasonable doubt," "by preponderance of evidence," or "by clear and convincing evidence",' perhaps what he means is that SPRs 'specify . . . a level' only insofar as they identify and invoke the words of one of these rules. Similarly, perhaps, Ho's claim that 'the decisional threshold operating in each category does not vary with the circumstances of individual cases,' only means that the threshold (vague though it is) does not vary. On this reading, every case of a given type is treated alike, but only in that the same vague words (like 'beyond a reasonable doubt') are provided to all triers of fact. If some or all of those we have called specificationists reject its central theses or accept significantly more moderate versions of it than the one we have identified, we are happy to find ourselves in greater agreement with them than initially appeared. But the position we have identified is no straw man. It is what at least some of the words these scholars employ actually mean, at least on the most natural readings available. Phrases like 'determine the probability thresholds' (Stein), identify the 'required degree' of warrant (Haack), and 'allocate the risk of erroneous decisions between the parties' (Pardo)52 represent each of the quoted figures as defending specificationism in something like the extreme form we described, because words like specify, allocate, degree, and threshold all indicate or strongly suggest precision and uniformity. Using such words to characterize decision making by triers of fact is extremely misleading because it exaggerates the role SPRs play in that process. That is especially true given the availability of any number of ways of more clearly expressing the less extreme claims. Theorists wishing not to be interpreted as uncompromising specificationists have at their disposal phrases like partially specify and to some degree specify (despite their resemblance to the oft-derided but useful very unique) along with identify a vague and poorly defined range, gesture at, hint at, point in the direction of, and so on. None of these phrases exaggerates the precision or impact that SPRs bring to the decision-making process in the way done by any number of terms commonly used to describe them and the role they play. Moreover, the quotations we offered were hardly cherry-picked. Indeed, the claims we point to are so widespread in the legal world that some of them might well have acquired the status of truisms. Presumably, some of them trace their origins to platitudes appearing over the course of a long tradition of scholarly and judicial commentary on SPRs, platitudes now familiar to practitioners and scholars alike. They reflect our ideals and aspirations for the legal system, calling to mind the magisterial image of a fair legal system—one that assures that like cases are treated alike. But, they exaggerate the system's success at realizing those ideals. Claims like Ho's convey the same impression of impartiality and control over the uniformity of the process. In one sense, of course, these impressions are accurate. It is true, for example, that all defendants in certain types of criminal cases are tried under an identically-worded standard (BARD) and that triers of fact within the same jurisdiction are often sent off with identical instructions. But, in important respects, as we have seen and as some of those we have been discussing explicitly acknowledge, defendants are not treated equally in the application of that standard by triers of fact.
For this reason, even if some of those we have called specificationists hold more nuanced and accurate views than those we outlined above, as the first objection states, 52 Page citations and specific sources for all of the passages quoted above are identified in Section 2, above. JURISPRUDENCE 15 they are often partially responsible for this state of affairs, for, they have frequently represented their views in sloppy and misleading ways. If that is so, we welcome their concurrence and we would welcome their clarification as well. In contrast to the reply we just considered, however, another strategy (presumably more palatable to probabilists) is to embrace the claim that SPRs identify quanta, but to hold that the quanta nevertheless serve only as ideals and that nobody expects triers of fact to conform neatly to some glamorized picture of how decisions are to be made. If triers of fact are (typically) at least trying to understand and apply the standards (understood, say, as probability thresholds) then it is reasonable to think that they are being guided by those thresholds in reaching their decisions, more moderate specificationists of this sort might argue. But, although one can be engaged in an activity without being able to perform it perfectly or even capably, some putative failures are so severe that it is not appropriate to think of someone as engaged in the activity at all.53 There are situations in which it is more accurate to say that triers of fact are doing something other than determining (even in shoddy fashion) whether an actual quantum has been met—even if they are willing to characterize their decision in the favored way. For example, surely it is almost never true that triers of fact would even attempt to make the sort of complicated mathematical calculations that the Bayesian approach to probability assessment recommends. But even if some of them claim to be reasoning this way (and, in some cases, even make feeble attempts in that direction) it would be misleading to say that they are typically trying to calculate the probabilities in Bayesian fashion. Although these are largely empirical questions, it seems safe to assume, that in some cases (especially where the task of applying the standard is difficult or impossible) triers of fact wind up merely pretending to apply it. In others, they wrongly think that the method of reasoning they are employing is the method they have been instructed to employ. In still others, they are simply ignoring that methodology (perhaps because they cannot figure out how to use it). Still, in some cases, triers of fact are trying the best they can to determine whether an SPR (as it has been explained to them) has been met and are in some sense succeeding in at least trying to apply it. Even so, trying to make a reasonable determination whether a particular quantum has been satisfied is not the same as being guided to one’s conclusion by comparing one’s estimation of the evidence to an identifiable minimum, nor is it fully allocating the risk of error. In the end, both of the objections we have been considering grant that specificationism in its original form is incorrect, as we have argued. But, if those charged with deciding the quaestio facti are not typically being guided to their results by quantum-providing SPRs, then we still need an account of how decisions are being made and the role SPRs play in the process. 
The answer, we think, is that triers of fact are exercising a limited discretion they have been granted under an SPR functioning as a competence norm. We develop a sketch of this position, the competence-norm approach, in the next section.
53 The mere presence of mistakes does not undermine the claim that the activity is taking place. One is still multiplying 6-digit numbers even if one errs from time to time.
4. The competence-norm approach
We have argued that specificationism exaggerates the impact SPRs have on the decision-making process, giving an inflated impression of the explanatory power of the specificationist picture and underplaying the role and contribution of triers of fact. Our case against it becomes stronger if we can offer a more plausible explanation of what triers of fact are doing. In this section, we sketch the outlines of an alternative we think more accurately portrays the way decisions of fact are made in legal cases. It also helps us to see why a more thoroughgoing explanation is so difficult to come by.
4.1 Descriptive and normative questions again
We begin by clarifying the nature of the descriptive questions and claims on which we focus and contrasting them with certain normative questions with which they might otherwise be confused. When we said that we would be setting normative matters aside, the examples we gave concerned how triers of fact should decide and how the system should be designed or how it works best. But other questions and claims, perhaps less obviously normative, can be found in the vicinity. Specifically, many of the descriptive claims we are concerned with here have normative counterparts, often phrased in exactly the same words. For example, we could understand the claim that the legal system confers a power on triers of fact as a normative claim to the effect that granting competence legitimizes the exercise of that power or as any of various descriptive claims: that triers of fact make decisions they think of as legitimate, for example, or that the system endorses that exercise of power in the sense that decisions made under its framework are ordinarily given effect and treated as valid. Similarly, although we have set normative matters aside, we can ask whether an action creates something the law treats as a duty and will ordinarily enforce as such, or whether certain people can as a matter of fact issue directives that will, in certain circumstances, be viewed as obligating others and enforced by the state. Indeed, legal validity itself can be understood descriptively in something like this way, and not just normatively. The account we offer is meant to be entirely descriptive. In presenting it, we draw heavily on the insights and theoretical apparatus of Alf Ross, especially as outlined in his book Directives and Norms.54 Explaining Ross's account of competence norms allows us to show that SPRs exhibit many of their main features and in that way bolsters our case for the plausibility of treating them as competence norms. But much of what Ross and others say employs the ambiguous vocabulary that we have been discussing. For our purposes, it is irrelevant whether they intended to use words like validity, authority, and even phrases like having competence in their normative or descriptive senses, for we are interested only in borrowing a model we can use to begin articulating our view.
For this reason, we take the liberty of characterizing Ross's view (and the explanations offered by others) as purely descriptive. We do not think this reading does violence to Ross's approach, but for our purposes it is unimportant whether some or all of what he and others say about norms that confer competence can plausibly be given a normative interpretation.
54 Ross laid out a different theory in On Law and Justice (1959). There is a debate regarding whether the two theories are compatible. For different positions on this debate, see: Jordi Ferrer, Las normas de competencia: Un aspecto de la dinámica jurídica (Centro de estudios políticos y constitucionales 2000) 86–96; Alejandro Calzetta, 'Los enfoques sobre la competencia de Alf Ross' [2018] XXXIII(1) Revista de derecho UACh 9–29.
4.2 Competence norms
We understand SPRs to be norms that confer a limited power on triers of fact to decide whether a given standard has been met—according to whatever vague or inchoate interpretation (if any) of the rule in question they implicitly or explicitly employ.55 SPRs do not determine a uniform quantum of evidence required for the decision about the quaestio facti. Nor do they fully allocate the risk of error, although the behavior of triers of fact, no doubt influenced by an SPR, collectively results in a particular distribution of the risks of error for a given context.
Following Ross, we define 'competence' as 'the legally established ability to create legal norms (or legal effects) through and in accordance with enunciations to this effect.'56 Ross distinguishes two types of competence: (1) private autonomy and (2) public authority.57 The first is the power of every person to shape their legal relationships according to their interests within the framework of the legal system.58 The second is power granted to specific persons (or those occupying certain roles) in accordance with certain rules of law.59 We focus only on the second type. According to Ross:
[T]here are the rules of competence that create what we call a public authority. They have the following features. They create a power only for a certain qualified person. The required qualification consists in a designation in accordance with certain rules of law. … The substance of this power is a capacity to create rules that bind others (statutory enactments, judgments, administrative acts).60
We can distinguish between having competence and exercising competence. 'To have competence is to possess the ability to change legal positions by performing a special kind of act. … To exercise competence is to bring about the intended change of legal positions by performing a competence-exercising act.'61 As we have suggested, these are to be interpreted descriptively. Having competence is having the power to effect changes in the legal landscape (as opposed to the right or duty to do it) and exercising competence is doing so (as opposed to doing so legitimately). Similarly, we will treat 'validity' in the legal context as the suitability in the eyes of the legal system (including the bulk of those engaging with it) of an action for producing legal consequences. Competence is a necessary condition for legal validity, 'in the sense that only a competent person can change legal positions—if you lack the competence, you cannot change legal positions, even if you perform the competence-exercising act properly and in accordance with any formal requirements laid down by the pertinent legal norms.'62 Furthermore, even if a person is competent to change or create a legal position or legal rule, the validity (in the descriptive sense) of an adjudicative decision or other act is conditioned on the (perceived) fulfillment of the formal requirements for exercising the competence identified in the grant of authority (itself an exercise of a different competence by the granting party).
55 Whether the system was designed with the aim of granting competence to triers of fact is yet another descriptive question. Although it is hard to believe that those responsible were utterly naïve about the limited degree to which SPRs guide decisions on the quaestio facti and the inevitable lack of uniformity in decision making that ensues, resolving questions about those aims seems irrelevant to our present purpose of explaining what triers of fact actually do.
56 Alf Ross, Directives and Norms (The Lawbook Exchange 1968) 130.
57 Ibid 132–3. Along the same lines see: Hans Kelsen, Pure Theory of Law (2nd edn, University of California Press [1960] 2005) 148–150. For a different typology see: Torben Spaak, 'Explicating the Concept of Legal Competence' in J Hage (ed), Concepts in Law (Springer 2009) 78–79.
58 Ross (n 56) 132.
59 Ibid 133.
60 Ibid.
61 Torben Spaak, 'Norms that Confer Competence' [2003] 16(1) Ratio Juris 91.
Competence norms themselves embody the conditions necessary for the exercise of the power that they confer. According to Ross:
These conditions usually fall into three groups: (1) those which prescribe what person (or persons) is qualified to perform the act which creates the norm (personal competence); (2) those which prescribe the procedure to be followed (procedural competence); and (3) conditions which prescribe the possible scope of the created norm with regard to its subject, situation, and theme (substantial competence).63
Norms that confer competence 'would be addressed to the competence-holders themselves, saying that by performing a certain kind of act in a certain kind of situation they can bring about a certain change of legal position.'64 By establishing the way that the competence-holder is able to exercise her power, norms of competence embody limits on the power being granted. Being seen to have met the conditions is ordinarily all it takes for an act to be treated as legally valid. But if the competence-holder is seen to have exercised her power in a way other than the one prescribed by the competence norm, then that exercise will ordinarily be deemed invalid and might be declared void.
4.3 Standards of proof as norms that confer competence
We can understand SPRs along the lines of the framework just articulated. First, SPRs identify who the competence-holder is: the trier of fact. This is personal competence, in Ross's terms. Personal competence can be identified either explicitly in the formulation of the standard of proof rule or implicitly. It is explicitly stated when the competence-holder is identified (typically in the abstract) in the norm's articulation. For example: 'The jury may only convict if they have evidence of guilt beyond a reasonable doubt.' Personal competence is sometimes identified implicitly within the structure of a decision-making process, when, for example, there is only one organ that can issue a verdict. Whether a competence is identified explicitly or implicitly is an empirical matter.
Second, SPRs identify procedural requirements for the exercise of competence.
That is, they determine what procedure competence-holders must follow in order to exercise their competence in a way that will be treated as legally valid. But the procedure they identify might be very thin, amounting to little more than a grant of limited discretion to decide factual questions under the SPR that has been provided, without any suggestion about how to evaluate whether it has been satisfied. In some cases, efforts are made to limit that discretion via instructions for jurors and other forms of guidance for judges called upon to resolve questions of fact. Still, triers of fact are granted substantial discretion to act as they choose within the limits set by the granting authority.
62 Spaak (n 57) 72.
63 Ross (n 56) 130.
64 Spaak (n 61) 94.
Third, SPRs specify substantial competence. They identify the power that the competence-holder may exercise. In the case of SPRs, the conferred competence is the power of triers of fact to decide the case before them under the standard with which they have been provided. That power is not unlimited. The power sometimes vested in juries to resolve factual questions in trials, for example, is not the power to fire the judge! If it is believed that triers of fact have acted outside their competence or failed to follow procedural requirements, their authority might be removed and (in some cases) their decision held invalid. For example, in jurisdictions holding jury trials, judges sometimes have the power to declare a mistrial, direct a verdict, or issue a judgment notwithstanding the verdict (or JNOV65). The judge's power to reverse or head off a decision by the jury flows from a different grant of competence. It comes with its own set of conditions, often identified in statutes that announce and in that sense grant trial judges that power (and favor their exercise of competence over the jury's). Such decisions are themselves subject to the exercise, in even more unusual cases, of competence on the part of higher court judges to reverse the decisions of trial court judges.
4.4 What can and cannot be explained
Specificationism, as we have characterized it, offers to explain decision making by triers of fact as conforming (more or less) to a simple, though very general pattern. No matter the context, it holds that triers of fact are (typically and to the best of their ability) following a standard, whether an identifiable quantum or an explanatory threshold, to whatever factual determination they make. The standard guides them to their decision, not mechanically, but by giving them a threshold against which to compare the evidence, itself evaluated according to the method of reasoning appropriate for that approach (quanta as probabilities assessed in light of the frequency of events of the sort in issue at trial, quanta as confidence levels updated according to Bayesian calculations, or quanta as explanatory thresholds evaluated according to a particular understanding of IBE). We have argued, however, that the foregoing is only very rarely the correct description of decision making by triers of fact. Instead, we think that in almost every case, triers of fact have been granted a limited power to make the decision as they see fit, as long as the grantees are not thought to have crossed lines set under the competence norms by which that power was assigned. We acknowledge that the explanatory framework we have sketched is also quite simple. Indeed, the approach leaves unexplained much of what specificationism purports to explain.
In particular (setting aside the varying details concerning norms of procedural competence), the competence-norm approach does not offer any explanation of how triers of fact exercise their power to make decisions about whether factual matters have been established under the vaguely identified standards the SPRs gesture at. Not only does the competence-norm approach refrain from giving an incorrect explanation of the way triers of fact make decisions under SPRs (as specificationism does), it refrains from attempting to offer any further general explanation.
65 A JNOV is now called a judgment as a matter of law in the United States Federal Court system. Plausibly, where judges have the power to intervene in this and other ways, it is the power to enforce a substantive limitation (implicit or explicit) on juries' power (that their decisions must be reasonable) and also procedural limitations on how they are to exercise it (that they must decide in light of the evidence and that they must correctly apply the law).
Far from being a shortcoming of the competence-norm approach, however, this gap in explanation represents a significant advantage. The reason is that the norms allow triers of fact too much latitude—discretion concerning how to proceed—for further general explanation to be reasonable. It would be pointless to search for a general explanation encompassing each of the disparate ways triers of fact exercise their discretion. Given a well-identified competence norm, it might be theoretically possible to say more about how a particular trier of fact has exercised the granted authority in a particular case. But we typically lack access to such information, and the outcome of our inquiry would be of little or no theoretical interest anyway.66 Considerations like these support the hypothesis that an explanation like the one provided by the competence-norm approach is needed if specificationists adopt either of the two strategies for responding to our critique. The first response was to simply reject specificationism, as we do (or to note that one never intended to hold it), leaving what happens instead unexplained and thus crying out for such an explanation. The second strategy was to say that although specific quanta are identified for triers of fact, all we actually expect is a sincere attempt to apply the SPRs. But since the approach presupposes that we do not supply triers of fact with clear and followable instructions for how to assess whether identified quanta have been met, and since we rarely police the fact-finding process or ask triers of fact to justify or explain their decisions, we appear to be in the same position. Once again, we have not yet explained what triers of fact are doing within whatever latitude the decisional context affords them. Here too, the competence-norm approach is a good candidate for explaining as much as can reasonably be explained. In fact, even if the extreme form of specificationism we identified at the outset were correct about triers of fact being asked to decide (in some relatively uniform and quantum-guided way) whether an identified quantum has been met, there would be a need for an explanatory story of this sort.
No matter how clearly the quantum is specified and how detailed and followable the guidance given to triers of fact about how to implement it, there will still be significant room for triers of fact to judge the evidence from a point of view others do not fully share and, even if conscientious, to evaluate it by employing the supplied method (if at all) in a way that is to some degree idiosyncratic. Perhaps surprisingly, on any of the approaches we have considered, an explanation of the sort offered by the competence-norm approach seems called for.
66 Indeed, the inaccessibility of the information we would need serves to protect the latitude granted to triers of fact (in addition to the integrity of their deliberations). Officially, we do not usually want to know the details of their deliberations. Even so, it is theoretically possible that we could use social science to develop reasonable hypotheses about broader trends and likelihoods, especially if researchers had greater access to the deliberative process. No doubt such hypotheses would be of greater interest to those hoping to improve the system than would one-off explanations of particular decisions.
5. Conclusion
We have argued that a familiar and seemingly widely shared understanding of SPRs, what we call specificationism, is inadequate for explaining certain aspects of the decision-making process that it purports to explain. SPRs do not specify uniform quanta that typical triers of fact can use to evaluate whether the evidence meets the thresholds those rules embody for decisions about the quaestio facti. Nor do they allocate the risk of errors in the decision about the quaestio facti, although they contribute to its distribution. According to the competence-norm approach, decisions about how to interpret and implement the SPRs are, for the most part, left to the discretion of triers of fact, who are provided with only vague and usually unenforced guidance about how to make those decisions. Treating SPRs as competence norms might seem an unappealing option to some. Discretion is messy and its results can be uneven, failing to treat litigants with much more than a formal token of the uniformity we look for in a legal system. As unsettling as this might seem, however, we are convinced that triers of fact exercise broad discretion to decide factual issues largely as they see fit, whether we acknowledge it or not. At a minimum, then, it seems better to recognize such discretion where it is being exercised than to accept a more orderly but less accurate account. In fact, we suspect that no legal system can at the same time identify precise quanta and supply triers of fact with clear instructions that they can use to evaluate (in some reasonably appropriate and uniform way) whether the evidence produced at trial rises to the specified level. If we are right, then we should abandon the old platitudes about SPRs specifying how much evidence is required and about how they treat everyone the same. In the absence of the sort of guidance specificationism imagines, triers of fact cannot avoid exercising significant discretion. Finally, our hypothesis that discretion is unavoidable when people are asked to assess factual claims in legal settings, if borne out, would also provide us with an important clue in our effort to answer normative questions about how SPRs ought to work and how the decision-making components of the legal system are best designed. Ought, after all, implies can.
Disclosure statement
No potential conflict of interest was reported by the author(s).