
MORAL EXPERTS, DEFERENCE & DISAGREEMENT

Jonathan Matheson, Scott McElreath, & Nathan Nobis
11/2/17

Contents: Introduction · Expertise · Moral Expertise · Non-Experts Identifying Moral Experts · Moral Deference · Normativity · Accessibility · Value Differences · Disagreements and Moral Expertise · Conclusion

Introduction

We sometimes seek expert guidance when we don’t know what to think or do about a problem. In challenging cases concerning medical ethics, we may seek a clinical ethics consultation for guidance. The assumption is that the bioethicist, as an expert on ethical issues, has knowledge and skills that can help us better think about the problem and improve our understanding of what to do regarding the issue. The widespread practice of ethics consultations raises these questions and more:

• What would it take to be a moral expert?
• Is anyone a moral expert, and if so, how could a non-expert identify one?
• Is it in any way problematic to accept and follow the advice of a moral expert as opposed to coming to moral conclusions on your own?
• What should we think and do when moral experts disagree about a practical ethical issue?

In what follows, we address these theoretical and practical questions about moral expertise.

Expertise

In an increasingly specialized world, the need for expertise continues to grow. Most people could not tell you how their computer works. Most people could not fix their car if it broke down. Most people cannot determine why they are sick or how they can get better. Put bluntly, we need experts. We need people we can rely on for both their information and their abilities. But what exactly is expertise?

The first thing to note is that expertise is relative to a domain. No one is just a flat-out expert. Rather, one could perhaps be an expert car mechanic, an expert pianist, or an expert physicist. Expertise can also be more or less fine-grained.
One might be an expert on Honda automobiles but not an expert on vehicles more generally, an expert on a certain type of jazz but not a musical expert more generally, and so on.

Expertise is relative to a time as well. Expertise can come and go. It can come through the acquisition of the relevant knowledge and skills, and it can go through the loss of such knowledge and skills. Experts in a domain need to stay ‘up to date’ in their domain to retain their status as experts. This will require keeping up with the state of knowledge and keeping a certain set of skills finely honed.

Following Goldman (2001), we can distinguish several factors relevant to expertise. First, expertise is composed of both a cognitive component and a skill component. To be an expert you need to have a significant amount of knowledge within the domain of your expertise, and you must be able to apply that knowledge to action. Different domains of expertise will place differing emphasis on these two components. Some domains of expertise are more skill, or know-how, oriented. Being an expert mechanic or pianist, for instance, is primarily about possessing a certain set of skills, even though some propositional knowledge is required as well. In contrast, other domains of expertise are more intellectually oriented. Being an expert physicist or epistemologist, for instance, is primarily about possessing a certain body of knowledge, even though some skills are required here as well.

Our focus in this paper will be on intellectual expertise in general and moral expertise in particular. While intellectual expertise is primarily concerned with having a fund of knowledge, intellectual experts must also be able to deploy and apply that knowledge. Intellectual expertise requires the skills of applying the expert’s knowledge to new cases and discovering answers to new questions in the domain of expertise.
An expert isn’t merely a “scholar” of existing knowledge; an expert must be able to apply and extend that knowledge to new questions and problems. Intellectual skills include being able to construct a valid argument, recognize and respond to objections, revise an argument in light of counter-evidence, and correctly weigh reasons. These skills also include reflecting on how an argument or principle applies to other issues, being able to find inconsistencies between views, and being able to make revisions to arrive at a consistent set of views.

Second, expertise comprises both comparative and non-comparative elements. To be an expert in some domain you must possess significantly more knowledge in that domain than most. This is the comparative element. Experts are unusual: if everyone were an expert in some domain, then no one would be an expert in that domain. Experts have significantly greater knowledge and/or skills than most people. That said, expertise is not a wholly comparative matter. Someone could have the most knowledge about some issue and yet know only very little. It takes more to be an expert than to be the best of a bad lot. So, experts in a domain must possess a significant amount of knowledge and skill in the domain of their expertise. A non-comparative threshold must be passed to have expertise.

Finally, expertise is relative to a population. This point relates to the comparative aspect of expertise. Experts need to have an unusual amount of both knowledge and skill, but such a comparison only takes place against a reference class. Someone may be an expert with respect to one population but not with respect to another. For instance, a philosophy professor may be an expert on Kant with respect to the general population, but not with respect to the other members of her department. Or consider an atypical population in which all humans have been killed except for a group of neuroscientists.
A question arises as to whether the members of this group are still expert neuroscientists, since their knowledge and skills are no longer unusual. On our view, they are still experts at neuroscience relative to some counterfactual populations, but not relative to the population of existing humans. (Thanks to Jamie Watson for bringing such an example to our discussion. For an alternative account, see Collins & Evans’ (2007) account of “ubiquitous expertise,” according to which large segments of the population can be experts on, say, the English language.)

This all leads to a final account of expertise: someone S is an expert in domain D at time T with respect to population P just in case S possesses an unusually extensive body of knowledge in D at T, and S has unusually extensive skills to apply that knowledge at T to new questions and problems, compared to others in P. This closely follows Goldman (2001: 92).

Moral Expertise

With this account of expertise in hand, we can begin to examine moral expertise. Moral expertise is a form of expertise that is relative to a moral domain. It is doubtful that someone is an expert on all of morality. (Compare with Coady (2012), who says similar things about the possibility of a science expert.) There are simply too many subfields and issues within morality as a whole to master all of it. For instance, within the domain of the moral are the fields of normative ethics, applied ethics, and meta-ethics. Moral expertise is more likely to obtain within more limited moral domains. The smaller the domain, the more likely expertise is to obtain, since the scope of relevant knowledge (if not also abilities) will also shrink. For instance, we can think of a moral domain centering on a moral question. This is not to say that moral expertise only obtains regarding such questions; plausibly there are also moral experts in broader domains such as research ethics, clinical ethics, animal welfare, etc.
There are moral domains, for example, which are focused on bioethical questions such as these, and many more:

• Is physician-assisted suicide ever morally permissible? If so, when and why?
• Are physicians always required to tell their patients the truth? If so, when and why?
• Is abortion ever morally permissible? If so, when and why?

Experts in any of these domains must possess an unusually extensive amount of knowledge relevant to that domain. Such knowledge will consist of both descriptive and normative knowledge. That is, an expert in the domain of physician-assisted suicide must have extensive descriptive knowledge about the practice (medical knowledge, social knowledge, etc.) as well as descriptive knowledge about the state of the debate (hospital policies relevant to these issues, state and federal laws, knowledge of the options for final moral views on the issue, the moral arguments that have been made, the objections given to those arguments and replies, etc.). Such a moral expert must also have significant normative knowledge (knowledge of what potential moral considerations there are, the kinds of moral reasons there are, the comparative weight of these reasons, etc.).

In addition to this body of knowledge, the moral expert must be able to apply it to solve problems and answer questions. This is why a moral expert might serve as a consultant, to help address an ethical problem, or as an advisor or educator, to help plan for future scenarios. That is, such a moral expert would need to be able to apply his or her general knowledge of the topic to a particular case (particularly when it is a novel case), factoring in the particular details of that case. Moral experts need both moral know-how and knowledge of moral propositions.

Our discussion presumes some kind of realism about morality, that is, that there are moral truths or facts that are sometimes justifiably believed or known.
Whether moral expertise and the issues of this chapter would be understood differently on the variety of non-realisms is an issue for another occasion.

Non-Experts Identifying Moral Experts

Given the above rough characterization of moral expertise, is anyone a moral expert? We think the answer is “yes.” We know that, for established ethical issues, there are people who possess a deep understanding of the relevant facts, issues, and arguments – indeed, the entire body of major scholarly literature surrounding a topic – and are able to use that understanding to engage new problems and questions about the topic. Further, some of these people also have the personal and communication skills to competently serve as advisors to families needing to navigate an ethical challenge in an informed way. Given our account of expertise, such individuals would qualify as experts in the domain of the relevant moral issue.

However, the mere existence of moral experts is of little help to those seeking moral guidance. To make use of someone’s expertise, we must be able to identify them as an expert in the relevant domain. Expert identification can be particularly difficult for someone who is not themselves an expert in that domain. After all, if you lack the relevant knowledge and skills, how can you determine who has them? If you don't know anything about physics, how could you identify an expert physicist? Applied to our concern, how might a non-moral expert identify a moral expert?

Let us begin by describing three ways that someone can be a non-moral expert about some domain, D. First, she may have no background information about the knowledge or skills needed to be an expert in D. Maybe Carol never learned about euthanasia or critical thinking in college. Presumably, a critical thinking course would have helped her identify the intellectual skills needed to be a moral expert. Second, a non-moral expert may have some background information, but it is old or scant.
Connor learned about euthanasia and critical thinking in college, but that was a long time ago and he has not thought about it since. Third, a non-moral expert may have background information about the knowledge needed to be an expert in D but not the skills needed, or the other way around. Ed, an epistemologist, may be able to identify the intellectual skills needed to be an expert on euthanasia, but he knows very little about euthanasia. Or Danny, a doctor who specializes in end-of-life care, may be familiar with euthanasia policies, theories, and concepts, but not with how to apply critical thinking concepts.

Besides seeking further education – something that not many people have the time to do – how might these individuals who lack moral expertise acquire more information that might help them decide whether a given person is an expert in some moral domain? While this question deserves more attention than we can give it here, we hope to show that the worry is not as overwhelming as it may initially seem. To this end, we will briefly outline some places to start.

Observe the behavior of the potential expert to see if she acts differently, more effectively, or better than others in her field. Suppose Cindy has been nominated to serve as a euthanasia therapist. She currently provides end-of-life care to patients in the hospital. A non-moral expert may shadow Cindy and compare her work to that of others who have a similar job. A non-moral expert may observe that Cindy asks her patients different questions and in a more caring tone, that Cindy needs less time than others to work with her patients, and that she combines professionalism with sympathy to an extraordinary degree. Such a non-moral expert has some evidence that Cindy is an expert at euthanasia therapy. (To say that a subject, S, has “some evidence” for a proposition, P, leaves it open whether that evidence is sufficient for S to reasonably believe that P.)
Observe the behavior of the people who are directly affected by the potential expert, to see if they fare better than people served by others in the potential expert’s field. Because of Cindy’s unique methods, her patients may make more reasonable, well-informed, and timely choices for themselves. Again, we here have some evidence that Cindy is an expert at euthanasia therapy.

Observe how other observers evaluate the potential expert. Ideally, a non-moral expert would ask experts what they think about a potential expert such as Cindy, but that is not likely given that a non-expert is not sure how to identify experts. So, the more observations, or testimonies, from other observers that a non-expert can get, the better. The following are questions to ask an observer:

• Do you agree with the potential expert?
• Do you respect the potential expert?
• Does the potential expert have appropriate accreditation, degrees, or work experience?
• Do you put your trust in the potential expert’s knowledge and skills?
• Do you praise the potential expert’s past actions?
• Do you see the potential expert as unbiased?
• Does the potential expert positively engage with people who disagree with her?

The more answers to these questions a non-moral expert gets, the more evidence she might have for or against the potential expert’s expertise (Goldman 2001: 92-93).

We do not know when to tell a non-moral expert to stop seeking more information about whether a potential moral expert is an expert. Much depends on the quantity and quality of the information she receives. But we do know that taking the advice in this section is necessary for making a reasonable judgment when identifying a moral expert. Importantly, we see no good reason to think that non-experts cannot, in principle, reasonably identify experts. (See Anderson (2011); Collins and Weinel (2011); Gelfert (2011); Matheson (2005); and Miller (2013) for more detail about how non-experts might identify experts.)
Moral Deference

Suppose a non-expert on a moral issue has successfully identified a moral expert. Is it appropriate for the non-expert to believe what the expert says simply because of her expertise? That is, is there anything problematic about deferring to a person's moral expertise? This question is the focus of this section.

Much of what we believe is believed on the basis of testimony. For example, our numerous beliefs about temporally and geographically distant events are mostly believed on the basis of someone else’s say-so. Among our testimonially-based beliefs are beliefs where we have simply deferred to another: S1 defers to S2 about p when S1 comes to believe p merely because S1 discovers that S2 believes p. Clearly not just anyone should be deferred to on just any matter, but in general deferring to an expert on a matter of their expertise is seen as appropriate. (For more detailed discussion of the nature of deference, see Zagzebski (2012) and Lackey (forthcoming).) For instance:

• It is appropriate for me to defer to a Civil War historian as to how many soldiers in the Union army were immigrants.
• It is appropriate for me to defer to a chemist as to the molecular structure of caffeine.
• It is appropriate for me to defer to an entomologist as to what kind of insects are in the attic.

However, many find there to be something problematic about other kinds of deference. Moral deference in particular has come under a great deal of scrutiny. That is, many have questioned whether:

• It is appropriate for me to defer to a moral expert as to whether it is morally wrong to eat meat.
• It is appropriate for me to defer to a moral expert as to whether the use of attack drones is morally permissible.
• It is appropriate for me to defer to a moral expert as to whether it is morally obligatory for me to give more to charity.

Is there an important difference between moral deference and other kinds of deference? If so, what grounds this difference?
First, let’s examine a case of moral deference.

MEATLESS MELANIE: Melanie meets Maggie at an academic party. Melanie is a historian, and she comes to find out that Maggie is an applied ethicist who has spent most of her career examining the moral case for vegetarianism. In their conversation, Melanie doesn’t ask Maggie to present the main arguments for or against vegetarianism, but simply asks Maggie what her position is on the issue. Maggie says that she believes that it is morally wrong to eat meat. Considering Maggie’s expertise on the matter, Melanie defers and also adopts the belief that it is morally wrong to eat meat.

Intuitively, something is amiss with Melanie’s deferring to Maggie. Minimally, something about Melanie’s deference is worse than the non-moral cases of deference given above. In the case, it is clear that Melanie is justified in believing that Maggie is in a better epistemic position than she is on the matter, so the problem with the deference is not that Melanie defers to someone who is not sufficiently informed. Further, things seem even worse if we add to the case that Melanie is aware of all of the relevant non-moral facts that Maggie is aware of. That is, we can imagine that while Maggie has thought about the morality of eating meat much more than Melanie has, there is no difference in their knowledge of the relevant non-moral descriptive facts (they are both equally aware of the psychology of animals, how animals are treated, etc.). So, the case of Meatless Melanie gives some strong intuitive support to the claim that there is something amiss with moral deference.

This is not to say that every case of moral deference shares this same sense of impropriety. In cases where there is a great deal at stake and little to no time to act, moral deference may not seem problematic at all. In fact, it may even seem to be required. (Thanks to Jamie Watson for bringing up such a scenario to us.)
Following McGrath (2011), let’s call this claim ‘DATUM’:

DATUM: There is something amiss with moral deference that is not present in ordinary cases of deference.

Given that something seems amiss with moral deference, what could it be? What accounts for the difference between the non-problematic cases of deference explored above and moral deference? DATUM cries out for an explanation, given that in general we don’t find any problem with deferring to those who are in a better epistemic position on the matter. So, the problem with moral deference is not simply that it is a case of deference; deference in different circumstances is unproblematic. Further, the problem with moral deference is not simply that it is the formation of a new moral belief. We do not find anything problematic in coming to adopt a new moral belief by being led through reasoning to that conclusion. Moral deference is problematic in a way that moral persuasion is not. That is, the problem with forming a moral belief by deference is not shared by other ways of coming to form a moral belief. So, there appears to be something uniquely amiss with forming moral beliefs by deference. Perhaps some questions simply should not be outsourced.

Several explanations of DATUM have been offered in the literature. In what follows, we will examine these explanations to determine whether any of them provides a reason not to defer to a moral expert on a moral matter.

Normativity

One proposed explanation of DATUM is that moral deference is a kind of normative deference, and the problem lies more generally with normative deference. On this view, the impropriety of moral deference comes from the fact that in deferring the subject defers on a matter of values and right conduct. While initially tempting, this suggestion should be rejected for several reasons. First, some cases of normative deference do not seem to be at all problematic.
Consider the following:

• It is appropriate for me to defer to Emily Post on matters of etiquette.
• It is appropriate for me to defer to the sommelier as to what wine to pair with dinner.
• It is appropriate for me to defer to my lawyer as to which defense I ought to pursue.
• It is appropriate for a student to defer to his logic teacher as to what he may and may not infer from his premises.

In each of these cases, the deference is reasonable and lacks the intuitive problems observed in cases of moral deference. Further, there are some cases of non-normative deference that appear to be amiss in a way similar to cases of moral deference. Consider the following:

FRESHMAN FREDDY: Freddy is a first-year university student taking introduction to philosophy. On the first day, he hears from his professor about what the course will cover. In particular, Freddy is excited to hear that they will be discussing whether humans have free will. This is something he hasn’t really thought about before. After class, Freddy quickly approaches his professor and asks her whether we have free will. Freddy’s professor tells him that she does believe that humans have free will. Freddy defers and comes to believe that humans have free will.

GILL & GOD: Gill has started thinking about whether God exists. He has found the issue confusing and hard to engage. He soon learns that his university is hosting a lecture on the issue by a renowned scholar in the philosophy of religion. After the talk, most of which Gill couldn’t follow, he asks the scholar whether he believes that God exists. The scholar says that he does. Gill defers and comes to believe that God exists.

In both of these cases, the subject defers about a non-normative proposition. In neither case is the proposition about what is good/bad, right/wrong, or how things should/shouldn’t be.
Nevertheless, something appears to be amiss in these cases of deference in the same way that something appears to be amiss in cases of moral deference. So, some normative deference appears entirely appropriate, and some non-normative deference appears to be as problematic as moral deference. Given all of this, the fact that moral deference entails deferring about a normative claim is not a good explanation of what is amiss in moral deference.

Accessibility

A second proposed explanation of DATUM is that moral deference is inappropriate in ways that other kinds of deference are not, since moral facts are equally accessible to everyone. If moral facts are equally accessible to everyone, then deferring about a moral claim would be like deferring to someone who is not in any better an epistemic position than you on the matter. Something would clearly be amiss if someone deferred to his or her epistemic peer on the matter. Such deference is puzzling if not entirely inappropriate. For instance, if you and I are standing outside in the rain, and I believe it is raining solely on the basis of deferring to you, then something has gone wrong. Similarly, if the moral truth were equally available to us, my forming a moral belief by deferring to you on the matter would be bizarre if not mistaken. So, the equal accessibility of morality may be able to explain why cases of moral deference are amiss – they parallel other cases where deference is inappropriate.

As McGrath (2011) has pointed out, the claim that moral claims are equally accessible to all can be understood in two ways. First, it could be that moral truths are equally clear to everyone – that we are all equally accessing moral facts. However, as McGrath notes, such a claim is implausible given the vast amounts of moral disagreement of which we are all too aware. Moral disagreements exist despite the fact that there are open-minded, sincere inquirers on both sides of an issue.
If moral truths were equally accessible in this sense, the extent of moral disagreement wouldn’t be nearly as vast as it is. Second, it could be that in principle everyone is equally able to access moral truths – that moral truth is equally available to everyone in principle. It is not clear that this claim is true either, but even granting its truth, there are good reasons to doubt that it can sufficiently explain DATUM. Even if everyone is in principle able to access moral truth equally well, it doesn’t follow that there is anything amiss with moral deference. For instance, it is reasonable for me to defer to my doctor even if, in principle, I could go to medical school myself and learn everything that she knows. Even if I can in principle access some fact, if I have not yet done so, it is reasonable for me to defer on the matter.

Value Differences

A final explanation of DATUM is that moral deference precludes something that is morally more important than a justified moral belief. One such candidate is moral understanding. Moral understanding is clearly valuable. However, if I simply defer to someone on a moral matter, this will perhaps preclude me from understanding why that moral truth is a truth. If I come to believe p by deferring to a moral expert, then I fail to understand why p is true. To understand why a proposition is true, one needs to possess and understand the reasons that show that it is true. In moral deference, the speaker’s reasons are not transferred to the hearer; the hearer comes to believe the asserted claim simply because the speaker said so. Given this, moral deference precludes moral understanding.

Another such candidate is moral virtue. Moral virtue is clearly valuable. And if I simply defer to someone on a moral matter, this will not allow me to act virtuously, since I will not thereby acquire the right reasons, emotions, motivations, and dispositions to accompany my action.
Moral deference thus prevents the kind of integration integral to possessing a virtue. So, on this explanation, deferring on a moral matter is at odds with the acquisition of greater moral goods – it falls short of the ideal and demonstrates a defect in the deferrer. Given this, such an account can offer an explanation of what is amiss with moral deference.

While such an explanation provides an account of what is amiss with moral deference, it does not show that moral deference is inappropriate. On this account, moral deference is not suitable for giving us all that we should want. We should want moral understanding, and we should want to develop and possess moral virtues, and moral deference provides neither of these goods. However, while moral deference does not give us everything that we should want, it does give us something that we want – a justified moral belief. A justified moral belief is not everything, but it is certainly better than the available alternatives (an unjustified belief or no belief at all).

There is a parallel here with moral action. Doing the right action isn’t everything. Someone who merely does the right thing doesn’t do as well as he could. Such an individual lacks the relevant moral virtues, and it is good to have moral virtues. Such an agent can properly be criticized for not acting virtuously. However, while there are important things lacking from our agent’s action, it has something really important going for it – it was the right thing to do. It is surely better to do the right action while lacking the relevant virtues than to take either alternative open to the agent in such a situation (to do the wrong action or to do nothing at all). So, while a mere right action does not give us everything that we want, given the situation that the agent is in (lacking the relevant moral virtues), it is the best we can get, at least for now.
Similarly, while a justified moral belief is not all that we should want for a subject, given that developing moral virtues or having moral understanding of the issue are not available options for her right now, a justified belief is the best we can get, at least for now. The ‘for now’ is important here as well, since presumably a justified moral belief needn’t be the end of the matter and is plausibly even the best next step toward attaining moral understanding or developing moral virtue. Acquiring these other moral goods is only further hindered by lacking a justified moral belief. In fact, the kind of moral education required to develop moral understanding and moral virtue must start with moral deference. So, while moral deference cannot give someone everything they should want, it does give them something important and can help them attain those greater goods.

Given all of this, we have an explanation of what is amiss with moral deference, but this explanation does not entail that we should not defer on moral matters. At most, this account implies that we shouldn’t stop our inquiry and moral development at the point of deference.

Disagreements and Moral Expertise

While there may be no problem in principle with moral deference, a different kind of challenge comes from considering disagreements among moral experts. Moral disagreements are ubiquitous, and this phenomenon holds even among moral experts. Given that moral experts often disagree about moral matters, the prescription ‘defer to the experts’ may not be so easy to follow. On many moral matters there is no consensus among the moral experts, and this raises a challenge for moral deference to the experts. In what follows we will examine what one should believe and what one should do when there is no consensus amongst the experts.
It will be helpful to distinguish several different scenarios that could obtain when there are multiple experts within a domain:

FULL CONSENSUS: Every relevant expert agrees about p.

PARTIAL CONSENSUS: Full consensus is not achieved, but there is a clear dominant view amongst the relevant experts regarding p.

DISARRAY: The opinions of the relevant experts are sufficiently dispersed so as to prevent either full or partial consensus regarding p.

What one is reasonable in believing about a moral matter will depend upon what one is reasonable in believing about which of the above scenarios obtains. If one is reasonable in believing that there is full consensus regarding a moral proposition, then that person is reasonable in adopting the same attitude as the experts toward that proposition. For instance, if Stan is reasonable in believing that all the relevant moral experts believe that torturing innocent children for pleasure is morally wrong, then Stan is also reasonable in himself believing that torturing innocent children for pleasure is morally wrong.

If one is reasonable in believing that, while there is not a full consensus on a moral matter amongst the relevant moral experts, a clear majority nevertheless adopt the same belief about some moral proposition, then that person is reasonable in adopting that same belief about that proposition. For instance, if Sue is reasonable in believing that, while not all of the relevant moral experts believe that it is impermissible to harvest the organs of a healthy individual without consent to save five others, a vast majority of the relevant moral experts do believe this, then Sue is reasonable in believing this proposition as well. This parallels what it is reasonable to believe in cases of partial consensus in other domains.
For instance, if Sam is reasonable in believing that 98% of the relevant experts believe that human activities are contributing to climate change, then Sam will be reasonable in believing this as well. Similarly, if Shawn is reasonable in believing that 9 out of 10 dentists believe that flossing improves dental health, then Shawn is reasonable in believing this proposition as well. Why think so? Recall that an expert about p is much more likely than a non-expert to have a true belief about p. So, given our epistemic ends of believing truths and not believing falsehoods, we will do better at satisfying these twin goals by going with what the experts believe. In cases of partial consensus there is no one doxastic attitude that all the relevant experts adopt toward the target proposition, but there is nevertheless an overwhelming consensus. The best explanation of such a distribution of opinions among the relevant experts is that the majority opinion is correct. For instance, if 10 equally qualified math experts were given the same problem and 9 arrived at the same answer, it would be rational to believe that the 9 are correct. While it is possible that the 9 are all incorrect, this possibility is far less likely given what we know about the situation. The third possible scenario is disarray. If one is reasonable in believing that disarray holds, then one is reasonable in believing that there is not even a partial consensus amongst the relevant experts. In such a scenario, there is no dominant view amongst the relevant experts; rather, opinions are fairly evenly divided. If one is reasonable in believing that this scenario obtains regarding some moral proposition, then one is reasonable in suspending judgment regarding that proposition. Such a scenario parallels a political race that is ‘too close to call’: even if at some point one position has a lead, the lead is too unstable to support believing that it will hold.
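The force of the 9-of-10 math experts example can be checked with a toy probability model of ours (not the authors' argument, which is informal): suppose each expert independently reaches the correct answer with some probability better than chance, say 0.8, a figure we assume purely for illustration. Conditional on observing a 9-to-1 split, it is then overwhelmingly likely that the 9 are the ones who are right.

```python
from math import comb

def split_likelihood(n_right, n_total, p_correct):
    """Probability that exactly n_right of n_total independent experts
    are correct, when each is correct with probability p_correct."""
    return comb(n_total, n_right) * p_correct**n_right * (1 - p_correct)**(n_total - n_right)

# Observed: a 9-1 split among 10 experts. Assume each expert is
# independently correct with probability 0.8 (an illustrative figure).
p = 0.8
majority_right = split_likelihood(9, 10, p)  # the 9 agreeing experts are correct
majority_wrong = split_likelihood(1, 10, p)  # only the lone dissenter is correct

# Given that a 9-1 split occurred, how likely is it the majority is right?
posterior = majority_right / (majority_right + majority_wrong)
print(f"P(majority correct | 9-1 split) = {posterior:.5f}")  # ~0.99998
```

Note that the calculation leans on the independence assumption; as the discussion below stresses, when expert opinions are not independently formed, agreement carries correspondingly less weight.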
The ‘too close to call’ analogy is given by Carey and Matheson (2013). For example, if one is rational in believing that, regarding the proposition that you are morally required to give all of your disposable income to help those in poverty, the relevant moral experts are in a state of disarray, then that person is reasonable in suspending judgment regarding that claim. This too parallels what is true in non-moral domains. If Sarah is reasonable in believing that, amongst the relevant experts, slightly more of them believe that a Republican will win the next election, she should nevertheless suspend judgment about this claim. Why think so? In a case of disarray, matters are too unsettled amongst the relevant experts to make a belief on the matter rational. Even if one is reasonable in believing that one ‘side’ of the moral matter enjoys slightly more support than the others, this does not suffice to make it rational to adopt that side. After all, many factors go into weighing expert opinions. So far, we have seen that expert opinions are to be trusted above those of laypeople, but not all expert opinions are equal, either. Even among a group of experts there will be differences in their respective epistemic positions regarding a claim within their domain of expertise. (We can think of someone’s epistemic position on a matter as corresponding to how likely they are to have a true belief about it. Precisely which details matter for one’s epistemic position is contested, but there is much agreement that evidence, intelligence, and intellectual virtues are relevant.) While all experts are epistemically well-positioned, not all experts are equally well-positioned. Since what the layperson is reasonable in believing will depend upon how those weights are distributed, in a state of disarray these differences can shift which position is best supported by the expert opinions. Further, it is quite difficult to determine these matters, particularly for a layperson.
Even having cleared the hurdle of identifying the relevant experts, making this kind of fine-grained judgment is typically beyond the abilities of a layperson – laypeople are often not reasonable in making such judgments. But if the layperson is not reasonable in making such judgments, and such judgments can alter which position is best supported by the expert opinion, then the layperson is not reasonable in determining which position is best supported by the expert opinion in scenarios of disarray. To make matters worse, such judgments of the relative epistemic positions of the experts are not the only factor that can tip the balance of expert opinion in a state of disarray. Another factor worth mentioning is the independence of the relevant opinions. Independently formed opinions that agree carry more weight than agreeing opinions that were not formed independently. (For some criticism of this claim, see Coady (2012).) So, another factor that is relevant in determining the overall balance of expert opinion is the relative independence of the various opinions. We are quite familiar with the ways that non-epistemic factors can create agreement. This is seen in the frequency of shared political beliefs in various regions of the country, shared religious beliefs in various countries, and shared philosophical beliefs amongst graduates of the same schools. So, independently formed shared opinions carry greater epistemic weight. This is not to say that two or more people with a shared history who formed the same opinion don’t both lend weight to that opinion; rather, such agreement lends less weight than that of the same number of people who independently formed the same opinion. Given this, the independence of the opinions of the experts in the relevant domain will also be important in determining where the balance of expert opinion lies. This is yet one more factor relevant to weighing the evidence regarding expert opinion.
However, here too it is unreasonable to believe that a layperson (or even an expert) can make a reasonable determination regarding the degree of independence of the various opinions on the matter. For instance, can we really tell how independent two expert bioethicists’ opinions on physician-assisted suicide are? It seems unlikely. Worse still, on matters of great interest it is not merely the independence of two expert opinions that must be determined, but the independence of all the relevant expert opinions. This is an overwhelming task. Since the independence of opinions could be a determining factor in the balance of expert opinion on a matter, and since a layperson is not reasonable in making such a judgment, the layperson is not reasonable in believing that the balance of expert opinion lies on any one side of the issue when it is in a state of disarray. So, a topic’s being in a state of disarray comes with skeptical consequences. (For an argument to this conclusion in greater detail, see Carey and Matheson (2013) and Matheson (2015).) Given the difficulty of determining the exact epistemic positions of the relevant parties and the independence of their opinions, one might wonder why these same factors do not also make determining the balance of expert opinion in cases of partial consensus unfeasible. In cases of partial consensus, too, individual differences in the epistemic positions of the relevant experts will matter, and so will the independence of their individual opinions. However, in cases of partial consensus, the balance of expert opinion is sufficiently settled that, even though these other factors may be unknown, it is sufficiently unlikely that they will change the balance of expert opinion on the matter. Returning to the political race analogy, in some races we can declare a winner without knowing all the details about how a number of counties voted. Yes, votes in those counties still matter, but there is sufficient information elsewhere to declare a winner.
So, too, in cases of partial consensus, an inability to determine the particular epistemic positions of the experts or how independent they are will not affect our ability to declare that expert opinion clearly lies on one side of the issue. For instance, the vast consensus regarding climate change renders moot the difficulties in determining the exact epistemic positions of the relevant experts and the independence of their opinions. In this case, it is clear that expert opinion is firmly on the side of believing that climate change is occurring, and this judgment can reasonably be made without having first sorted out these details. These details matter, but in cases of partial consensus they won’t make enough of a difference to tip the evidential scales. This is why cases of disarray are importantly different from cases of partial consensus. So, in cases where one is reasonable in believing that expert opinion is in a state of disarray, that person is reasonable in suspending judgment toward the target moral proposition. However, even if suspension of judgment is the rational doxastic response, we often nevertheless need to act. Suspension of judgment has no parallel with regard to action. While for any proposition we have three doxastic options (believe, disbelieve, or suspend judgment), the same is not true for actions. Regarding actions we have only two options: do it or don’t do it. So, even if one is rational in suspending judgment regarding some moral claim, this does not determine how one should act, and regarding moral matters how we should act is of paramount importance. For instance, suppose that Syd is considering whether it is morally permissible to do a certain action, A. Suppose further that Syd knows that it is controversial amongst the relevant experts whether doing A is morally permissible. Given what we have said above, Syd should suspend judgment about whether doing A is morally permissible.
While this verdict may settle her doxastic reaction, there remains the issue of what Syd should do. Even if she should suspend judgment, she is still forced to either do A or not do A. Given what she knows about the expert opinions on the matter, what should she do? To answer this question, it is again important to distinguish several kinds of scenarios:

ASYMMETRY: While S is rational in suspending judgment about whether A is morally permissible, S knows that an alternative to A is morally permissible.

SYMMETRY: S is rational in suspending judgment about whether A is morally permissible, and S does not know of any alternative to A that is morally permissible.

Asymmetry and Symmetry exhaust the situations one may be in when one is rational in believing that expert opinion on some moral matter is in a state of disarray. In cases of Asymmetry, there is an important asymmetry in the subject’s options. While one potential action is epistemically cloudy (the subject should suspend judgment as to whether it is morally permissible), another potential action is epistemically clear (the subject knows that it is morally permissible). In such a situation, it is plausible that the subject should exercise moral caution and should not do the epistemically cloudy action. This is to affirm the principle of Moral Caution defended in Matheson (2015):

Moral Caution: Having considered the moral status of doing action A in context C, if (i) subject S (epistemically) should believe or suspend judgment that doing A in C is a serious moral wrong, while (ii) S knows that refraining from doing A in C is not morally wrong, then S (morally) should not do A in C. (p. 120)

According to Moral Caution, it is immoral to take unnecessary moral risks. Put differently, we are morally required to take morality seriously, and taking morality seriously requires avoiding unnecessary moral risks.
Moral Caution has it that even when one is rational in believing that expert opinion on some moral matter is in a state of disarray, there may nevertheless be moral prescriptions on behavior. An example may help. Suppose that Stella is considering whether it is morally permissible for her to keep most of her disposable income for herself. Call the proposition ‘It is morally permissible for Stella to keep most of her disposable income’ proposition ‘A’. Stella seeks to find out what the relevant moral experts believe about A, and finds that expert opinion is fairly evenly divided. She discovers that a number of experts think that there is nothing wrong with enjoying the fruits of one’s labor, but she also finds a number of experts who think that, given the vast amounts of suffering due to poverty, it would be seriously morally wrong for her to keep the money. Stella comes to reasonably believe that on this matter the expert opinions are in a state of disarray. Stella, however, notices an important asymmetry in the expert opinions. While the relevant experts are pretty evenly divided about A, very few think that she would do something morally wrong by giving away all of her disposable income to help those in poverty. So, while Stella is reasonable in believing that regarding A the expert opinion is in a state of disarray, she is also reasonable in believing that the experts are in a state of partial consensus regarding the permissibility of her giving away all her disposable income. So, given what Stella knows about the relevant expert opinions, she should suspend judgment about A, and she plausibly knows that giving away all her disposable income is permissible. If we then apply Moral Caution to this scenario, we get the verdict that it would be morally wrong for Stella to keep most of her disposable income. So, even in cases where the subject should suspend judgment about the permissibility of some action, there may be factors that are still action-guiding.
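The structure of Stella's reasoning can be made explicit by encoding Moral Caution as a simple conditional rule. The sketch below is our illustration only: the function name and return strings are our own, and the epistemic statuses are supplied as inputs rather than anything the code could determine.

```python
def moral_caution_verdict(attitude_toward_wrongness, knows_alternative_permissible):
    """Sketch of the Moral Caution principle (Matheson 2015).

    attitude_toward_wrongness: the attitude S epistemically should take
        toward 'doing A is a serious moral wrong' -- one of
        'believe', 'suspend', or 'disbelieve'.
    knows_alternative_permissible: whether S *knows* that refraining
        from A is not morally wrong (clause (ii)).
    """
    clause_i = attitude_toward_wrongness in ("believe", "suspend")
    if clause_i and knows_alternative_permissible:
        return "S (morally) should not do A"  # both clauses hold
    return "Moral Caution is silent"          # the principle issues no verdict

# Stella's case: experts are in disarray about keeping her income, so she
# should suspend judgment about A (clause (i)); she knows giving the money
# away is permissible (clause (ii)).
print(moral_caution_verdict("suspend", True))   # S (morally) should not do A
```

Note that the rule is deliberately one-directional: when either clause fails (as in Symmetry cases, where no alternative is known to be permissible), the principle simply goes silent rather than permitting the action.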
The factors that were action-guiding in this case of disarray had to do with an important asymmetry in the expert opinion. While regarding one action the expert opinion was in a state of disarray, regarding an alternative action the expert opinion was in a state of partial consensus. It is this asymmetry in the expert opinion that grounded Stella’s moral reasons in the case above. However, on many moral matters such an asymmetry does not exist. That is, on many moral matters moral risk is inevitable. Sometimes no alternative is known to be permissible because the expert opinion is in a state of disarray regarding each of the options. For instance, the expert opinion might be in disarray regarding these issues, and many more:

• elective limb amputations;
• elective (later-term) abortions for sex-selection, and for certain disabilities;
• whether advance directives should be respected for people who have advanced dementia;
• the permissibility of posthumous reproduction;
• the specific limits of parental authority in pediatric cases.

There are many more contenders for possible disarray. Determining whether there is genuine disarray requires serious investigation into whether the parties to the disagreement deeply understand the issues and arguments and are indeed reasonable in their differing views: not all moral disagreements concerning controversial issues have those features. Given what we have said above, we should suspend judgment about what is permissible in such cases. Since these alternatives are exhaustive, we should suspend judgment about the permissibility of any action we can take on the matter. This is a case of symmetry. In cases of symmetry there is no clear moral path for you to take. Every option you have involves taking a moral risk. Nevertheless, you must act. So, what should you do in a case of symmetry, given that each of your alternatives requires taking a moral risk? Unfortunately, we have no advice to give in such a situation.
Conclusion

We often do not know what to think or do about challenging moral issues. This is especially the case with healthcare decisions that impact ourselves and our loved ones. Difficult circumstances can make it hard to see what should be done in a difficult case. This is true for both medicine and morality: a physician should not be the doctor for his or her own child, because those emotional connections can cloud clinical judgment, and so that physician-parent needs outside help. Likewise, a difficult case can make for difficult moral decisions, decisions that are too hard for a family to make on their own, at that time. Ethics consultants, as moral experts, can help with that. Here we have characterized moral expertise and how to try to find it. We have discussed how merely relying on a moral expert’s guidance – deferring to an expert – is less than ideal, insofar as deference does not bring the kind of understanding and display of cognitive and moral virtues that are best. What’s best, however, is not always practically necessary in the moment and can be pursued later: at least, deference can result in justified moral beliefs, which are surely better than unjustified moral beliefs. Finally, we have discussed what to think and do when there are moral disagreements about what to think and do about difficult moral issues. Not all such disagreements are among genuine experts – not all parties to heated moral debates are informed and rational about those debates – but we have discussed what to think and do in a variety of kinds of disagreement. Not surprisingly, however, we have found no magic formula to resolve all such cases of disagreement: morality is often difficult, even for experts. So, what can we do? At least, we can always do our best to keep thinking critically about the issues, discussing the issues and arguments with other people, especially people with whom we might disagree, and doing our best to act on that thinking, when we must act.
Bibliography

Anderson, Elizabeth. 2011. “Democracy, Public Policy, and Lay Assessments of Scientific Inquiry.” Episteme 8: 144-164.
Carey, Brandon and Jonathan Matheson. 2013. “How Skeptical is the Equal Weight View?” In Diego Machuca (ed.), Disagreement and Skepticism. New York: Routledge, 131-49.
Coady, David. 2012. What to Believe Now: Applying Epistemology to Contemporary Issues. Wiley-Blackwell.
Collins, H.M. and Robert Evans. 2007. Rethinking Expertise. University of Chicago Press.
Collins, Harry and Martin Weinel. 2011. “Transmuted Expertise: How Technical Non-Experts Can Assess Experts and Expertise.” Argumentation 25: 401-413.
Crisp, Roger. 2014. “Moral Testimony Pessimism: A Defense.” Aristotelian Society Supplementary Volume 88, no. 1: 129-143.
Decker, Jason and Daniel Groll. 2014. “Moral Testimony: One of These Things Is Just Like the Others.” Analytic Philosophy 54, no. 4: 54-74.
Driver, Julia. 2006. “Autonomy and the Asymmetry Problem for Moral Expertise.” Philosophical Studies 128: 619-644.
Enoch, David. 2014. “A Defense of Moral Deference.” Journal of Philosophy 111, no. 5: 229-258.
Feldman, Richard. 1998. Reason and Argument, 2nd Edition. Prentice Hall.
Gelfert, Alex. 2011. “Expertise, Argumentation, and the End of Inquiry.” Argumentation 25: 297-312.
Goldman, Alvin. 2001. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63, no. 1: 85-110.
Hazlett, Allan. 2015. “The Social Value of Non-Deferential Belief.” Australasian Journal of Philosophy 94, no. 1: 131-151.
Hills, Alison. 2009. “Moral Testimony and Moral Epistemology.” Ethics 120, no. 1: 94-127.
Hills, Alison. 2010. The Beloved Self: Morality and the Challenge from Egoism. Oxford: Oxford University Press.
Hills, Alison. 2013. “Moral Testimony.” Philosophy Compass 8, no. 6: 552-559.
Hills, Alison. 2015. “Cognitivism about Moral Judgment.” In Russ Shafer-Landau (ed.), Oxford Studies in Metaethics, vol. 10. Oxford: Oxford University Press, 1-25.
Hopkins, Robert. 2007. “What is Wrong with Moral Testimony?” Philosophy and Phenomenological Research 74, no. 3: 611-634.
Howell, Robert J. 2014. “Google Morals, Virtue, and the Asymmetry of Deference.” Noûs 48, no. 3: 389-415.
Jones, Karen. 1999. “Second-Hand Moral Knowledge.” Journal of Philosophy 96, no. 2: 55-78.
Lackey, Jennifer. Forthcoming. “Experts and Peer Disagreement.” In Matthew Benton, John Hawthorne, and Dani Rabinowitz (eds.), Knowledge, Belief, and God: New Insights in Religious Epistemology.
Matheson, David. 2005. “Conflicting Experts and Dialectical Performance: Adjudication Heuristics for the Layperson.” Argumentation 19: 145-158.
Matheson, Jonathan. 2015. The Epistemology of Disagreement. Palgrave.
Matheson, Jonathan. 2016. “Moral Caution and the Epistemology of Disagreement.” Journal of Social Philosophy 47, no. 2: 120-141.
McConnell, Terrance C. 1984. “Objectivity and Moral Expertise.” Canadian Journal of Philosophy 14, no. 2: 193-216.
McGrath, Sarah. 2009. “The Puzzle of Pure Moral Deference.” Philosophical Perspectives 23, no. 1: 321-344.
McGrath, Sarah. 2011. “Skepticism about Moral Expertise as a Puzzle for Moral Realism.” Journal of Philosophy 108, no. 3: 111-137.
Miller, Boaz. 2013. “When is Consensus Knowledge Based? Distinguishing Shared Knowledge from Mere Agreement.” Synthese 190: 1293-1316.
Mogensen, Andreas L. 2015. “Moral Testimony Pessimism and the Uncertain Value of Authenticity.” Philosophy and Phenomenological Research 92, no. 3: 1-24.
Nickel, Philip. 2001. “Moral Testimony and its Authority.” Ethical Theory and Moral Practice 4, no. 3: 253-266.
Pasnau, Robert. 2015. “Disagreement and the Value of Self-Trust.” Philosophical Studies 172, no. 9: 2315-2339.
Sliwa, Paulina. 2012. “In Defense of Moral Testimony.” Philosophical Studies 158, no. 2: 175-195.
Thomas, Laurence. 1993. “Moral Deference.” Philosophical Forum 24, no. 1-3: 232-250.
Vavova, Katia. 2014. “Moral Disagreement and Moral Skepticism.” Philosophical Perspectives 28, no. 1: 302-333.
Wedgwood, Ralph. 2010. “The Moral Evil Demons.” In R. Feldman and T. Warfield (eds.), Disagreement. Oxford: Clarendon Press, 216-246.
Zagzebski, Linda. 2012. Epistemic Authority. New York: Oxford University Press.