Jacob Stegenga has published widely in philosophy of science and philosophy of medicine. He is the author of Medical Nihilism and Care and Cure: An Introduction to Philosophy of Medicine, and he is currently writing a book titled Heart of Science.
A stereotype is a belief or claim that a group of people has a particular feature. Stereotypes are expressed by sentences that have the form of generic statements, like "Canadians are nice." Recent work on generics lends new life to understanding generics as statements involving probabilities. I argue that generics (and thus sentences expressing stereotypes) can take one of several forms involving conditional probabilities, and these probabilities have what I call a naturalness requirement. This is the natural probability theory of stereotypes. Each of the two components of the theory entails a family of fallacies that contributes to the spurious reinforcement of stereotypes: inferential slippage within and between the different generic forms, and inferential slippage from facts about frequencies of group traits to beliefs about natural propensities or dispositions of groups. Empirical research suggests that we often commit these fallacies. Moreover, this theory can referee a vitriolic debate between some psychologists, who hold that stereotypes are always false and stereotyping is always wrong, and other psychologists, who hold that stereotypes are often accurate and stereotyping is often reasonable.
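As a hedged illustration (these candidate readings come from the broader literature on probabilistic analyses of generics, not necessarily the forms defended in the paper), a generic such as "Ks are F" might be read as a conditional-probability claim of one of these kinds:

```latex
% Illustrative candidate readings of "Ks are F"; the paper's own forms may differ.
\text{(majority)} \quad P(F \mid K) > 0.5
\qquad
\text{(relative)} \quad P(F \mid K) > P(F \mid \neg K)
```

The naturalness requirement, as the abstract suggests, then concerns whether the probability reflects a natural propensity or disposition of the group rather than a mere accidental frequency, which is what makes the second family of fallacies possible.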
The British Journal for the Philosophy of Science, 2024
If scientists violate principles and practices of routine science to quickly develop interventions against catastrophic threats, they are engaged in what I call fast science. The magnitude, imminence, and plausibility of a threat justify engaging in and acting on fast science. Yet, that justification is incomplete. I defend two principles to assess fast science, which say: fast science should satisfy as much as possible the reliability-enhancing features of routine science, and the fast science developing an intervention against a threat should not depend on the same problematic assumptions as the fast science which estimates the magnitude, imminence, and plausibility of the threat.
Havstad (2022) argues that the argument from inductive risk for the claim that non-epistemic values have a legitimate role to play in the internal stages of science is deductively valid. She also defends its premises, and thus its soundness. This is, as far as we are aware, the best reconstruction of the argument from inductive risk in the existing literature. However, there is a small flaw in this reconstruction which appears to render the argument invalid. The flaw is superficial, and a small amendment rescues the claim of validity.
Advances in Experimental Philosophy of Medicine, 2023
We simulate trial data to test speculative claims about research methods, such as the impact of publication bias.
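A minimal sketch of the general approach, not the authors' actual simulation code: simulate many small trials of a drug with a modest true effect, "publish" only those reaching statistical significance, and compare the pooled published estimate with the truth. All parameter values below are hypothetical.

```python
# Hypothetical illustration of simulating publication bias in trial data;
# not the authors' code. Parameter values are made up for the sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2        # assumed true standardized mean difference
n_per_arm = 30           # small trials
n_trials = 2000

published_effects, all_effects = [], []
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    effect = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    all_effects.append(effect)
    if p < 0.05:                      # publication filter: significant results only
        published_effects.append(effect)

print(f"true effect:             {true_effect:.2f}")
print(f"mean effect, all trials: {np.mean(all_effects):.2f}")
print(f"mean effect, published:  {np.mean(published_effects):.2f} "
      f"({len(published_effects)} of {n_trials} trials published)")
```

The published-only average overestimates the true effect, which is the speculative claim the simulation is meant to test.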
The value-free ideal for science holds that values should not influence the core features of scientific reasoning. We defend the difference-to-inference model of value permeation, which holds that value permeation in science is problematic when values make a difference to the inferences made about a hypothesis. This view of value permeation is superior to existing views, and it suggests a corresponding maxim: scientists should strive to eliminate differences to inference. This maxim is the basis of a novel value-free ideal for science.
The value-free ideal in science has been criticised as both unattainable and undesirable. We argue that it can be defended as a practical principle guiding scientific research even if the unattainability and undesirability of a value-free end-state are granted. If a goal is unattainable, then one can separate the desirability of accomplishing the goal from the desirability of pursuing it. We articulate a novel value-free ideal, which holds that scientists should act as if science should be value-free, and we argue that even if a purely value-free science is undesirable, this value-free ideal is desirable to pursue.
I defend a novel account of scientific progress centred around justification. Science progresses, on this account, where there is a change in justification. I consider three options for explicating this notion of change in justification. This account of scientific progress dispenses with a condition for scientific progress that requires accumulation of truth or truthlikeness, and it emphasises the social nature of scientific justification.
Routledge Companion to Philosophy of Medicine, 2017
The benefits and harms of pharmaceuticals are principal subjects of investigation in clinical science. In this chapter I discuss how harms are measured and note the challenges that clinical research faces in detecting harms of pharmaceuticals. This chapter provides an introduction to the structure of clinical research with a focus on harm detection. As it is usually performed today, clinical research does not reliably measure the harms of pharmaceuticals. There are at least three categories of problems with clinical research that lead to the underestimation of the harm profile of pharmaceuticals: subtle features of research methodology, secrecy surrounding the evidence from clinical research, and inadequate regulation.
The standard view about sex differences in sexual desire is that males are lusty and loose, while females are cool and coy. This is widely believed and is a core premise of some scientific programs like evolutionary psychology. But is it true? A mountain of evidence seems to support the standard view. Yet, this evidence is shot through with methodological and philosophical problems. Developments in the study of sexual desire suggest that some of these problems can be resolved, and when they are, the standard view looks, at best, to be an exaggeration.
It is a plausible speculation that conventional choices in outcome measures might influence the results of meta-analyses. We test that speculation by simulating data from trials on antidepressants. We vary real drug effectiveness while modulating conventional values for outcome measures. We had previously shown that one conventional choice used in meta-analyses of antidepressants falls in a narrow range of values that maximize estimates of effectiveness. Our present analysis investigates why this phenomenon occurs. Moreover, our results suggest the superiority of absolute outcome measures over relative measures. This research program can be extended to test numerous other aspects of clinical research.
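The kind of simulation described can be sketched as follows. This is an illustrative reconstruction rather than the authors' code, and the assumption that the conventional choice in question is a responder cutoff on a continuous symptom scale, along with every numerical value below, is mine.

```python
# Hypothetical sketch: how a dichotomizing "response" cutoff on a continuous
# symptom scale changes the estimated treatment effect. Not the authors' code.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_benefit = 2.0   # assumed mean extra symptom reduction on drug (scale points)

baseline = rng.normal(24, 4, n)                    # e.g. a depression rating scale
change_placebo = rng.normal(8, 6, n)               # symptom reduction on placebo
change_drug = rng.normal(8 + true_benefit, 6, n)   # symptom reduction on drug

for cutoff in (0.25, 0.50, 0.75):                  # "response" = at least this fractional improvement
    resp_drug = np.mean(change_drug / baseline >= cutoff)
    resp_placebo = np.mean(change_placebo / baseline >= cutoff)
    rd = resp_drug - resp_placebo                  # absolute risk difference
    rr = resp_drug / resp_placebo                  # relative measure of response
    print(f"cutoff {cutoff:.2f}: response {resp_drug:.2f} vs {resp_placebo:.2f}, "
          f"risk difference {rd:.3f}, risk ratio {rr:.2f}")
```

With the true benefit held fixed, the estimated effect on the dichotomized outcome varies with the choice of cutoff, which is the general phenomenon the simulations probe.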
The COVID-19 pandemic has shown us that there are numerous research questions (empirical, political, and philosophical) that need addressing prior to, during, and after a pandemic. The current organisation of medical research has hindered our ability to efficiently answer these questions. This in turn suggests that there ought to be changes to how the medical research agenda is set.
Studies in History and Philosophy of Science, 2022
In a discussion note published in this journal, Hoefer and Krauss (2021) criticise an article of mine published some years ago, also in this journal (Stegenga 2015). I welcome criticism, but their discussion note seriously misrepresents my work. Hoefer and Krauss neglect all of the fundamental arguments in the article they criticise, while wrongly accusing me of scholarly blunders. This is my rejoinder.
Medicalisation is a social phenomenon in which conditions that were once under legal, religious, personal or other jurisdictions are brought into the domain of medical authority. Low sexual desire in females has been medicalised, pathologised as a disease, and intervened upon with a range of pharmaceuticals. There are two polarised positions on the medicalisation of low female sexual desire: I call these the mainstream view and the critical view. I assess the central arguments for both positions. Dividing the two positions are opposing models of the aetiology of low female sexual desire. I conclude by suggesting that the balance of arguments supports a modest defence of the critical view regarding the medicalisation of low female sexual desire.
Studies in History and Philosophy of Science, 2022
There are two competing views regarding the role of mechanistic knowledge in inferences about the effectiveness of interventions. One view holds that inferences about the effectiveness of interventions should be based only on data from population-level studies (often statistical evidence from randomised trials). The other view holds that such inferences must be based in part on mechanistic evidence. The competing views are local principles of inference, the plausibility of which can be assessed by a more general normative principle of inference. Bayesianism tells us to base inferences on both the 'likelihood' and the 'prior'. The likelihood represents statistical evidence. One influence on the prior probability of a hypothesis like 'd causes x' is mechanistic knowledge of how d causes x. Thus, reasoning about such inferences by appealing to both statistical and mechanistic evidence is vindicated by our best general theory of inference. The primary contribution of this paper is to assess the merits and weaknesses of the arguments on both sides of the debate, using the Bayesian framework. This analysis lends support to those who argue that we should base our causal inferences about interventions in part on mechanistic evidence.
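In outline (a standard restatement, not a quotation from the paper): for a causal hypothesis H, such as 'd causes x', and statistical evidence E from population-level studies, Bayes' theorem gives

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
```

The statistical evidence enters through the likelihood P(E | H), while mechanistic knowledge of how d might cause x is one influence on the prior P(H); so both kinds of evidence bear on the posterior probability of the causal hypothesis.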
A central aim of medical research is causal inference. Does this drug have harmful side effects? Is this medical intervention effective? Does this chemical cause cancer? To provide evidence that bears on these important questions, many sorts of measurements are made in a variety of types of studies. These measurements generate a plethora of data, and these data must be quantitatively summarized so they are rendered relevant to causal hypotheses. That is, to render measurements made in medical research into evidence for a causal hypothesis, those measurements must be transformed into summary quantifications, called "outcome measures." This chapter has two aims. First, we argue for the superiority of one form of outcome measure, called absolute measures. Second, we argue against a widely held myth in epidemiology: that in observational methods, such as case-control studies, only the relative outcome measure called the odds ratio can be calculated. We argue that there is no justification for this myth.
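For reference, where r_t and r_c are the event rates in the treatment (or exposed) and control (or unexposed) groups, the measures at issue are standardly defined as

```latex
\mathrm{RD} = r_t - r_c \qquad
\mathrm{RR} = \frac{r_t}{r_c} \qquad
\mathrm{OR} = \frac{r_t/(1 - r_t)}{r_c/(1 - r_c)}
```

The risk difference RD is an absolute measure; the relative risk RR and the odds ratio OR are relative measures.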
We provide a novel articulation of the epistemic peril of p-hacking using three resources from philosophy: predictivism, Bayesian confirmation theory, and model selection theory. We defend a nuanced position on p-hacking: p-hacking is sometimes, but not always, epistemically pernicious. Our argument requires a novel understanding of Bayesianism, since a standard criticism of Bayesian confirmation theory is that it cannot represent the influence of biased methods. We then turn to pre-analysis plans, a methodological device used to mitigate p-hacking. Some say that pre-analysis plans are epistemically meritorious while others deny this, and in practice pre-analysis plans are often violated. We resolve this debate with a modest defence of pre-analysis plans. Further, we argue that pre-analysis plans can be epistemically relevant even if the plan is not strictly followed—and suggest that allowing for flexible pre-analysis plans may be the best available policy option.
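A minimal illustration of the peril in its crudest form (not the paper's Bayesian or model-selection analysis): if a researcher measures several outcomes with no true effect and reports whichever gives the smallest p-value, the chance of a nominally significant result rises well above 0.05. All parameters are hypothetical.

```python
# Crude illustration of p-hacking by outcome switching; hypothetical parameters,
# not the paper's formal analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_experiments, n_outcomes, n_per_arm = 2000, 10, 50

false_positives = 0
for _ in range(n_experiments):
    p_values = []
    for _ in range(n_outcomes):                 # several outcomes, all with no true effect
        a = rng.normal(0, 1, n_per_arm)
        b = rng.normal(0, 1, n_per_arm)
        p_values.append(stats.ttest_ind(a, b).pvalue)
    if min(p_values) < 0.05:                    # report only the "best" outcome
        false_positives += 1

print(f"nominal alpha 0.05, realized false-positive rate: "
      f"{false_positives / n_experiments:.2f}")
```

A pre-analysis plan that fixes the primary outcome in advance blocks exactly this kind of selective reporting, which is why such plans are proposed as a mitigation.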
Reasons transmit. If one has a reason to attain an end, then one has a reason to effect means for that end: reasons are transmitted from end to means. I argue that the likelihood ratio is a compelling measure of reason transmission from ends to means. The likelihood ratio measure is superior to other measures, can be used to construct a condition specifying precisely when reasons transmit, and satisfies intuitions regarding end-means reason transmission in a broad array of cases.
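The likelihood ratio itself has the familiar form below; exactly how the end and the means slot into it is developed in the paper, and the schematic reading given here, on which a reason for end E transmits to means M when the ratio exceeds one, is only an illustrative assumption.

```latex
\mathrm{LR}(E, M) \;=\; \frac{P(E \mid M)}{P(E \mid \neg M)}, \qquad \text{transmission when } \mathrm{LR}(E, M) > 1
```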
Robustness is a frequently employed argument form. The idea is simple: hypotheses are more likely to be true when they are supported by diverse kinds of evidence. Robustness requires the available evidence to be independent. We identify two general kinds of independence appealed to in robustness arguments: ontic independence (OI)—when the multiple lines of evidence depend on different materials, assumptions, or theories—and conditional probabilistic independence (CPI). The failure of independence of evidence in robustness arguments is sometimes referred to as 'pseudorobustness', and we identify two kinds of pseudorobustness based on the two kinds of independence appealed to in robustness arguments. When formulating robustness arguments, many assume that OI is sufficient for a robustness argument to be warranted, and thus that an empirical scenario can fail to be robust only by a failure of OI. We argue that OI, as typically construed, is not a sufficient independence condition for warranting robustness arguments. We show that OI evidence can collectively confirm a hypothesis to a lower degree than individual lines of evidence, contrary to the standard assumption undergirding usual robustness arguments. We employ Bayesian networks to represent the ideal empirical scenario for a robustness argument and a variety of ways that empirical scenarios can fall short of this ideal.
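Conditional probabilistic independence here is the standard confirmation-theoretic notion: two lines of evidence E1 and E2 are conditionally independent when, given the hypothesis H (and, typically, given its negation as well),

```latex
P(E_1 \mid E_2, H) = P(E_1 \mid H), \qquad P(E_1 \mid E_2, \neg H) = P(E_1 \mid \neg H)
```

Lines of evidence can be ontically independent (different materials, assumptions, or theories) while still failing CPI, which is one way a seemingly robust scenario can fall short of the ideal.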
Amalgamating evidence of different kinds for the same hypothesis into an overall confirmation is analogous, I argue, to amalgamating individuals' preferences into a group preference. The latter faces well-known impossibility theorems, most famously Arrow's Theorem. Once the analogy between amalgamating evidence and amalgamating preferences is made tight, it is natural to suspect that amalgamating evidence might face a theorem similar to Arrow's. I prove that it does, and I end by discussing the plausibility of the axioms required for the theorem.
Studies in History and Philosophy of Biological and Biomedical Sciences, 2020
To develop those nebulous worries about medicine that I began to have ten years ago into a coherent set of arguments required a reeducation. Among the many scholars who taught me and inspired me, through their written work and conference and seminar discussions, were Miriam Solomon, David Healy, Jonathan Fuller, and Joseph Gabriel. The close reading and critical commentaries these four have given to Medical Nihilism are for me a joy and an honour.
You might think, as I did, that research on artificial life is a relatively recent endeavour, a feature of the age of science fiction, contemporary with research on artificial intelligence. But Genesis Redux reveals otherwise. Growing out of a workshop at Stanford, the seventeen essays collected by Jessica Riskin draw on examples from ancient, early modern and modern science to show that people have been trying to create and re-create life for a very long time.
Philosophers have committed sins while studying science, it is said: philosophy of science focused on physics to the detriment of biology, reconstructed idealizations of scientific episodes rather than attending to historical details, and focused on theories and concepts at the expense of experiments. Recent generations of philosophers of science have tried to atone for these sins, and by the 1980s the exculpation was in full swing.
There are two radical views regarding the role of mechanisms in causal inference. One holds that causal inference, at least in medicine and the social sciences, should be based only on data from population-level studies (statistical evidence). The other holds that causal inference must be based in part on mechanistic evidence. This paper appeals to Bayesian confirmation theory to defend a middle view, and explains why the arguments for both sides can seem compelling. The competing views are local principles of inference, the plausibility of which can be assessed by a general normative principle of inference. The Bayesian tells us to base inferences on both the likelihood and the prior. The likelihood represents statistical evidence. One influence on the prior probability of a hypothesis like 'd does x' is knowledge of how d does x. Thus, reasoning about causal relations by appealing to both statistical and mechanistic evidence is vindicated by our best general theory of inference.
This is the Annual Peter Sowerby Lecture (2016) at King's College London. My talk begins at about... more This is the Annual Peter Sowerby Lecture (2016) at King's College London. My talk begins at about minute 10 in the video.
This book is an introductory textbook for philosophy of medicine. It can be used as a foundational text in an introductory philosophy of medicine course for students who have little or no background in philosophy. Supplemented with the additional readings listed at the end of each chapter, it can be used as a background text in an advanced philosophy of medicine seminar. This book can be used in a medical school curriculum as part of a course designed to elicit critical reflection among medical students about the foundations of their profession. Finally, patients themselves may be interested in developing a richer understanding of the enterprise that is so important to one's health.
I have uploaded syllabi for philosophy of medicine courses at different levels. Please see the 'syllabi' section of my academia site.
This book argues that if we consider the ubiquity of small effect sizes in medicine, the extent of misleading evidence in medical research, the thin theoretical basis of many interventions, and the malleability of empirical methods, and if we employ our best inductive framework, then our confidence in most medical interventions ought to be low. It is an interdisciplinary study of the epistemology of medicine, written for philosophers, physicians, and the general educated public. One recent reader, an emeritus professor of psychiatry here in Cambridge, claims that "It is years since I have read such a magnificent, coherent and respectful book on the epistemology of medicine."
This sample syllabus for philosophy of medicine is designed for a lower-level undergraduate course. Please also see my syllabus titled 'Philosophy of Medicine Upper-Level Seminar', which is designed for an upper-level undergraduate course or graduate-level seminar. Both syllabi are based around my forthcoming textbook Care and Cure (working title, to be published by the University of Chicago Press in summer 2018). Both syllabi are designed for a fourteen-week semester. Please use and freely modify these syllabi to suit your teaching. Please also feel free to contact me with any comments, questions, or suggestions.
This sample syllabus for philosophy of medicine is for an upper-level undergraduate course or graduate-level seminar. Please also see my syllabus titled 'Introduction to Philosophy of Medicine', which is designed for a lower-level undergraduate course. Both syllabi are based around my forthcoming textbook Care and Cure (working title, to be published by the University of Chicago Press in summer 2018). Both syllabi are designed for a fourteen-week semester. Please use and freely modify these syllabi to suit your teaching. Please also feel free to contact me with any comments, questions, or suggestions.
Medical scientists employ 'quality assessment tools' to assess evidence from medical research, especially from randomized trials. These tools are designed to take into account methodological details of studies, including randomization, subject allocation concealment, and other features of studies deemed relevant to minimizing bias. There are dozens of such tools available. They differ widely from each other, and empirical studies show that they have low inter-rater reliability and low inter-tool reliability. This is an instance of a more general problem called here the underdetermination of evidential significance. Disagreements about the quality of evidence can be due to different—but in principle equally good—weightings of the methodological features that constitute quality assessment tools. Thus, the malleability of empirical research in medicine is deep: in addition to the malleability of first-order empirical methods, such as randomized trials, there is malleability in the tool...
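To illustrate the underdetermination point with made-up numbers (the studies, tools, and weights below are hypothetical and not drawn from any actual quality assessment instrument):

```python
# Hypothetical illustration: two equally reasonable-seeming weightings of the same
# methodological features rank the same pair of studies differently.
features = ["randomization", "allocation_concealment", "blinding", "attrition"]

study_scores = {
    # feature scores (0-1) for two imaginary trials
    "Study A": {"randomization": 1.0, "allocation_concealment": 0.2,
                "blinding": 1.0, "attrition": 0.3},
    "Study B": {"randomization": 0.6, "allocation_concealment": 0.9,
                "blinding": 0.4, "attrition": 0.9},
}

tools = {
    # two quality assessment "tools" = two weightings of the same features
    "Tool 1": {"randomization": 0.4, "allocation_concealment": 0.1,
               "blinding": 0.4, "attrition": 0.1},
    "Tool 2": {"randomization": 0.1, "allocation_concealment": 0.4,
               "blinding": 0.1, "attrition": 0.4},
}

for tool_name, weights in tools.items():
    for study, scores in study_scores.items():
        quality = sum(weights[f] * scores[f] for f in features)
        print(f"{tool_name}: {study} quality = {quality:.2f}")
```

With these numbers, Tool 1 rates Study A higher while Tool 2 rates Study B higher, even though both tools use the same features and the same study data; nothing in the data alone settles which weighting is correct.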
Published by University of Chicago Press in 2018.