Journal of Evaluation in Clinical Practice ISSN 1356-1294
COMMENTARY
EBM and the strawman: a commentary on Devisch and
Murray (2009). ‘We hold these truths to be self-evident’:
deconstructing ‘evidence-based’ medical practice
Stephen Buetow PhD
Department of General Practice and Primary Health Care, University of Auckland, Auckland, New Zealand
doi:10.1111/j.1365-2753.2009.01215.x
The contribution by Devisch and Murray [1] reminds me of a
comment made in the Journal of Evaluation in Clinical Practice
by Greenhalgh and Worrall [2] 12 years ago – a comment that still
rings true today: ‘the two “sides” in this debate are, in reality,
closer together than ever before, and each is attacking the other for
a viewpoint it no longer holds'. While proponents of evidence-based medicine (EBM) have tended to 'attack' their critics by
ignoring them [3,4], Devisch and Murray assail a strawman – a
version of EBM that no longer applies. Strawman arguments can
be used strategically to provoke responses but these authors appear
unaware of the fallacies their dated analysis perpetuates.
I concur with Devisch and Murray on some of the issues they
raise. EBM has always defined the concept of evidence in the
restrictively narrow sense of research evidence. Moreover, it does,
indeed, declare ‘itself victorious through an appeal to intuition’
having never been able to ‘fulfil its own requirements for evidence’
or at least the evidential requirements it expects others to meet.
Sometimes taking ‘the tones of a moral imperative’ [5], it has
therefore pronounced its own verisimilitude self-evident.
Yet, such criticisms have been made before [6–8] – although not
in the context of a light deconstructionist reading based on philosopher Jacques Derrida. Various other arguments made by
Devisch and Murray are no longer true of EBM; and arguing as if
they were true is unhelpful. EBM has evolved, as this movement
has long understood that it needs to [9]. Spokesmen for EBM
[10,11], as well as critics including myself [12,13], have documented how EBM has adapted in response to criticisms of it, as
made for example in the 11 previous thematic editions that this
Journal has dedicated to the EBM debate. Such criticisms have
included the inadequacy of research evidence alone as a ‘base’ for
[14], or even guide to, clinical decision making, and the unacceptable loss of decision-making autonomy by clinicians and patients
[11]. Ignoring how EBM has changed to address such concerns,
Devisch and Murray attack the second of the three formulations of
EBM described by Haynes et al. [10,11], and reviewed below, and
so proceed from false premises as to what constitutes EBM today.
In the early 1990s, version one focused on determining and
applying the best research evidence to clinical decisions [9]. It
‘de-emphasized traditional determinants of clinical decisions,
including physiologic rationale and individual clinical experience’
[10]. This account led to the second version [9,15] according to
which Devisch and Murray reduce EBM to a singular focus on
how research 'evidence itself "acts", as the "agent" of care'; such
that ‘decisions are made in the absence of anyone who decides’
[1]. This interpretation by Devisch and Murray misstates even
version one and is thoroughly contradicted by version two which
acknowledges that research evidence alone cannot guide clinical
decision making. Version two instead recognises three components
of clinical decisions: clinical expertise, patient preferences and
research evidence [15].
It makes no sense therefore to read EBM as medicine based on
evidence per se. Despite conflating the concepts of evidence and
research evidence, EBM has long acknowledged other justifications for action – although not as ‘evidence’. It makes this distinction because it is committed to putting more epidemiology into
clinical practice, to differentiating itself from medicine before
EBM and to retaining its successful brand. Accordingly, the
problem is not that EBM is based on evidence alone – as it no
longer is – but rather that the brand of ‘EBM’ has become a
misnomer, which is why version two attempted to clarify its
meaning. In addition, I want to question the claim, by Devisch and
Murray, that ‘the E in EBM means that not only is “evidence” true,
but also that, according to a naïve realism, that [sic] the scientist
has direct access to this truth’. I would argue instead that proponents of EBM are less likely to be positivists than post-positivists
for whom reality can only be approximated and for whom research
evidence is but a single warrant for action. As Dobrow et al. [16]
comment, however, ‘it may be more important how evidence is
utilised than how it is defined’.
The third incarnation of evidence-based decision making
[10,11] – of which Devisch and Murray appear oblivious – refines
EBM further. EBM version three stipulates the use of clinical
expertise as the means of integrating the clinical state and circumstances, research evidence, and patient preferences and actions.
Different components from this trinity may predominate in different circumstances, and ‘achieving the right balance among the
factors that can affect a decision is not necessarily easy’ [11].
However, this version of EBM views clinical expertise as the
conductor whose ‘central’ role is to bring together and integrate
the components identified. How this orchestration takes place to
produce optimal decisions is unstated [5,17] but ‘clinical judgement and expertise are viewed as essential to success’ [10] for
decision making, follow-up and monitoring. This seminal and
practical restatement of EBM by leaders of the EBM movement
cannot be reasonably overlooked.
To the best of my knowledge, there has been no subsequent,
fundamental reformulation of EBM. Version three continues therefore to predominate, with Haynes, as recently as this year,
re-emphasizing the importance of viewing ‘evidence in context’
[18]. Against this backdrop, Devisch and Murray make additional
statements that, from my perspective, appear false or dubious.
They suggest, for example, that ‘within the EBM paradigm, the
© 2009 The Author. Journal compilation © 2009 Blackwell Publishing Ltd, Journal of Evaluation in Clinical Practice 15 (2009) 957–959
randomized controlled trial (RCT) is believed to offer the most
valid form of evidence, effectively denigrating or altogether
excluding other crucial aspects of decision making in the patient–
physician relationship, such as patient values, the physician’s
clinical experience, and others’. Yet, as shown above, EBM neither
denigrates nor excludes clinical experience or patient values.
Indeed, 'rather than erasing the patient', as Mykhalovskiy and Weir
note [19], EBM ‘puts new demands on her/him’ through ‘relations
mediated by scientific evidence’. What EBM has yet to include,
however, is guidance on when and how best to determine what the
individual patient wants, and then to incorporate these wants into
clinical decision making, increasingly in a utilitarian, policy
context of population health care.
Also, despite the claims of Devisch and Murray, among others
[20,21], EBM is probably not a paradigm, at least in a Kuhnian
sense [22–24]. Application to EBM of the term ‘paradigm’ implies
a theoretical structure, epistemologically based on foundationalism [14], which has never been thoroughly explicated. Nor does
EBM adhere to the bald belief that there is a single hierarchy of
evidence, in which the RCT offers ‘the most valid form of evidence’. On the basis of expert opinion and consensus – rather than
intuition – EBM version two [9] states that different study designs
and forms of research evidence can best inform different types of
clinical questions. For questions about the clinical effectiveness of
treatments, an RCT will frequently, although not always [25–27],
provide the best research evidence. Questions posed about, say,
diagnosis and prognosis respectively tend to require cross-sectional and cohort studies, each properly designed and executed
[9]. There might not, however, be a single ranking of study designs
for a particular question type, and hierarchies for different question types are incommensurable. Such objections have prompted a
call for typologies that conceptualize different methodological
approaches [28]. Yet, the doubts are twin-edged as they invite a
searching question for critics of EBM: might support for using
well-designed observational rather than experimental studies to
evaluate the effectiveness of health care justify, in turn, the absence
of RCT evidence to evaluate EBM [26]?
Similarly, qualitative research is recognized by EBM for its
genuine ability to maximize the relevance of research to policy and
clinical practice. Consistent with the development of a Cochrane
Qualitative Research Methods Group, EBM has shown increasing
interest in using systematic reviews to combine qualitative
research [29] – and the grey literature [30] – with quantitative
research. Comprehensive search strategies to retrieve qualitative
studies from electronic databases [31,32] are not therefore as
Devisch and Murray assert, ‘empty rhetorical gestures’. Their
claim that qualitative studies in EBM ‘must be rejected immediately on the basis of faulty evidence’ misrepresents these developments. From their deconstructionist reading of EBM, Devisch and
Murray subsequently argue that to uphold itself, ‘EBM cannot
refuse what it pretends to exclude: non-quantifiable evidence’; yet,
there is no pretence in EBM to exclude such evidence on the basis
that it is ‘systematically forbidden’.
Critics do us and themselves no favours by condemning EBM
for attributes it no longer holds. Criticism should instead focus on
what constitutes EBM today in the context of understanding why
proponents and critics may each misconstrue EBM and speak at
cross-purposes. Misapprehensions can then be corrected on both
sides to produce a shared and valid platform for defining and
influencing the current and future development of EBM. This
platform could enable, for example, an examination of how clinical expertise can best be used to integrate research evidence and
other elements of clinical decision making in patient care, including patient–doctor communication.
References
1. Devisch, I. & Murray, S. J. (2009) 'We hold these truths to be self-evident': deconstructing 'evidence-based' medical practice. Journal of
Evaluation in Clinical Practice, 15, 950–954.
2. Greenhalgh, T. & Worrall, J. (1997) From EBM to CSM: the evolution
of context-sensitive medicine. Journal of Evaluation in Clinical Practice, 3, 105–108.
3. Buetow, S. (2002) Beyond evidence-based medicine: bridge-building
a medicine of meaning. Journal of Evaluation in Clinical Practice, 8,
103–108.
4. Miles, M., Loughlin, M. & Polychronis, A. (2008) Evidence-based
healthcare, clinical knowledge and the rise of personalised medicine.
Journal of Evaluation in Clinical Practice, 14, 621–649.
5. Haynes, R. (2002) What kind of evidence is it that evidence-based
medicine advocates want health care providers and consumers to
pay attention to? BioMed Central Health Services Research, 2, 3;
doi:10.1186/1472-6963-2-3.
6. Buetow, S., Upshur, R., Miles, A. & Loughlin, M. (2006) Taking stock
of evidence-based medicine: opportunities for its continuing evolution. Journal of Evaluation in Clinical Practice, 12, 399–404.
7. Cohen, A., Stavri, P. & Hersh, W. (2004) A categorization and analysis
of the criticisms of evidence-based medicine. International Journal of
Medical Informatics, 73, 35–43.
8. Upshur, R. & Tracy, C. S. (2004) Legitimacy, authority, and hierarchy:
critical challenges for evidence-based medicine. Brief Treatment and
Crisis Intervention, 4, 197–204.
9. Sackett, D., Rosenberg, W. C., Muir Gray, J., Haynes, R. B. & Richardson, W. S. (1996) Evidence based medicine: what it is and what it
isn’t. British Medical Journal, 312, 71–72.
10. Haynes, R., Devereaux, P. & Guyatt, G. (2002) Clinical expertise in
the era of evidence-based medicine and patient choice. ACP Journal
Club, 136, A11–A14.
11. Haynes, B., Devereaux, P. & Guyatt, G. (2002) Physicians’ and
patients’ choices in evidence based practice. British Medical Journal,
324, 1350.
12. Buetow, S. (2005) Opportunities to elaborate on casuistry in clinical
decision-making. Commentary on Tonelli (2005). Integrating
evidence into clinical practice: an alternative to evidence-based
approaches. Journal of Evaluation in Clinical Practice, 12, 248–
256.
13. Buetow, S. (2008) Metatheory, change and evidence-based medicine.
A commentary on Isaac & Franceschi (2008). Journal of Evaluation in
Clinical Practice, 14, 660–662.
14. Upshur, R. (2002) If not evidence, then what? Or does medicine really
need a base? Journal of Evaluation in Clinical Practice, 8, 113–119.
15. Haynes, R., Sackett, D., Gray, J., Cook, D. & Guyatt, G. (1996)
Transferring evidence from research into practice: 1. The role of clinical care research evidence in clinical decisions. Evidence-Based Medicine, 1, 196–198.
16. Dobrow, M. (2004) Evidence-based health policy: context and utilisation. Social Science and Medicine, 58, 207–217.
17. Tonelli, M. (2006) Integrating evidence into clinical practice: an
alternative to evidence-based approaches. Journal of Evaluation in
Clinical Practice, 12, 248–256.
18. Haynes, R. (2009) Evidence in context: one person’s poison is another’s acceptable risk. Evidence-Based Medicine, 14, 5.
19. Mykhalovskiy, E. & Weir, L. (2004) The problem of evidence-based
medicine: directions for social science. Social Science and Medicine,
59, 1059–1069.
20. Evidence-Based Medicine Working Group (1992) Evidence-based
medicine: a new approach to teaching the practice of medicine.
Journal of the American Medical Association, 268, 2420–2425.
21. Ghali, W. & Sargious, P. (2002) The evolving paradigm of evidence-based medicine. Journal of Evaluation in Clinical Practice, 8, 109–112.
22. Couto, J. S. (1998) Evidence-based medicine: a Kuhnian perspective
of a transvestite non-theory. Journal of Evaluation in Clinical Practice, 4, 267–275.
23. Sehon, S. & Stanley, F. (2003) A philosophical analysis of the
evidence-based medicine debate. BMC Health Services Research, 3
(14), doi:10.1186/1472-6963-3-14.
24. Miles, A., Loughlin, M. & Polychronis, A. (2007) Medicine and
evidence: knowledge and action in clinical practice. Journal of Evaluation in Clinical Practice, 13, 481–503.
25. Upshur, R. (2003) Are all evidence based practices alike? Problems in
the ranking of evidence. Canadian Medical Association Journal, 169,
672–673.
26. Black, N. (1996) Why we need observational studies to evaluate the
effectiveness of health care. British Medical Journal, 312, 1215–1218.
27. Worrall, J. (2002) What evidence in evidence-based medicine? Philosophy of Science, 69, S316–S330.
28. Petticrew, M. & Roberts, H. (2003) Evidence, hierarchies, and typologies: horses for courses. Journal of Epidemiology and Community
Health, 57, 527–529.
29. Thomas, J., Harden, A., Oakley, A., Oliver, S., Sutcliffe, K., Rees, R.,
Brunton, G. & Kavanagh, J. (2004) Integrating qualitative research
within trials in systematic reviews. British Medical Journal, 328,
1010–1012.
30. Hopewell, S., McDonald, S., Clarke, M. & Egger, M. (2005) Grey
literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Systematic Reviews, 3; doi:10.1002/
14651858.MR000010.pub3.
31. McKibbon, K., Wilczynski, N. & Haynes, R. (2006) Developing
optimal search strategies for retrieving qualitative studies in
PsychInfo. Evaluation and the Health Professions, 29, 440–454.
32. Wilczynski, N., Marks, S. & Haynes, R. (2007) Search strategies for
identifying qualitative studies in CINAHL. Qualitative Health
Research, 17, 705–710.