
Metascience (2012) 21:531–559

DOI 10.1007/s11016-011-9639-9

BOOK SYMPOSIUM

Perspectives on global warming


Naomi Oreskes and Erik Conway: Merchants of doubt: How a
handful of scientists obscured the truth on issues from tobacco
smoke to global warming. New York: Bloomsbury Press, 2010,
368pp, $27.00 HB

Steven Yearley • David Mercer • Andy Pitman • Naomi Oreskes • Erik Conway

Published online: 12 January 2012


© Springer Science+Business Media B.V. 2011

Steven Yearley

This is a terrifically researched and very well presented book. It is a major
achievement to produce a trade book based on the social and historical analysis of
science that tells a wonderful, though alarming, story with wit and irony. The heart
of this story is the way that industry interests, friendly scientists and conservative
activists—mainly in the US—have over the last half century developed a strategy
for generating the appearance of doubt relating to the scientific claims underwriting
a series of environmental and public health reforms. The dramatic but wryly
entertaining aspect of this story is that in the differing fields of smoking, acid rain,
ozone depletion, nuclear winter and climate change, the same scientists and advisers

S. Yearley (✉)
ESRC Genomics Policy and Research Forum, University of Edinburgh, Holyrood Road,
Edinburgh EH8 8AQ, UK
e-mail: steve.yearley@ed.ac.uk

D. Mercer
Science and Technology Studies Program, University of Wollongong, Wollongong,
NSW 2522, Australia
e-mail: dmercer@uow.edu.au

A. Pitman
Climate Change Research Centre, The University of New South Wales, Sydney,
NSW 2052, Australia
e-mail: a.pitman@unsw.edu.au

N. Oreskes
Department of History, University of California, San Diego, La Jolla, CA 92093-0104, USA
e-mail: naoreskes@ucsd.edu

E. Conway
Caltech, 1200 East California Blvd., Pasadena, CA 91125, USA
e-mail: kayakker41@gmail.com


manage to crop up, always trying to talk the regulations down. They devise and
refine a strategy of focusing on generating the appearance of doubt, of insisting
that media outlets feature ‘balanced’ coverage of the two ‘sides’, and of targeting a
series of friendly mass-media outlets. Nearly everyone has noted the fit between
some conservative politicians and the self-styled ‘sceptics’, but the great
achievement of Oreskes and Conway is twofold: first, to have shown how often it is the
same core of doubt-mongers (usually including Fred Seitz and S. Fred Singer) who
are present almost irrespective of the specific scientific issue, and second, to have
troubled to read all the contentious material and to show how often the sceptics’
arguments are based on wilful misunderstanding, tendentious interpretation and
unsubstantiated assertion.
The core of the book is a series of case studies, each well researched and
documented. Given the policy importance of many of these issues, it comes as rather
a surprise that the academic STS literature on them is not that elaborate; Oreskes
and Conway do a fine job of reviewing the arguments that the protagonists offered.
This quite often takes them into significant depth about how statistical confidence
levels are set or how the oceans’ ability to absorb heat features in climate models.
But they manage to keep the text readable and the level of detail is well judged. And
their impatience with the sceptical scientists who really should know better is very
nicely handled.
Rather than continue to burden the book with praises—and without at all
intending to diminish the level or sincerity of my praise—I propose to take the rest
of the opportunity offered by this essay to focus on two themes that I hope may be
of general interest to STS readers. The first of these themes concerns the nature of
the critical line on science developed by the self-styled sceptics. A major, recurring
topic in the book is the way that industry figures and spokespersons for the ‘right’,
in an attempt to oppose particular scientific claims—whether about climate change
or secondary smoke and so on—are led to adopt a line in scepticism about scientific
claims per se. In the chapter on secondary smoke, for example, Oreskes and Conway
outline a lengthy publication (filed in the Legacy Tobacco Documents Library) that
gives guidance on how to combat the arguments of the US EPA and other agencies
by raising doubts about the objectivity and credibility of their science (144–145).
Oreskes and Conway return frequently to the point that several of the scientists who
appear willing to lend their names and credibility to these counter-environmental
arguments have a background in physics and allied subjects and in Cold War era
military research. The consequence is that opponents of science-based
environmental regulations have a choice. Either they can hang on to their support for
science (and scientific institutions) and for the free market (if they can find a way to
undermine the specific science that inconveniences them) or they can jettison their
support for science and hold firm to the free market. Though Oreskes and Conway
find a lot of determined—though flawed—attempts to undermine specific bits of
science, there is also a sense in which, when push comes to shove, the critics of
environmental regulations will abandon science rather than relax their opposition
to regulation. For, as Oreskes and Conway explain (249), “these men … viewed
regulation as the slippery slope to Socialism, a form of creeping Communism”. It
seems they were willing to throw doubt on every last aspect of contemporary


science—peer review, scientific institutions, journals’ editorial policy—as a means
to preserve the free market and thwart the growth of regulation. However, once this
destructive work is done, it is very hard to build the credibility of scientific evidence
up again. A pan-sceptic position leaves the opponents with nowhere to go to
substantiate their own current or future claims.
The curious thing here is that one might expect the ‘right’ to have an elective
affinity with science, if only for science’s presumed connection to technological
progress and key instances where scientific research has aided the cause of free-
market societies, in the space race or in the Second World War. Even today, in the
IT industries and in the nuclear energy sector, industry interests and science rub
along well, with no signs of ideological anxiety among the ‘right’. In his compelling
and popular overview of the ‘short twentieth century’, Hobsbawm (1994) is perhaps
unusual among social historians in paying strong attention to science. He notes, for
example, how developments in science have been associated with cultural crises in
the last 80 years. But he also draws attention to the extent to which contemporary
science—even basic research—cannot avoid accusations of interestedness. Noting
the contest for money and the involvement of political and commercial interests in
the support of science, he observes that science has not been value-neutral:
as all scientists knew, scientific research was not unlimited and free, if only
because it required resources which were in limited supply. The question was
not whether anyone should tell researchers what to do or not to do, but who
imposed such limits and directions, and by what criteria (1994, 556).
As I have suggested elsewhere (1997),1 the ‘co-opting’ of science to economic
and political goals was at the heart of Hobsbawm’s analysis of the precarious
situation of the ideal of science at the close of the twentieth century. His analysis
seems reasonable up to a point, but I feel it is not fully correct. To my mind,
Hobsbawm’s summary reveals what he overlooks:
All states therefore supported science, which, unlike the arts and most of the
humanities, could not effectively function without such support, while
avoiding interference so far as possible. But governments are not concerned
with ultimate truth (except those of ideology or religion) but with instrumental
truth. At most they may foster ‘pure’ (i.e. at the moment useless) research
because it might one day yield something useful, or for reasons of national
prestige, in which the pursuit of Nobel prizes preceded that of Olympic medals
and still remains more highly valued. Such were the foundations on which the
triumphant structures of scientific research and theory were erected (1994,
557).
Hobsbawm’s observation about the basis of the ‘deal’ between science and the
state is an important one, but it seems to me he neglects the extent to which the
instrumental truths at which science and technology are aimed have themselves
undergone a change.

1 This section draws heavily on my paper of 1997, with minor updating only.


In the immediate post-War decades, the primary social role of (and justification
for) science was to increase productivity and competitive performance, whether
economic, military or medical. But, as Oreskes and Conway indicate and as others
such as Roqueplo (1994) and Beck (1992) have noted, there has been a switch away
from science being seen as a way of increasing production to a view of it as a means
of handling risks and of achieving regulation. Of course, much R&D is still aimed at
innovative products and processes. But, to take an extreme example, scientists are
now nearly as likely to be advising politicians on the health risks arising from
emerging food technologies as they are on ways of increasing agricultural
productivity. This growing regulatory role places increasing, and increasingly
uneasy, demands on science since questions are more likely to be publicly raised
over trust and judgement in regulatory disputes than they are in relation to the
development of innovative products. If designers want trains to go faster or
aeroplanes to carry more passengers, then there can be legitimate differences about
the way that such performance is measured. One train may go faster flat out on the
straight while another copes better with the leaves that seem to plague British train
tracks in autumn. One plane may carry lots of people on short hops while another
transports large numbers across continents. Both trains can claim to be fast; both
planes claim to have a huge capacity. Very little hangs on proving, let alone proving
to the public, which train is ‘really’ fastest, especially given the variety of
circumstances under which they will be used by the public.
However, in the matter of ruling out risks, pressures exist to prove which
pesticide is safest, which disposal method for oil-drilling platforms is the least
environmentally harmful and so on. Moreover, these proofs have to be offered in
public forums where various interest groups have a legitimate role and where (the
threat of) legal review is likely to be involved. The task now facing science, as well
as the context in which that task has to be performed, place new, more exacting
demands on scientific knowledge. Accordingly, it seems to me that there is
potentially a bigger story about the relationship of the political ‘right’ to science and
scientific proof than is immediately apparent from the focus of Oreskes and
Conway’s book.
The topic of my second general theme involves raising a related question. For an
STS reader, one of the professionally most striking features of Oreskes and
Conway’s book is the relative absence of references to STS authors and terminology.
Kuhn gets a mention and there is abundant reference to the norms of scientific
conduct. Now, I should like to stress that I do not bring this up because of a trades-
union-style attitude, but rather to ask what difference STS has made to our
understanding of science in society. This question has two components. First, there is
the point that, in some sense, the tenor of much STS work has been to diminish the
apparent distinctiveness of science. Unlike Popperians—who searched for a way to
demarcate science from all else—STS authors have tended to make science appear
rather (and sometimes entirely) ‘undistinctive’. STS work has, for example, come to
suggest that scientific norms are open to widely discrepant interpretations and that
scientific practice—even by figures of the highest stature in the history of science—is
not well characterised by a devotion to norms. For understandable reasons, many
STS authors have not come forward with their own normative versions of what it


would be to be a good scientist. In that sense, the standards for proper scientific
conduct that Oreskes and Conway invoke have little affinity with the STS literature.
The second component concerns the nature of regulatory science. There is a good
deal of discussion in the STS literature about the extent to which science for policy or
other ‘mandated’ purposes is—in detail—unlike basic research. It is produced when
the policy/legal question is asked rather than when the scientists are independently
drawn to and ready for the topic. It is produced in a context where a great deal in
political or legal terms hangs on the answer and where all parties know this. By
contrast, Oreskes and Conway are keen to emphasise the similarities between the
work on these environmental and health topics and regular academic science. For
them, it is precisely the ‘regular scientificness’ of this research that gives the results
their force and highlights the inappropriateness of the sceptics’ persistence. One
cannot be a sceptic about a heliocentric solar system because the science is settled,
and no more (they say) can one now be a sceptic about ozone depletion or global
warming, for the same reason. Once again, the authors Oreskes and Conway invoke
have little overlap with the STS literature. Accordingly, it seems to me that this book
offers Oreskes and Conway a strong opportunity to comment on the state of play in
the STS literature, both as regards the normative characteristics of science and as
regards the character of regulatory or mandated or ‘trans’ science.

David Mercer

It is likely that Merchants of Doubt will have a significant impact on public debate
about global warming and science policy. Not so much because what it argues is
completely new (a number of studies already document attempts by industry-funded
think tanks and politically conservative lobby groups to inhibit and shape the
regulation of science and medicine in the US and beyond), but because of the
persuasive way that its case is presented.2 Oreskes and Conway write
beautifully and the book is hard to put down. It is extremely well documented and as
we enter a period of critical reflection on the excesses of the George W. Bush era—
from lies over weapons of mass destruction to the GFC—their political timing is
impeccable. They make devastatingly effective use of the ‘Legacy Tobacco
Documents Library’ to guide the reader down the road along which collusion
between a small number of key scientists, big tobacco, and various politically
conservative think tanks has created roadblocks to regulation for more than four
decades and in multiple debates, from tobacco to nuclear winter, acid rain, ozone,
DDT, and now global warming. (I call this the ‘tobacco road’.) Part of what I
imagine will make this story so engaging to the general reader is that while the
contributions of various institutions and ideologies are described (with an emphasis
on free-market fundamentalism failing the environment (240–265)), the focus never
moves far from documenting the (mis)deeds of a small coterie of individuals—the
‘merchants of doubt’.

2 A sample of studies includes: Rampton and Stauber (2002), Krimsky (2003), Edmond and Mercer (2004), Mooney (2005), Michaels (2008).


Oreskes and Conway also describe the contextual factors that help explain the
success of the ‘merchants of doubt’ (that is, the prominent right-wing scientists who
have supported the sides that have acted against the public interest in the several
debates mentioned above). Standing out among their ‘techniques’ or ploys are: the
use by the political right of the rhetoric of scientific uncertainty to seek to
undermine the public and regulatory credibility of consensual scientific positions;
the failure of public and regulatory debates to understand the scientific peer-review
process; and a breakdown in science communication—with mainstream
scientists reluctant to publicly defend and protect the scientific record and the
inability of the media to exercise critical judgment, giving equal (or more than
equal) time to dubious or spurious scientific perspectives and arguments. However,
while this approach and explanatory framework offers a broadly plausible account
of the recent history of US science policy, Oreskes and Conway devote rather little
space and time to theoretical reflection or the consideration of alternative and
complementary explanations. This means there are important omissions, and
theoretical considerations left unexplored, which mar an otherwise excellent book.
Three areas where these problems stood out for me were the focus on the roles
played by individuals, scientific uncertainty and peer review.
While Oreskes and Conway describe the roles played by various institutions such
as corporate sponsored think tanks and the importance of free-market ideologies,
these factors play the part of providing the politically supportive environment for
the efforts of Oreskes and Conway’s small number of ‘merchants of doubt’, not their
driving force. Oreskes and Conway are aware that there may be questions about this
emphasis:
How did such a small group come to have such a powerful voice? …
We take it for granted that great individuals – Gandhi, Kennedy, Martin
Luther King – can have great positive impacts on the world. But we are loath
to believe the same about negative impacts – unless the individuals are
obvious monsters like Hitler or Stalin. But small numbers can have large
negative impacts, especially if they are organized, determined, and have
access to power (213).
The emphasis means that the reader is not given a deeper sense of the political
and professional motivations of other scientists and institutions promoting or
resisting regulation at any given time. While those promoting regulation hardly
fitted the simplistic stereotypes, propagated by conservative think tanks, of
watermelons (‘green on the outside, pink in the middle’), not all lacked political
agendas and social visions that extended beyond their scientific specialisations.
During the period in question (particularly the earlier years), scientists and scientific
associations did play significant roles in bringing environmental risk issues to public
attention, including sometimes fighting against mainstream scientific perspectives
that were heavily influenced by things like military and corporate funding (cf.
Moore 2008). Oreskes and Conway do briefly note these impulses in the work of
Carl Sagan, who was keen to drive public debate on nuclear winter ahead of a
scientific consensus, and in the resistance of significant parts of the US physics
community to Ronald Reagan’s so-called Strategic Defence Initiative. But these and


other more overt episodes of scientific and institutional left or pro-regulatory
politics of the period are back-staged (49–53). While Oreskes and Conway
demonstrate that the politics of the ‘merchants of doubt’ were sometimes extreme
and conspiratorial, they paint a picture of a scientific/political landscape where they
were the only scientists acting genuinely politically. Focusing on rogue individual
scientists helps Oreskes and Conway avoid conveying to readers that they are in the
business of symmetrically documenting the dynamics of scientific controversies, or
being drawn into taking seriously, or engaging in sustained rebuttal of, claims made
by the right (even if unsustainable) that pro-regulatory science itself may be ‘trendy’
and “propped up by liberal/left politics” (213–214).
This approach also fits with the tendency for Oreskes and Conway’s analysis to
treat the boundaries between science, politics and regulation as clear and distinct:
there is little attention paid to the question of whether there may be fundamental
epistemic ambiguities involved in the roles that scientists are frequently asked to
play in the democratic political process (cf. Ezrahi 2004). This omission is
unfortunate considering how often in the debates that they document the primary
role of many scientists was to sit on interdisciplinary committees, or act as advisors
to decision makers on questions that did not mesh with straightforward pre-
established bodies of scientific knowledge (cf. Jasanoff 1990).
Oreskes and Conway’s emphasis on individuals also tends to backstage the
institutional and epistemological politics of the time. While individuals certainly
played a role in whether or not environmental and health risks were regulated,
their interests and broader ideologies still had to be played out in specific legal and
regulatory arenas. It was in these arenas that various decision-making tools,
conventions and precedents emerged and were consolidated. The answers to
questions such as: Which experts should be given a voice and on what issues? What
counts as good science for purposes of law and regulation? What styles of peer
review are appropriate? provided the resources and constraints that the ‘merchants
of doubt’ worked with; and while they were important players in shaping the
answers to these questions, they were neither alone in their efforts nor were they
uncontested. Interestingly, a recent book such as Michaels’ (2008) draws similar
conclusions to Oreskes and Conway’s in decrying corporate interference
in US regulatory science, even covering some of the same case studies, but it does
not single out the influence of any of Oreskes and Conway’s key individual players.
By doing this, and placing more emphasis on the pressures exerted by corporations
to shape decision-making tools and institutional practices to suit their interests,
Michaels ends up with a less straightforward storyline but compensates by providing
a more detailed set of proposals for reform.
Travelling the ‘tobacco road’ with its relatively small cast of players also tends to
foreground big tobacco’s strategy of generating unreasonable doubt too heavily, as a
tool distinctively suited to thwarting regulation and facilitating corporate influence in
regulatory science. Uncertainty is not, and was not, a tool used exclusively by the
right: the right has not always used uncertainty as a strategy, and appeals to greater
scientific certainty in regulatory cultures (which have given the rhetoric of uncertainty
more influence) have recently emerged as a trend, driven by influences that have been,
and are, much broader and more politically diverse than big tobacco.


Big tobacco’s strategies certainly resonated across a variety of scientific debates of
the time, as Oreskes and Conway ably demonstrate, but generating uncertainty about
mainstream science was not a strategy that was adopted only by corporate sponsored
anti-regulatory interest groups. If we look at debates about things such as nuclear safety,
food additives, lead in petrol, genetically modified food, and electric and magnetic
fields, and in neighbouring contexts involving things like product safety litigation (with
multibillion dollar implications for industry e.g.: Agent Orange, Bendectin), those
pursuing stricter regulation or financial compensation from ‘industry’ were often
adopting similar rhetorical strategies to those deployed by big tobacco as far as
foregrounding scientific uncertainty and challenging mainstream scientific perspectives
(cf. Ashmore 1996). The frequently asked questions were: how safe is safe enough? Can
current standards guarantee safety (and to what decimal point)? These approaches also
frequently challenged the policy relevance of certain areas of mainstream science,
citing industry capture of mainstream perspectives through institutional support and
biased research funding at the expense of economically and politically challenging
scientific perspectives, which became marginalised (cf. Ravetz 2006).
Depending on the political context, appeals to uncertainty have been used by both
corporate interests, to slow down regulation to maintain the profitability of their
existing enterprises by highlighting the uncertainty of newly identified risks, and
environmentalist interests, to resist the ‘roll out’ of new technological systems that
might have uncertain, possibly negative, consequences. As a counterpoint to these uses
of uncertainty, commentators funded by the same corporate think tanks—who have
used uncertainty to discourage regulation based on scientific consensus—have
emphasised, in different scientific debates where corporate interests have been at
stake, the need to defer to consensus science built on sound methodology and peer
review (cf. Foster and Huber 1997).
While Oreskes and Conway are correct in identifying ‘merchants of doubt’ as an
important influence encouraging regulators to make rhetorical demands for greater
certainty in science, their efforts also align with a politically and intellectually
broader context of growing public ambivalence towards experts and criticisms of risk
regulation promoted by some segments of the left and in academic fields such as the
psychological and cultural studies of risk. Fear of uncertain risks and precautionary
‘possibilistic’ approaches to policy have regularly been dismissed as symptomatic
of the irrationality and fracturing of shared political vision in popular culture. The
extremely popular critiques of maverick left commentators such as Barry Glassner
(1999) and Frank Furedi (2009) have also helped contribute to a policy environment
where concerns about risks that cannot be simply explained and quantified can more
easily be dismissed as ‘moral panics’. Critics of concerns over anthropogenic global
warming also frequently draw on these images (cf. Plimer 2009). As Theodore Porter
(1996) has noted, in periods of general mistrust in expertise, there have been
corresponding appeals to forms of expertise that can be legitimated on the basis of
more mechanical forms of objectivity, where numerical certainty is valued ahead of
‘accuracy’ and judgment. These trends may be useful for corporate interests to
piggyback on, to inhibit the regulation of environmental risks but can also be seen as
important drivers for more politically variegated initiatives such as the promotion
of things like evidence-based medicine (cf. Mercer 2008).


Fitting in with Oreskes and Conway’s preoccupations with the political role
played by unreasonable expectations of certainty in science, they also look at the
question that naturally follows from this: What, then, is it reasonable to expect from
the science that we should use for regulation? Oreskes and Conway’s answer to this
is consensus science that has been subject to appropriate forms of peer review.
Throughout their analysis, they note that the failure of the public, media and
regulators to understand the processes of peer review has been one of the key factors
providing ‘merchants of doubt’ the space to politically interfere in regulation. An
obvious problem with this approach is that it edges around the problem that a factor
in a number of the debates they track involved disputes about what constitutes
legitimate peer review and who is appropriately charged to be a peer and a reviewer.
While Oreskes and Conway provide plenty of cases where the ‘merchants of doubt’
did not follow appropriate review processes with their own work they tend to
conflate this with the idea that the ‘merchants of doubt’ were opposed to the idea of
peer review more generally or were somehow anti-science as opposed to the science
that did not suit their own purposes. (Oreskes and Conway’s conflation of criticism
of science, the idea of science, consensus science, and individual sciences/scientists,
appears in a number of places: 59–65.) Without acknowledging this point, it is hard
to explain why the ‘merchants of doubt’ at different times attacked both proponents
of nuclear winter and the IPCC on the basis that they had not followed proper peer
review processes. A second problem for Oreskes and Conway is to be able to
explain what constitutes scientific peer review. They imply that such standards are
straightforward, well understood and universal, but they then go on to provide a
number of images of peer review that are at times unusual or unorthodox and
inconsistently applied.
Let me provide just a few examples:
1. Oreskes and Conway suggest that a feature of peer review is that the work of
scientists whose past records may appear to be weak or suspect is reviewed
more carefully: “If the person is thought to do sloppy work, or has previously
been involved in spurious claims he or she can expect to attract tougher
scrutiny” (154).
This does not equate with one of the central standard features of peer review—for
reviews to be ‘blinded’ (the author’s details are concealed from the reviewer).
2. In places, Oreskes and Conway are critical of scientists being used to review
material beyond their speciality but when it suits their case, they use the
opinions of scientists similarly ill suited to offer specialist opinions. A good
example of this is Alvin Weinberg’s views on the political effects of migration
being endorsed over those of a ‘merchant of doubt’, Bill Nierenberg. Weinberg’s
views may well have been much more sensible, but neither he nor Nierenberg is
commenting on the basis of peer-reviewed research that they had published or
an area on which they could claim specialist expertise (181).
3. In relation to a heated dispute surrounding accusations that the IPCC report
of 1996 had not followed appropriate peer review procedures, Oreskes and
Conway defend the IPCC’s processes on the grounds that making changes to


the report “in response to written review comments received … from
governments, individual scientists and non-government organisations” (208)
was a legitimate form of peer review. Such a form of review may well have
been legitimate (I would like to note that I’m not condoning the critiques of the
IPCC’s processes), but Oreskes and Conway’s defence of this more ‘extended’
form of peer review is not consistent with the harder, more exclusive line of peer
review they promote in other parts of their work, i.e., being strictly limited to
appropriately qualified experts (98, 154 and 269).
Oreskes and Conway’s rhetorical emphasis on the centrality of formal peer
review notwithstanding, their most convincing arguments in defence of the
soundness of the areas of science under siege from the ‘merchants of doubt’ are, I
suggest, constructed in a piecemeal and pragmatic way. They emphasise things like:
weight of evidence, theoretical plausibility, testing, absence of conflicts of interest,
consensus emerging over time, and a variety of different models of peer review:
informal, formal, workshop review, review by committee, extended peer review and
publication review. Given that Oreskes and Conway defend these scientific practices
on such broad epistemic and normative grounds, the question remains why they
seem so wedded to repeated references to the centrality of peer review. This may
reflect a challenge thrown up by their need to regularly remind the reader that
unrealistic images of scientific certainty are a problem while simultaneously
documenting the misuse of images of scientific uncertainty to thwart regulation. In
keeping with Oreskes and Conway's distaste for equivocation and uncertainty
('doubt-busting'), the scientific value of peer review seems to take on a false degree
of certainty to fill this analytical gap. Another option would have been to highlight
the diversity of methods, norms and practices that make up science in regulatory
contexts and the need to develop better models of science sensitive to this, including
dispensing with, or significantly modifying, traditional images of peer review (cf.
Wynne 2010).
In all, despite Merchants of Doubt being an impressive and well-written piece of
research, its emphasis on individuals and its tendency to leave some of its
theoretical explanations undeveloped mean that it ‘works’ much better as a piece of
investigative journalism and consciousness raising (the clear message that there is
an urgent need to improve the relationship between scientists, regulators, media and
the public) than as a theoretically nuanced account of how these problems have
arisen or how they should be addressed. While Merchants of Doubt provides a rather
sketchy map of the roads that need to be followed for better policy making in the
future, its analysis provides a significant contribution to public debate by providing
a convincing case that the ‘tobacco road’ is a dangerous route that can no longer be
taken.

Andy Pitman

Imagine someone builds a ‘transporter’—the method of moving people between star


ships and planets used in Star Trek. It is not perfectly safe (the odd crewman has

123
Metascience (2012) 21:531–559 541

been lost) but it is fast, portable and cheap. Could it catch on? Probably not. The
campaigns against it would be huge, orchestrated around its safety, fears that it
would destroy the airline/rail/car industry and that scientists could not prove with
100% certainty that it does not induce a higher risk of cancer following prolonged
use. Countries would hold enquiries where ‘experts’ would highlight the lack of
certainty and emphasise the inability of countries to protect their borders from
terrorists who could use the technology to slip into and out of a country. The doubts,
fears and threats would be enough to kill the innovation.
One might assume that those arguing against the ‘transporter’ had genuine
concerns. I think there is a general assumption that the majority of people are
honest. Indeed, if it is later shown that they were wrong, then their defence that
information was incomplete, that they were being cautionary, etc., would be thought
legitimate. However, what happens if it turns out that those presenting arguments
to policy makers, to judges, to presidents and prime ministers were simply lying
to protect their personal, political or employers' interests? That is, they simply
fabricated evidence. Cynics might suggest that this is common in politics, in
business, etc., but certain areas have been rather slow to identify this strategy and
science is clearly one. Only a very small number of scientists lie, misrepresent
findings, hide conflicting evidence, create evidence in support of their ideas, etc.,
and it would be naïve to think otherwise.
There is, however, a system to manage this called peer review. Peer review is not
perfect; it is rather like democracy that Sir Winston Churchill famously noted ‘is the
worst form of government except all the others that have been tried’. Peer review
has two components. First, a scientific finding submitted for publication is assessed
by other experts, usually anonymously with the intent of finding errors, poor
assumptions, unsubstantiated conclusions, etc., in the way the finding is reported.
Peer review acts as a filter, catching first-order errors and mistakes and preventing
the publication of some erroneous material. However, there is a second process that
takes longer. When someone publishes a scientific finding that is new, confronting
or unexpected, scientists circle like vultures over the paper and pick the finding apart
as best they can. They look for errors, whether the result is reproducible and whether
different assumptions change the conclusions. If they can demonstrate errors or
unreliable conclusions, they publish this result and the original finding is shown to
be wrong. Thus, a scientist's reputation is built in two ways. First, by publishing in the
peer-reviewed literature. Second, by publishing results that are exciting enough to
attract the vultures but robust enough to withstand attack. The more a scientific
finding is attacked without success, the more the responsible scientist or scientific
team establishes an elite reputation and the more likely it is that the finding
transforms our scientific understanding. In the case of global warming, no hint that
the basic science is wrong has ever been published and withstood subsequent
scrutiny.
So, what happens if someone does not like the new research finding but they
cannot find errors in the research? As Oreskes and Conway note, good scientists
accept the new results and move on. Sometimes, individuals resist the
transformation of the science for their entire careers (as happened with evolution,
quantum theory and plate tectonics, for example). The rigour of the peer review
system makes it extraordinarily difficult to fabricate evidence to a degree that
undermines the new science for any length of time, and 'truth will out' wins in the
end even if it sometimes takes decades (cf. Talent 1989). But this is within the
scientific community. The scientific community is fundamentally insular, playing by
its own rules. The so-called scientific method has worked and no better method has
been proposed.
However, while scientific method works in science, it is poorly aligned with the
world of decision makers and the public. Thus, if someone happens to not like a new
research finding and they cannot pick holes in this research and they have chosen
not to play by the rules of the scientific community, they can orchestrate a situation
that actively undermines the science. This undermining does not occur in the eyes of
the science community, who will recognise the errors. Indeed, that is not the
strategy; rather, the strategy is to undermine the science in the eyes of the public and
decision makers who tend to be poorly trained in science. Oreskes and Conway
provide a litany of examples of this behaviour starting with tobacco, through Star
Wars, the ozone hole, acid rain and global warming. It is a remarkable account of
rigorous scholarship. It is, in parts, utterly unbelievable that decision makers could
have been so naïve, that scientists could have been so naïve, and that a handful of
individuals could have been so motivated as to have undertaken such a skilled
deception.
I am a climate scientist and a lead author on the Intergovernmental Panel on
Climate Change and therefore, in the eyes of climate deniers, I’m utterly
compromised. I have been accused of being both a Marxist and a Fascist, of trying
to destroy the Australian economy, of trying to provide an avenue for mass migration,
of arguing for genocide (on the grounds that I have apparently argued for enforced
population control—which I have not), and of supporting a single (presumably
communist) world government; a range of other increasingly bizarre accusations
have also been made. But I have got off lightly compared with some of the better-known scientists
mentioned by Oreskes and Conway. The late Steven Schneider, Michael Mann,
James Hansen and Ben Santer are among a group of outstanding scientists who have
been attacked for merely doing what society should be desperate for people to do—
find out the truth about global warming. Climate scientists are now commonly
labelled ‘alarmist’—almost akin to being called ‘alarmist’ for telling someone who
is about to jump off a cliff that he or she may be killed. Climate scientists commonly
use the term ‘denialist’ to label those who mislead the public on global warming.
This is preferable to ‘sceptic’ in my view because any scientist who is not, at their
core, sceptical should be sacked!
So, what do we know about global warming for certain? Almost nothing,
because, as Oreskes and Conway point out, almost nothing in science (any science)
is known for certain. By ‘certain’ I do not mean ‘within reasonable doubt’ or ‘well
enough to precipitate action’. Rather, I mean 100% certain in the sense we know
that 2 ? 2 = 4. If you catch an aeroplane tomorrow, you do not know for certain
that you will survive. I do not know, as I write this, whether it will ever be read
because there is a probability that a meteorite will wipe out all life on earth before I
reach the end of this sentence (big sigh of relief there). We do take aeroplanes and
we do bother to write sentences because we weigh the evidence and reach sensible
conclusions. An awful lot of this is experience based. We know people who have
safely flown on aeroplanes and we have never seen Earth destroyed and our brains
process this information sensibly.
When it comes to global warming, we know that if we continue to emit 8.4
billion tonnes of carbon (which was what humans released in 2006; Canadell et al.
2007) into the atmosphere each year through burning fossil fuels and deforestation,
the Earth will warm. This is indisputable, based on peer-reviewed science. It is, to all
intents and purposes, known for certain (though not with the logical certainty that
2 + 2 = 4). Of course, I have not quantified 'warm', but the best estimates we have
are that the Earth will warm by between 1.7 and 6.4°C, comparing 2090–2099
with 1980–1999 (cf. IPCC 2007). This range, this uncertainty, is due in part to
imperfect understanding of the rates of change and some key feedback processes
within the climate system but also to uncertainties in the future emissions of
greenhouse gases. This estimated range excludes very low emission futures and also
the results of rapid destabilisation of methane and carbon stores due to warming.
The range stated above is ‘likely’, which means it has a probability greater than 66%
and that means there is a 33% chance of the warming being below or above this
range. Obviously, if we cut emissions from 8.4 billion tonnes per year to 1.0 billion
tonnes, we will see less warming than if we continue to increase the rates of
emissions towards 15 or 20 billion tonnes. Many people seem challenged by this
uncertainty and argue that we should wait until the science is more certain. Others
note that we are highly dependent on our climate and even the lower end of this
global warming commits some regions to very large amounts of warming and the
upper range of projected temperature changes would be untenable for the
continuation of life as we know it.
So why is there all the complexity in the arguments around global warming? I
mentioned earlier that a lot of our personal decision-making is experience based.
But experience-based decision-making is unfortunately not going to work for us in
the context of global warming. Let’s break the problem apart. The cause of global
warming is greenhouse gases in the atmosphere (mainly but not only CO2). CO2 is a
'trace' (rare) natural gas that you cannot see, taste or feel, but if you elevate
the concentration of CO2 in a glasshouse, the plants grow better. Some climate
denialists have used this fact to suggest that more CO2 must be a good thing because
it is natural. It cannot be a bad thing because you cannot see it and in any case, it is
so rare it cannot matter. This seems to make sense—indeed one might argue it
makes common sense. Of course, radiation from a nuclear reactor is colourless,
odourless and tasteless but it is not safe and intuitively, we know this to the extent
that hospitals rename their nuclear medicine departments to avoid frightening
patients. Cadmium and mercury are both natural substances but you would not want
too much of either and we endeavour to restrict their levels in our environment.
Hydrogen cyanide, at a concentration 100 times lower than that of CO2 in the
atmosphere, kills you—so it is not the amount of something that matters. Rather, it is
the impact of something at a given concentration that matters. Finally, plants only
grow better under higher CO2 in the presence of water and crucially nutrients that
are typically limited in the natural environment. However, the denialists gain
traction from these arguments because they seem to make sense, and policy makers,
struggling to find a strategy for solving global warming, sometimes welcome easy
solutions that make the problem apparently go away.
Humans respond to their own experiences and to threats that are perceived as real
and immediate. Oreskes and Conway remind us of the politician who, when told
global warming would be a threat in 50 years' time, suggested the scientists come
back in 49 years. Global warming has, with a high degree of probability, already
been clearly observed in the temperature record, in atmospheric pressure
changes (Gillett and Stott 2009), in atmospheric humidity (Willett et al. 2007;
Dessler et al. 2008), in rainfall intensity (CCSP 2008), heat waves (Alexander and
Arblaster 2009), drought (Zhang et al. 2007), sea level rise (Church and White
2006), in biodiversity changes, in the reduction of the area of Arctic sea-ice, in
losses of glaciers (Pritchard and Vaughan 2007) and even in the genetics of some
insects (Balanyá et al. 2006). Global warming is not just about the future, then: it is
about the present. But the present is defined by weather and weather is highly
variable. Conflating weather with climate provides rich material for the denialists to
mislead the community.
Misleading people on global warming is all too easy. A day, week or year of cold
can be used, supported by photographs of snow-clad landscapes, to 'prove'
global warming false. Scientists counter by showing that the occurrence of
conditions of extreme cold is perfectly consistent with global warming (because a
more energetic climate increases the probability of cold air outbreaks). Climate
denialists point to an apparent stabilisation of the global mean temperature as
‘proof’ that the warming has stopped, using simple and elegant pictures by carefully
choosing their start and end points on their graphs. Climate scientists respond in the
scientific literature, demonstrating that the warming trend continues as expected and
that through the whole of the twenty-first century we can expect decades that are
cool relative to the previous decade (Easterling and Wehner 2009). But the climate
scientists lose such arguments. If the 'debate' takes place in the media, it hints at
uncertainty in the science, whereas in fact none exists. More commonly, the
denialists use the media and the climate scientists use the scientific literature, which
leads to the perception that the denialists have a legitimate argument. Of course, in
each case, climate scientists win the arguments, each time by a knockout. But we
only win them by the rules of science and lose them in the court of public or
political opinion—which brings us to another reason why global warming is a
‘diabolical problem’. It is global and very few humans make decisions that benefit
the globe at the expense of themselves, their own country, or their own community.
Go on: name one.
So, let us try to design the ultimate problem. We need a problem dealing with
things that you cannot see, touch or taste. We need the problem to take decades to be
observable and even then the ‘signal’ needs to be almost lost in the noise. Let’s
make it really complex—not like malaria where there is a clear cause. Make it
linked with gases interacting with infrared radiation triggering changes in energy
and water balances on timescales of decades. Let’s scaffold our ultimate problem on
something utterly entrenched, like energy use obtained by the burning of fossil
fuels. Let’s make sure that there appear to be winners from global warming, but
make the losers the poorer communities in developing countries. Let's make it take
decades between resolving the problem and seeing clear evidence that the problem
is resolved. Let’s make sure the species capable of solving the problem is
psychologically tuned to discount problems of this type and has evolved to discount
threats that lie in the distant future.
This is the challenge of global warming. In this sense one might conclude we are
basically in a hopeless position—and I personally think we are if the upper values of
projected warming are right. Put simply, climate scientists are not sure of the actual
value of ‘climate sensitivity’ or the amount the Earth will warm for an effective
doubling of CO2 and other gases. It is probably about 3°C. It might be as low as
1.5°C, but might be as high as 6°C or even 8°C. Do you feel lucky? What we are
currently doing by global warming is gambling that climate sensitivity is at the low
end of a range. But if climate sensitivity is towards the middle or upper end of the
range, we need to drastically cut emissions within a decade. I personally doubt we
can do that and Oreskes and Conway provide evidence that supports my view in
their documentation of the tobacco smoke problem. In that case, major vested
interests protected the tobacco industry as, initially, direct health impacts and then
passive smoking health impacts became apparent. Evidence for a direct health
impact was known 60 years ago. But according to Oreskes and Conway, some 25%
of Americans still doubt this, despite the evidence, the warnings, and the health
campaigns. And over 50% of men still smoke in some countries. While the
campaign against smoking in some countries has been successful, as a global
industry that science says kills people, it has weathered the storm well.
It is arguable exactly when science was clear on the threat of global warming.
There was evidence in the 1990 Intergovernmental Panel on Climate Change
report—perhaps evidence that history may judge to have been sufficient. The 2001
Intergovernmental Panel on Climate Change report was certainly clear. The history
of smoking suggests, then, that we may begin to significantly resolve the causes of
global warming by perhaps 2050 or 2060. But given the inertia in the climate
system, this means we are basically committed to the climate we are presently
tracking to 2080. This is untenable since the climate we are tracking towards for
2080 risks triggering so many thresholds or ‘tipping points’ that humans surely
cannot be that stupid. Or can they?
In a ground-breaking paper, Kriegler et al. (2009) assessed the risks, via expert
judgment, of a series of tipping points. ‘Tipping points’ are analogous to a person
walking along a headland. All seems well until they step over the cliff, and once
they do, the consequences are inevitable, rapid and irreversible. Kriegler et al.
(2009) have explored a series of scenarios but I want to focus on just one—the risks
of tipping points being triggered within the life expectancy of a newborn baby, born
in Australia in 2011, if the Earth warms by 3°C (approximately the best estimate).
The risk of a reorganisation of the Atlantic Meridional Overturning Circulation (the
Gulf Stream) is estimated (by Kriegler et al.) to be between 0 and 50%. The risk of
the Greenland ice-sheet melting is estimated to be between 15 and 90%. The risk of
a disintegration of the West Antarctic ice sheet is estimated to be between 0 and
about 85%. The risk of the Amazon rainforest undergoing dieback is estimated to be
between about 5% and 95%. Finally, the risk of a shift to a more persistent El Niño
regime is estimated to be between about 0 and 45%. At 4°C of warming, almost all
those interviewed placed the risk of the disintegration of the West Antarctic ice
sheet by 2100 at above 20% and the risk of the melting of Greenland’s ice at above
55%.
I urge the reader to stop and consider these numbers. If the most vulnerable parts
of the West Antarctic ice sheet disintegrate, sea levels would rise rapidly by many
metres (about 3.3 m: Bamber et al. 2009). It is very unlikely that it will disintegrate
altogether but ‘very unlikely’ is not the same as ‘will not’. A 20% risk of this, by
2100 at 48C of global warming is an untenable risk, given the consequences. We
buy insurance to offset the negligible risk of our houses burning down. We screen
passengers boarding an airplane to reduce the risk of hijacking despite the risk of
hijacking being negligible. We work hard to reduce risk via workplace safety,
education about weight loss and a good diet, exercise, etc. Yet we appear unable
to understand or appreciate the urgency that a 20% risk of something like the
destabilisation of the West Antarctic ice sheet brings. It is ‘alarmist’ to say this will
happen. It is legitimate and well-established science to say this could happen with a
probability exceeding 20%.
Ultimately, science has to break through and confront policy makers with the need
to resolve the global warming problem before most policy makers or the electorate
themselves experience the problem. The evidence that this is possible is not
convincing. With the ozone hole, it was really only when this began to appear over
the Arctic that the US took it seriously. Proving that global warming is a direct
threat to (say) the US, China or Russia at a scale that directly causes simultaneous
global action is problematic as humans also discount disasters. That is, those
wishing for a few massive climate catastrophes to trigger action on climate change
are very unlikely to be satisfied, for even a few weeks after a disaster, people not
directly affected begin to forget.
So where does Oreskes and Conway’s fabulous book leave us? It leaves me a
little more depressed but also more motivated. Reading this book should make you
angry. As a scientist, it makes me angry that a handful of individuals could
bastardise the science so effectively. Perhaps more fundamentally, I now see a
strategy. I never could understand why the climate denial movement existed. There
is too much skill in some of their web sites, opinion pieces, and reports to just say
they do not know what they are talking about. Many do understand the science.
They must do, to be able to craft such cunningly targeted prose that aligns with the
messages that policy makers, and indeed the public at large, want to hear. It's just
that they use this understanding to produce misleading statements rather than being
part of crafting a solution.
In the denial of scientific evidence, at least once the evidence becomes
overwhelming, Oreskes and Conway note three major strategies used by the
denialists. First, they argue that the science is uncertain and incomplete. Second,
they argue that solving the problem will be difficult, dangerous and expensive.
Finally, they argue that the scientists who have built the case that requires a solution
are corrupt, or motivated by self-interest and/or political ideology.
In the case of the first argument, climate scientists can confront this claim via clear
web sites, media briefings and public talks, using simple language and examples.
Close discussion with communications experts and, in my view, psychologists who
understand how people assimilate information (cf. Newell and Pitman 2010) can
provide strategies that climate scientists may not otherwise have recognised. The
attack on science, whether medical sciences relating to smoking or physics in the
case of climate science, is a danger to society as a whole. It undermines people’s
perception of the reliability of science-informed decision-making.
The second argument that solving the problem will be difficult, dangerous and
expensive is partially true. It will be difficult and it will be expensive but it is less
difficult the earlier we start and it is less expensive than experiencing the
consequences. In any case, those economies that are not dependent on fossil fuels
and which develop carbon-efficient alternatives are likely to do rather better than
those that blithely assume oil is an infinite resource.
The third argument that the scientists who have built the case that requires a
solution are corrupt, motivated by self-interest and/or political ideology is a
problem. The denialists are these things—and an excellent way to hide one’s own
weakness is to attack others on this specific theme. Then, if the climate scientists
accuse the denialists in turn, it can become like an argument in a playground and the public's
confidence declines. It seems to me that this third argument has been the hardest to
resolve despite the fundamental lie at its core. But Oreskes and Conway provide the
ammunition to enable policy makers to make their own judgment on the reliability
of many who deny global warming. The climate scientists no longer have to defend
their personal reputation and attack the denialists. We can simply say: ‘Don’t take
my word for it. Read Oreskes and Conway; and here’s a copy’. Their book provides
us with a weapon to fight those who misrepresent, lie, fabricate arguments and
attack the likes of Michael Mann, James Hansen and Ben Santer. The science will
win in the end. The planet will warm if we continue to emit CO2 at current rates.
The real question is whether humans will deal with the challenge to dramatically
reduce emissions so that the Earth does not warm too much. Oreskes and Conway
have, in Merchants of Doubt, provided a key weapon that will give us a better
chance of convincing policy makers to act.

Authors’ response: Naomi Oreskes and Erik M. Conway

‘What a risky business to tell the truth on a factual level without theoretical
and scholarly embroidery’.
Hannah Arendt to Mary McCarthy, 16 September 1963
We are deeply grateful to Steven Yearley, David Mercer and Andy Pitman for their
generous praise and thoughtful engagement with our work. We appreciate especially
Yearley’s neat synopsis of the central impulse of the book—a historical
reconstruction of wilful doubt-mongering by a coterie of men that was tendentious,
purposeful and repetitive; his recognition of what we strived to achieve—to tell a
topical story in a manner that was accessible to a broad public while meeting
academic research standards; and his appreciation of the work it took to manage our
own impatience—not to mention anger, distress and stupefaction. We are glad that
David Mercer found the book hard to put down and that Andy Pitman is distributing
copies down under.
The most substantive issue, raised by both Yearley and Mercer, is the work’s
affinity (or lack thereof) to academic science studies. This is a question with which
we grappled, and continue to grapple. Before turning to that rather large question,
let us first address some specific criticisms raised by Mercer, and to some extent
addressed by Pitman.
Mercer seems to have misread our view of peer review and its role in vetting
scientific claims. First, we should note that we were reluctant to propose remedies
to the distressing activities we described because we recognised how complex
the issue was. There is no easy solution to the challenges raised by the nexus of
personal, political, ideological, institutional and economic forces at play in our
story. It would obviously not do to exhort our readers simply to ‘trust the experts’. If
five decades of science studies research has demonstrated anything, it is that
science, like all human activities, is a social process; and experts are subject to the
same social pressures as the rest of us, and a few additional ones as well. Science
studies has also demonstrated that there is no a-contextual recipe to define what does
or should count as scientific knowledge, no algorithm instructing us how to judge
scientific claims. Earlier attempts to demarcate science from non-science, sense
from nonsense, ‘meaningful’ claims from ‘meaningless’ ones, have ranged from
empirically inadequate to downright silly. So we would be naïve to think that we
could solve that problem, once and for all, and in a 250-page trade book to boot!
And we certainly did not want to be glib.
However, it was also clear that the book would fall flat—and be too depressing—
without some attempt at suggesting the general direction for a way forward. We
turned to peer review not as a panacea—indeed, rather mindful of its limits and
flaws, particularly in the domain of gender equity—but as a means to call attention
to the concept of community norms and the social vetting of scientific claims. Peer
review is the primary mechanism by which the power of collective epistemology
comes into play, as Pitman notes, not only in the formal peer review process that we
emphasised, which permits or rejects claims submitted for publication, but also in
the on-going de facto review process that constitutes continued scientific work.3
Publication in a refereed journal is a minimum threshold to which candidate
scientific claims should be held—particularly by persons outside the scientific
community uncertain how to judge claims and counterclaims—because it is the
standard of the scientific community and because it is the mechanism by which the
force of the community is brought to bear. It is a low bar—many published papers
are later shown to be incomplete or incorrect; biased and ghost-written papers may
well pass through—so the observation that most of the reports, white papers, and
press releases issued by the 'merchants of doubt' and their institutional and
corporate networks were not peer reviewed should have been a red flag.4 Had
journalists applied this standard, many spurious claims would have been subject to
greater scrutiny and perhaps not repeated as often and as damagingly as they were.5

3. Given the role of peer review in warranting scientific knowledge claims, it is surprising there is not
more science studies literature on it. The notable recent exception is Lamont (2010). In recent years,
medical professionals have begun to address peer review reliability, paying substantial attention to the
potentially biasing role of corporate sponsors, financial conflicts of interest and media attention, and to the
problem of ghost-authorship. See, for example, Flanagin et al. (1998). Richard Smith, a former editor of the
British Medical Journal, notes (2006) with irony that the basis of scientific vetting is a system that has not
been well studied scientifically. For an earlier but still pertinent attempt to evaluate peer review
scientifically, see Cole et al. (1981).
Mercer, however, seems to think that we do not understand peer review,
suggesting that our claim that peer review is influenced by a researcher’s track
record is inconsistent with the principle of double-blinded review. In essence, he
alleges that we do not understand the remedy we invoke. This is a strange
suggestion, for it appears that it is Mercer—not us—who is confused about peer
review practices, at least in the physical sciences.
While peer reviewing in the social sciences is often double-blinded—and double-
blind randomised trials are routinely invoked as the gold standard of demonstration
in medicine—this is not the case in the physical sciences.6 For more than a decade,
one of us routinely supplied and received scientific peer reviews and rarely (if ever)
were they double-blinded.7 It was common as a young scientist to receive reviews to
the effect of ‘‘Oreskes is a promising young researcher, whose work should be
encouraged’’. Or, for better or worse, ‘‘Oreskes is a promising young woman
scientist’’. And later, ‘‘Oreskes has done outstanding work to date and is particularly

4. On the problem of claims that do not hold up under further scrutiny, see: Jonah Lehrer, The truth wears
off. The New Yorker, 13 December 2010. On gender bias in peer review, see Wenneras and Wold (1997),
Bornmann et al. (2007) or Abrevaya and Hamermesh (2010). While many studies do show some effect of
gender bias, others suggest a stronger effect caused by prestige bias—grants and papers by authors who
have published extensively before, or are affiliated with prestigious institutions, are more likely to receive
positive reviews. See: Peters and Ceci (1983); Fisher et al. (1994). Other studies, however, showed no
benefit to blinding or unmasking in the peer review process (e.g. Van et al. 1998), perhaps because
blinding is less effective than one would imagine because reviewers are able to identify authors through
their knowledge of the field. On the other hand, the role of bias may itself be exaggerated because there is
bias against ‘no effect’ results. Cole et al. (1981) attribute much of the variability in peer review outcomes
to chance.
5. The evidence that peer review may be deliberately undermined—and not just by Merchants of Doubt
but in diverse ways—raises the question of whether this mechanism has outlived its efficacy. See
Flanagin et al. (1998), Rennie (1986, 1999) and Healy (2000, 2004). Some have suggested the time has
come to replace traditional peer review with open access web-based discussions on the internet; Smith
(2006) argues while this may not be any more reliable it would at least be more thought-provoking. It is
striking that most of the literature on problems in peer review addresses bio-medicine; more work is
needed to know whether the problems addressed by Smith, Healy, Rennie, and others are general to peer
review or specific to the demands, pressures, regulatory framework and financial inducements of bio-
medicine. The issue of ghost-written papers, for example, has not, to my knowledge, come up in academic
geology (although one might imagine a situation in which the CATO Institute would ghost-write an
article on climate change) perhaps because of the absence of a regulatory framework creating large
inducements.
6. For a discussion of whether this should be the case, see Cartwright (2007).
7. A science studies scholar might note a problem in the whole notion of reviewing peer review: if we
don’t know whether or not a scientific claim is correct, how can we judge whether reviewers were correct
in accepting or rejecting it? We can judge fairness—e.g. whether papers submitted by male and female
authors are treated equitably—and consistency—e.g. whether the same paper submitted and re-submitted,
or submitted under a different author and title—receives consistent treatment. Thus most studies that
examine peer review attempt to judge fairness, equity, or consistency, but not the ultimate ‘correctness’ of
the decisions being made.


strong in the areas of …". Guidelines of granting agencies may explicitly ask
reviewers to consider the results of prior research, and in US National Science
Foundation proposals, investigators are required to list past publications and recent
collaborators, both to situate their work within the broader research community and
to exclude as reviewers close colleagues with an immediate conflict of interest. And
for good reason: scientific claims are tied up with the credibility of the claimant.
Track record counts, as Steven Shapin (1995) and others have emphasised, because
scientific consensus is grounded in a hefty dose of expert judgment about expertise.
(As Daston and Galison (2010), Theodore Porter (1996) and others have stressed,
trust in machines and numbers has been seen as a remedy to the difficulties of
trusting people.)8 The key players in our story—Frederick Seitz, S. Fred Singer,
Robert Jastrow and William Nierenberg—drew on their own hefty reputations to
achieve credibility for views that diverged markedly from the prevailing consensus.
The basis of that claim to credibility is an important part of the story.
The question of merited credibility leads us to Mercer’s complaint that we are
inconsistent in criticising Nierenberg for making claims outside his expert realm but
supporting Alvin Weinberg’s criticism of those claims. In suggesting that Weinberg
was no more qualified to review Nierenberg’s report than the latter was to write it,
Mercer misses two points. First, Weinberg had developed the Oak Ridge National
Laboratory programs on carbon cycling and the atmospheric impact of CO2. (It was
under Weinberg’s influence that the Department of Energy took on the sustained
funding of Charles David Keeling’s (1998) measurement of atmospheric CO2 at
Mauna Loa—work first begun during the International Geophysical Year and which
he had been challenged to keep funded.) One might say the same of Nierenberg,
because as Director of the Scripps Institution of Oceanography, he oversaw the
institution’s climate research programs, including Keeling’s. But the issue at stake
in the chapter to which Mercer is referring is not whether Nierenberg was a
legitimate choice to chair the National Academy of Sciences committee at that
historical juncture (before he had helped found the George C. Marshall Institute and
began his late-stage career as a serial contrarian) or whether Weinberg was an
appropriate choice to review the report. The issue is that the peer reviewers chosen
by the US National Academy of Sciences criticised the report for violating the
scientific standards of the day: the conclusions were not supported by the evidence
provided and the Executive Summary was not consistent with the body of the report.
The point of our discussion was to show that Weinberg was invoking community
standards and claiming that Nierenberg had violated them. That was a claim that any
active scientist—physicist, chemist, biologist, or geologist—might make. Weinberg
was attempting to enforce scientific norms, and we found it striking both that he did
so and that the National Academy did not pursue his complaint. The story illustrates
a failure of peer review—a failure of the scientific community to enforce its own
standards—a point to which we return in the conclusion.
Mercer also claims we are inconsistent in invoking the extended peer review
process used by the Intergovernmental Panel on Climate Change (IPCC) but not

8. For recent discussions of how trust in scientific experts is undermined by commercial interests, see also
Michaels (2008) and Rampton and Stauber (2002).


defending it in our conclusion. Again he has missed a distinction between
contemporary actors' categories and post hoc historical analysis. We do not defend
the IPCC’s extended peer review process; we simply describe it as the process that
the scientist involved—in this case, Benjamin Santer—was expected to follow and
did.9 Santer met the standards of the expert community of which he was a part, and
his colleagues who had participated in the process affirmed this. His opponents—
who had not participated in the process—accused him of scientific misconduct, yet
provided no evidence to support their accusation. Thus, we show for a second time
how Nierenberg and his colleagues stepped outside community norms, and we
suggest that such violations could have been a red flag to outside observers that
something other than normal science was taking place. We are not saying that these
norms are unproblematic or that an individual is never right to breach them.
Different problems may require different practices, and norms would never evolve
were their boundaries never stretched. But when norms are breached, there must be
a reason for it and that invites scrutiny.
Mercer ends on a note with which we fully concur. Our view of science is
pragmatic, and we believe that the strength of science emerges from a complex
nexus of methods and practices—a point one of us (Oreskes 1999) has argued at
length elsewhere. We perhaps should have made this point more strongly in the
conclusion, although we believe that we were clear on the need to accept that
scientific knowledge is never certain, always provisional. We ended the book on
precisely that point, but also arguing that ‘provisionality’ is not grounds for inaction,
for, as others have argued more eloquently than we, inaction is action in defense of
the status quo (cf. Wallerstein 2000).
Mercer—and Yearley and Pitman—are correct in noting that we do not provide a
satisfactory picture of how precisely science operates. Merton famously called
science ‘certified knowledge’, but what exactly is the process that certifies it? Is that
process adequate for contemporary needs? Is there robust reason to think that claims
certified by an expert professional scientific community are more likely to be correct
than the transparently self-serving doubt-mongering of the tobacco industry, or the
hyperventilating promotional claims of Big Pharma? Has the process that worked in
the past become inadequate to address contemporary challenges? And is there not
some way to respect authentic expertise without succumbing to technocratic and
anti-democratic impulses?10

9. It seems to us an open question whether—or in what circumstances—extended peer review produces an
epistemically (as opposed to politically) superior outcome; we see this as a fertile area for future research.
On extended peer review, see Hisschemöller et al. (2001), Funtowicz and Ravetz (1993) and Pereira and
Funtowicz (2005).
10. The potentially anti-democratic aspects of expertise have been raised especially by Jasanoff (1990, 2005,
and 2010); cf. also Novotny et al. (2001) and Lentsch and Weingart (2011). While we recognise the
potential for scientific expertise to operate in undemocratic ways, our work clearly outlines an equally, if
not more distressing, pattern in which attacks on scientific expertise can undermine democracy by
undermining the basis for informed decision-making. The question is not, we believe, whether scientific
expertise is intrinsically democratic or anti-democratic, but under what conditions scientific expertise best
serves democratic governance. Our views thus overlap in some but not all respects with Collins and
Evans (2002), who called for a new approach in science studies more directly engaged in evaluating
expertise. ‘‘Wave Three’’ of science studies, they suggested, should re-establish the distinction between


Vexingly, two decades as scholars did not enable us to provide satisfying answers
to these questions and therefore an entirely satisfactory conclusion to our book. We
did not have a good answer as to why ordinary citizens should accept that
anthropogenic climate change is underway other than to say that this is what the
diverse men and women across the globe who have dedicated their lives to
understanding the Earth’s climate system have concluded; that their earlier
theoretical predictions have largely come true; that our own semi-professional
reading of the evidence supports their conclusions; and that the individuals and
institutions challenging them have vested ideological or economic interests and a
history of challenging any science that threatens those interests. Climate science is
also a rather mature science: its basic claims have stabilised—they are not changing
very much in the face of additional evidence—suggesting that, although future
changes are always possible, additional evidence in the next few years is unlikely to
shift the epistemic landscape much. Thus, to the extent that science could ever
provide a stable basis for public policy, climate science has done so. Our inability to
say something more compelling is our failing, but it is also the failing of our field.
So let us return to the question of how Merchants of Doubt engages—or does not
engage—theoretical issues posed in academic science studies literature, and the
reasons why our explicit citations of academic work are scant. There are two, one
pragmatic, the other principled. Let us consider the former first.
The science-shelves of our local book stores (if we still have local book stores)
are populated with works by journalists and science writers, many of which
perpetuate the heroic and individualistic visions of science that our field had thought
itself to have dispensed with. The reason is well known: academic discourse—with its
preoccupations, its vocabulary, and especially its jargon—is the kiss of death for
any work that hopes to engage a broad audience. What we call theoretical nuance,
editors call academic esoterica, even pedantry. It drives them away faster than
children told to clean their rooms suddenly discover that they have urgent
homework.
This is not the place to explore the question of why the way we write and speak is
off-putting to people outside academic walls. After all, it is not just a problem for
STS; academic historians have yielded a huge territory of popular interest—even a
television station—to people we consider amateurs. If we believe that what we do
matters to the world, then why do we talk about it in a manner that the rest of the
world finds so off-putting? Why do we raise real-life problems only to flinch when
faced with answering them? This is a question that all academics might do well to
consider. In science studies, we have withdrawn from a broader effort to explain
what science is and how it works, leaving the field open to those continuing to

(Footnote 10 continued)
experts and lay persons, while acknowledging the continuity between the wider scientific community and
the public in all but specialists’ areas. This paper provoked a fairly negative response even from scholars
whose own work might be viewed as engaged with such questions, presumably because distinctions often
imply inequities, and these latter scholars are loath to re-inscribe the superiority of scientific expertise as
a source of cultural authority (cf. Jasanoff 2003; Wynne 2003). The latter position, in our view, makes the
mistake of conflating equality with identity, or rather, difference with inequality (cf. also Epstein 1996
and Zammito 2004).


promote ‘aha!’ moments, lonely ‘lab-rat’ geniuses, and contrarian heroes. If the
public image of science is erroneous, it is partly our fault for leaving the explanatory
territory wide open. Horror vacui, n’est-ce pas?
So let us return to the question we posed at the end of Merchants of Doubt, which
is of broad concern. Should we accept the conclusions of scientific experts, and if so
why? Put another way, under what conditions should we accept expert conclusions?
And how do we know who the experts are? Our story followed a pattern: in each
case we studied—acid rain, ozone depletion, global warming, DDT—scientific
experts debated the issue and achieved consensus on the reality of an environmental
or public health and safety threat, but that consensus was challenged by non-experts
with an ideological, political or economic stake in defending an opposing view—
which in most cases meant defending the status quo.11 Admittedly, it was easy for
us to defend science in this case, because the science fell, as it were, on the side of
‘the people’—in protecting the public and the natural environment.
But there was more to it than our own greater sympathy for cancer victims than
the shareholders of Philip Morris: we could not help but feel that perhaps this
pattern was not just coincidence. Perhaps the moral of the story is that the natural
world does place constraints on human activities. Late twentieth-century scientific
investigations affirmed what many of us intuitively felt to be true: we cannot foul
our nest without consequences. Science also affirmed that the economic problem of
negative externalities is a substantial one: human activities have costs inadequately
reflected by market signals. This conclusion was noxious to neo-liberals of
varying stripes, particularly the free-market fundamentalists, so they resisted the
science that showed it to be so. They downplayed the evidence, they insisted the
problems illuminated were not serious, and, in the worst cases, they altered peer
review reports, misrepresented evidence, and defamed and libelled the scientists
who had produced it. And the money to enable these activities came to a very large
extent from the tobacco and fossil fuel industries, or from think tanks and
foundations supported by those industries.
It was not hard in this context to sympathise with science. If our story was not
black and white, it certainly contrasted off-white with awfully dark grey. Of course,
we were acutely aware that it would not do simply to conclude that we must ‘trust
the experts’. Yet, in some ways, the answer was clear, because it appeared that—as
far as it is possible for us to say—the experts were broadly correct in their
11. The exception to the rule involves nuclear winter. In 1984, when first debated, there was no scientific
consensus on its reality or severity. However, we argue that the scientific process worked as it should
during the next decade: research was done, claims were narrowed, and a consensus emerged that the
problem was real, but probably not as severe as originally suggested (hence, ‘nuclear autumn’). Yet, the
contrarians in our story did not contribute to this scientific process; they tried to undermine it. Moreover,
the story supports a historical point that chronology matters: it is crucial to pursue diverse viewpoints in
the early stages of research and debate, and not to shut down outlier voices prematurely, less potentially
fertile lines of inquiry be missed. See, for example, Solomon (2001). However, we would argue that it
becomes less valuable after decades of research (assuming that the early research was truly open), and at
some point it becomes repetitive, uninformative, and potentially a waste of resources. Once a consensus
has stabilised, new information is still important, of course, but re-hashing of old debates is rarely
productive and more often a waste of time and printer ink. An important point in the debate over
continental drift was that it was productively re-opened when scientists had new evidence to offer (cf.
Oreskes 1999 and 2002).


judgments, and the doubt-mongers took positions that were incompatible with the
available evidence.12 It is not too hard to see why that might have turned out to be
the case. Given the choice between academic epidemiology and the tobacco
industry, it seems fairly obvious that while both sides have vested interests, one
side’s interests are more likely to fall on the side of truth than the other side’s.
And herein lies the rub. We just used a word that science studies scholars are
rarely willing to use at all, much less without scare quotes. (You know what word
we’re talking about.) An earlier generation—no, nearly all earlier generations—of
historians and philosophers of science, not to mention scientists, believed their
object of study was truth: what it is, how we get it, how we know it when we see it.
Scientists—whose central activity we claim to study—have no problem saying that
truth is their quarry; the object of their endeavour is to uncover truths about the
natural world. Truth does not make scientists uncomfortable. Yet it makes us
acutely so.
So we operate in a parallel universe, implicitly (or sometimes explicitly)
behaving as if the people we study operate under a collective delusion that the world
exists independently of us and that it could ever be possible for fallible humans to
grasp a substantial hold on even a bit of it. We treat the scientific community as if it
lives in a state of false consciousness. No wonder so many scientists have rejected
our field as irrelevant, seeing us as the blind men touching the tail and ears and trunk
of an elephant but entirely missing the animal.
Over the past several decades, the field of science and technology studies has
successfully and appropriately dislodged earlier positivist images of scientific
investigation, enshrined since the days of Auguste Comte. Ever since Thomas
Kuhn—ironically in a series dedicated to the positivist goal of the unity of
science—we have understood that the positivist vision fails to live up to its own
reliability criterion of consistency with observation. Observed science bears little or
no resemblance to the models of it constructed in the early-to-mid twentieth century.
Observed science is messy, diverse, personal, social, subjective, complicated and
above all—changeable. Kuhn’s observation that revolutions lead to paradigm shifts
producing new weltanschauung incommensurable with what they replaced was
itself a new weltanschauung. While Kuhn’s world view had its limits—and myriad
scholars have pointed this out—it was clearly more adequate empirically than the
dominant picture that it replaced.
In the half-century since Kuhn, science studies scholars have articulated his
fundamental insight in myriad ways, provided a detailed picture of the complexities

12. Some readers will object that there are many cases where expert opinion was wrong. We would
suggest that in many, perhaps most, of those examples, the scientific community was divided; there was
not an expert consensus. Consider two cases we have studied carefully. In the 1920s, American geologists
had a consensus that continental drift was not supported, but Europeans were not so sure (cf. Oreskes
1999). In the early 1990s, physical oceanographers insisted on the safety of long-range low frequency
acoustic transmissions to detect climate change, but cetacean biologists disagreed (cf. Oreskes 2004).
In both cases there were significant divides within the scientific community, or between different
communities with relevant expertise. And in both cases debate (rightly) continued. Hence a key
component of doubt-mongering campaigns is to deny consensus and insist that the debate continues. For
if that were true, then it would be appropriate to defer judgment and resist premature closure. Their
argument was not illogical; it was unfounded.


of observational work, the diverse relationships between theory, observation,
experiment and model, and the complex social relationships that mediate scientific
activity and judgment. We showed that science is not the product of lonely genius,
but of the collective activities of communities who produce and vet knowledge
claims. Scientific insight may occasionally be the product of lonely genius,
but scientific knowledge never is, for ‘knowledge’—what counts as knowledge,
what is accepted as knowledge, what serves as knowledge, and therefore what is
knowledge—is the end product of social dynamics in, as Latour and Woolgar (1986)
famously put it, agonistic fields and, as Steven Shapin equally famously showed,
ultimately a product of trust (cf. also Rudwick 1988). What our predecessors called
proof—and thought they could analyse logically—we now realise is persuasion,
which we recognise must be analysed socially.
In short, we have shifted the focus of attention from the individual to the group,
and demonstrated the ways and means in which scientific knowledge is a social
accomplishment. We have shifted focus from the logic of methods to the processes
of community. Many (perhaps most) of our scientific colleagues interpreted this as a
debunking exercise—as if the implications of the social turn were to strip science of
social and cultural authority. No doubt some earlier practitioners of science studies
intended it that way. But consider this: would we not now argue that the very
strength of science lies in its community structure? That the give and take of
argumentation and critique—of Latour’s ‘agonistic field’—is the source of science’s
strength, not its weakness?13 If this is so—and we believe that it is—then it is in this
light that peer review emerges as a central concern—one which, we would argue, is
under-theorised. For it is in the peer review process—both sensu stricto and sensu
lato—where scientific claims are vetted.
Our understanding of what warrants collective approval is thus grounds on which
science studies might now consider advancing. We have framed our studies—
implicitly if not explicitly—around what makes science weak. We did so for reasons
that can be readily understood historically. But times have changed. We live in a
world where the cultural status of science is rather different from what it was when
our field was founded. We live in a world that needs us to explain not only what
makes science fallible, but also what makes it robust. We need to understand not
only what makes science weak—and less than what society once imagined it to be—
but also what makes it strong—and therefore under what circumstances we might do
well to act upon its claims. In short, we need a stronger theoretical framework for
understanding the social conditions under which scientific claims are likely to be
dependable.
Even our scientific colleagues now accept that science is historical and that both
its contents and methods have changed over time. We have amply demonstrated
that ‘warranted beliefs’ can be overturned. These are important insights, not to be
dismissed. They are insights to which our own earlier work contributes. Indeed,
some time ago, one of us wrote: "history is littered with the discarded belief of
13. For explicit versions of this argument, particularly in defence of feminist epistemologies, see Longino
(1990) and Solomon (2001). These feminist critiques raise the important questions of the limits of social
empiricism—is science strengthened by letting 1000 flowers bloom, or only ones cultivated in certain
ways? For a critique of Solomon on this question, see Oreskes (2008).


yesterday, and the present is populated by epistemic resurrections". At that time, we
posed the obvious follow-up: "How are we to evaluate contemporary science's
claims to truth, given the perishability of past scientific knowledge? [I]f our
knowledge is perishable and incomplete, how can we warrant its use in sensitive
social and political decision-making?" These questions remain as pertinent today as
they were then. Indeed, perhaps even more so.
The central accomplishment of our field has been essentially a negative one. We
have shown what science is not. We have done less well explaining what it is. We
have supplied many reasons why we might be sceptical of scientific claims, but few
reasons why we might be justified in accepting them.14 Worse, we have contributed
to the problem of ‘truthiness’—to a state of affairs wherein many people consider
scientists to be no more disinterested than the CEO of Exxon–Mobil, and for whom
the conclusions of epidemiologists cannot be differentiated from the claims of the
tobacco industry or their next-door neighbour.
Do not mistake our argument: We are not calling for the re-introduction of ideas
that have been shown to be untenable. We are pointing out a lacuna in our collective
work—a lacuna that helps to explain the gap in our own specific work, which Yearley
and Mercer correctly perceived. Our point is that STS has been more effective in
showing the problems with existing theories of science than in providing adequate
successor theories and suggestions for improved practices. In short, we have not
found an adequate positive model to replace the positivist one that the field of
science studies has discarded.
Perhaps the time has come to do so. We consider it to be a major challenge facing the
STS community to develop a more realistic model of science that embraces diversity
and disunity—that acknowledges uncertainty and equivocation—without reducing
science to mere sound and fury, however quietly and impassively expressed. We hope
that our colleagues and our students will take up that challenge. If Merchants of Doubt
helps inspire such work in our field, then we shall be very happy. And if it has the
impact on public policy that Mercer predicts, then perhaps that will be the strongest
argument we can offer for the value of the approach that produced it.

References

Abrevaya, J., and D.S. Hamermesh. 2010. Charity and favoritism in the field: are female economists nicer
(to each other)? National Bureau of Economic Research, Working Paper No. 15972.
Alexander, L.V., and J.M. Arblaster. 2009. Assessing trends in observed and modelled climate extremes
over Australia in relation to future projections. International Journal of Climatology 29: 417–435.

14. Some scholars involved in defending climate science have noted this—for example Washington and
Cook (2011). And some scholars critiquing climate science have drawn on science-studies approaches
(e.g. Hulme 2009; and van der Sluijs et al. 2010). Hulme’s claim that climate science has become
‘hegemonic’ seems hard to understand given the failure of climate scientists to effect policies to prevent
"dangerous anthropogenic interference", the specific goal their work laid out in the UNFCCC. Perhaps he
means hegemonic within the expert community—but if so, then that is no more than to say that the reality
of AGW is now the scientific paradigm, akin to plate tectonics or relativity—i.e. a successful scientific
theory.


Ashmore, M. 1996. Ending up on the wrong side: Must the two forms of radicalism always be at war?
Social Studies of Science 26: 305–322.
Balanyá, J., J.M. Oller, R.B. Huey, G.W. Gilchrist, and L. Serra. 2006. Global genetic change tracks
global climate warming in Drosophila subobscura. Science 313: 1773–1775.
Bamber, J.L., R.E.M. Riva, B.L.A. Vermeersen, and A.M. LeBrocq. 2009. Reassessment of the potential
sea-level rise from a collapse of the west Antarctic ice sheet. Science 324: 901–903.
Beck, U. 1992. Risk society: Towards a new modernity. London: Sage.
Bornmann, L., R. Mutz, and H.-D. Daniel. 2007. Gender difference in grant peer review: A meta-analysis.
Journal of Informetrics 1: 226–228.
Canadell, J.G. et al. 2007. Contributions to accelerating atmospheric CO2 growth from economic activity,
carbon intensity, and efficiency of natural sinks. Proceedings of the National Academy of Sciences
104: 18,866–18,870.
Cartwright, N. 2007. Are RCTs the gold standard? Biosocieties 2: 11–20.
CCSP. 2008. Weather and climate extremes in a changing climate. Regions of focus: North America,
Hawaii, Caribbean, and U.S. Pacific Islands. A report by the U. S. climate change science program.
Department of Commerce, NOAA’s National Climatic Data Center, Washington, DC, USA.
Church, J.A., and N.J. White. 2006. A 20th-century acceleration in global sea-level rise. Geophysical
Research Letters 33: L01602.
Cole, S., J.R. Cole, and G.A. Simon. 1981. Chance and consensus in peer review. Science 214: 881–886.
Collins, H.M., and R. Evans. 2002. The third wave of science studies: Studies of expertise and experience.
Social Studies of Science 32: 235–296.
Daston, L., and P. Galison. 2010. Objectivity. New York: Zone Books.
Dessler, A.E., et al. 2008. Water-vapor climate feedback inferred from climate fluctuations, 2003–2008.
Geophysical Research Letters 35: L20704.
Easterling, D.R., and M.F. Wehner. 2009. Is the climate warming or cooling? Geophysical Research
Letters 36: L08706.
Edmond, G., and D. Mercer. 2004. Daubert and the exclusionary ethos: The convergence of corporate and
judicial attitudes towards the admissibility of expert evidence in tort litigation. Law and Policy 26:
231–257.
Epstein, S. 1996. Impure science: AIDS, activism and the politics of knowledge. California: The
University of California Press.
Ezrahi, Y. 2004. Science and political imagination in contemporary democracies. In States of knowledge:
The co-production of science and social order, ed. S. Jasanoff, 254–273. London: Routledge.
Fisher, M., S.B. Friedman, and B. Strauss. 1994. The effects of blinding on acceptance of research papers
by peer review. Journal of the American Medical Association 272: 143–146.
Flanagin, A., et al. 1998. Prevalence of articles with honorary authors and ghost authors in peer-reviewed
medical journals. Journal of the American Medical Association 280: 222–224.
Foster, K.R., and P.W. Huber. 1997. Judging science: Scientific knowledge and the federal courts.
Cambridge: The MIT Press.
Funtowicz, S.O., and J.R. Ravetz. 1993. Science for the post-normal age. Futures 25: 739–755.
Furedi, F. 2009. Energizing the debate about climate change. Spiked, 20 March.
Gillett, N.P., and P.A. Stott. 2009. Attribution of anthropogenic influence on seasonal sea level pressure.
Geophysical Research Letters 36: L23709.
Glassner, B. 1999. The culture of fear: Why Americans fear the wrong things. New York: Basic Books.
Healy, D. 2000. Good science or good business? Hastings Center Report 30: 19–23.
Healy, D. 2004. Let them eat Prozac. New York: New York University Press.
Hisschemöller, M., T. Hoppe, P. Groenewegen, and C.J.H. Midden. 2001. Knowledge use and political
choice in Dutch environmental policy: A problem structuring perspective on real life experiments in
extended peer review. In Knowledge, power, and participation in environmental policy, eds.
Hisschemöller, M., R. Hoppe, W.N. Dunn, and J.R. Ravetz, vol 12, 437–452. Policy Studies
Review Annual.
Hobsbawm, E. 1994. Age of extremes: The short twentieth century 1914–1991. London: Michael
Joseph.
Hulme, M. 2009. Why we disagree about climate change: Understanding controversy, inaction and
opportunity. Cambridge: Cambridge University Press.
IPCC. 2007. Climate change 2007: The physical science basis. Contribution of working group I to the
fourth assessment report of the intergovernmental panel on climate change. eds. Solomon, S., D.
Qin, M. Manning, Z. Chen, M. Marquis, K.B. Avery, M. Tignor, and H.L. Miller. Cambridge:
Cambridge University Press.
Jasanoff, S. 1990. The fifth branch: Science advisers as policymakers. Cambridge: Harvard University
Press.
Jasanoff, S. 2003. Breaking the waves in science studies: Comment on H. M. Collins and Robert Evans,
the third wave of science studies. Social Studies of Science 33: 389–400.
Jasanoff, S. 2005. Designs on nature: Science and democracy in Europe and the United States. Princeton:
Princeton University Press.
Jasanoff, S. 2010. Beyond calculation: A democratic response to risk. In Disaster and the politics of
intervention, ed. A. Lakoff, 14–40. New York: Columbia University Press.
Keeling, C.D. 1998. Rewards and penalties of monitoring the earth. Annual Reviews of Energy and the
Environment 23: 25–82.
Kriegler, E., J. W. Hall, H. Held, R. Dawson, and H.-J. Schellnhuber. 2009. Imprecise probability
assessment of tipping points in the climate system. Proceedings of the National Academy of
Sciences 106: 5,041–5,046.
Krimsky, S. 2003. Science in the public interest. Lantham: Rowman and Littlefield.
Lamont, M. 2010. How professors think: Inside the curious world of academic judgment. Cambridge:
Harvard University Press.
Latour, B., and S. Woolgar. 1986. Laboratory life: The construction of scientific facts. Princeton: Princeton
University Press.
Lentsch, J., and P. Weingart (eds.). 2011. The politics of science advice: Institutional design for quality
assurance. Cambridge: Cambridge University Press.
Longino, H. 1990. Science as social knowledge. Princeton: Princeton University Press.
Mercer, D. 2008. Science legitimacy and folk epistemology in medicine and law: Parallels between legal
reforms to the admissibility of expert evidence and evidence based medicine. Social Epistemology
22: 405–423.
Michaels, D. 2008. Doubt is their product: How industry’s assault on science threatens your health.
New York: Oxford University Press.
Mooney, C. 2005. The republican war on science. New York: Basic Books.
Moore, K. 2008. Disrupting science: Social movements, American scientists and the politics of the
military, 1945–1975. Princeton and Oxford: Princeton University Press.
Newell, B.R., and A.J. Pitman. 2010. The psychology of global warming. Bulletin of the American
Meteorological Society 91: 1003–1014.
Nowotny, H., P. Scott, and M. Gibbons. 2001. Rethinking science: Knowledge and the public in an age of
uncertainty. Cambridge: Polity Press.
Oreskes, N. 1999. The rejection of continental drift: Theory and method in American earth science.
New York: Oxford University Press.
Oreskes, N. 2002. Gravity surveys in the ‘permanent’ ocean basins: An instrumental chink in a theoretical
suit of armor. In Oceanographic history: The Pacific and beyond, ed. K.R. Benson, and P.F.
Rehbock, 502–510. Seattle: University of Washington Press.
Oreskes, N. 2004. Science and public policy: What’s proof got to do with it? Environmental Science and
Policy 7: 369–383.
Oreskes, N. 2008. The devil is in the (historical) details: Continental drift as a case of normatively
appropriate consensus? Perspectives on Science 16: 253–264.
Pereira, Â.G., and S. Funtowicz. 2005. Quality assurance by extended peer review: Tools to inform
debates, dialogues and deliberations. Technikfolgenabschätzung Theorie und Praxis 2: 74–79.
Peters, D., and S. Ceci. 1982. Peer-review practices of psychological journals: The fate of published
articles, submitted again. Behavioral and Brain Sciences 5: 187–255.
Plimer, I. 2009. Heaven + Earth. Global warming: The missing science. Ballan, Victoria: Connor Court
Publishing.
Porter, T. 1996. Trust in numbers: The pursuit of objectivity in science and public life. Princeton:
Princeton University Press.
Pritchard, H.D. and D.G. Vaughan. 2007. Widespread acceleration of tidewater glaciers on the Antarctic
Peninsula. Journal of Geophysical Research 112: F03S29.
Rampton, S., and J. Stauber. 2002. Trust us, we’re experts: How industry manipulates science and
gambles with your future. New York: Penguin.
Ravetz, J.R. 2006. The no-nonsense guide to science. Oxford: New Internationalist Books.
Rennie, D. 1986. Guarding the guardians: A conference on editorial peer review. Journal of the American
Medical Association 256: 2,391–2,392.
Rennie, D. 1999. Editorial peer review: Its development and rationale. In Peer review in health sciences,
ed. F. Godlee, and T. Jefferson. London: BMJ Books.
Roqueplo, P. 1994. Climats sous surveillance: Limites et conditions de l’expertise scientifique. Paris:
Economica.
Rudwick, M.J.S. 1988. The great Devonian controversy: The shaping of scientific knowledge among
gentlemanly specialists. Chicago: The University of Chicago Press.
Shapin, S. 1995. A social history of truth: Civility and science in seventeenth century England. Chicago:
University of Chicago Press.
Smith, R. 2006. Peer review: A flawed process at the heart of science and journals. Journal of the Royal
Society of Medicine 99: 178–182.
Solomon, M. 2001. Social empiricism. Cambridge: MIT Press.
Talent, J. 1989. The case of peripatetic fossils. Nature 338: 613–615.
van der Sluijs, J.P., R. van Est, and M. Riphagen. 2010. Beyond consensus: Reflections from a democratic
perspective on the interaction between climate politics and science. Current Opinion in
Environmental Sustainability 2: 409–415.
van Rooyen, S., F. Godlee, S. Evans, R. Smith, and N. Black. 1998. Effect of blinding and unmasking on
the quality of peer review: A randomized trial. Journal of the American Medical Association 280: 234–237.
Wallerstein, I. 2000. The essential Wallerstein. New York: New Press.
Washington, H., and J. Cook. 2011. Climate change denial: Heads in the sand. London: Earthscan.
Wennerås, C., and A. Wold. 1997. Nepotism and sexism in peer-review. Nature 387: 341–343.
Willett, K.M., N.P. Gillett, P.D. Jones, and P.W. Thorne. 2007. Attribution of observed surface humidity
changes to human influence. Nature 449: 710–712.
Wynne, B. 2003. Seasick on the third wave? Subverting the hegemony of propositionalism: Response to
Collins and Evans. Social Studies of Science 33: 401–417.
Wynne, B. 2010. When doubt becomes a weapon. Nature 466: 441–442.
Yearley, S. 1997. The changing social authority of science. Science Studies 10: 65–75.
Zammito, J.H. 2004. A nice derangement of epistemes: Post-positivism in the study of science from Quine
to Latour. Chicago: University of Chicago Press.
Zhang, X., F.W. Zwiers, G.C. Hegerl, F.H. Lambert, N.P. Gillett, S. Solomon, P.A. Stott, and T. Nozawa.
2007. Detection of human influence on twentieth-century precipitation trends. Nature 448: 461–465.