
Comparative Methods and Process Tracing

Final Report of QTD Working Group III.1

January 2019

Working Group Members*


Andrew Bennett, Georgetown University
Tasha Fairfield, London School of Economics and Political Science
Hillel David Soifer, Temple University


* Andrew Bennett <bennetta@georgetown.edu> is a Professor of Political Science at Georgetown University.
Tasha Fairfield <t.a.fairfield@lse.ac.uk> is an Associate Professor in Development Studies at the London
School of Economics and Political Science. Hillel David Soifer <hsoifer@temple.edu> is an Associate
Professor of Political Science at Temple University.

I. Introduction
Process tracing and cross-case comparisons are widely used in qualitative research.
Process tracing is generally understood as a within-case method for drawing inferences on
mechanisms. Understandings of the logic of inference differ across the methodological
literature, from approaches that emphasize invariant causal mechanisms to those that take
a probabilistic approach to assessing alternative explanations. Likewise, understandings of
cross-case comparison vary widely; many treatments are grounded in a logic of
approximating experimental control, while others see comparisons as useful for inspiring
theory and producing strong tests of theory without necessarily providing a distinct logic
of inference from process tracing. We aim to largely sidestep methodological debates by
articulating a list of broadly desirable transparency objectives; however, we note that
appropriate transparency practices may differ depending on the particular methodological
understanding espoused. We discuss these issues as they relate to four approaches to
process tracing that have arisen over time and that are currently practiced by researchers:
narrative-based process tracing (including traditional methods of using evidence to make
causal arguments about historical outcomes of individual cases), Van Evera’s (1997) tests,
Bayesian process tracing, and mechanistic approaches to process tracing.

II. Clarity and Analytic Transparency


Below, we present a set of core practices that are valued because they provide readers
with the information they need to understand and evaluate qualitative research that draws
on process tracing and comparative methods. Since each practice provides a different type
of information, we address the benefits of each in turn. We then discuss a number of
research exemplars that illustrate how several of these practices have been successfully
applied in substantive research.

Core Recommended Practices


(1) Clearly define concepts and describe how they have been operationalized and scored
across cases.
Clear concepts and sensible measurement strategies are critical for any research
project. Scoring cases on key causal factors and outcomes is a major component of
qualitative research that often involves close scrutiny of cases (even substantial fieldwork)
and in-depth analysis. Providing readers with clear and consistent definitions for concepts
as well as discussing how these concepts were operationalized and scored across cases is
essential for analytic transparency.
This aspect of transparency may arise in two different portions of the research
project: (i) explaining how cases score on key variables that are used to carry out and justify
case selection, and (ii) a more detailed and nuanced ‘scoring’ of the cases chosen for
investigation on a wider range of characteristics, including background conditions and the
context in which the causal relationship is theorized to hold.

(2) Present the rationale for case selection and the logic of comparison.
This related point is common practice in qualitative research. While the details of
what information should be provided will depend on the aims and methodological
grounding of the research, some rationale for why a given case or cases deserve close
attention and why they constitute an analytically informative set for comparison should be
provided. This has the benefits, as discussed below, of allowing the reader to evaluate the
choices made by the researcher, and to assess how compelling any claims of
generalizability might be.
(3) Clearly articulate the causal argument.
This is an obvious point not just for transparency but also for good scholarship more
fundamentally—readers cannot understand and evaluate research if the argument under
consideration is ill-specified. Yet there is of course substantial variation in the extent to
which scholarship achieves this goal. Ideally, we would like our hypotheses to include a
careful discussion of both causal mechanisms and scope conditions, so that readers
understand how and to what range of cases the argument should apply when bringing their
own knowledge to bear on the question. If the latter are not well known, some discussion
of uncertainty regarding the range of conditions and contexts under which the theory should
apply is merited. The more clarity can be achieved on this front, the better the prospects
for theory refinement and knowledge accumulation through subsequent research. This
includes being transparent about whether a qualitative case study is case-centered, without
aiming for generalization, or centered on a theory intended to apply more broadly; in the
latter case, authors should discuss how widely the theory is expected to apply.
(4) Identify and assess salient alternative explanations.
As a matter of common practice, qualitative research identifies and assesses
alternative explanations, to show how the argument proposed builds on previous
approaches and/or provides a better explanation than rival hypotheses. Scholars are advised
to, and generally do, locate alternative explanations in salient literatures and explain why
they deem these explanations inadequate, based on prior knowledge and/or evidence
uncovered during the investigation. Readers themselves should also be expected to
contemplate salient alternative explanations that the author may have overlooked and
assess how well the argument at hand holds up against those alternatives.
(5) Explain how the empirical evidence leads to a given inference.
Linking evidence to inference lies at the heart of analytic transparency. Good
scholarship presents the evidentiary basis for the analysis and takes the reader through
some explanation of how the evidence has been collected and interpreted, as well as why
and to what extent the evidence supports the author’s argument and/or undermines rival
explanations. As part of this process, authors should address any consequential evidence
that runs counter to their overall conclusions, not just the supporting evidence. Discussion
of the absence of particular evidentiary findings may also be relevant, depending on the
reasons for the absence. Generally speaking, some pieces of evidence will have greater
inferential weight than others. Inference does not proceed by simply counting up clues for
and against a claim, and scholars should aim to assess and explain why certain findings
carry more substantial import.

Different methodological approaches will give rise to different prescriptions for
connecting evidence to inference. For example, mechanistic understandings of inference
emphasize tracing out the causal mechanism of the argument and linking each piece of
evidence to each specific step in the theorized causal processes.1 In contrast, Bayesian
inference derives from the extent to which the observed evidence--which may be partial
and incomplete--fits better with one hypothesis compared to one or more rivals. Evidence
consistent with the author's causal mechanism may or may not provide inferential weight
in support of that hypothesis, depending on whether that evidence is more or less plausible
in the world of a rival hypothesis. Other approaches include application of Van Evera's
(1997) process tracing tests, where a hypothesis tends to be either confirmed or infirmed
upon passing or failing tests of varying strengths.
In most methodological approaches, it is important to link evidence to the
observable implications predicted by alternative explanations. In narrative accounts, this
entails asking how well the evidence fits with alternative explanations. In Van Evera’s
(1997) framework, this involves identifying whether evidence constitutes a “smoking gun”
or another of the three tests relative to rival hypotheses (see below, and Van Evera, 1997,
Bennett 2010). These tests have sometimes been misunderstood as being discrete and
categorical, whereas they actually are intuitive shorthand labels for points along a
continuous spectrum of Bayesian inferences on evidence with different degrees of
probative value in discriminating among alternative explanations. In a fully Bayesian
framework, authors treat evidentiary tests more precisely by identifying their strength in
specific probabilistic terms. They do so by estimating priors on rival explanations and
likelihood ratios for different pieces of evidence, although as we discuss below, the degree
of formalization may vary.2 In approaches focused on causal mechanisms and explanatory
completeness, evidence contributes to an “event history map” that documents the realized
values in the case of the theoretical variables outlined in the causal model.
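For readers less familiar with the formalism invoked above, the Bayesian updating logic can be stated compactly in odds form; the notation below is a generic textbook statement rather than any cited author's specific formulation, with $I$ denoting background knowledge:

\[
\underbrace{\frac{P(H_i \mid E, I)}{P(H_j \mid E, I)}}_{\text{posterior odds}}
=
\underbrace{\frac{P(H_i \mid I)}{P(H_j \mid I)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H_i, I)}{P(E \mid H_j, I)}}_{\text{likelihood ratio}}
\]

On this reading, evidence $E$ favors hypothesis $H_i$ over rival $H_j$ only to the extent that $E$ is more plausible under $H_i$ than under $H_j$, and Van Evera's labels mark how far the likelihood ratio departs from one (very large for a passed smoking-gun test, very small for a failed hoop test, close to one for straw-in-the-wind evidence).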
Disagreements persist on whether it is important or even relevant for an author to
clarify whether observable implications of alternative explanations were assessed prior to
looking at a piece of evidence. Some approaches to the philosophy of science emphasize
that “heuristic novelty,” or theoretical predictions made prior to observing evidence, is
important to assessing the status of evidence relative to theory.3 Many Bayesians, however,
argue that the only thing that matters is the logical relationship between a hypothesized
theory and observed evidence.4 We encourage scholars to review these debates and to
proceed according to the position they find most compelling, providing an accompanying
justification for their epistemological position.
(6) Identify and discuss background knowledge that plays a central role in how the
evidence is interpreted.
The author is generally in a unique position to interpret and assess his or her
evidence in light of extensive and highly specialized knowledge acquired about the case(s)


1. e.g., Beach & Pedersen 2016, Waldner 2015.
2. Authors can also make their estimates less confining by using upper and lower estimates of probabilities, to avoid conveying a false sense of precision.
3. Elman and Elman 2003.
4. Fairfield and Charman 2019.

studied. While it is neither possible nor desirable to expound every detail of accumulated
background knowledge, key elements therein that matter to the author’s interpretation of
the evidence should be shared with readers.5 Because everyone comes to the table with
very different background knowledge, based on familiarity with different bodies of
literature, different countries, different political phenomena, etc., this guideline can make
a big difference for establishing common ground and fostering consensus on inferences, or
at least for identifying the reasons for divergent interpretations of evidence.
(7) Present key pieces of evidence in their original form where feasible.
Qualitative research is at its best when it showcases salient evidence in its original
form, for example, direct quotations from documents and informants. Of course, this is not
always possible; in many instances an indirect summary of information from an interview
or a primary source will be preferable for clarity and conciseness, and in some situations
this may be the only feasible option given the conditions of research (e.g. concern over
human subject protection). Nevertheless, when it is possible to present the evidence
directly, readers have the opportunity to assess whether the author’s interpretations and
inferences are convincing. Given the ambiguities inherent in written language, small
changes in wording from an original source can make a big difference to the meaning and
import of the information. While it may not be necessary to provide full text quotations
from a secondary source, it follows that when working with such sources, page numbers
must be included so that readers can locate the original statements.6

Research Exemplars
The above practices find expression in many excellent books and articles. Wood’s (2000,
2001) research on democratization from below is a benchmark that illustrates many of these
virtues. Wood clearly articulates the causal process through which mobilization by poor
and working-class groups led to democratization in El Salvador and South Africa, provides
extensive and diverse case evidence to establish each step in the causal process, carefully
considers alternative explanations, and explains why they are inconsistent with the
evidence. Wood’s use of interview evidence is particularly compelling. For example, in
the South African case, she provides three extended quotations from business leaders that
illustrate the mechanism through which mobilization led economic elites to change their
regime preferences in favor of democratization: they came to view democracy as the only
way to end the massive economic disruption created by strikes and protests.7
Among many other exemplars, we would also call attention to the following list of
works. Most of these works highlight multiple practices discussed above, while a few are
chosen to showcase effective use of a particular practice.


5. For discussion with respect to Bayesian process tracing, see Fairfield & Charman 2017.
6. This element of transparency has been thoroughly discussed and debated in the initiative for transparency appendices (see e.g. Moravcsik 2014), and we refer readers to that scholarship for a presentation of the benefits as well as the costs therein. Readers might also explore the recent Annotation for Transparency Inquiry; see https://qdr.syr.edu/guidance/ati-at-a-glance
7. Wood 2001, 880.

• Taylor Boas, Presidential Campaigns in Latin America: Electoral Strategies and Success Contagion (Cambridge University Press, 2016)
This book showcases careful work on conceptualization and measurement of
different dimensions of electoral campaign strategies.8 It is also noteworthy in
providing an excellent discussion of case selection for assessing the scope of the
author’s “success contagion” theory. In particular, Boas concisely takes the reader
through the criteria by which he arrived at the final set of secondary country cases
that he examines.9 Moreover, he articulates theoretically compelling scope
limitations for his theory upon concluding this analysis.10
• Brian Downing, The Military Revolution and Political Change (Princeton
University Press, 1992)
This comparative historical study of political development in early modern Europe
is particularly notable for its precise and explicit elaboration of scope conditions
within which war financing led to absolutism. Its elaboration of a set of permissive
conditions within which war operated as a critical juncture is notable in its clarity
and provides transparent guidelines about the types of cases to which the theory
should be expected to generalize.
• Tasha Fairfield, Private Wealth and Public Revenue in Latin America: Business
Power and Tax Politics (Cambridge University Press, 2015)
In its study of the power of business in Latin America to protect its interests and
prevent taxation, this book provides a model of the precise development of concepts
and measures that lies at the heart of our discussion of research transparency. It
also, as discussed further below, marshals process-tracing evidence in a strikingly
effective and transparent way to evaluate the role of business power against
alternative explanations for variation in tax policy.
• Alisha Holland, Forbearance as Redistribution (Cambridge University Press,
2017)
In seeking to explain why public officials in Latin America (sometimes) tolerate
squatting and street vending, Holland digs into different stages of the policy process
to identify and evaluate evidence that allows her to distinguish between rival
explanations centering on state weakness and those centering on the political choice
not to enforce. This book is notable for the precision with which process-tracing
hypotheses are evaluated.
• Alan Jacobs, Governing for the Long Term: Democracy and the Politics of
Investment (Cambridge University Press, 2011)
This book is notable for the precision with which it develops and operationalizes
the concept of policy investment, theorizes the necessary conditions for its
adoption, and specifies causal mechanisms to connect cause and outcome. It is a
particularly useful example of how one might study the effects of ideas, exploring

8. Boas 2016, 3-10.
9. Boas 2016, 34.
10. Boas 2016, 205-207.

their causal power in a nuanced framework that also includes political conditions
and the policy process. Finally, it strikes a balance between effective narrative
presentation and the explicit evaluation of theory in the empirical chapters that
highlights the potential of transparent process-tracing accounts to also be readable.
• James Mahoney, “Long Run Development and the Legacy of Colonialism in
Spanish America.” American Journal of Sociology 109 (1) (July 2003): 50-106.
This article pays careful attention to concept formation and operationalization. As
a useful transparency practice to that end, Mahoney includes a series of tables that
present information about the sources used to operationalize each variable in each
of his cases.
• Kenneth Schultz, Democracy and Coercive Diplomacy (Cambridge University
Press, 2009)
This multimethod work uses a formal model, statistical analysis, and process
tracing to assess whether democracies’ superior ability to send credible diplomatic
signals, due to the ability of opposition parties to reinforce threats or reveal bluffs,
gives them an advantage in coercive diplomacy. The case study of the Fashoda
crisis does an excellent job of testing the alternative explanations with evidence
about the timing and content of British and French parliamentary statements and
diplomatic moves.
• Dan Slater, “Revolutions, Crackdowns, and Quiescence: Communal Elites and
Democratic Mobilization in Southeast Asia.” American Journal of Sociology 115
(1) (July 2009): 203-254.
This article is noteworthy for its careful treatment of alternative explanations.
Slater (220f) clearly identifies four salient rival explanations for the advent of mass
popular protest against authoritarian regimes that differ from his argument that
emotive appeals to nationalist or religious sentiments spark and sustain popular
collective action. The case studies present compelling pieces of evidence that not
only support Slater’s argument, but also undermine the rival explanations, as the
author clearly explains in the context of his case narratives.

III. Emerging Practices


This section examines emerging and evolving practices with respect to two of the areas
discussed in Section II: case selection, and connections between evidence and inference,
in accord with growing interest in the methodological literature on these issues.

Case Selection
Our general recommendation as noted above entails providing a rationale for why given
cases were chosen for close analysis. Common practices to this end vary depending on the
research goals and the methodological approach espoused.11 Beyond explaining why the
studied cases are substantively or theoretically salient, comparative historical analysis and

11. Seawright and Gerring 2008; Seawright 2016: 75-106; Gerring & Cojocaru 2016; Rohlfing 2012, chapter 3.

comparative case study research generally describes relevant similarities and differences
across the chosen cases that facilitate multiple structured, focused comparisons. These
studies often aim to encompass variation on the dependent variable and key causal factors
of interest, as well as diversity of background conditions, for the purposes of testing the
theory or explanation in multiple different contexts and assessing its scope and/or
generalizability. In addition to Boas (2016) as noted above, examples of effective case
selection discussions in this tradition include Amengual (2016), Boone (2014), and Garay
(2016).
One potential transparency practice that is less common in existing literature could
entail being more explicit about what information was known during the case selection
process. In some instances, little salient information is available for case selection, and
strategies in the methodological literature that assume broad cross-case knowledge about
key independent and/or dependent variables will be inapplicable.12 As a rule in such
situations, scholars should not falsely imply that cases were selected prospectively on the
basis of knowledge that in fact was only available after in-depth research on the selected
cases was undertaken. In line with Yom’s (2015) discussion of iterative research, we
recommend that scholars be up front when case selection occurs through an iterative
process in conjunction with theory development and concept formulation.13 How much
information about the iterative process should be provided for readers is a matter of some
debate and depends on the framework within which one makes inferences. Yom (2015)
advocates providing substantial details about how the process unfolded over time. If one
takes a Bayesian perspective, Fairfield & Charman (2019) argue that what matters for
making and evaluating inferences is simply the evidence uncovered from the cases studied,
not what was in the scholar’s mind when choosing the cases or the timing of when a given
case was added to the analysis.
A second possible transparency practice that scholars might consider entails
identifying cases that were almost chosen but ultimately not included, and providing
reasons for why those cases were excluded. This can include pragmatic considerations such
as language skills or data availability. Advantages to this practice could include addressing
reviewers’ and readers’ concerns about potential confirmation bias or potential selection
bias (the latter will be more salient for multi-methods research designs that aim to assess
population-level effects within an orthodox statistical framework). For example, specifying
that cases were selected for compelling pragmatic reasons can justify why the researcher
did not choose other possible cases that might strike readers as appropriate options.
Relatedly, such information could help ease the way for other scholars to conduct follow-
up research and further test the generalizability of the theory. In particular, communicating
practical considerations in case selection can alert readers to future opportunities for
analyzing new cases, for example if previously inaccessible archives are made public, or
political changes in a country facilitate interview-based research, or if scholars possess
language skills that allow them to investigate theoretically optimal but unstudied cases.
Disadvantages of this approach could include allocating valuable time and space to details


12. For a practical discussion of these issues, see Fairfield (2015a: Appendix 1.3).
13. e.g. Glaser and Strauss 1967; Ragin 1997.

that many readers might find irrelevant and/or distracting from the flow of the book or
article if included in the main text.14

Linking evidence to inference and clarifying background knowledge


Our discussion below considers emerging practices associated with three approaches:
(1) application of Van Evera’s (1997) process tracing tests (smoking gun, hoop, straw in
the wind, doubly decisive), (2) Bayesian reasoning, where initial views about the
plausibility of explanations/hypotheses are updated in light of evidence gathered during the
investigation, and (3) the conceptualization of a mechanism as a chain of entities and
activities,15 possibly anchored in a set-relational framework.16
We will first address Van Evera’s tests and Bayesian process tracing, given that in
some senses, the former approach, which highlights that evidence may support or
undermine a hypothesis to different degrees, can be considered a precursor to the latter
approach.17 We then turn to mechanistic approaches. For each, we discuss specific
empirical examples that have sought to implement these approaches more explicitly, with
attention to practices that entail low to moderate effort as well as practices requiring higher
costs for the author. We highlight advantages with respect to analytic transparency along
the way, and then turn to some salient caveats.

Van Evera’s Tests and Bayesian Process Tracing


Van Evera’s (1997) process tracing tests have occupied a prominent place in recent
qualitative methods literature.18 The logic of this approach is that causal process
observations may provide more or less decisive evidence in favor of or against a causal
hypothesis. For example, passing a smoking-gun test is viewed as strongly supporting the
hypothesis in question whereas failing does not significantly undermine that hypothesis;
for a straw-in-the-wind test, passing (failing) only weakly supports (undermines) the
hypothesis. Applications in empirical research range from occasional references to
smoking gun evidence or hoop tests in process tracing narratives, to work that explicitly
recasts inference in the language of process tracing tests.19
McKeown (1999) was one of the first to propose Bayesianism as a methodological
foundation for qualitative research, with his analogy of “folk Bayesianism” whereby
intuitive, narrative-based analysis roughly resembles Bayesian updating.20 Simply put, this
process entails using prior knowledge to assess how much confidence we initially hold in
a given hypothesis relative to rivals, and updating our views about which hypothesis
provides the best explanation as we gather evidence by evaluating likelihood ratios, which


14. As for other issues mentioned before, the details of the case selection process could be part of a transparency appendix.
15. Beach & Pedersen 2016.
16. Goertz 2016; Beach & Rohlfing 2018.
17. See Bennett 2015 and Humphreys and Jacobs 2015 on correspondences between Van Evera’s tests and Bayesianism.
18. e.g. Bennett 2008; Collier 2011; Mahoney 2012.
19. Fairfield 2013, Handlin 2017.
20. See also Bennett 2008.

entails asking which hypothesis makes the evidence more plausible. Bayesian process
tracing has become an active area of methodological research, with a recent turn toward
efforts to explicitly apply Bayesian analysis in qualitative research.21 Empirical
applications of this approach range from appendices that qualitatively discuss a few
illustrative pieces of evidence and two main rival hypotheses, to analyses that quantify
degrees of belief in multiple rival hypotheses, assign likelihood ratios for each piece of
evidence under these rival hypotheses, and derive an aggregate inference using Bayes’ rule.
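As a purely illustrative sketch of what such an aggregation can look like, the snippet below applies Bayes' rule in odds form to hypothetical numbers; the hypothesis labels loosely echo substantive debates mentioned elsewhere in this report, but the evidence items and probability values are invented for illustration and are not drawn from any cited study.

```python
# Illustrative aggregation of evidence with Bayes' rule in odds form.
# All hypothesis labels, evidence items, and probabilities are hypothetical.

# Prior probabilities for two mutually exclusive rival hypotheses.
prior = {"H_business_power": 0.5, "H_electoral_competition": 0.5}

# Analyst's estimated likelihoods P(evidence | hypothesis): how plausible
# each piece of evidence would be in the world of each rival hypothesis.
likelihoods = {
    "E1_insider_interview":  {"H_business_power": 0.8, "H_electoral_competition": 0.3},
    "E2_legislative_record": {"H_business_power": 0.6, "H_electoral_competition": 0.5},
    "E3_timing_of_reform":   {"H_business_power": 0.4, "H_electoral_competition": 0.7},
}

# Posterior odds = prior odds * product of likelihood ratios.
odds = prior["H_business_power"] / prior["H_electoral_competition"]
for name, lk in likelihoods.items():
    ratio = lk["H_business_power"] / lk["H_electoral_competition"]
    odds *= ratio
    print(f"{name}: likelihood ratio = {ratio:.2f}, running posterior odds = {odds:.2f}")

posterior = odds / (1 + odds)  # convert odds back to a probability
print(f"Posterior probability of H_business_power: {posterior:.2f}")
```

Estimating the likelihoods is of course the hard, substantive judgment; the arithmetic itself is trivial once those judgments are made.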
(i) Low–Moderate cost practices
A relatively low-cost emerging practice for journal articles entails providing a
structured explication of the evidence-to-inference logic, usually (for reasons of space and
narrative coherence) in an appendix. The idea is to briefly illustrate how the method of
choice underpins analysis in the case study narratives presented in the main text.
Van Evera’s Tests:
• Tasha Fairfield, “Going Where the Money Is: Strategies for Taxing
Economic Elites in Unequal Democracies,” World Development 47 (2013):
42-57.
The process-tracing appendix (roughly 2000 words) explicitly casts the
author’s analysis of one of the article’s three case studies in the framework
of Van Evera’s (1997) process-tracing tests.
Bayesian Analysis:
• Tasha Fairfield & Candelaria Garay, “Redistribution Under the Right in
Latin America: Electoral Competition and Organized Actors in
Policymaking,” Comparative Political Studies (2017).
Drawing on Fairfield & Charman (2017), the online process-tracing
appendix (roughly 3300 words) explicitly but informally applies Bayesian
reasoning about likelihood ratios to a few case examples from the main text
of the article, without quantifying probabilities or discussing every piece of
evidence from the case narratives.

These kinds of appendices are valuable for clarifying the logic of inference and
illustrating how specific pieces of evidence contribute to the overall inference without
interrupting the flow of causal narratives that aim to paint a more holistic picture of the
events and sequences studied.
As a practical matter, these appendices can be especially useful when authors and
reviewers disagree on inferences and/or do not share a common methodological
understanding of inference in process tracing; both of the example appendices above were
elaborated in response to reviewer queries about the extent to which evidence weighs in
favor of the arguments advanced and/or about methodology more broadly.
The book format, where space constraints are less stringent, offers more room for
experimentation with these practices, above and beyond the appendix approach.

21. Bennett 2015; Humphreys and Jacobs 2015; Fairfield and Charman 2017.

Van Evera’s Tests:


• Samuel Handlin, State Crisis in Fragile Democracies: Polarization and
Political Regimes in South America, Cambridge University Press, 2017.
Chapters 3, 4, and 5 intersperse sections that provide narrative analysis with
sections that explicitly apply process tracing tests, following a similar
template to that used in Fairfield (2013).
Several PhD dissertations have also included explicit applications of Van Evera’s tests.22
Future work might also experiment with the degree to which informal Bayesian
reasoning about prior probabilities of hypotheses and especially the weight of evidence (an
intuitive concept that is related to the likelihood ratio) can be incorporated into the main
text, without making the analysis inaccessible to a broad audience.
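In the broader Bayesian literature, the weight of evidence is commonly formalized as the logarithm of the likelihood ratio (sometimes scaled in decibels), so that weights from independent pieces of evidence add rather than multiply; we note this generic definition only as orientation, since individual authors may parameterize it differently:

\[
W(E;\, H_i : H_j) = \log \frac{P(E \mid H_i, I)}{P(E \mid H_j, I)}
\]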

(ii) Higher-cost practices


The practices we classify as higher-cost entail fully formal Bayesian analysis where
degrees of belief are quantified, and Bayes’ rule is explicitly used to draw an aggregate
inference. Quantification is necessarily approximate but may entail estimating ranges for
probabilities.

• Qualitative research: Fairfield and Charman, “Explicit Bayesian Analysis for Process Tracing: Guidelines, Opportunities, and Caveats,” Political Analysis (2017): Online Appendix A.
Appendix A re-analyzes the Chilean tax reform case that was previously
treated with process tracing tests23 from a fully Bayesian perspective. This
14,600 word appendix was elaborated to serve as a detailed pedagogical
tool and to highlight technical challenges and pragmatic workarounds for
applying explicit Bayesian analysis to qualitative evidence in case study
research.

• Multi-methods research: Humphreys and Jacobs, “Mixing Methods: A Bayesian Approach.” American Political Science Review 109 (4) (2015): 653-673.
These authors provide a Bayesian potential-outcomes model for combining
within-case clues with cross-case dataset scores. The article contains two
applications of formal Bayesian analyses to real-world data.
Fully formal treatments offer the advantages of allowing the authors to (1) more
carefully and explicitly assess and communicate their prior level of confidence in rival
hypotheses and the inferential weight of distinct pieces of evidence, (2) explicitly use the
mathematical framework of Bayesian analysis to derive a replicable aggregate inference


22. Lengfelder 2012; Schwartz 2016.
23. Fairfield 2013.

from pieces of evidence that may pull in different directions, (3) assess how sensitive the
findings are to different interpretations of the evidence, different prior views, and/or other
starting assumptions (other approaches to process tracing also include attention to this third
issue; the point here is that formal Bayesianism includes a specific mathematical
framework for sensitivity analysis).
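To illustrate the third point, a minimal sensitivity check might simply recompute the posterior across a range of priors and across conservative versus generous readings of each likelihood ratio. The sketch below is our own illustration with hypothetical numbers, not a procedure prescribed by the works cited.

```python
# Minimal sensitivity-analysis sketch: how does the posterior for a working
# hypothesis H1 (versus a single rival H2) vary across alternative priors and
# across lower/upper bounds on each likelihood ratio? All numbers are hypothetical.

prior_h1_values = [0.3, 0.5, 0.7]                 # alternative prior probabilities for H1
lr_bounds = [(2.0, 5.0), (0.8, 1.5), (0.5, 0.9)]  # (conservative, generous) LR per piece of evidence

for prior_h1 in prior_h1_values:
    for pick, label in [(0, "conservative"), (1, "generous")]:
        odds = prior_h1 / (1 - prior_h1)
        for bounds in lr_bounds:
            odds *= bounds[pick]
        posterior = odds / (1 + odds)
        print(f"prior P(H1)={prior_h1:.1f}, {label} likelihood ratios -> posterior P(H1)={posterior:.2f}")
```

If the ranking of hypotheses survives across the full grid, the inference is robust to those particular judgment calls; if not, the exercise pinpoints which assumptions are doing the inferential work.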

The drawbacks of fully formal treatments include the substantial time, effort, and
training required, as well as inherent limitations in that probabilities cannot be
unambiguously quantified in qualitative social science, and practical difficulties that arise
when handling large amounts of complex evidence and multiple nuanced hypotheses.24
In practice, there is flexibility in how formally or informally Bayesian analysis is
employed, how extensively it is used, and how it is integrated alongside narrative accounts.

Mechanistic approaches
As it is a within-case method of analysis, process tracing, including the Bayesian
approaches outlined above, is grounded in philosophical approaches that focus on the role
of causal mechanisms in the causal explanation of individual cases. Debates continue,
however, over how exactly to define causal mechanisms and over what constitutes a
satisfactory explanation, and process tracing approaches vary in the level of detail they
seek regarding causal mechanisms and the degree of explanatory completeness to which
they aspire.
A widely-used minimal definition of a mechanism is that it links a cause (or
combination of causes) to the outcome.25 Based on developments in philosophy of biology,
an emerging understanding of mechanisms decomposes them into a sequence of entities
and activities in which the activity of one entity is causal for the next entity performing its
activity and so on. The advantage over the minimal definition, and over other accounts such as
the covering-law model of explanation, is that this decomposition achieves productive continuity
and can satisfactorily answer why-questions.26
One mechanism-focused approach outlined by David Waldner aims at a high level
of explanatory completeness. In this approach, explanatory accounts are adequate or
complete to the extent that: 1) they outline “a causal graph whose individual nodes are
connected in such a way that they are jointly sufficient for the outcome;” 2) they provide
“an event history map that establishes valid correspondence between the events in each
particular case study and the nodes in the causal graph;” 3) they give “theoretical statements
about causal mechanisms [that] link the nodes in the causal graph” … in ways that “allow
us to infer that the events were in actuality generated by the relevant mechanisms;” and 4)
“rival explanations have been credibly eliminated, by direct hypothesis testing or by
demonstrating that they cannot satisfy the first three criteria listed above.”27
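To make criteria (1)-(3) concrete, it can help to represent the causal graph and the event history map explicitly. The sketch below is our own illustration of that bookkeeping, not a tool Waldner provides; the node names loosely follow the Wood example discussed next, and the event descriptions are hypothetical stand-ins for case evidence.

```python
# Illustrative bookkeeping for a Waldner-style completeness check (hypothetical names).
# The causal graph is a list of (cause, effect, mechanism) links; the event history
# map records the case-specific events claimed to instantiate each node.

causal_graph = [
    ("mass_mobilization", "economic_disruption", "sustained protest raises the cost of the status quo"),
    ("economic_disruption", "elite_preference_change", "falling returns shift elite regime preferences"),
    ("elite_preference_change", "negotiated_transition", "elites bargain with insurgents over new institutions"),
]

event_history_map = {
    "mass_mobilization": "strike wave and protest cycle documented in the case narrative",
    "economic_disruption": "investment decline and disruption reported by business leaders",
    "elite_preference_change": "interviews in which elites endorse a negotiated regime change",
    "negotiated_transition": "pact and founding elections",
}

# Criterion 2: every node in the causal graph should correspond to observed events.
nodes = {node for link in causal_graph for node in link[:2]}
missing = sorted(node for node in nodes if node not in event_history_map)
print("Nodes lacking corresponding case events:", missing if missing else "none")

# Criterion 3: every link should name the mechanism claimed to generate the next node.
for cause, effect, mechanism in causal_graph:
    print(f"{cause} -> {effect}: {mechanism}")
```

Any node without a corresponding entry in the event history map flags a gap between theory and evidence that the completeness standard asks authors to address.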


24. See Fairfield & Charman 2017: §5.1.
25. Gerring 2008.
26. Machamer et al. 2000.
27. Waldner 2015, 129.

While Elizabeth Wood’s book Forging Democracy from Below: Insurgent Transitions in South Africa and El Salvador does not explicitly follow Waldner’s
framework or produce a causal graph, Waldner cites it as an exemplar. Wood argues that
insurgent collective action in El Salvador depressed the economic returns to elites of
existing institutional arrangements, changed elite preferences over institutions in a
democratic direction, and led to elite-insurgent bargaining and a democratic transition.
Waldner shows that this argument can be represented as a causal graph (Waldner,
2015:138).28 Moreover, Wood meets Waldner’s call for the equivalent of an event history
map by providing empirical evidence for each step in the hypothesized process. Wood also
gives evidence against alternative explanations, thereby satisfying all four of Waldner’s
criteria for a convincingly complete explanation of the democratic transitions in her two
cases.
A high-cost analysis theorizes and operationalizes the mechanism in a very detailed
manner. The level of detail can be understood as the number of steps in the mechanism
and, by implication, the spatio-temporal proximity of entities. The more steps, the more
complex the theory, the more voluminous the evidence needed to instantiate each step in
the theory, and the greater the amount of evidence about which an author needs to be
transparent. Productive continuity is generally easier to establish the more proximate two
entities are. Theorizing a fine-grained mechanism requires substantial theoretical effort and
extended theoretical discussions that might only be possible in book-length form. However,
proponents argue that this approach does not necessarily devolve into infinite regress. First,
more fine-grained mechanisms are not necessarily always better. What matters is that the
level of detail specified by a researcher allows her to convincingly claim that productive
continuity is achieved and the why-question answered. Second, the research question
should also determine the level on which the mechanism is located. In political science, it
is usually possible to specify a mechanism on a lower level of analysis.29 In practice, the
level of analysis “bottoms out”30 at the level on which the research question is located.

Caveats
We advocate for scholars to try out the types of practices described above while the
methodological literature continues to evolve. At this point in time, however, we caution
against making any of these practices a norm for publication or requisite components of
what research transparency entails, for two central reasons.
First, case narratives are central to what we generally recognize as comparative
historical analysis and process tracing, and they serve a vital role in communication by
making our research broadly comprehensible to a wide audience.31 As such, for most
research agendas, case narratives will be the basis for add-on applications of process
tracing tests or formal Bayesian analysis. Others have argued that case narratives in and of
themselves do substantial analytical work by cogently conveying temporal processes.32 As


28. Waldner 2015, 138.
29. Gerring 2010.
30. Machamer et al. 2000.
31. Mayer 2014.
32. Crasnow 2017; Büthe 2002; Sharon Crasnow (in press), “Process Tracing in Political Science—What’s the Story?” Studies in History and Philosophy of Science.

such, explicit Bayesian analysis and even the application of Van Evera’s tests can entail
investing substantial time and effort above and beyond constructing a written-language
account of the research, which in itself already tends to be a time- and effort-intensive
endeavor. As Hall writes:
Efforts to weigh the importance of every observation quickly make the text
of an article cumbersome, rendering studies that might otherwise deserve a
large audience virtually unreadable.... Demanding that such efforts be
included in an online appendix, in effect asking qualitative researchers to
write their articles twice—in short and then extended form—does not make
the task any more feasible.33
Moving forward, finding a middle ground between coherent narrative and highlighting the
inferential weight of key pieces of evidence is an important methodological agenda.
Second, given that methodological literatures on Bayesian process tracing and more
ambitious mechanistic approaches are still in their infancy, making definitive best-practice
recommendations, let alone imposing standards for how these approaches should be
implemented in empirical work, is premature. There is not yet a clear technical consensus
among methodologists on how to apply these approaches. Moving toward specific
standards or requirements on transparency would require not only such a consensus, but
training for both authors and reviewers. The training that is needed to apply formal
Bayesian analysis or to use formal means of diagramming causal arguments is not yet
widely available, and in the absence of substantial training, efforts to apply these
approaches may result in inferences that are worse than those that scholars would obtain
based on less formal or more intuitive practices. Moreover, what seems like a reasonable
standard today might be superseded by new, better practices in one or two years. For
example, while Van Evera’s labels for “smoking gun,” “hoop,” and other tests are
intuitively appealing for conveying some basic Bayesian insights on the strength of
evidence, authors should not deploy them without reminding readers that they are points
on a continuum of strong to weak evidentiary tests and that Bayesianism does not allow
for 100% certainty that an explanation is true.
Journal editors should thus proceed with an abundance of caution regarding any
specific requirements for transparency in process tracing. Authors, whether they employ
narrative, Bayesian, or other kinds of mechanism-focused process tracing, have a range of
practices to choose from, representing various tradeoffs of transparency, effort, and ease
for readers and reviewers. Whatever approach they use, scholars should aim to ensure that
their narratives are written in a manner that prioritizes analytical coherence over purely
narrative storytelling, such that readers can follow the logic of inference as clearly as
possible.


33. Hall 2016, 30.

References
Amengual, Matthew. 2016. Politicized Enforcement in Argentina: Labor and Environmental Regulation.
Cambridge: Cambridge University Press.
Beach, Derek, and Ingo Rohlfing. 2018. “Integrating cross-case analyses and process tracing in set-
theoretic research: Strategies and parameters of debate,” Sociological Methods and Research
47(1): 3-36.
Beach, Derek, and Rasmus Brun Pedersen. 2016. Causal Case Study Methods: Foundations and Guidelines
for Comparing, Matching, and Tracing. Ann Arbor: University of Michigan Press.
Bennett, Andrew. 2008. “Process-tracing: A Bayesian perspective.” In Oxford Handbook of Political
Methodology, ed. Janet M. Box-Steffensmeier, Henry Brady and David Collier. Oxford: Oxford
University Press: 702-721.
Bennett, Andrew. 2010. “Process Tracing and Causal Inference.” In Rethinking Social Inquiry: Diverse
Tools, Shared Standards second edition, ed. Henry Brady and David Collier. Lanham: Rowman &
Littlefield Publishers: 207-220.
Bennett, Andrew. 2015. “Disciplining Our conjectures: Systematizing Process Tracing with Bayesian
Analysis.” Appendix to Process Tracing: From Metaphor to Analytic Tool, ed. Andrew Bennett
and Jeffrey Checkel. Cambridge: Cambridge University Press.
Boas, Taylor. 2016. Presidential Campaigns in Latin America: Electoral Strategies and Success
Contagion. Cambridge: Cambridge University Press.
Boone, Catherine. 2014. Property and Political Order in Africa: Land Rights and the Structure of Politics.
Cambridge: Cambridge University Press.
Büthe, Tim. 2002. “Taking temporality seriously: Modeling history and the use of narratives as evidence.”
American Political Science Review 96 (3): 481-493.
Collier, David. 2011. “Understanding Process Tracing.” PS: Political Science and Politics 44(4):823-830.
Crasnow, Sharon. in press. “Process Tracing in Political Science—What’s the Story?” Studies in History
and Philosophy of Science.
Downing, Brian. 1992. The Military Revolution and Political Change. Princeton: Princeton University
Press.
Elman, Colin, and Miriam Fendius Elman. 2003. Progress in International Relations Theory: Appraising
the Field. Cambridge: MIT Press.
Fairfield, Tasha. 2013. “Going Where the Money Is: Strategies for Taxing Economic Elites in Unequal
Democracies.” World Development 47: 42-57.
_____. 2015. Private Wealth and Public Revenue in Latin America: Business Power and Tax Politics.
Cambridge: Cambridge University Press.
Fairfield, Tasha and Andrew E. Charman. 2017. “Explicit Bayesian Analysis for Process Tracing:
Guidelines, Opportunities, and Caveats.” Political Analysis 25 (3): 363-380.
_____. 2019. “A Dialog with the Data: The Bayesian Foundations of Iterative Research.” Perspectives on
Politics 17 (1): _________.
Fairfield, Tasha and Candelaria Garay. 2017. “Redistribution under the right in Latin America: Electoral
competition and organized actors in policymaking.” Comparative Political Studies. 50 (14): 1871-
1906.
Garay, Candelaria. 2016. Social Policy Expansion in Latin America. Cambridge: Cambridge University
Press.
Gerring, John. 2008. “The Mechanismic Worldview: Thinking Inside the Box.” British Journal of Political
Science 38(1) (January): 161-179.

Gerring, John. 2010. “Causal Mechanisms: Yes, But . . .” Comparative Political Studies 43 (11): ______.
Glaser, Barney and Anselm L. Strauss. 1967. The Discovery of Grounded Theory: Strategies for
Qualitative Research. Chicago: Aldine Publishing Company.
Handlin, Samuel. 2017. State Crisis in Fragile Democracies: Polarization and Political Regimes in South
America. Cambridge: Cambridge University Press.
Holland, Alisha. 2017. Forbearance as Redistribution. Cambridge University Press.
Humphreys, Macartan and Alan M. Jacobs. 2015. “Mixing methods: A Bayesian Approach.” American
Political Science Review 109(4): 653-673.
Jacobs, Alan. 2011. Governing for the Long Term: Democracy and the Politics of Investment. Cambridge:
Cambridge University Press.
Lengfelder, Christina. 2012. “Triangular Development Cooperation: How Emerging Powers Change the
Landscape of Development Cooperation.” Ph.D. dissertation, UC Berkeley, Universidad Catolica
de Chile.
Machamer, Peter, Lindley Darden, and Carl F. Craver. 2000. “Thinking about Mechanisms.” Philosophy of
Science 67 (1): 1-25.
Mahoney, James. 2003. “Long Run Development and the Legacy of Colonialism in Spanish America.”
American Journal of Sociology 109 (1) (July): 50-106.
_____. 2012. “The Logic of Process Tracing Tests in the Social Sciences,” Sociological Methods and
Research 41: 570-597.
McKeown, Timothy J. 1999. “Case studies and the statistical world view.” International Organization
53 (1): 161-190.
Moravcsik, Andrew. 2014. “Transparency: The revolution in qualitative research.” PS: Political Science &
Politics 47 (1): 48-53.
Ragin, Charles C. 1997. “Turning the tables: How case-oriented research challenges variable-oriented
research.” Comparative Social Research 16: 27-42.
Rohlfing, Ingo. 2012. Case Studies and Causal Inference: An Integrative Framework. London: Palgrave
Macmillan.
Seawright, Jason, and John Gerring. 2008. “Case Selection Techniques in Case Study Research: A Menu of
Qualitative and Quantitative Options.” Political Research Quarterly 61 (2): 294-308.
Seawright, Jason. 2016. Multi-Method Social Science: Combining Qualitative and Quantitative Tools.
Cambridge: Cambridge University Press.
Schultz, Kenneth. 2009. Democracy and Coercive Diplomacy. Cambridge: Cambridge University Press.
Slater, Dan. 2009. “Revolutions, Crackdowns, and Quiescence: Communal Elites and Democratic
Mobilization in Southeast Asia.” American Journal of Sociology 115 (1): 203-254.
Schwartz, Elizabeth. 2016. “Local solutions to a global problem? Canadian municipal policy responses to
climate change.” Ph.D. dissertation, University of British Columbia.
https://open.library.ubc.ca/cIRcle/collections/ubctheses/24/items/1.0300060
Van Evera, Stephen. 1997. Guide to Methods for Students of Political Science. Ithaca: Cornell University
Press.
Waldner, David. 2015. “What makes process tracing good? Causal mechanisms, causal inference, and the
completeness standard in comparative politics.” In Process Tracing: From Metaphor to Analytic
Tool, ed. Andrew Bennett and Jeffrey Checkel. Cambridge: Cambridge University Press: ______.

Wood, Elizabeth. 2000. Forging Democracy from Below: Insurgent Transitions in South Africa and El
Salvador. Cambridge: Cambridge University Press.
_____. 2001. “An Insurgent Path to Democracy: Popular Mobilization, Economic Interests, and Regime
Transition in South Africa and El Salvador.” Comparative Political Studies 34 (8): 862-888.
Yom, Sean. 2015. “From methodology to practice: Inductive iteration in comparative research.”
Comparative Political Studies 48 (5): 616-644.
