Process Tracing and Comparison
January 2019
* Andrew Bennett <bennetta@georgetown.edu> is a Professor of Political Science at Georgetown University. Tasha Fairfield <t.a.fairfield@lse.ac.uk> is an Associate Professor in Development Studies at the London School of Economics and Political Science. Hillel David Soifer <hsoifer@temple.edu> is an Associate Professor of Political Science at Temple University.
I. Introduction
Process tracing and cross-case comparisons are widely used in qualitative research.
Process tracing is generally understood as a within-case method for drawing inferences about causal mechanisms. Understandings of the logic of inference differ across the methodological
literature, from approaches that emphasize invariant causal mechanisms to those that take
a probabilistic approach to assessing alternative explanations. Likewise, understandings of
cross-case comparison vary widely; many treatments are grounded in a logic of
approximating experimental control, while others see comparisons as useful for inspiring
theory and producing strong tests of theory without necessarily providing a logic of inference distinct from process tracing. We aim to largely sidestep methodological debates by
articulating a list of broadly desirable transparency objectives; however, we note that
appropriate transparency practices may differ depending on the particular methodological
understanding espoused. We discuss these issues as they relate to four approaches to
process tracing that have arisen over time and that are currently practiced by researchers:
narrative-based process tracing (including traditional methods of using evidence to make
causal arguments about historical outcomes of individual cases), Van Evera’s (1997) tests,
Bayesian process tracing, and mechanism-focused process tracing.
(2) Present the rationale for case selection and the logic of comparison.
This related point is common practice in qualitative research. While the details of
what information should be provided will depend on the aims and methodological
grounding of the research, some rationale for why a given case or cases deserve close
attention and why they constitute an analytically informative set for comparison should be
provided. This has the benefits, as discussed below, of allowing the reader to evaluate the
choices made by the researcher, and to assess how compelling any claims of
generalizability might be.
(3) Clearly articulate the causal argument.
This is an obvious point not just for transparency but also for good scholarship more
fundamentally—readers cannot understand and evaluate research if the argument under
consideration is ill-specified. Yet there is of course substantial variation in the extent to
which scholarship achieves this goal. Ideally, we would like our hypotheses to include a
careful discussion of both causal mechanisms and scope conditions, so that readers
understand how and to what range of cases the argument should apply when bringing their
own knowledge to bear on the question. If the latter are not well known, some discussion
of uncertainty regarding the range of conditions and contexts under which the theory should
apply is merited. The more clarity can be achieved on this front, the better the prospects
for theory refinement and knowledge accumulation through subsequent research. This
includes being transparent about whether a qualitative case study is case-centered, without aiming for generalization, or centered on a theory that applies more broadly; in the latter case, the intended breadth of the theory's application should also be discussed.
(4) Identify and assess salient alternative explanations.
As a matter of common practice, qualitative research identifies and assesses
alternative explanations, to show how the argument proposed builds on previous
approaches and/or provides a better explanation than rival hypotheses. Scholars are advised
to, and generally do, locate alternative explanations in salient literatures and explain why
they deem these explanations inadequate, based on prior knowledge and/or evidence
uncovered during the investigation. Readers themselves should also be expected to
contemplate salient alternative explanations that the author may have overlooked and
assess how well the argument at hand holds up against those alternatives.
(5) Explain how the empirical evidence leads to a given inference.
Linking evidence to inference lies at the heart of analytic transparency. Good
scholarship presents the evidentiary basis for the analysis and takes the reader through
some explanation of how the evidence has been collected and interpreted, as well as why
and to what extent the evidence supports the author’s argument and/or undermines rival
explanations. As part of this process, authors should address any consequential evidence
that runs counter to their overall conclusions, not just the supporting evidence. Discussion
of absence of particular evidentiary findings may also be relevant, depending on the
reasons for the absence. Generally speaking, some pieces of evidence will have greater
inferential weight than others. Inference does not proceed by simply counting up clues for
and against a claim, and scholars should aim to assess and explain why certain findings
carry more substantial import.
1. e.g., Beach & Pedersen 2016; Waldner 2015.
2. Authors can also make their estimates less confining by using upper and lower estimates of probabilities to avoid conveying a false sense of precision.
3. Elman and Elman 2003.
4. Fairfield and Charman 2019.
studied. While it is neither possible nor desirable to expound every detail of accumulated
background knowledge, key elements therein that matter to the author’s interpretation of
the evidence should be shared with readers.5 Because everyone comes to the table with
very different background knowledge, based on familiarity with different bodies of
literature, different countries, different political phenomena, etc., this guideline can make
a big difference for establishing common ground and fostering consensus on inferences, or
at least for identifying the reasons for divergent interpretations of evidence.
(7) Present key pieces of evidence in their original form where feasible.
Qualitative research is at its best when it showcases salient evidence in its original
form, for example, direct quotations from documents and informants. Of course, this is not
always possible; in many instances an indirect summary of information from an interview
or a primary source will be preferable for clarity and conciseness, and in some situations
this may be the only feasible option given the conditions of research (e.g. concern over
human subject protection). Nevertheless, when it is possible to present the evidence
directly, readers have the opportunity to assess whether the author’s interpretations and
inferences are convincing. Given the ambiguities inherent in written language, small
changes in wording from an original source can make a big difference to the meaning and
import of the information. While it may not be necessary to provide full-text quotations
from a secondary source, it follows that when working with such sources, page numbers
must be included so that readers can locate the original statements.6
Research Exemplars
The above practices find expression in many excellent books and articles. Wood’s (2000,
2001) research on democratization from below is a benchmark that illustrates many of these
virtues. Wood clearly articulates the causal process through which mobilization by poor
and working-class groups led to democratization in El Salvador and South Africa, provides
extensive and diverse case evidence to establish each step in the causal process, carefully
considers alternative explanations, and explains why they are inconsistent with the
evidence. Wood’s use of interview evidence is particularly compelling. For example, in
the South African case, she provides three extended quotations from business leaders that
illustrate the mechanism through which mobilization led economic elites to change their
regime preferences in favor of democratization: they came to view democracy as the only
way to end the massive economic disruption created by strikes and protests.7
Among many other exemplars, we would also call attention to the following list of
works. Most of these works highlight multiple practices discussed above, while a few are
chosen to showcase effective use of a particular practice.
5. For discussion with respect to Bayesian process tracing, see Fairfield & Charman 2017.
6. This element of transparency has been thoroughly discussed and debated in the initiative for transparency appendices (see, e.g., Moravcsik 2014), and we refer readers to that scholarship for a presentation of the benefits as well as the costs therein. Readers might also explore the recent Annotation for Transparency Inquiry; see https://qdr.syr.edu/guidance/ati-at-a-glance
7. Wood 2001, 880.
their causal power in a nuanced framework that also includes political conditions
and the policy process. Finally, it strikes a balance between effective narrative
presentation and the explicit evaluation of theory in the empirical chapters that
highlights the potential of transparent process-tracing accounts to also be readable.
• James Mahoney, “Long Run Development and the Legacy of Colonialism in
Spanish America.” American Journal of Sociology 109 (1) (July 2003): 50-106.
This article pays careful attention to concept formation and operationalization. As
a useful transparency practice to that end, Mahoney includes a series of tables that
present information about the sources used to operationalize each variable in each
of his cases.
• Kenneth Schultz, Democracy and Coercive Diplomacy (Cambridge University Press, 2001)
This multimethod work uses a formal model, statistical analysis, and process
tracing to assess whether democracies’ superior ability to send credible diplomatic
signals, due to the ability of opposition parties to reinforce threats or reveal bluffs,
gives them an advantage in coercive diplomacy. The case study of the Fashoda
crisis does an excellent job of testing the alternative explanations with evidence
about the timing and content of British and French parliamentary statements and
diplomatic moves.
• Dan Slater, “Revolutions, Crackdowns, and Quiescence: Communal Elites and
Democratic Mobilization in Southeast Asia.” American Journal of Sociology 115
(1) (July 2009): 203-254.
This article is noteworthy for its careful treatment of alternative explanations.
Slater (220f) clearly identifies four salient rival explanations for the advent of mass
popular protest against authoritarian regimes that differ from his argument that
emotive appeals to nationalist or religious sentiments spark and sustain popular
collective action. The case studies present compelling pieces of evidence that not
only support Slater’s argument, but also undermine the rival explanations, as the
author clearly explains in the context of his case narratives.
Case Selection
Our general recommendation as noted above entails providing a rationale for why given
cases were chosen for close analysis. Common practices to this end vary depending on the
research goals and the methodological approach espoused.11 Beyond explaining why the
studied cases are substantively or theoretically salient, comparative historical analysis and
11. Seawright and Gerring 2008; Seawright 2016: 75-106; Gerring & Cojocaru 2016; Rohlfing 2012, chapter 3.
comparative case study research generally describes relevant similarities and differences
across the chosen cases that facilitate multiple structured, focused comparisons. These
studies often aim to encompass variation on the dependent variable and key causal factors
of interest, as well as diversity of background conditions, for the purposes of testing the
theory or explanation in multiple different contexts and assessing its scope and/or
generalizability. In addition to Boas (2016) as noted above, examples of effective case
selection discussions in this tradition include Amengual (2016), Boone (2014), and Garay
(2016).
One potential transparency practice that is less common in existing literature could
entail being more explicit about what information was known during the case selection
process. In some instances, little salient information is available for case selection, and
strategies in the methodological literature that assume broad cross-case knowledge about
key independent and/or dependent variables will be inapplicable.12 As a rule in such
situations, scholars should not falsely imply that cases were selected prospectively on the
basis of knowledge that in fact was only available after in-depth research on the selected
cases was undertaken. In line with Yom’s (2015) discussion of iterative research, we
recommend that scholars be up front when case selection occurs through an iterative
process in conjunction with theory development and concept formulation.13 How much
information about the iterative process should be provided for readers is a matter of some
debate and depends on the framework within which one makes inferences. Yom (2015)
advocates providing substantial details about how the process unfolded over time. If one
takes a Bayesian perspective, Fairfield & Charman (2019) argue that what matters for
making and evaluating inferences is simply the evidence uncovered from the cases studied,
not what was in the scholar’s mind when choosing the cases or the timing of when a given
case was added to the analysis.
A second possible transparency practice that scholars might consider entails
identifying cases that were almost chosen but ultimately not included, and providing
reasons for why those cases were excluded. This can include pragmatic considerations such
as language skills or data availability. Advantages to this practice could include addressing
reviewers’ and readers’ concerns about potential confirmation bias or potential selection
bias (the latter will be more salient for multi-method research designs that aim to assess
population-level effects within an orthodox statistical framework). For example, specifying
that cases were selected for compelling pragmatic reasons can justify why the researcher
did not choose other possible cases that might strike readers as appropriate options.
Relatedly, such information could help ease the way for other scholars to conduct follow-
up research and further test the generalizability of the theory. In particular, communicating
practical considerations in case selection can alert readers to future opportunities for
analyzing new cases, for example if previously inaccessible archives are made public, or
political changes in a country facilitate interview-based research, or if scholars possess
language skills that allow them to investigate theoretically optimal but unstudied cases.
Disadvantages of this approach could include allocating valuable time and space to details that many readers might find irrelevant and/or distracting from the flow of the book or article if included in the main text.14

12. For a practical discussion of these issues, see Fairfield (2015a: Appendix 1.3).
13. e.g., Glaser and Strauss 1967; Ragin 1997.
14. As for other issues mentioned before, the details of the case selection process could be part of a transparency appendix.
15. Beach & Pedersen 2016.
16. Goertz 2016; Beach & Rohlfing 2018.
17. See Bennett 2015 and Humphreys and Jacobs 2015 on correspondences between Van Evera's tests and Bayesianism.
18. e.g., Bennett 2008; Collier 2011; Mahoney 2012.
19. Fairfield 2013; Handlin 2017.
20. See also Bennett 2008.
entails asking which hypothesis makes the evidence more plausible. Bayesian process
tracing has become an active area of methodological research, with a recent turn toward
efforts to explicitly apply Bayesian analysis in qualitative research.21 Empirical
applications of this approach range from appendices that qualitatively discuss a few
illustrative pieces of evidence and two main rival hypotheses, to analyses that quantify
degrees of belief in multiple rival hypotheses, assign likelihood ratios for each piece of
evidence under these rival hypotheses, and derive an aggregate inference using Bayes’ rule.
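To fix ideas, the core of this machinery can be stated compactly. In odds form (our schematic rendering of the formalism discussed in the literature cited above, with B denoting background knowledge), Bayes' rule compares how strongly the evidence E is expected under rival hypotheses H1 and H2:

\[
\frac{P(H_1 \mid E, B)}{P(H_2 \mid E, B)} \;=\; \frac{P(H_1 \mid B)}{P(H_2 \mid B)} \times \frac{P(E \mid H_1, B)}{P(E \mid H_2, B)}
\]

The final factor is the likelihood ratio: the evidence favors whichever hypothesis makes it more plausible, and the prior odds are updated by exactly that factor.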
(i) Low–Moderate cost practices
A relatively low-cost emerging practice for journal articles entails providing a
structured explication of the evidence-to-inference logic, usually (for reasons of space and
narrative coherence) in an appendix. The idea is to briefly illustrate how the method of
choice underpins analysis in the case study narratives presented in the main text.
Van Evera’s Tests:
• Tasha Fairfield, “Going Where the Money Is: Strategies for Taxing
Economic Elites in Unequal Democracies,” World Development 47 (2013):
42-57.
The process-tracing appendix (roughly 2000 words) explicitly casts the
author’s analysis of one of the article’s three case studies in the framework
of Van Evera’s (1997) process-tracing tests.
Bayesian Analysis:
• Tasha Fairfield & Candelaria Garay, “Redistribution Under the Right in
Latin America: Electoral Competition and Organized Actors in
Policymaking,” Comparative Political Studies (2017).
Drawing on Fairfield & Charman (2017), the online process-tracing
appendix (roughly 3300 words) explicitly but informally applies Bayesian
reasoning about likelihood ratios to a few case examples from the main text
of the article, without quantifying probabilities or discussing every piece of
evidence from the case narratives.
These kinds of appendices are valuable for clarifying the logic of inference and
illustrating how specific pieces of evidence contribute to the overall inference without
interrupting the flow of causal narratives that aim to paint a more holistic picture of the
events and sequences studied.
As a practical matter, these appendices can be especially useful when authors and
reviewers disagree on inferences and/or do not share a common methodological
understanding of inference in process tracing; both of the example appendices above were
elaborated in response to reviewer queries about the extent to which evidence weighs in
favor of the arguments advanced and/or about methodology more broadly.
The book format, where space constraints are less stringent, offers more room for
experimentation with these practices, above and beyond the appendix approach.
21. Bennett 2015; Humphreys and Jacobs 2015; Fairfield and Charman 2017.
22. Lengfelder 2012; Schwartz 2016.
23. Fairfield 2013.
from pieces of evidence that may pull in different directions, (3) assess how sensitive the
findings are to different interpretations of the evidence, different prior views, and/or other
starting assumptions (other approaches to process tracing also include attention to this third
issue; the point here is that formal Bayesianism includes a specific mathematical
framework for sensitivity analysis).
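To make the aggregation and sensitivity points concrete, the underlying arithmetic can be sketched in a few lines of code. The sketch below is our own illustration rather than a procedure drawn from the works cited, and the likelihood ratios are hypothetical placeholders; in a real application each number would need to be defended substantively.

```python
import math

# Hypothetical likelihood ratios P(E_i | H1) / P(E_i | H2) for three
# independent pieces of evidence; the second clue cuts against H1.
likelihood_ratios = [4.0, 0.5, 10.0]

prior_odds = 1.0  # indifferent prior between rival hypotheses H1 and H2

# In log-odds form, independent clues contribute additive weights of
# evidence, so conflicting findings are weighed rather than merely counted.
log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in likelihood_ratios)

# Posterior probability of H1, assuming H1 and H2 exhaust the possibilities.
posterior_h1 = 1.0 / (1.0 + math.exp(-log_odds))
print(f"P(H1 | all evidence) ~= {posterior_h1:.2f}")

# Crude sensitivity analysis: push every likelihood ratio toward the low
# or high end of a plausible range and recompute the posterior.
for scale in (0.5, 2.0):
    shifted = math.log(prior_odds) + sum(
        math.log(lr * scale) for lr in likelihood_ratios)
    print(f"scale={scale}: P(H1 | all evidence) ~= "
          f"{1.0 / (1.0 + math.exp(-shifted)):.2f}")
```

If the conclusion survives such perturbations, the inference is robust to reasonable disagreement about how strongly each clue discriminates between the hypotheses.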
The drawbacks of fully formal treatments include the substantial time, effort, and
training required, as well as inherent limitations in that probabilities cannot be
unambiguously quantified in qualitative social science, and practical difficulties that arise
when handling large amounts of complex evidence and multiple nuanced hypotheses.24
In practice, there is flexibility in how formally or informally Bayesian analysis is
employed, how extensively it is used, and how it is integrated alongside narrative accounts.
Mechanistic approaches
Process tracing, including the Bayesian approaches outlined above, is a within-case method of analysis grounded in philosophical approaches that focus on the role of causal mechanisms in explaining individual cases. Debates continue,
however, over how exactly to define causal mechanisms and over what constitutes a
satisfactory explanation, and process tracing approaches vary in the level of detail they
seek regarding causal mechanisms and the degree of explanatory completeness to which
they aspire.
A widely-used minimal definition of a mechanism is that it links a cause (or
combination of causes) to the outcome.25 Based on developments in philosophy of biology,
an emerging understanding of mechanisms decomposes them into a sequence of entities
and activities in which the activity of one entity is causal for the next entity performing its
activity and so on. The advantage over the minimal definition, and over other definitions such as the covering-law model of explanation, is that this understanding achieves productive continuity and can satisfactorily answer why-questions.26
One mechanism-focused approach outlined by David Waldner aims at a high level
of explanatory completeness. In this approach, explanatory accounts are adequate or
complete to the extent that: 1) they outline “a causal graph whose individual nodes are
connected in such a way that they are jointly sufficient for the outcome;” 2) they provide
“an event history map that establishes valid correspondence between the events in each
particular case study and the nodes in the causal graph;” 3) they give “theoretical statements
about causal mechanisms [that] link the nodes in the causal graph” … in ways that “allow
us to infer that the events were in actuality generated by the relevant mechanisms;” and 4)
“rival explanations have been credibly eliminated, by direct hypothesis testing or by
demonstrating that they cannot satisfy the first three criteria listed above.”27
24. See Fairfield & Charman 2017: §5.1.
25. Gerring 2008.
26. Machamer et al. 2000.
27. Waldner 2015, 129.
Caveats
We advocate for scholars to try out the types of practices described above while the
methodological literature continues to evolve. At this point in time, however, we caution
against making any of these practices a norm for publication or requisite components of
what research transparency entails, for two central reasons.
First, case narratives are central to what we generally recognize as comparative
historical analysis and process tracing, and they serve a vital role in communication by
making our research broadly comprehensible to a wide audience.31 As such, for most
research agendas, case narratives will be the basis for add-on applications of process
tracing tests or formal Bayesian analysis. Others have argued that case narratives in and of
themselves do substantial analytical work by cogently conveying temporal processes.32 As
28. Waldner 2015, 138.
29. Gerring 2010.
30. Machamer et al. 2000.
31. Mayer 2014.
32. Crasnow 2017; Büthe 2002; Sharon Crasnow (in press), “Process Tracing in Political Science—What’s the Story?” Studies in History and Philosophy of Science.
such, explicit Bayesian analysis and even the application of Van Evera’s tests can entail
investing substantial time and effort above and beyond constructing a written-language
account of the research, which in itself already tends to be a time- and effort-intensive
endeavor. As Hall writes:
Efforts to weigh the importance of every observation quickly make the text
of an article cumbersome, rendering studies that might otherwise deserve a
large audience virtually unreadable.... Demanding that such efforts be
included in an online appendix, in effect asking qualitative researchers to
write their articles twice—in short and then extended form—does not make
the task any more feasible.33
Moving forward, finding a middle ground between coherent narrative presentation and highlighting the inferential weight of key pieces of evidence is an important item on the methodological agenda.
Second, given that methodological literatures on Bayesian process tracing and more
ambitious mechanistic approaches are still in their infancy, making definitive best-practice
recommendations, let alone imposing standards for how these approaches should be
implemented in empirical work, is premature. There is not yet a clear technical consensus
among methodologists on how to apply these approaches. Moving toward specific
standards or requirements on transparency would require not only such a consensus, but
training for both authors and reviewers. The training that is needed to apply formal
Bayesian analysis or to use formal means of diagramming causal arguments is not yet
widely available, and in the absence of substantial training, efforts to apply these
approaches may result in inferences that are worse than those that scholars would obtain
based on less formal or more intuitive practices. Moreover, what seems like a reasonable
standard today might be superseded by new, better practices in one or two years. For
example, while Van Evera’s labels for “smoking gun,” “hoop,” and other tests are
intuitively appealing for conveying some basic Bayesian insights on the strength of
evidence, authors should not deploy them without reminding readers that they are points
on a continuum of strong to weak evidentiary tests and that Bayesianism does not allow
for 100% certainty that an explanation is true.
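In likelihood-ratio terms, that continuum can be summarized roughly as follows (our paraphrase of the correspondence drawn in the works cited in footnote 17, where E denotes the predicted evidence and H the hypothesis under test):
• Hoop test: P(E | H) ≈ 1, so failing to find E strongly disconfirms H, while finding it lends only weak support.
• Smoking gun: P(E | not-H) ≈ 0, so finding E strongly confirms H, while its absence is only weakly damaging.
• Doubly decisive: both conditions hold, so the evidence discriminates sharply in either direction.
• Straw in the wind: P(E | H) / P(E | not-H) ≈ 1, so the evidence shifts beliefs only marginally.
On none of these readings does the likelihood ratio reach zero or infinity exactly, which is the formal counterpart of the caution against claiming complete certainty.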
Journal editors should thus proceed with an abundance of caution regarding any
specific requirements for transparency in process tracing. Authors, whether they employ
narrative, Bayesian, or other kinds of mechanism-focused process tracing, have a range of
practices to choose from, representing various tradeoffs of transparency, effort, and ease
for readers and reviewers. Whatever approach they use, scholars should aim to ensure that
their narratives are written in a manner that prioritizes analytical coherence over purely
narrative storytelling, such that readers can follow the logic of inference as clearly as
possible.
33. Hall 2016, 30.
References
Amengual, Matthew. 2016. Politicized Enforcement in Argentina: Labor and Environmental Regulation.
Cambridge: Cambridge University Press.
Beach, Derek, and Ingo Rohlfing. 2018. “Integrating Cross-Case Analyses and Process Tracing in Set-Theoretic Research: Strategies and Parameters of Debate.” Sociological Methods and Research 47 (1): 3-36.
Beach, Derek, and Rasmus Brun Pedersen. 2016. Causal Case Study Methods: Foundations and Guidelines
for Comparing, Matching, and Tracing. Ann Arbor: University of Michigan Press.
Bennett, Andrew. 2008. “Process-tracing: A Bayesian perspective.” In Oxford Handbook of Political
Methodology, ed. Janet M. Box-Steffensmeier, Henry Brady and David Collier. Oxford: Oxford
University Press: 702-721.
Bennett, Andrew. 2010. “Process Tracing and Causal Inference.” In Rethinking Social Inquiry: Diverse
Tools, Shared Standards second edition, ed. Henry Brady and David Collier. Lanham: Rowman &
Littlefield Publishers: 207-220.
Bennett, Andrew. 2015. “Disciplining Our Conjectures: Systematizing Process Tracing with Bayesian Analysis.” Appendix to Process Tracing: From Metaphor to Analytic Tool, ed. Andrew Bennett and Jeffrey Checkel. Cambridge: Cambridge University Press.
Boas, Taylor. 2016. Presidential Campaigns in Latin America: Electoral Strategies and Success
Contagion. Cambridge: Cambridge University Press.
Boone, Catherine. 2014. Property and Political Order in Africa: Land Rights and the Structure of Politics.
Cambridge: Cambridge University Press.
Büthe, Tim. 2002. “Taking Temporality Seriously: Modeling History and the Use of Narratives as Evidence.” American Political Science Review 96 (3): 481-493.
Collier, David. 2011. “Understanding Process Tracing.” PS: Political Science and Politics 44 (4): 823-830.
Crasnow, Sharon. In press. “Process Tracing in Political Science—What’s the Story?” Studies in History and Philosophy of Science.
Downing, Brian. 1992. The Military Revolution and Political Change. Princeton: Princeton University
Press.
Elman, Colin, and Miriam Fendius Elman. 2003. Progress in International Relations Theory: Appraising
the Field. Cambridge: MIT Press.
Fairfield, Tasha. 2013. “Going Where the Money Is: Strategies for Taxing Economic Elites in Unequal
Democracies.” World Development 47: 42-57.
_____. 2015. Private Wealth and Public Revenue in Latin America: Business Power and Tax Politics.
Cambridge: Cambridge University Press.
Fairfield, Tasha and Andrew E. Charman. 2017. “Explicit Bayesian Analysis for Process Tracing:
Guidelines, Opportunities, and Caveats.” Political Analysis 25 (3): 363-380.
_____. 2019. “A Dialog with the Data: The Bayesian Foundations of Iterative Research.” Perspectives on
Politics 17 (1): _________.
Fairfield, Tasha and Candelaria Garay. 2017. “Redistribution Under the Right in Latin America: Electoral Competition and Organized Actors in Policymaking.” Comparative Political Studies 50 (14): 1871-1906.
Garay, Candelaria. 2016. Social Policy Expansion in Latin America. Cambridge: Cambridge University
Press.
Gerring, John. 2008. “The Mechanismic Worldview: Thinking Inside the Box.” British Journal of Political
Science 38(1) (January): 161-179.