Between X and Y: How Process Tracing Contributes to Opening the Black Box of Causality
To cite this article: Christine Trampusch & Bruno Palier (2016) Between X and Y: how process
tracing contributes to opening the black box of causality, New Political Economy, 21:5, 437-454,
DOI: 10.1080/13563467.2015.1134465
1. Introduction
In recent years the process tracing method has become increasingly popular in case studies as well as
in political science and political economy methodological debates (Hall 2003, 2013; Waldner 2012:
66).1 The goal of this special issue is to more fully explore its implications for political economy. To
facilitate this dialogue, this introductory essay begins with a brief overview of the increasingly
diverse and multifaceted literature on process tracing. In the second section, it elaborates on the
implications for political economy research, details the different possible uses of process tracing,
underlines the advantages of and challenges met by this method, and concludes by proposing ten
guidelines for better implementing process tracing in actual research.
2. Overview
Process tracing was originally employed in cognitive and psychological studies on individual
decision-making (for an overview, see: Ford et al. 1989; Falleti, in this symposium). A second
process-oriented strand of literature evolved in organisational studies where Mohr (1982) introduced
the distinction between variance and process research in the analysis of organisational change in
order to clarify the ontological differences between methods that analyse time in terms of variables
(such as time series analysis) and those that investigate the dynamic essence of processes by
investigating sequences, namely case studies (Van de Ven and Poole 2005).2
In 1979 the political scientist Alexander L. George was the first scholar (Bennett and Checkel 2015: 5)
to propose using process tracing in his discipline. He suggested applying ‘the historian’s methodology
of explanation’ to ‘assess whether a statistical correlation between independent variables and the
dependent variable is of causal significance’ (George 1979a: 46). He writes:
[T]he investigator subjects a single case in which that correlation appears to more intensive scrutiny, as the his-
torian would do, in order to establish whether there exists an intervening process, that is, a causal nexus, between
the independent and the dependent variable. (George 1979a: 46)
George (1979b: 113) and George and McKeown (1985: 35; italics added) put forward the ‘“process-
tracing” procedure’ as an ‘attempt to trace the process – the intervening steps – by which beliefs
influence behaviour’ and as a means to make ‘historical arguments about causal processes in
studies of human and organizational decision-making’. In particular, international political
economy and international relations scholars – and later those in comparative political economy –
have followed this advice, applying process tracing to explain how individual and collective
decision-making processes work by uncovering the stimuli of decision-making and the ‘effects of
various institutional arrangements on attention, processing, and behaviour’ (George and McKeown
1985: 35) (see also Falleti’s contribution to this symposium).
Since then, and particularly in the last few years, a large body of further important contributions on
process tracing has been published and no end to this process is in sight (for example, George and
Bennett 2005, Gerring 2007: Ch. 7, Bennett 2008, Hall 2008, 2013, Collier 2010, 2011, Mahoney 2010,
2012, Blatter and Haverland 2012, Mahoney and Goertz 2012: Ch. 8, Rohlfing 2012: Ch. 6, Waldner
2012, Beach and Pedersen 2013, Bennett and Checkel 2015). Among other aspects, the recent literature
primarily discusses variations of the method, best practices, how to use this method to increase the val-
idity of causal inference, and its possible integration into multi-method research (MMR) design. The
broadened focus of this recent literature has also led to a proliferation of conceptions about what
process tracing should be, which has distanced process tracing more and more from what George
(1979b: 113) and George and McKeown (1985: 35) defined as the ‘process-tracing procedure’.
This widened focus of the literature has led to a confused state of affairs, even to ‘methodological
stretching’.3 The main reason is that much of this literature is primarily methodologically oriented and
discusses very abstract notions and philosophical assumptions, including different methodologies of
process tracing such as Bayesian process tracing, set-theoretic process tracing or process tracing with
directed acyclic graphs (Bennett and Checkel 2015: 16).4 Although some of these new contributions
are potentially important, many of them have become increasingly distanced from actual research.
Definitions of process tracing have multiplied rather than simplified over time. The process tracing
method has been ‘stretched’ and applied to nearly every analysis of processes. Looking at the variety
of definitions and types of process tracing, summarised in Tables 1 and 2 (see also Tulia Falleti’s con-
tribution), we would characterise the current situation as one of internal debate, considerable dis-
agreement, and occasional confusion. With this symposium, we aim to clarify the situation. We do
this by showing what the various approaches to process tracing have in common, by demonstrating
the added value and challenges of using process tracing in political science and political economy
research, and by illustrating how and when this method can be usefully applied.
In this article, we map the methodological debate on process tracing and discuss the diverse var-
iants of process tracing in order to highlight what appear to be commonalities disguised by diversity
and disagreements. While – over time – we count 18 different definitions of process tracing (see
Table 1) and no fewer than 18 types of process tracing suggested in the methodological literature
(see Table 2), four main lessons can be drawn from the current debate.
First, today most authors agree that process tracing is about causal and temporal mechanisms: it is
a method for unpacking causality that aims at studying what happens between X and Y, and beyond
(scope conditions). As Bennett and Checkel (2015: 9) put it, ‘process tracing is a key technique for cap-
turing causal mechanisms in action’. Process tracing methods allow for systematic and rigorous quali-
tative analysis that can complement the correlational approach in the analysis of causation. This
definition is very similar to that offered by Alexander L. George (1979a, 1979b), who first introduced
process tracing to political science.
It is important to note that the explanatory aspirations of process tracers differ from those of scho-
lars applying multivariate statistical methods (Hall 2008: 305–6). Multivariate statisticians quantify the
average causal effects that causal factors (independent variables) have on an outcome (dependent
variable) and assume that average effects of these factors can be isolated across cases. Process
tracers do not seek precise estimates of specific causes but rather intend to specify the ‘process
whereby relevant variables have an effect’ within individual cases (Hall 2008: 306). Their goal is to
observe causal process through close-up qualitative analysis within single cases, rather than to stat-
istically estimate their effects across multiple cases. They consider the analysis of causal effects to
include both the investigation of the effects of causes and the study of causal chains, causal mech-
anisms and the causes of effects.5 They are more sensitive to the context in which causal processes
unfold, and are thus interested in knowing about the temporal and spatial scope conditions of mechanisms
and their underpinning theories (Falleti and Lynch 2009: 1152).
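To fix ideas with a simple formalisation that we add for illustration (the notation is ours, not the cited authors’), the quantity targeted by correlational analysis can be written in potential-outcomes terms as the average causal effect of a cause X on an outcome Y across cases,

\[ \text{ATE} = \mathbb{E}\left[\,Y_i(1) - Y_i(0)\,\right], \]

where \(Y_i(1)\) and \(Y_i(0)\) denote the outcomes case \(i\) would display with and without the cause. Process tracing instead asks through which intervening steps X → M1 → M2 → … → Y the outcome was actually produced within a given case, and under which scope conditions this chain operates.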
Second, despite the agreement on analysing causal mechanisms, the literature varies on the spe-
cifics of causal process tracing. We identify at least three different ways of implementing causal
process tracing. One way focuses on the (causal or temporal) sequences of events. It takes ‘time
seriously’ and considers it as part of the causal explanation, and hence relies on detailed tracing of pro-
cesses. It does so by disentangling (temporal) complexities such as causal pathways and feedback
processes (for example, Büthe 2002, Falleti and Mahoney 2015, Falleti in this symposium). A
second way concentrates on identifying and testing hypotheses on causal mechanisms which
open the black box of causal inference (for example, Rohlfing 2012, Beach and Pedersen 2013,
Beach in this symposium). A third take on process tracing focuses on background factors such as
omitted variables, the problem of endogeneity or scope conditions (Kreuzer in this symposium).
In order to map and discuss these different approaches to the analysis of causal mechanisms, we have
invited three distinguished ‘process tracers’ to develop their own approach to how to identify and
study causal mechanisms.
The third lesson is that in the application of process tracing to the analysis of causal mechan-
isms and in its use as a means of causal inference, we come across two ontologies: a deterministic
and a probabilistic conception of causality. The deterministic conception implies that process
tracing analyses the link between X and Y and is interested in what ‘is constant in a mechanism’
(Mayntz 2004: 245). In our symposium, Beach follows this conception as he views process tracing
as a method to test theories about causal mechanisms, defined as a theoretical system. He follows
a set-theoretical conception of causal mechanisms. In contrast, the probabilistic perspective
includes scope conditions as well as the outcome in the mechanism analysis and therefore
does not assume that the same mechanism always produces the same outcome, since mechan-
isms vary in their operation. Mayntz (2004: 244; italics by Mayntz) calls this a ‘generative mechan-
ism’. In their contributions, Tulia Falleti and Marcus Kreuzer allow for probabilism. We return later
to these two ontologies.
Fourth, the literature differs in its approach to theory. Some contributions are more inductive,
aimed at theory building (that is, at uncovering and specifying causal mechanisms) while others
are more deductive, aimed at theory testing (and refining) (that is, at checking with empirical
case(s) analysis whether the theoretically elaborated causal mechanisms are indeed the ones explain-
ing how X and Y are connected) (see also George and Bennett 2005). We view this differentiation
between inductive and deductive approaches as important for two reasons. First, it makes process
tracing the only case study approach within political economy capable of testing as well as develop-
ing theory. Second, it builds an indispensable bridge to quantitative approaches and is therefore
crucial for MMR design, even though, as we will show later, some have reservations about the com-
patibility of process tracing with multi-method approaches to causality. In this symposium, Derek
Beach outlines the strengths and uses of process tracing in more deductively oriented research
while Tulia Falleti advocates for its more inductive use. With her suggestion of ‘Theory Guided
Process Tracing’ (TGPT), Falleti presents her own view of the theory-building use of process tracing
and shows how it helps to uncover causal mechanisms within what she calls ‘intensive’ process tracing,
a variant geared towards theory building.6 Marcus Kreuzer’s version of Bayesian process tracing tries
to integrate theory building and theory testing aspects. It emphasises the importance of ‘reviewing,
evaluating and incorporating existing foreknowledge in the causal inference process’.
Based on the preceding literature overview, we now move to develop four main points that are of
key importance for the development of process tracing in political economy. First, we emphasise that
causal process tracing is indispensable for clarifying causal as well as temporal processes. Second, we
elaborate on the two main uses of process tracing, the more inductive and the more deductive ones.
In this context we underline that currently the Bayesian version of process tracing is the most stan-
dardised variant regarding causal inference statements. It can also integrate the inductive and deduc-
tive goals of process tracing. Third, we discuss the strengths and weaknesses of causal process
tracing in order to clarify when this time-consuming research method is and is not worth pursuing.
Finally, we address the discussion on best practices and process tracing standards (for example,
Beach and Pedersen 2013, Bennett and Checkel 2015, Waldner 2015) and we suggest ten steps for
good process tracing.
Table 1. Continued.
‘Rather than multiple instances of X1→Y (the large-N cross-case style of research), one examines a single instance of
X1→X2→X3→X4→Y. ( … ) In these respects, process tracing is akin to detective work’ (Gerring 2007: 173).
‘A style of analysis used to reconstruct a causal process that has occurred within a single case … Its defining features are that (a)
multiple types of evidence (non comparable observations) are employed for the verification of a single outcome and (b) the causal
process itself is usually quite complex, involving a long causal chain and perhaps multiple switches, feedback loops, and the like’
(Gerring 2007: 216).
‘Process tracing can involve both inductive and deductive study of events and sequences within a case. Inductive examination may
reveal potentially causal processes that the researcher had not theorized a priori. Deductively, theories can suggest which
intervening events should have occurred within a case if the theory is an accurate explanation of the case. Depending on the theory
under investigation, some of the hypothesized steps in the case may be tightly defined necessary conditions, and others may be
defined more loosely as having several substitutable processes that could have taken place at a particular juncture’ (Bennett and
Elman 2007: 183).
‘Process tracing involves looking at evidence within an individual case, or a temporally and spatially bound instance of a specified
phenomenon, to derive and/or test alternative explanations of that case. In other words, process tracing seeks an historical
explanation of an individual case, and this explanation may or may not provide a theoretical explanation relevant to the wider
phenomenon of which the case is an instance’ (Bennett 2008: 4).
‘Process tracing can be used as a method for evaluating hypotheses about the causes of a specific outcome in a particular case. It is
arguably the most important tool of causal inference in qualitative and case study research (George and Bennett 2005, Collier et al.
2010). The tests associated with process tracing can help a researcher establish that: (1) a specific event or process took place, (2) a
different event or process occurred after the initial event or process, and (3) the former was a cause of the latter’ (Mahoney 2012: 2).
Rohlfing (2012: ch. 6) argues that process tracing helps to analyse two different types of processes, realized and anticipated
processes.
‘Process tracing evidently involves processes, mechanisms, and heterogeneous evidence. It will not be easy to subsume these
considerations in a single definition ( … ) I define process tracing as a mode of causal inference based on concatenation, not
covariation. Process tracing uses a longitudinal research design whose data consist of a sequence of events (individual and
collective acts or changes of a state) represented by nonstandardized observations drawn from a single unit of analysis … By
relying on within-case analysis, process tracing privileges internal validity over external validity; in return for this constraint on
generality, process tracing has the potential to generate relatively complete explanations’ (Waldner 2012: 67–8).
‘The essence of process-tracing research is that scholars want to go beyond merely identifying correlations between independent
variables (Xs) and outcomes (Ys)’ (Beach and Pedersen 2013: 1). ‘ … process tracing methods are arguably the only method that
allows us to study causal mechanisms’ (Beach and Pedersen 2013: 1–2).
‘The idea of process tracing is to identify and cut down the “causal chain” that connects independent and dependent variables and
to examine each sequence in detail’ (Maggetti et al. 2012: 59).
‘ … techniques falling under the label of process tracing are particularly well-suited for measuring and testing hypothesized causal
mechanisms’ (Bennett and Checkel 2015: 1–2).
‘ … process tracing may help to validate design and modelling assumptions in natural experiments: through the discovery of what I
have called treatment-assignment CPOs and the testing of model-validation CPOs’ (Dunning 2015: 214).
‘ … process tracing is especially valuable for establishing the features of the events that compose individual sequences (e.g. their
duration, order, and pace) as well as the causal mechanisms that link them together. There is no substitute for process tracing when
analyzing the events that make up the sequences and processes that are studied in comparative-historical research’ (Falleti and
Mahoney 2015: 212).
Source: Own compilation.
The above table shows that while process tracing was originally defined as a method to analyse
decision-making processes, now it is predominantly defined as a method aiming to identify or test
hypotheses on causal mechanisms in order to compensate for weaknesses in correlational analysis.
The method is even sometimes viewed as the ‘only method’ (Beach and Pedersen 2013: 1–2) for
solving the mechanism problem of causation that has vexed generations of social scientists and
that statistical and experimental methods are less able to tackle. As Tulia Falleti makes clear in her
contribution, over time, process tracing has been expanded: from being a method to analyse
decision-making processes at the individual level and the micro-foundations of behaviour, to a
method of scrutinising mechanisms that may also operate beyond the individual level to analyse
macro-level causal mechanisms (similarly Bennett and Checkel 2015: 11).
Process tracers take the term ‘process’ seriously. They analyse ‘causal-process observations’ (Collier
et al. 2004: 277, Mahoney 2010), that is, ‘an insight or piece of data that provides information about
context, process, or mechanism, and that contributes distinctive leverage in causal inference’ (Collier
et al. 2004: 277). From this it also follows that process tracers seek to address some weaknesses of stat-
istical analyses (for example, Bennett and George 1997, Checkel 2005, George and Bennett 2005,
Schimmelfennig 2006, Blatter et al. 2007: 157–70). They compensate for and solve the problems
that correlational studies have with specifying causality, its type and direction. Process tracers find
it impossible to analyse processes without qualitative data; hence, they use primary sources such
as documents and interviews, as well as secondary literature describing the specific outcome.7
Process tracing also helps solve the problems of selectivity, omitted variable bias and spurious causa-
tion (Mahoney 2004: 89).
Researchers using process tracing want to understand what links causal factors, events, sequences
and outcomes together. In our symposium, Derek Beach emphasises that (from a set-theoretical per-
spective) researchers want to identify the ‘cogs and wheels’ of causal mechanisms, while Tulia Falleti
shows that the analysis of ‘temporal sequences’ of events is also important because the order of
events is ‘causally consequential’. Marcus Kreuzer adds to this debate that process tracers ‘emphasize
the importance of opening up black boxes, investigating the functioning of unstated causal mechan-
isms, probing for endogeneity or confounders, and exploring underlying ontological assumptions’.
Today, all process tracers seek to identify or specify causal and temporal mechanisms. This can be
done in two ways: by making causal sequential or temporal sequential arguments through a close
analysis of the sequences and temporal order of events (for example, Falleti and
Mahoney 2015; Falleti in this symposium)8; or by seeking to ‘flesh out the causal process
between X and Y’ by opening the black box of causal inference (see Beach in this symposium).
But what are causal mechanisms? While Derek Beach provides in his contribution a very specific
answer to this question, Mahoney (2001: 579–80) lists no fewer than 24 definitions of causal mechanisms
used in the social science literature, focusing variously on causal chains, causal paths, intervening
variables, and so on. It is far beyond the scope of this introductory note to elaborate on these 24 different
variants of causal mechanisms.9 What matters here is that, beyond the diverse conceptions of causal
mechanisms, process tracers share the desire to rest causal inferences not just on the cross-case
correlation between independent and dependent variables, but also on within-case observation of the
actual causal processes.
In the current debate, we find two ontologies of causal mechanisms. On the one hand, some
assume a deterministic conception of causal mechanisms, while, on the other hand, other scholars
support a probabilistic understanding of causality (see also Bennett and Checkel 2015: 10–11).
Whereas determinism implies that a specific mechanism in operation will always produce a specific
outcome (Mahoney 2001: 580, Falleti and Lynch 2009: 1147), a probabilistic view argues that ‘because
mechanisms interact with the context in which they operate, the outcomes of the process cannot be
determined a priori by knowing the type of mechanism that is at work’ (Falleti and Lynch 2009: 1147).
This difference between viewing mechanisms as univocal links between X and Y or as ‘generative
mechanisms’, as Renate Mayntz (2004: 245) puts it, has, as we explain below, important implications
for how process tracing is conducted.
To simplify again: in order to understand what links X and Y together, purely quantitative analysis,
although indispensable, is not enough, and qualitative research is needed. However, this qualitative
research must be informed by theory, either to know in advance where to look for causal mechanisms,
or to know what causal mechanisms to test empirically. As Hall (2013) suggests, since a causal mech-
anism is a statement on how intervening variables and processes linked to them cause a specific
outcome, to identify a causal mechanism through process tracing, we need a theory about that mech-
anism. In other words, as demonstrated by Kreuzer in his article, the perfect researcher for applying this
method is an ‘all-rounder’ scholar with experience in historical and theoretical work.
The most inductive form of process tracing is used to deliver a historical explanation of a specific
outcome. In this case, process tracing is close (but not identical) to historical explanations. In the
recent literature, further inductive modes of process tracing are called ‘process induction’, ‘analytical
explanation’, ‘process analysis’, ‘causal process tracing’, ‘theory-building process tracing’ or ‘explain-
ing outcome process tracing’ (Table 2). The inductive types start with observations and the historical
record, and then explore whether the evidence allows for the identification of intervening variables,
causal mechanisms or causal chains at work. The objective of these types of process tracing is mainly
to contribute to theory building by generating hypotheses and uncovering specific (new) causal
mechanisms.
The more inductively oriented types of process tracing also include a mode that views time as
playing a key role in the causal explanation, or, in the words of Büthe (2002), that takes ‘temporality
seriously’. This refers to narratives that investigate sequences, endogenise explanatory variables, and
in which ‘[t]ime itself … becomes an element of causal explanation, a factor in the model’ (Büthe
2002: 486; on this see also Falleti 2006: 4). As emphasised by Falleti in this symposium,
the comparative advantage of process tracing vis-à-vis other social research methods lies in its potential to
uncover the causal mechanisms that link the constitutive events of intensive type of [transformative] processes
… [It supports] medium-range theories that take context and complex causality seriously.
However, more inductively oriented process tracers do not analyse processes without theory.
When analysing the sequences that led to particular political decisions, for example, one might
use the power resource approach in order to disentangle the preferences and interests of actors
and the coalitions they form with each other. One might also identify the various sequences
of a public policy process with the help of classical analyses of political decision-making. In other
words, inductive analysis of processes does not merely consist of naïve observations of empirical
events from which theoretical ideas are derived, but rather forms a theoretically informed analysis
(that is, a decomposition) of processes that looks for causal chains between the observed events.
This is why Falleti argues for a ‘TGPT’ of intensive processes, defined as ‘the temporal and causal
analysis of the sequences of events that constitute the process of interest’. Such processes must be
clearly conceptualised, both theoretically and operationally, with reference to previous theories. The
TGPT method assumes that in these temporal sequences of events, the order of events is ‘causally
consequential’. TGPT is most powerful when various sequences are compared, as ‘the method
permits the researcher to identify different patterns of sequences and their related causes and
consequences’.
The more deductive mode of process tracing is called ‘process verification’, ‘systematic process
analysis’, ‘congruence method (or congruence analysis)’,11 ‘theory testing’, or ‘efficient process
tracing’ (Table 2), and ‘mechanism tracing’ in Derek Beach’s words in this symposium. Here,
process tracing starts with theory and assesses the plausibility of empirical expectations drawn
from theory by comparing the case evidence with the predictions of theoretical accounts. The
researcher collects confirming and contradictory evidence, and evaluates to what extent the
record is consistent with theoretical explanations. Deductive process tracers compare the evidence
and historical record with theoretical accounts of specific hypotheses about causal mechanisms.
With these more deductive types of process tracing, it is important (but not always easy) to differ-
entiate between analytical narratives (Bates et al. 1998) and process tracing (on this see also Falleti
2006: 4, Hall 2008: 314). Analytical narratives are used to demonstrate the validity of micro-founda-
tional theoretical reasoning, like the rational choice approach, through an iterative process that
moves back and forth between observations and only one theory in order to refine this theory
and seek parsimony (Bates et al. 1998: 11), and to ‘illustrate how the theory or formal model works
in the real world’ (Falleti 2006: 4). In contrast to those who pursue this rather formal method that pri-
marily aims to refine parsimonious formal modelling, process tracers are more interested in dynamics
and less parsimonious in their theorising. They are investigators interested in analysing change,
mechanisms, and feedback processes.
Far from neglecting causal complexity, process tracers are actually experts in identifying this type
of causality. They seek causal chains and acknowledge the importance of sequencing and timing.
They may also use competing and rival theories in order to contrast them with each other, instead
of focusing on one specific theory. There is a long debate about what distinguishes process
tracers from some historians (for example, George and Bennett 2005, Bennett and Checkel 2015),
which we will not repeat in this article. Let us simply state that in our view, in contrast to historians,
process tracers must always make their theoretical underpinnings transparent and explicit, otherwise
their contribution to theory development and testing remains obscure (grey- or black-boxed, as Derek
Beach would put it).
In this more deductive approach to process tracing, many authors, including some contributing
to our symposium, have elaborated on how process tracing can be used as a test for causal inference.
Mahoney (2012: 2) recently even claimed that process tracing is the ‘most important tool of causal
inference in qualitative and case study research’ and ‘can be used as a method for evaluating hypoth-
eses about the causes of a specific outcome in a particular case’. Van Evera suggested that process
tracing is able to deliver good tests of theory because ‘process predictions are often unique’ (Van
Evera 1997: 65). Through an analysis along the two dimensions of uniqueness and certitude, he dif-
ferentiated between hoop tests, smoking-gun tests, doubly decisive tests and straw-in-the-wind
tests (Van Evera 1997: 31–2). Following Van Evera, the newer literature recommends that process
tracing studies employ the Bayesian inference logic. In this symposium, Marcus Kreuzer provides a
detailed account of the Bayesian logic applied to process tracing.
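To make the underlying logic explicit (an illustration we add here; the formal notation is not taken from the cited authors), Bayesian updating describes how a piece of within-case evidence E changes confidence in a hypothesised mechanism H:

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}. \]

Van Evera’s test types can then be read as statements about the two likelihoods: a hoop test rests on high certitude (P(E | H) close to 1), so failing it strongly disconfirms H; a smoking-gun test rests on high uniqueness (P(E | ¬H) close to 0), so passing it strongly confirms H; a doubly decisive test combines both properties; and a straw-in-the-wind test has neither, shifting confidence only marginally.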
In the context of single case studies, this analogy between process tracing and Bayesian statistics
requires that causality be defined in set-theoretical (deterministic) terms if one wants to test for
causal mechanisms in a very strict deductive sense (for example, Bennett 2008, Collier 2011,
Mahoney 2012, Beach and Pedersen 2013, Beach in this symposium). In our symposium, Beach
demonstrates how employing a deterministic understanding of a causal mechanism, in which
‘each part of a mechanism is conceptualized as an individually necessary element of the whole’
(Beach and Pedersen 2013: 31), helps to conduct theory tests with the method of process tracing
by using the logic of Bayesian updating, because unique and certain predictions on causal processes
are made possible. However, some process tracers who are sceptical of set theory may still find the
Bayesian logic attractive since it allows them to combine theory building and theory testing aspects
when tracing processes (see Kreuzer in this symposium).
Bayesian process tracers argue that the Bayesian logic has a lot in common with process tracing.
Bennett (2008: 10) lists three similarities between Bayesian inference and the logic of process tracing.
The first is that both give equal attention to the test hypothesis and alternative hypotheses. They
contend that the confidence in a test hypothesis is conditional on having given equal empirical atten-
tion to rival hypotheses as well as on asking how similar and different their empirical predictions are.
They point out that confirming a test hypothesis that makes the same empirical predictions as a rival
hypothesis advances our knowledge little because we cannot discriminate between the validity of
those two hypotheses. Process tracing and Bayesian analysis therefore place a great deal of emphasis
on carefully specifying hypotheses and differentiating them from each other in order to produce
what they call strong tests. Second, this close attention to developing strong tests also means that
individual pieces of evidence can become ‘far more discriminating among competing explanations
than others’ (Bennett 2008: 10). Both discuss the probative or diagnostic value of individual pieces
of evidence and therefore rest their causal inferences not exclusively on the frequency of observed
pieces of evidence but also on their ability to discriminate among existing hypotheses. Third, both
logics’ close attention to the strength of their tests also means that they work with affirmative evi-
dence as well as eliminative induction in order to test theories. Due to this analogy, the literature
suggests ‘process tracing tests’ (Mahoney 2012) that process tracers might use to make descriptive
and causal inference.12 The tests are based on the argument that observations are compared with
competing and rival explanations (Bennett 2008: 8). In short, as Kreuzer puts it nicely in this sym-
posium: the Bayesian version of process tracing ‘involves a substantive rather than strictly technical
testing approach to dealing with causal inference problem’. It embeds causal inference both in the
frequency of observed evidence and in the quality of the available foreknowledge; it recognises
that our ability to learn from evidence is highly conditional on how closely we conduct our literature
reviews and specify tests. Test preparation is just as important as testing techniques for making valid
causal inferences.
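A hypothetical numerical example (our own, with invented values) may illustrate this point about the probative value of evidence. Suppose two mutually exclusive rival hypotheses start with equal priors, P(H1) = P(H2) = 0.5. A piece of evidence that is almost as likely under both hypotheses, say P(E | H1) = 0.9 and P(E | H2) = 0.8, barely moves the posterior (P(H1 | E) ≈ 0.53), whereas a ‘smoking-gun’ observation that is far more likely under H1, say 0.6 against 0.05, raises it to roughly 0.92. It is the capacity of a piece of evidence to discriminate between rival hypotheses, not the sheer number of observations, that drives the inference.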
Our reading of the current debate is that two motivations have triggered this convergence
between Bayesian logic and certain types of process tracing. First, supporters of the Bayesian logic in
process tracing claim that it helps to standardise process tracers’ statements on causal inference
(Bennett 2008, Rohlfing 2012, Beach and Pedersen 2013; Maggetti et al. 2012: 60, Bennett and
Checkel 2015: 16). The second motivation is a methodological one: in combination with a determi-
nistic conception of causal mechanism, using the Bayesian logic makes process tracing a suitable
method for analysing causality in MMR without endangering the robustness of causal inference state-
ments.13 MMR faces the challenge of the ‘stability of causal effects’ (Mahoney and Goertz 2012: 106)
because qualitative case studies analyse causality in individual cases while quantitative work analyses
causal inferences in populations, by estimating ‘average effects of an independent variable within
population’ that ‘may or may not apply to particular cases’ (Mahoney and Goertz 2012: 47). This stab-
ility problem can easily be solved with a ‘mechanismic’ understanding of mechanism that defines
mechanism as a theoretical system. The initial conditions as well as the outcome (explanandum)
are excluded and not part of the mechanism. A deterministic perspective means that no random term
and no error term occur, because a deterministic mechanism produces the outcome with 100 per
cent certainty and with no variance (Beach and Pedersen 2013: 27). Slackening determinism by
the premise that ‘randomness and chance appear only because of limitations in theories, models,
measurement and data’ but not because of purely ‘stochastic factors which … randomly produce
outcomes’ (Mahoney 2008: 420, Beach and Pedersen 2013: 27) opens the door to inferring least likely
explanations through process tracing by using the standards of Bayesian statistics. In short: as
long as we conceive of a mechanism as being deterministic in its operation, Bayesian process
tracing is able to generate robust statements of causal inference that can even be added to statistical
analysis in MMR.14
We must emphasise that Bayesian process tracing is pushing standardisation further than any other
(within-case or single-case) study method. This standardisation process is still in its early stages. Some
scholars employ an implicit Bayesian approach (Bennett 2008, Bennett and Checkel 2015),
others use a more formalised and technical approach (Bennett 2015), and still others are trying to
integrate those two versions (Kreuzer and DeFina 2015; Kreuzer in this symposium). We do not
doubt that the Bayesian version of process tracing is important for the further development of
case study methods in general. The contributions by Derek Beach and Marcus Kreuzer illustrate
this point nicely and impressively.
However, we also agree with Mahoney and Goertz (2012: 107) that ‘scholars who supplement their
statistical findings with process tracing do not appear to use the method in the same way as quali-
tative researchers’. To adapt their comment to Bayesian process tracing in MMR, we want to empha-
sise that Bayesian process tracers use the method of process tracing in a different way from scholars
who do not have a deterministic conception of mechanisms. The transfer of Bayesian statistics
and logic into process tracing is a way to eliminate contingency from the explanation. It makes it
impossible to analyse those causal mechanisms whose outcomes are not determined a priori by
the mechanism which operates, but depend on temporal and spatial conditions or even on con-
tingency, which is produced by uncertainty.
Marrying mechanism analysis with determinism makes it possible to use the Bayesian causal infer-
ence logic in process tracing. For scholars who do not share a strict deterministic view of causality,
and who understand mechanism analysis as a way to scrutinise the variability of outcomes when
mechanisms operate (on which see Mayntz 2004: 245, Falleti and Lynch 2009: 1147) or as a means
to analyse the causal and temporal sequences of events (Falleti and Mahoney 2015), the Bayesian
way of causal inference testing through process tracing is blocked. As this strand of process
tracers cannot use Bayesian statistics to improve the quality of their explanations, their statements
on causal inferences and their possibilities to move beyond a statement that ‘things just happen’
(Mahoney 2008: 420; similarly Beach and Pedersen 2013: 27) depend much more (or rather, only)
on theory-grounded case selection and on whether the theories they use are able to handle the
scope conditions as well as the stochastic factors. For this reason, we argue that one of the greatest
challenges for process tracers who have a non-deterministic conception of mechanism and do not
apply the Bayesian version of process tracing is to further develop their standards and methodologi-
cal concepts. They should also be aware that the validity of their causal arguments depends much
more on case selection strategies, and thus also on the selection of crucial cases (least likely or
most likely cases; Eckstein 1975), which make theory testing through case studies possible (on this
see also Rohlfing 2012, Beach and Pedersen 2013).
formalise their approach. Efforts in this direction may help prevent process tracing from becoming a
catchword in practical research, around which researchers rally without specifying clearly what they
actually mean by process tracing.
Third, the combination of process tracing with other methods in MMR is of course an important
(albeit challenging) new area for research (for example, Bennett and Checkel 2015: 272–3). However,
despite all the benefits of this kind of research design, the challenge of further developing process
tracing as a stand-alone method should not be forgotten or neglected. As we have shown, in that
case, the use of process tracing in purely qualitative research may be based on different epistemological
and ontological assumptions.
In our view, the most important lesson regarding the benefits and challenges of process tracing, though, is
that while in the 1970s and 1980s friends of statistics and experimental research viewed case studies
and process tracing as ‘wrong-headed’ (George and McKeown 1985: 22), research in the last few years
has sparked a discussion emphasising that statistics and experimental research without case studies
and process tracing may not be appropriate.
• First, like all other empirical social scientists, process tracers need to be aware of and clarify their
ontology. Following Hall (2003: 374), ontology refers to ‘the fundamental assumptions scholars
make about the nature of the social and political world and especially about the nature of
causal relationships within that world’. For instance, as explained above, it makes a difference
whether we conceive of causal mechanisms as operating deterministically or whether we leave
room for contingency by taking a probabilistic view. Hall (2003: 374) puts it nicely: ‘Ontology is
ultimately crucial to methodology because the appropriateness of a particular set of methods
for a given problem turns on assumptions about the nature of the causal relations they are
meant to discover’. Our ontology thus shapes our decision about which variant of process
tracing to apply.
• Second, depending on the state of the art, the current state of available theories, the availability of
data and the research question, the researcher must determine whether his/her epistemological
interest is inductive or deductive. Is the study’s objective to generate or specify a hypothesis
about a causal effect or a causal mechanism, or does it seek a plausibility probe of a causal
effect or mechanism that has already been theorised? Note, however, that whether more inductive
or more deductive, process tracing in practice is always an iterative process, a back-and-forth
movement between theory and empirical within-case evidence. To use Marcus Kreuzer’s
words during our long and iterative exchanges about process tracing: ‘the testing of causal
links will invariably reveal links that don’t work as advertised. Process tracing thus serves not
only an important diagnostic purpose but also the important remedial goal of improving theory
development’.
• Third, researchers must be transparent about their epistemological assumptions when they are mar-
rying process tracing with other methods. Although applying process tracing in MMR (thus jointly
with Mill’s methods, statistics and experiments) is one of the most promising routes for the further
development of this method, researchers have to be aware of the epistemological and ontological
assumptions on which such combinations are founded.
• Fourth, whichever variant is chosen, good process tracers need good theory, so that they
know where to focus their analytical attention, which actors to study and interview and which his-
torical sequences of events to analyse. If the study is a plausibility probe, then competing theories
are useful. That aside, the quality of a process tracing study is strengthened if the theories used to
explore or explain a case have proven their value in previous research.
• Fifth, the theoretical leverage of a process tracing study strongly depends on theory-oriented case
selection. Here, we recommend applying the case selection strategies suggested by Eckstein
(1975), Lijphart (1971) and Gerring (2007); the most important of them for process tracers are
the disciplined-configurative study, crucial cases, extreme cases, most likely cases, least likely cases,
typical cases and deviant cases.
• Sixth, sound causal process observations need time, contextual evidence (Gerring 2007: 172) and a
‘good knowledge of individual cases’ (Mahoney 2010: 131). Various data sources should be used,
and interviews conducted. Within this process of data collection, the historical integrity and speci-
ficity of the case should be seriously considered. Here, we recommend respecting guidelines on
how to minimise the effects of selectivity and bias in the use of primary and secondary sources
(Thies 2002, Kreuzer 2010). In addition, measurement validity has to be improved through the
use of context-specific domains of observation, context-specific indicators and adjusted
common indicators, all of which require thoughtful data collection (Adcock and Collier 2001).
• Seventh, the use of counterfactual analysis and mental experiments may be another important safe-
guard for good process tracing (similarly George 1997, with reference to the congruence method).
This means that a ‘series of events in sequence’ (George 1997: 7) or parts of a causal chain may be
thought of as a mental and cognitive construction in cases where there are gaps in the empirical
record. However, George (1997: 7) also rightly acknowledges that this is a difficult and ambitious enterprise.
• Eighth, we should carefully investigate when the processes we analyse start and when
they end. Here, theoretical frameworks as well as sound causal process observations are helpful
(see also Falleti and Lynch 2009). Tulia Falleti’s differentiation between extensive and intensive
processes in this symposium is very useful to that end.
• Ninth, good process tracers follow the current ‘transparency revolution’ across quantitative and
qualitative research, which encompasses ‘data transparency’, ‘analytic transparency’, as well as
‘production transparency’ (Moravcsik 2014: 665–6). While providing an (online) appendix with
quantitative data has become the norm, adding an appendix with qualitative data such as inter-
views could be encouraged.
• Finally, researchers should always remember that, as a method of causal
interpretation of one case, process tracing does not provide a causal explanation reached by statistics.
However, the more closely the above steps are followed, the better the causal interpretation will be.
7. Conclusion
Recent years have seen growing interest in the method of process tracing. Various books and articles
have been published discussing not only the ontological and epistemological foundations of this
method, but also criteria to standardise and formalise it. This literature also suggests many different
definitions of what process tracing actually is. We have counted at least 18 different variants of
process tracing. Against the background that there are only three different logics of regression analysis
(OLS, Logit and Probit),17 this high number of variants of the process tracing method is quite
remarkable and illustrates the ‘buzzword problem’. While introducing Bayesian logic into process
tracing is one way to standardise this method, we have also underlined that applying the Bayesian logic in
process tracing case studies requires a strict set-theoretical or deterministic view of causality;
otherwise, inference tests following the Bayesian logic would be impossible to implement in a case
study. Hence one cannot conclude that the Bayesian way is the only or the right way, since the deter-
ministic approach to causality is quite restrictive, demanding and even questionable.
In order to simplify the review of debates over process tracing, we have identified two main types
of process tracing: those using it in a more inductive way (but in a theory-guided manner), and those
using it in a deductive way (while always being ready to refine the tested theories).
We have underlined that process tracing is certainly the best method to study causal mechanisms,
the more inductive way being used for uncovering causal mechanisms and the more deductive
manner for testing theoretically posited causal mechanisms. Because they are able to follow
actors’ positions and actions, process tracing studies have helped to unravel the mechanisms of
change (in preferences as well as in institutions), while bringing time and context back
into the explanations. This is why process tracing matters so much for the further development of
theory building and testing in social sciences.
Notes
1. In accordance with Hall (2003: 373), we ‘use the term “methodology” to refer to the means scholars employ to
increase confidence that the inferences they make about the social and political world are valid’.
2. Due to word restrictions our symposium cannot include this debate on process and variance research in organis-
ation studies, but Van de Ven and Poole (2005) give a good overview.
3. We thank Yves Surel for this term.
4. While Bennett and Checkel (2015: 16) distinguish set theoretical from Bayesian process tracing, we will argue that
there are set-theory based and non-set theory based Bayesian process tracers.
5. Concerning explanatory strategies, Mahoney and Goertz (2006) distinguish between the causes-of-effects strategy
followed in qualitative studies and the effects-of-causes strategy applied in quantitative research. We agree with this
differentiation; however, we also acknowledge that under certain conditions within-case analysis may also inves-
tigate effects-of-causes, albeit in a slightly different manner than quantitative studies do. Process tracers are not able to
estimate and quantify any causal effect. What some process tracers do is assess the ‘reality fit’ of causal hypoth-
eses; see Beach and Kreuzer in this symposium.
6. Falleti adds to the analysis of intensive processes the variant of ‘extensive’ process tracing, which may be used for
testing theory. However, Falleti also argues that ‘the greatest potential and comparative advantage of process
tracing as a social science method lies in the analysis of intensive processes … ’. While extensive processes
include the cause and the outcome and mobilise the notion of intervening variables (which may also be analysed
by statistical methods), intensive processes only begin after a cause and end before the outcome (see Falleti in
this symposium).
7. They may also use quantitative data as long as they help to trace processes. On the different types of evidence
(among them are also statistical patterns) used in process tracing see Beach in this symposium.
8. While in causal sequential arguments the ‘events in a sequence are understood to be causally connected to one
another’, ‘strictly temporal sequential arguments’ do not assume that events are causally connected but that the
‘temporality of these events (their duration, order, pace, or timing) is causally consequential for the outcome’
(Falleti and Mahoney 2015: 216, 218).
9. Bennett and Checkel (2015: 12) define causal mechanisms as:
ultimately unobservable physical, social, or psychological processes through which agents with causal
capacities operate, but only in specific contexts or conditions, to transfer energy, information, or matter to
other entities. In doing so, the causal agent changes the affected entities’ characteristics, capacities, or pro-
pensities in ways that persist until subsequent causal mechanisms act upon them.
10. In agreement with Eckstein (1975[1992]: 130), by theory we mean in
a minimal sense that it must state a presumed regularity in observations that is susceptible to reliability and
validity tests, permits the deduction of some unknowns, and is parsimonious enough to prevent the deduc-
tion of so many that virtually any occurrence can be held to bear it out.
11. We have to note that George and Bennett (2005), Blatter and Haverland (2012) and Beach and Pedersen (2013)
strongly distinguish between process tracing and the congruence method while Gerring (2007: 173, fn. 2) notes
that the congruence method is synonymous with process tracing.
12. Again, these are the tests promoted by Van Evera: the hoop test, the smoking-gun test, the straw-in-the-wind test and the
doubly decisive test (e.g. Collier 2011).
13. With regard to process tracing in multi-method research, see also Dunning (2015) who demonstrates the benefits of
process tracing in experimental studies.
14. Renate Mayntz (2004: 245) rightly claims that as long as ‘we look at the whole process and recognize that “inputs”
and “outputs” can vary, making outcomes contingent on variable initial conditions’, it does not matter whether we
have a generic or mechanismic understanding of mechanisms. But as scholars of the Bayesian version of process
tracing view their deterministic handling of process tracing precisely as an alternative to a conception of causality
based on probabilism, the definition of causal mechanism does matter decisively.
15. We thank Marcus Kreuzer for this insight.
16. On guidelines for best practices, see also Beach and Pedersen (2013); Bennett and Checkel (2015); Waldner (2015).
17. We thank Dennis C. Spies for this number. The structure of the data (e.g. time series, multi-level) gives rise to further
subtypes, which, however, are all offspring of these three types.
Acknowledgements
Our very special thanks go to Marcus Kreuzer, who helped us to distil the methodological identities of Bayesian process
tracers. However, any criticism of our lexical grouping should be addressed to us and not to Marcus. In addition, we thank Tulia Falleti
and Derek Beach as well as the other participants in our panels on process tracing at the CES (Council for European
Studies) Conference in Amsterdam in June 2013 for their useful remarks. We are also grateful for the comments by
Marek Naczyk on an earlier version of this paper. Last but not least, we gained a lot of insight during a workshop on
process tracing held in Paris in May 2014 and thank all participants in this workshop.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work has been partly supported by a public grant overseen by the French National Research Agency (ANR) as part of
the ‘Investissements d’Avenir’ program LIEPP (ANR-11-LABX-0091, ANR-11-IDEX-0005-02).
Notes on contributors
Christine Trampusch is professor of international comparative political economy and economic sociology at the Cologne
Center for Comparative Politics (CCCP) at the University of Cologne. Her professorship is also Liaison Chair to the Max
Planck Institute for the Study of Societies (Brückenprofessur). Her current research projects cover the political economy
of skill formation regimes, of welfare states, of industrial relations as well as of financial market regulation and of
public finance. Her research has been published in various international and national peer-reviewed journals. Her co-
edited volumes have been published by Oxford University Press and Routledge.
Bruno Palier is CNRS Research Director at Sciences Po, Centre d’études européennes. He is studying welfare reforms in
Europe. He is co-director of LIEPP (Laboratory for Interdisciplinary Evaluation of Public Policies). He has published numer-
ous articles on welfare reforms in France and in Europe. In 2012, he co-edited The Age of Dualization: The Changing Face of
Inequality in Deindustrializing Societies (with Emmenegger, Patrick, Häusermann, Silja, and Seeleib-Kaiser, Martin), Oxford
University Press, and Towards a social investment welfare state? Ideas, Policies and Challenges (with Morel, Nathalie and
Palme, Joakim), Bristol: Policy Press. In 2010, he edited A long Good Bye to Bismarck? The Politics of Welfare Reforms in
Continental Europe, Amsterdam: Amsterdam University Press.
References
Adcock, R. and Collier, D. (2001), ‘Measurement Validity: A Shared Standard for Qualitative and Quantitative Research’,
American Political Science Review, 95 (3), pp. 529–54.
Bates, R.H., et al. (eds.) (1998), Analytic Narratives (Princeton, NJ: Princeton University Press).
Beach, D. and Pedersen, R.B. (2013), Process-Tracing Methods: Foundations and Guidelines (Ann Arbor: University of
Michigan Press).
Bennett, A. (2008), ‘Process Tracing: A Bayesian Perspective’, in J. Box-Steffensmeier, H.E. Brady, and D. Collier (eds.), The
Oxford Handbook of Political Methodology (Oxford: Oxford University Press), pp. 217–70.
Bennett, A. (2015), ‘Appendix: Disciplining Our Conjectures: Systematizing Process Tracing with Bayesian Analysis’, in A.
Bennett and J.T. Checkel (eds.), Process Tracing in the Social Sciences: From Metaphor to Analytic Tool (Cambridge:
Cambridge University Press), pp. 276–98.
Bennett, A. and Checkel, J.T. (2015), ‘Process Tracing: From Philosophical Roots to Best Practices’, in A. Bennett and J.T.
Checkel (eds.), Process Tracing in the Social Sciences: From Metaphor to Analytic Tool (Cambridge: Cambridge University
Press), pp. 3–37.
Bennett, A. and Elman, C. (2007), ‘Case Study Methods in the International Relations Subfield’, Comparative Political
Studies, 40 (2), pp. 170–95.
Bennett, A. and George, A.L. (1997), Process Tracing in Case Study Research. MacArthur Foundation Workshop on Case
Study Methods, October 17–19, 1997. Available from: http://www.ciaonet.org/wps/bea03/index.html.
Blatter, J. and Blume, T. (2008), ‘In Search of Co-variance, Causal Mechanisms or Congruence? Towards a Plural
Understanding of Case Studies’, Swiss Political Science Review, 14 (2), pp. 315–56.
Blatter, J. and Haverland, M. (2012), Designing Case Studies: Explanatory Approaches in Small-N Research (Basingstoke:
Palgrave Macmillan).
Blatter, J., Janning, F., and Wagemann, C. (2007), Qualitative Politikanalyse. Eine Einführung in Forschungsansätze und
Methoden (Wiesbaden: VS Verlag für Sozialwissenschaften).
Büthe, T. (2002), ‘Taking Temporality Seriously: Modeling History and the Use of Narratives as Evidence’, American Political
Science Review, 96 (3), pp. 481–94.
Checkel, J.T. (2005), It’s the Process Stupid! Process Tracing in the Study of European and International Politics. Working Paper
No. 26, October 2005. Available from: http://ideas.repec.org/p/erp/arenax/p0206.html.
Collier, D. (2010), Process Tracing: Introduction and Exercises. Available from: http://polisci.berkeley.edu/people/faculty/
CollierD/Proc%20Trac%20-%20Text%20and%20Story%20-%20Sept%2024.pdf.
Collier, D. (2011), ‘Understanding Process Tracing’, PS: Political Science & Politics, 44 (4), pp. 823–30.
Collier, D., Brady, H.E., and Seawright, J. (2004), ‘Sources of Leverage in Causal Inference: Toward an Alternative View of
Methodology’, in H.E. Brady and D. Collier (eds.), Rethinking Social Inquiry: Diverse Tools, Shared Standards (Lanham, MD:
Rowman & Littlefield Publishers), pp. 229–66.
Collier, D., Brady, H.E., and Seawright, J. (2010), ‘Sources of Leverage in Causal Inference: Toward an Alternative View of
Methodology’, in H.E. Brady and D. Collier (eds.), Rethinking Social Inquiry: Diverse Tools, Shared Standards, 2nd ed. (Lanham,
MD: Rowman & Littlefield), pp. 161–99.
Dunning, T. (2015), ‘Improving Process Tracing. The Case of Multi-Method Research’, in A. Bennett and J.T. Checkel (eds.),
Process Tracing in the Social Sciences: From Metaphor to Analytic Tool (Cambridge: Cambridge University Press),
pp. 211–36.
Eckstein, H. (1975), ‘Case Study and Theory in Political Science’, in F.I. Greenstein and N.W. Polsby (eds.), Handbook of
Political Science (Boston, MA: Addison-Wesley), pp. 79–138. The 1992 reprint is quoted.
Falleti, T.G. (2006), ‘Theory-Guided Process-Tracing in Comparative Politics: Something Old, Something New’, Newsletter
of the Organized Section in Comparative Politics of the American Political Science Association, 17 (1).
Falleti, T.G. and Lynch, J. (2009), ‘Context and Causal Mechanism in Political Analysis’, Comparative Political Studies, 42 (9),
pp. 1143–66.
Falleti, T.G. and Mahoney, J. (2015), ‘The Comparative Sequential Method’, in J. Mahoney and K. Thelen (eds.), Advances in
Comparative Historical Analysis: Resilience, Diversity, and Change (Cambridge: Cambridge University Press), pp. 211–39.
Ford, J.K., et al. (1989), ‘Process Tracing Methods: Contributions, Problems, and Neglected Research Questions’,
Organizational Behavior and Human Decision Processes, 43 (1), pp. 75–117.
George, A.L. (1979a), ‘Case Studies and Theory Development: The Method of Structured, Focused Comparison’, in P.G.
Lauren (ed.), Diplomacy: New Approaches in History, Theory, and Policy (New York: Free Press), pp. 43–68.
George, A.L. (1979b), ‘The Causal Nexus between Cognitive Beliefs and Decision-Making Behavior: The “Operational
Code” Belief System’, in L.S. Falkowski (ed.), Psychological Models in International Politics (Boulder, CO: Westview
Press), pp. 95–124.
George, A.L. (1997), The Role of the Congruence Method for Case Study Research. Available from: http://www.ciaonet.org/
wps/gea01/index.html.
George, A.L. and Bennett, A. (2005), Case Studies and Theory Development in the Social Sciences (Cambridge, MA: MIT Press).
George, A.L. and McKeown, T.J. (1985), ‘Case Studies and Theories of Organizational Decision Making’, in Advances in
Information Processing in Organizations, Vol. 2 (Santa Barbara, CA: JAI Press), pp. 21–58.
Gerring, J. (2007), Case Study Research: Principles and Practices (Cambridge: Cambridge University Press).
Hall, P.A. (2003), ‘Aligning Ontology and Methodology in Comparative Research’, in J. Mahoney and D. Rueschemeyer
(eds.), Comparative Historical Analysis in the Social Sciences (Cambridge: Cambridge University Press), pp. 373–404.
Hall, P.A. (2008), ‘Systematic Process Analysis: What it is and how to use it’, European Political Science, 7 (3), pp. 304–17.
Hall, P.A. (2013), ‘Tracing the Progress of Process Tracing’, European Political Science, 12 (1), pp. 20–30.
Haverland, M. and Blatter, J. (2012), Two or Three Approaches to Explanatory Case Study Research? APSA 2012 Annual
Meeting. Unpublished Paper.
King, G., Keohane, R.O. and Verba, S. (1994), Designing Social Inquiry: Scientific Inference in Qualitative Research (Princeton,
NJ: Princeton University Press).
Kreuzer, M. (2010), ‘Historical Knowledge and Quantitative Analysis: The Case of the Origins of Proportional
Representation’, American Political Science Review, 104 (2), pp. 369–92.
Kreuzer, M. and DeFina, R. (2015), Look Before You Leap: Bayesian Process Tracing, Test Strength, and Causal Inference.
Paper presented at the Institute for Qualitative and Multi-Method Research, Summer Institute, Syracuse University,
June 20–21.
Lijphart, A. (1971), ‘Comparative Politics and the Comparative Method’, American Political Science Review, 65 (3), pp.
682–93.
Little, D. (1995), ‘Causal Explanation in the Social Sciences’, The Southern Journal of Philosophy, XXXIV (Suppl.), pp. 31–56.
Maggetti, M., Radaelli, C., and Gilardi, F. (2012), Designing Research in the Social Sciences (London: Sage).
Mahoney, J. (2001), ‘Beyond Correlational Analysis: Recent Innovations in Theory and Method’, Sociological Forum, 16 (3),
pp. 575–93.
Mahoney, J. (2004), ‘Comparative-historical Methodology’, Annual Review of Sociology, 30, pp. 81–101.
Mahoney, J. (2008), ‘Toward a Unified Theory of Causality’, Comparative Political Studies, 41 (4–5), pp. 412–36.
Mahoney, J. (2010), ‘After KKV: The New Methodology of Qualitative Research’, World Politics, 62 (1), pp. 120–47.
Mahoney, J. (2012), ‘The Logic of Process Tracing Tests in Social Sciences’, Sociological Methods & Research, 41 (4), pp.
570–97.
Mahoney, J. and Goertz, G. (2006), ‘A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research’, Political
Analysis, 14 (3), pp. 227–49.
Mahoney, J. and Goertz, G. (2012), A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences
(Princeton, NJ: Princeton University Press).
Mayntz, R. (2004), ‘Mechanisms in the Analysis of Social Macro-Phenomena’, Philosophy of the Social Sciences, 34 (2), pp.
237–59.
Mohr, L.B. (1982), Explaining Organizational Behavior (San Francisco, CA: Jossey-Bass).
Moravcsik, A. (2014), ‘Trust, but Verify: The Transparency Revolution and Qualitative International Relations’, Security
Studies, 23 (4), pp. 663–88.
Palier, B. (2005), ‘Ambiguous Agreement, Cumulative Change: French Social Policy in the 1990s’, in W. Streeck and K.
Thelen (eds.), Beyond Continuity: Institutional Change in Advanced Political Economies (Oxford: Oxford University
Press), pp. 127–44.
Palier, B. (2010), A Long Good Bye to Bismarck? The Politics of Welfare Reforms in Continental Europe (Amsterdam and
Chicago: Amsterdam University Press and University of Chicago Press).
Rohlfing, I. (2012), Case Studies and Causal Inference: An Integrative Framework (Basingstoke: Palgrave Macmillan).
Schimmelfennig, F. (2006), ‘Prozessanalyse’, in J. Behnke, T. Gschwend, D. Schindler, and K.-U. Schnapp (eds.), Methoden
der Politikwissenschaft (Baden-Baden: Nomos), pp. 263–71.
Schimmelfennig, F. (2015), ‘Efficient Process Tracing. Analyzing the Causal Mechanisms of European Integration’, in A.
Bennett and J.T. Checkel (eds.), Process Tracing in the Social Sciences: From Metaphor to Analytic Tool (Cambridge:
Cambridge University Press), pp. 98–125.
Steinlin, S. and Trampusch, C. (2012), ‘Institutional Shrinkage: The Deviant Case of Swiss Banking Secrecy’, Regulation &
Governance, 6 (2), pp. 242–59.
Thies, C.G. (2002), ‘A Pragmatic Guide to Qualitative Historical Analysis in the Study of International Relations’,
International Studies Perspectives, 3 (4), pp. 351–72.
Trampusch, C. (2014), ‘Why Preferences and Institutions Change: A Systematic Process Analysis of Credit Rating in
Germany’, European Journal of Political Research, 53 (2), pp. 328–44.
Van de Ven, A.H. and Poole, M.S. (2005), ‘Alternative Approaches for Studying Organizational Change’, Organization
Studies, 26 (9), pp. 1377–404.
Van Evera, S. (1997), Guide to Methods for Students of Political Science (Ithaca, NY: Cornell University Press).
Waldner, D. (2012), ‘Process Tracing and Causal Mechanisms’, in H. Kincaid (ed.), The Oxford Handbook of Philosophy of
Social Science (Oxford: Oxford University Press), pp. 65–84.
Waldner, D. (2015), ‘What Makes Process-tracing Good? Causal Mechanisms, Causal Inference, and the Completeness
Standard in Comparative Politics’, in A. Bennett and J.T. Checkel (eds.), Process Tracing in the Social Sciences: From
Metaphor to Analytic Tool (Cambridge: Cambridge University Press), pp. 126–52.