One of the recent big and, in my view, underappreciated innovations in the field of Qualitative Comparative Analysis (QCA) is Baumgartner’s formulation of the Coincidence Analysis algorithm (CNA). Baumgartner presents it as an alternative to QCA, which I do not find convincing because I do not see QCA as married to a specific algorithm. I conceive of CNA as an alternative to the Quine-McCluskey (QMC) algorithm that Ragin chose as the default algorithm for QCA in his foundational book in 1987.

Baumgartner describes CNA in formal terms in various publications, so I do not need to go into its details here (the probably most accessible discussion is here). In (very) short, the argument is that a term – a single condition, a conjunction, or a disjunction – is causally related to the outcome only when no proper part of the term (which, as a set of cases, is a superset of it) is also related to the outcome. ABC is causal for Y when none of AB, AC, BC, A, B, or C is tied to the outcome on its own. A, B, and C are then non-redundant conjuncts of ABC, and the absence of any of these conjuncts is associated with the absence of the outcome.
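This redundancy-elimination idea can be sketched in a few lines of toy code (my own illustration with hypothetical helper names, not Baumgartner’s implementation): a conjunction is retained only if it is sufficient for the outcome while none of its proper parts is sufficient on its own.

```python
# Toy sketch of CNA's non-redundancy test (hypothetical helper names).
from itertools import combinations

def is_sufficient(cases, conjuncts, outcome="Y"):
    """True if every case exhibiting all conjuncts also exhibits the outcome."""
    relevant = [c for c in cases if all(c[x] for x in conjuncts)]
    return bool(relevant) and all(c[outcome] for c in relevant)

def is_non_redundant(cases, conjuncts, outcome="Y"):
    """True if the conjunction is sufficient but no proper part of it is."""
    if not is_sufficient(cases, conjuncts, outcome):
        return False
    for size in range(1, len(conjuncts)):
        for part in combinations(conjuncts, size):
            if is_sufficient(cases, part, outcome):
                return False  # a proper part already accounts for the outcome
    return True

# Toy data: Y occurs only when A, B, and C are jointly present.
cases = [
    {"A": 1, "B": 1, "C": 1, "Y": 1},
    {"A": 1, "B": 1, "C": 0, "Y": 0},
    {"A": 1, "B": 0, "C": 1, "Y": 0},
    {"A": 0, "B": 1, "C": 1, "Y": 0},
]
print(is_non_redundant(cases, ("A", "B", "C")))  # True: A, B, C are all needed
```

In this toy data, ABC passes the test because none of AB, AC, BC, A, B, or C is sufficient for Y by itself.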

In addition to the nature of the algorithm, CNA differs from QMC in two important ways. First, it dispenses with the counterfactuals that are essential for deriving the intermediate and parsimonious solutions with QMC. As Baumgartner explains, this allows one to avoid untenable counterfactuals about issues such as female African American presidents of the United States or pregnant men. Second and relatedly, CNA produces only one solution, which is identical with the parsimonious solution and which Baumgartner argues is the only solution that is causally interpretable (an argument with which I disagree, but that is a topic for another blog post).

What is probably less well understood among QCA researchers is that the algorithm is anchored in a regularity theory of causation. This can be inferred from Baumgartner’s discussions of CNA: he anchors the algorithm in a regularity theory drawing on Mackie’s INUS theory of causation, which is, in turn, a regularity theory that further develops Hume’s widely known account of causation as constant conjunction.

I belong to the probably small group of people in the social sciences who do not have fundamental problems with regularity theories, which receive favorable treatment neither in the quantitative literature nor in the qualitative literature (I would say even less in the qualitative literature, which pits mechanisms and process tracing against regularity theories). Since it is always important to understand what theory of causation our methods are built on, it is worth emphasizing that you buy into a regularity theory when you do CNA.

This is not bad *per se*, but one should know it, and I doubt that everyone who has referenced Mackie meant it as a reference to his regularity theory as opposed to merely the idea of INUS causes. In this sense, I disagree with Baumgartner, who, for example, cites Mahoney and Goertz as subscribers to Mackie’s theory because they also favor an “asymmetric approach to causation” (chap. 5 of *A Tale of Two Cultures*) that does not fit with Mackie.

In addition, QCA researchers should note that regularity theories relate types to each other. As Baumgartner notes, they can hardly handle the causal analysis of singular cases (not the same as single cases), which are characterized by specific features related to place and time, for example. This challenges, in my reading, the idea of QCA as a case-based method that also takes singular features of cases into account in its case-based part. For Baumgartner, as a regularity theorist of causation, this is not an issue, but it is relevant for empirical QCA researchers who like the idea of QCA as a case-based method and do not want to conceive of singular cases as instantiations of regularities. This shows that the exciting invention of CNA as an alternative algorithm to QMC has implications that stretch beyond the truth table analysis for which algorithms are devised.

Dear Ingo,

I recently stumbled across this post and, although I am not really conversant in the blogging business, I feel the urge to draft a response. This blog entry suggests to me that you view the relationship between a theory of causation, on the one hand, and a procedure of causal inference, on the other, in somewhat too loose a manner. In particular, you seem to believe that one can develop a procedure of causal inference P and then still reasonably ask the question “In what theory of causation do I want to anchor P?”, or, put differently, that the choice of a procedure of causal inference does not by itself determine the choice of a theory of causation. Let me try to convince you that the relationship between procedure and theory is much tighter than you seem to think.

One fundamental problem any procedure of causal inference in any methodological tradition faces is that causation is not directly measurable or observable. In consequence, causal dependencies must be indirectly inferred via some measurable proxy dependence relation. Traditional regression-analytic methods (RAMs) choose covariational or correlational dependencies for that purpose, Bayes-nets methods draw on probabilistic dependencies and conditional independencies, and configurational comparative methods (CCMs) draw on Boolean dependencies of sufficiency and necessity. What is common to all these proxy dependencies—apart from the fact that, unlike causation, they are visible in empirical data—is that they are of a purely functional nature, meaning they carry NO causal meaning whatsoever. Most covariational, Bayesian or Boolean dependencies have nothing to do with causation. Still, certain distinguished structures of these proxy dependencies are amenable to a causal interpretation, and this is where theories of causation come into play, for they connect these purely functional dependencies to causation. As the business of (reductive) theories of causation is to define causation in terms of non-causal dependence relations, they tell us exactly how to causally interpret the outputs of our preferred methods of causal inference. But this also means that depending on the sort of output we receive from a method, we have to resort to the theory of causation that connects THAT output to causation.

Regression equations output by RAMs are causally interpreted based on a theory that defines causation in terms of covariational dependencies, as e.g. classically proposed by Herbert Simon (1954). Bayesian networks output by Bayes-nets methods are causally interpreted based on a theory that defines causation in terms of conditional probabilistic (in-)dependencies, as e.g. classically proposed by Patrick Suppes (1970). And Boolean functions as output by ALL CCMs, including QCA and CNA, are causally interpreted based on a theory that defines causation in terms of certain structures of Boolean functions, as e.g. classically proposed by John Mackie (1974). The important point here is that the output of a procedure DETERMINES what theory to choose. The choice of a procedure and the choice of a theory of causation are not independent, rather, the former fixes the latter.

I am sure we can both agree that QCA, just like CNA, outputs Boolean sufficient and necessary conditions—and even to a higher degree than CNA does QCA implement standard inference rules from Boolean logic, like distributivity and tautology elimination as algorithmically regulated in Quine-McCluskey optimization (QMC). Hence, Boolean functions constitute the heart of QCA’s output. In order to connect these Boolean functions—which, to repeat, by themselves carry no causal meaning whatsoever—to causation, we need a theory of causation. There is only one theoretical framework that closes the gap between Boolean functions and causation, and that is the REGULARITY theoretic one. It is the business of regularity theories to define causation in terms of (certain structures of) Boolean functions. Hence, contrary to what you claim in your post, QCA is just as anchored in regularity theories as CNA and any other CCM that outputs Boolean functions. You may like regularity theories or you may dislike them, but if you want to analyze causal structures along the lines of QCA you MUST rely on regularity theories, because they constitute THE theoretical background against which Boolean functions are amenable to a causal interpretation.
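The core QMC reduction step invoked here can be illustrated with a toy sketch (my own simplified illustration in Python, not the full algorithm or any QCA package’s code): two sufficient conjunctions that differ in exactly one literal can be merged, eliminating that literal, as in AB + Ab = A.

```python
# Toy sketch of the core Quine-McCluskey merging step (simplified illustration).
def merge_terms(t1, t2):
    """Merge two terms (dicts of factor -> 0/1) that differ in exactly one factor."""
    if set(t1) != set(t2):
        return None  # terms must range over the same factors
    diff = [f for f in t1 if t1[f] != t2[f]]
    if len(diff) != 1:
        return None  # only terms differing in exactly one literal can be merged
    merged = dict(t1)
    del merged[diff[0]]  # the differing factor is logically redundant
    return merged

# AB (A=1, B=1) and Ab (A=1, B=0) reduce to A:
print(merge_terms({"A": 1, "B": 1}, {"A": 1, "B": 0}))  # {'A': 1}
```

Iterating this merging step over all pairs of sufficient conjunctions, and then selecting a minimal cover of prime implicants, is the essence of QMC optimization; the input and output are Boolean functions throughout.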

Of course, you may not agree with Mackie’s version of a regularity theory or with the one I propose in the article you cite in your post. In that case, you may develop your own theory connecting Boolean functions to causation—but that will simply be another variant of the regularity theoretic brand. I know from personal communication with you that you have a preference for counterfactual theories of causation, as classically proposed by David Lewis (1973). Such a theory connects counterfactual conditionals of the sort “had A not occurred, B would not have occurred” to causation. It must be very clearly stated that counterfactual conditionals ARE NOT Boolean functions. While the latter are truth-functional, the former are not. While the latter are cashed out in classical extensional semantics, the former require a possible-world semantics that presupposes intricate similarity measures over possible worlds. Details don’t matter here; what is crucial is merely that a counterfactual theory is not serviceable at all in closing the gap between the output of QCA and causation. To repeat: QCA outputs Boolean functions, that is, if you want to causally interpret the output of QCA you need a theory that defines causation not in covariational, probabilistic or counterfactual terms but in BOOLEAN terms. The only theoretical framework that does that is the regularity theoretic one.

You might object that QCA, at least as long as it relies on QMC, sometimes invites its user to add COUNTERFACTUAL cases as simplifying assumptions. As misguided as you know I take QCA’s counterfactual detour to be, it must be emphasized that the counterfactual addition of simplifying assumptions does not change the fact that QCA outputs necessary and sufficient conditions as defined in Boolean algebra. QCA does not output counterfactual conditionals that could be plugged into a Lewis-style theory of causation. It outputs Boolean functions that are causally interpreted based on some regularity theory or other.

Conversely put, this means that if you insist that what causation REALLY is is best cashed out in counterfactual terms, you must not do causal inference based on QCA or CNA or any other CCM. If you want to do causal inference against the background of a Lewis-style theory of causation you first have to develop a procedure that actually outputs counterfactual dependencies, as no such procedure is currently available. Although I am very skeptical that such a procedure can actually be developed—for it would require data on non-actual possible worlds—I sincerely invite you to take on this task. For as an opponent of a regularity theory of causation you should likewise be an opponent of QCA. Being a friend of QCA and an opponent of the regularity theoretic framework IS NOT A CONSISTENT position.

One final point. A Boolean implicational dependence output by QCA such as “A + B -> C” states the following: whenever A OR B is present, C is present. It is of universally quantified logical form, meaning it makes a claim about ALL cases in the data, not about a particular case. Correspondingly, if a Boolean function of this sort is causally interpreted, what we get is a type-level causal claim of the form “A and B are causally relevant to C on two alternative causal paths”. In other words, the output of QCA does not merely make a claim about single cases; rather, it makes a claim about ALL cases. It does not tell us what the causes of an outcome are in one particular case but what factor levels are causally relevant to the outcome IN GENERAL. Claiming that QCA is a case-based method does not mean, as you suggest, that QCA merely gives us the causes of an outcome in particular cases; rather, it means that QCA derives type-level causal dependencies by aggregating information taken from single cases.
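The universally quantified character of “A + B -> C” can be made concrete with a toy check (my own illustration, not part of any QCA software): the implication must hold in every case in the data, and a single deviant case falsifies the type-level claim.

```python
# Toy check of a universally quantified Boolean implication "A + B -> C".
def implication_holds(cases, antecedents, outcome="C"):
    """True if, in ALL cases, presence of any antecedent implies the outcome."""
    return all(c[outcome] for c in cases if any(c[x] for x in antecedents))

cases = [
    {"A": 1, "B": 0, "C": 1},
    {"A": 0, "B": 1, "C": 1},
    {"A": 0, "B": 0, "C": 0},
]
print(implication_holds(cases, ("A", "B")))  # True: the claim covers all cases

# One deviant case falsifies the type-level claim:
cases.append({"A": 1, "B": 0, "C": 0})
print(implication_holds(cases, ("A", "B")))  # False
```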

In sum, you are absolutely right that choosing CNA “has implications that stretch beyond the truth table analysis for which algorithms are devised”, but you are wrong to think that the same does not hold for QCA, or for any other method of causal inference for that matter. The choice of a method determines what theory of causation must be chosen and whether the method merely tells us about causation in single cases or about causal relevance in general. As it turns out, CNA and QCA determine the very same theory of causation and they imply the very same type-level causal claims.

I appreciate that Michael took the effort of writing a blog post that is longer than my original post and raises important points, each deserving more discussion than I can give it here:

1) You agree with my argument that CNA is based on a regularity theory, which is not surprising because you invented CNA and formulated a corresponding theory of causation.

2) The Quine-McCluskey algorithm (we should stop speaking of “QCA” when talking about the algorithm) certainly can also be anchored in a regularity theory. The problem with QCA as it was developed in the social sciences is that it was proposed as a pragmatic third way synthesizing case-based and quantitative approaches and that it was not systematically linked to theories of causation. One implication is that, as you know, we can generate three types of solutions depending on how we use logical remainders and make counterfactuals. You call it a “detour”, but my (limited) understanding of regularity theories is that they do not have a place for counterfactuals. Doesn’t this mean you should outright reject the idea of counterfactuals and three types of solutions if you anchor Quine-McCluskey in a regularity framework?

3) I know that Lewis-type counterfactuals have their problems, as each theory of causation has its own share of problems (including yours, I guess). We might see this differently as a philosopher and a social scientist, but there are more or less hands-on criteria for formulating counterfactuals that make reasoning about possible worlds easier (for example, in Lebow’s work). Moreover, a reasonable comparison can substitute an actual case for the counterfactual case. You might cry out loud now, but I do not see this as different from statistics and experimental research, which now can be (and is) anchored in Woodward’s interventionist, counterfactual theory. Nobody does type-level counterfactuals by only thinking about what the potential outcome could be; instead, one uses a method in which actual cases assume the values of the potential outcome to estimate the treatment effect. As my thinking is developing, I might not always have put it like this in my writings, but you might know that I do not argue for Lewis-style counterfactuals on the type level. I like the idea of making causal inferences on the case level and of QCA generalizing these claims across cases without doing the actual “causal job”.

4) Among others, I think, Tyler VanderWeele discusses sufficient causes, conjunctions, and equifinality in the context of directed acyclic graphs (DAGs) and the potential outcomes framework. (VanderWeele, Tyler J. and James M. Robins (2007): Directed Acyclic Graphs, Sufficient Causes, and the Properties of Conditioning on a Common Effect. American Journal of Epidemiology 166 (9): 1096-1104.) It is beyond my horizon, but it at least suggests that Boolean terminology does not require a regularity theory. You probably see this differently, and I would be interested to know what mistake VanderWeele makes in discussing sufficiency and equifinality in a DAG context.