At the end of last week, a two-day conference, Qualitative Comparative Analysis – Social Science Applications and Methodological Challenges, took place in Tilburg, the Netherlands. Needless to say, the recent and ongoing wave of criticism of QCA was a key topic on the agenda and in discussions among participants. Unfortunately, some central figures in the field of QCA could not make it to Tilburg, including Axel Marx, Charles Ragin, Carsten Schneider and Claudius Wagemann. Moreover, none of QCA's critics participated, which was not surprising because most are not based in Europe and would have had to travel a long distance.
In my perception, the conference confirmed what one has been able to observe over the last two years or so: the QCA community at large is forming sub-communities that differ in their views of what QCA is, how it should be studied and what QCA is good for. The key dimension seems to be the one distinguishing QCA as an approach from QCA as a method (see chap. 1 in this book). QCA as an approach refers to the original understanding of QCA as a case-based method for which close case knowledge is not only an asset, but constitutive. QCA as a method focuses more specifically on the algorithm that is used for processing the data and how it performs in the face of challenges such as limited diversity. QCA as an approach implies QCA as a method, but not the other way around, because one can run an algorithm on data without knowing any of the cases in detail.
Implicitly, the distinction already played out in the symposium on Lucas and Szatrowski's "critical perspective" on QCA, and it centers on the question of whether simulations with hypothetical data are useful for evaluating the performance of an algorithm. In their contributions to the symposium, Ragin and Olsen say "no": when using hypothetical data, one necessarily lacks case knowledge, and they therefore refuse to accept insights derived from simulations. Fiss, Marx and Rihoux, and Vaisey hold a different view, as they run simulations themselves.
This is only one issue, though a salient one at present, and it is important because where one stands on the simulation question determines what one thinks the best response to the criticism is. Those who believe simulations are useless must argue against them on principle, probably reaching an impasse quite quickly because neither side is likely to suddenly adopt the view of the other. QCA researchers who embrace the idea of simulations (like me) have to engage with the existing ones and might devise some themselves, thereby committing to an instrument that other QCA scholars disparage.
It is not necessarily bad that the QCA community is forming sub-communities; it is probably unavoidable. It is a sign that the community has grown as the method spreads across disciplines and generations of social scientists; young researchers with a different academic training than more senior ones enter the field of QCA and introduce new perspectives. Although QCA is often pitted against regression analysis, a somewhat closer look at the quantitative community shows that it, too, hosts many sub-communities holding different opinions on the frequentism-vs-Bayesianism debate, on the sense and procedure of null-hypothesis testing, and so on. Similarly, there are multiple camps in the case-study and process-tracing domain that share a belief in the general value of the method but disagree on many issues, such as case selection and the understanding of what a mechanism is. In this light, the development of circles within the broader QCA domain can be taken as a good sign: QCA has achieved a certain level of maturity and diffused across disciplines and generations, with the normal side effect of the community becoming more diverse. And QCA researchers value diversity, don't they?