Surprised, anyone? Putting the debate about QCA into context

As is well known, QCA has come under intense scrutiny in recent years and has been subject to criticism (sometimes quite strong). I am not going to review the critical studies on the validity of QCA here, although that would be worthwhile because I am not always convinced that the simulations are set up properly (most of these inquiries rest on some form of simulation). If, for the moment, we take the findings at face value, it is helpful to take a step back and ask how surprised one can and should be by them.

In my view, the critics have hardly provided anything that should come as a surprise. Measurement error threatens the validity of QCA solutions? Well, all empirical research is bedeviled by measurement error, regardless of the method used. The QCA solution you get can depend on which cases you include? This is to be expected, given that the truth table rows that feed into the QCA solution depend on which cases fall into a row and on the row's consistency value (the sketch below illustrates this). The QCA solution is not valid in the presence of overspecification (too many conditions in the analysis)? I would be surprised if the QCA solution were wholly insensitive to the conditions we use.
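To make the case-dependence point more concrete, here is a minimal sketch in Python (not from the original post; the fuzzy membership scores and the 0.8 consistency cut-off are invented for illustration). It uses the standard fuzzy-set measure of sufficiency consistency, the sum over cases of min(X, Y) divided by the sum of X, to show how including or dropping a single case can push a truth table row across the consistency threshold and thereby change which rows feed into the minimization.

```python
# Minimal illustration (hypothetical data): how a truth table row's
# consistency, and hence whether it passes the cut-off, depends on
# which cases are included in the analysis.

def sufficiency_consistency(x, y):
    """Fuzzy-set consistency of 'X is sufficient for Y':
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# Invented fuzzy memberships of five cases in a condition X and the outcome Y.
# The fifth case is deviant: high membership in X, low membership in Y.
x = [0.9, 0.8, 0.7, 0.6, 0.9]
y = [1.0, 0.9, 0.8, 0.7, 0.1]

threshold = 0.8  # a common, but discretionary, consistency cut-off

with_all_cases = sufficiency_consistency(x, y)
without_deviant = sufficiency_consistency(x[:-1], y[:-1])

for label, value in [("all five cases", with_all_cases),
                     ("deviant case dropped", without_deviant)]:
    verdict = "passes" if value >= threshold else "fails"
    print(f"{label}: consistency = {value:.2f} -> row {verdict} the cut-off")
```

With these invented numbers, the row fails the cut-off when all five cases are included (about 0.79) but passes easily (1.0) once the deviant case is dropped – exactly the kind of sensitivity to case selection described above.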

In short, all of the issues that we know to be a problem for empirical research – sampling bias, measurement error, etc. – can also be expected to pose a threat to valid causal inference in QCA. No one could seriously argue anything to the contrary, and I would be surprised if anybody ever claimed that QCA gets the right solution regardless of measurement error, the calibration of conditions, etc. (Readers are invited to draw my attention to such statements, but we should separate claims made about QCA from the method's inherent qualities and not hold a method hostage to incorrect perceptions of it.)

The question is less whether QCA is affected by these problems than how and with what consequences. It is at this point that many inquiries into QCA overstep by making overly strong claims about it. (Here, it would be important to reconstruct in detail how studies of QCA produce their results, because some turn out surprisingly badly for QCA, which is not to say they have to be wrong.) If one finds that QCA is sensitive to some issue such as measurement error, one has only demonstrated what has always been obvious. Dismissing QCA as a method on the basis of such an insight takes the point too far; by the same logic, we should cease doing empirical research altogether because all methods have problems with measurement error. Did knowledge about potential omitted-variable bias hinder the widespread application of regression analysis? It did not, and that is good, because we not only know what the problem is but also what its consequences and remedies are.

The conclusion that QCA is affected by a problem can only be the first step toward developing a better understanding of how the validity of QCA results is threatened and of whether and how QCA can be improved to diminish those adverse effects. This is an important route for future work on QCA because, unfortunately, work that has critically engaged with QCA has so far only taken the first step.


About ingorohlfing

I am Professor for Methods of Comparative Political Research at the Cologne Center for Comparative Politics at the University of Cologne (http://cccp.uni-koeln.de). My research interests are social science methods with an emphasis on case studies, multi-method research, and philosophy of science concerned with causation and causal inference. Substantively, I am working on party competition and parties as organizations.

2 Responses to Surprised, anyone? Putting the debate about QCA into context

  1. dwayne woods says:

    After having read what I thought was a bear hug that QCA couldn't survive, your post has convinced me otherwise. In support, just look at King's 2014 PA piece on robust standard errors to see how much non-QCA work is prone to measurement error and misspecification. And this is from a non-fan of QCA. I always confuse it with the late-night sales T.V. show (he, he).

    • ingorohlfing says:

      I agree that the King/Roberts article is very nice in going back to the basics and reminding researchers to test their model assumptions. More importantly, I am glad I could change your mind. Some people criticize the QCA literature (and researchers) because the consequences of measurement error etc. have rarely been discussed, if at all (curiously, many overlook, at least initially, that Skaaning pointed out in 2011 that QCA results are sensitive to various modeling decisions: Skaaning, Svend-Erik (2011): Assessing the Robustness of Crisp-Set and Fuzzy-Set QCA Results. Sociological Methods & Research 40 (2): 391-408. dx.doi.org/10.1177/0049124111404818). This sometimes leads to the fallacious conclusion that the literature implicitly argues that these issues are not problems for QCA, which is of course not defensible. Not saying that something is a problem for QCA is not the same as saying it is not a problem.
