If this were a blog post about #APSA2014 in general, I would have to write about Friday night’s fire emergency at the Marriott (i.e., #APSAonfire) as the non-academic event that left a definite imprint (and affected me as one of the many people who had a room at the Marriott). But as uncomfortable as that night was, I will focus instead on the ongoing debate about set theory and Qualitative Comparative Analysis (QCA), the central topics in the Section on Qualitative and Multi-Method Research. Set theory and QCA were discussed at a panel dedicated to QCA, were central to a panel on Designing Social Inquiry, and were a hot topic outside the panels as well.
The criticisms raised vis-à-vis set theory and QCA were not new, but Krogslund/Michel (K&M) and Monroe presented two alternatives that were new to me. K&M proposed random forests, a machine learning algorithm that seems to handle problems such as model misspecification and measurement error quite well. Monroe presented a fully saturated Bayesian logit model, i.e., a Bayesian logit model with all possible interaction terms included. A Bayesian approach is always nice, but the problem with this model seems to be that it is not in line with a set-relational interpretation of causal relationships. I am not entirely confident this criticism is correct because Monroe rushed through many points very quickly, but he seemed to estimate an ordinary logit model, which is not equivalent to crisp-set QCA (even though all his covariates were dichotomous). This problem does not seem to hold for random forests because their results are amenable to a set-relational interpretation, but I have to admit that I need to familiarize myself better with random forests, which were new to me until K&M’s presentation.
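To make the set-relational reading of tree-based methods concrete, here is a toy Python sketch (my own illustration, not K&M’s actual setup): a single classification tree grown on crisp-set data. Each root-to-leaf path is a conjunction of conditions, and the paths ending in a positive outcome together form a disjunction, much like a (non-minimized) QCA solution formula. A real random forest additionally bootstraps cases, samples features at each split, and averages over many such trees.

```python
from itertools import product

def grow(rows, features):
    """Grow a classification tree on binary data.

    rows: list of (assignment dict, outcome) with 0/1 values.
    Splits on features in the given (fixed) order -- a toy choice; real
    trees pick splits by impurity, and forests add bootstrapping and
    random feature subsets on top.
    """
    if not rows:                 # empty branch: assume outcome absent
        return 0
    outcomes = {y for _, y in rows}
    if len(outcomes) == 1 or not features:
        return max(outcomes)     # pure leaf (or crude fallback)
    f, rest = features[0], features[1:]
    return {f: {v: grow([r for r in rows if r[0][f] == v], rest)
                for v in (0, 1)}}

def positive_paths(tree, path=()):
    """Yield each root-to-leaf conjunction whose leaf outcome is 1."""
    if tree == 1:
        yield path
    elif isinstance(tree, dict):
        (f, branches), = tree.items()
        for v, sub in branches.items():
            yield from positive_paths(sub, path + ((f, v),))

# Toy crisp-set data: outcome Y is present iff (A AND B) OR C.
rows = [({'A': a, 'B': b, 'C': c}, int((a and b) or c))
        for a, b, c in product((0, 1), repeat=3)]

tree = grow(rows, ['A', 'B', 'C'])
for conj in positive_paths(tree):
    # one of the printed conjunctions is: A=1 AND B=1
    print(' AND '.join(f'{f}={v}' for f, v in conj))
```

The disjunction of the printed paths covers the same cases as the data-generating formula, though it is not minimal — which is exactly where a minimization algorithm (such as Quine-McCluskey) would come in.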
In any case, the more important point is that Monroe and others criticized QCA as a whole when they were really only talking about potential problems of the Quine-McCluskey algorithm for processing truth tables. Carsten Q. Schneider and I pointed out that QCA is more than the Quine-McCluskey algorithm; the latter is not even a necessary element of the former. (See my earlier post about what QCA is.) What is a necessary component of QCA is the use of some algorithm for processing the data. If random forests or another approach do a better job than Quine-McCluskey, I would be happy to use that algorithm instead. Of course, I do not speak for the QCA community, but I do not see any reason why one should use only the Quine-McCluskey algorithm, and I cannot remember a publication on QCA that explicitly argues for this algorithm. (Baumgartner explicitly argues against it when developing Coincidence Analysis, but he also conflates QCA with the Quine-McCluskey algorithm.)
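For readers unfamiliar with what “the Quine-McCluskey algorithm” refers to here, below is a minimal Python sketch (my own toy code, not any QCA package) of its core merging step: implicants that differ in exactly one literal are repeatedly combined, and whatever cannot be merged further is a prime implicant. A full minimization would additionally select essential prime implicants via a coverage chart, and real QCA software also handles logical remainders (don’t-care rows).

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants differing in exactly one literal, else None.

    Implicants are strings over {'0', '1', '-'}, where '-' means the
    variable has been eliminated.
    """
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, n_vars):
    """Quine-McCluskey merging step: repeatedly combine implicants.

    minterms: integers encoding the truth-table rows with a positive
    outcome. Returns the set of prime implicants (as strings).
    """
    current = {format(m, f'0{n_vars}b') for m in minterms}
    primes = set()
    while current:
        merged, used = set(), set()
        for a, b in combinations(sorted(current), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= current - used   # anything unmerged is prime
        current = merged
    return primes

# Minterms 1, 2, 3 over two variables (A, B): the rows where A OR B holds.
print(prime_implicants([1, 2, 3], n_vars=2))  # {'1-', '-1'}, i.e. A OR B
```

The point of the sketch is simply that this minimization procedure is one interchangeable component, not QCA itself.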
The critics of QCA were surprised when Carsten Schneider and I explained that one can detach criticisms of QCA from criticisms of a specific algorithm, because QCA and the Quine-McCluskey algorithm seemed to form a symbiosis for them. However, the critics seemed to follow our line of reasoning, which would be good news because it would mean there is more common ground in the debate than many perceived to exist. Since the debate about QCA is likely to rage on, the future will tell whether this impression is correct and whether the common ground will be explored.