Qualitative Comparative Analysis (QCA) is a method used across disciplines in the social sciences and beyond, e.g., in business economics and management. However, QCA users still have to justify their choice of method more frequently than users of other methods. Whatever the reason, reflecting on the choice of a method is actually not a bad thing, because the method should be suitable for answering our research question. (Although it can be annoying to explain the basics of QCA over and over again; nobody expects you to explain what OLS stands for and how an OLS regression works.)
Among the different justifications for running a QCA, two are popular but inappropriate: the medium-N argument and the look-the-results-are-complex argument. Both arguments touch on the basics and the soul of QCA. Originally, QCA was portrayed as a medium-N method that allowed one to link case knowledge with the processing of more cases than can usually be handled in case study research. It is true that QCA can handle a medium number of cases. (I leave aside here that nobody knows where a medium N starts and ends in terms of the number of cases.) However, one should not choose QCA because it can process a medium number of cases, say eight, that would be insufficient for a conventional statistical analysis (even bootstrapping reaches its limits here). If the application of a method is justified by the number of cases, the causal inferences that one can make become dependent on how many cases happen to be available. Put otherwise, saying you use QCA because the N is too small for a statistical analysis implies you would rely on statistics if you had more cases.
The causal inferences derived from a QCA and from a statistical analysis are not the same, not even close (at least, that’s my view on this issue). Because of this discrepancy, one should carefully consider which method can deliver the answers one needs, given the research question and hypotheses at hand. For example, when a hypothesis reads “X makes Y more likely”, then QCA would be a bad choice regardless of the number of cases, because this is not a set-relational statement. Conversely, when your hypothesis is set-relational, such as “If X, then Y”, or when you hold the ontological belief that causal relations are set-relational, then go with a QCA and disregard how many cases you have.
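To make the set-relational reading concrete, here is a minimal sketch of how a hypothesis like “If X, then Y” is typically assessed in QCA: via the consistency of X as a sufficient condition for Y (the share of membership in X that is also in Y). The membership scores below are entirely hypothetical, and the function name is mine; this is an illustration of the measure, not a full QCA.

```python
def sufficiency_consistency(x, y):
    """Fuzzy-set consistency of X as sufficient for Y:
    sum of min(x_i, y_i) divided by sum of x_i."""
    assert len(x) == len(y)
    numerator = sum(min(xi, yi) for xi, yi in zip(x, y))
    return numerator / sum(x)

# Eight hypothetical cases with fuzzy-set membership scores in X and Y
x = [0.9, 0.8, 0.7, 0.6, 0.2, 0.1, 0.9, 0.4]
y = [1.0, 0.9, 0.8, 0.7, 0.3, 0.6, 0.8, 0.5]

cons = sufficiency_consistency(x, y)
print(round(cons, 3))  # → 0.978
```

A value near 1 supports the set-relational claim that X is sufficient for Y; note that this is a statement about subset relations between sets of cases, not about X shifting the probability of Y, which is why the two kinds of hypotheses call for different methods.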
A second fallacious argument attempts to justify the choice of QCA via the results it produces. A QCA is run because of the belief that the causal relation of interest is complex (equifinality and conjunctural causation), and this belief is then taken to be confirmed when the empirical analysis yields multiple conjunctions. The method is justified by the results it produces. This is a bad argument for running a QCA, because the best method to apply is the one that correctly captures the data-generating process (DGP; the major problem, of course, being that we often don’t know what the DGP is). Equally important, even when a method completely misses the actual DGP, we might still obtain intelligible results. In fact, this is what lies behind the popular criticism “correlation is not causation” that is leveled against statistical techniques: we can obtain significant estimates even when the statistical model does not match the DGP. The same problem holds for QCA, meaning that one’s choice of QCA can never be validated by the results it produces.
So how should we justify the application of QCA? It is quite simple and straight out of the textbook on research design: you should either hold a fundamental, ontological belief that causal relations are set-relational or, on a lower level, follow theory and run a QCA when a hypothesis is couched in set-relational terms. What holds for any other method thus holds for QCA as well: ontological commitments and theory should reign supreme.