Qualitative methods (e.g., process tracing, set-theoretic methods, informal Bayesian inference) and multi-method research, in particular the combination of regression analysis or QCA with case studies, are certainly a growth industry in political science and sociology. In light of some methods panels held at the APSA Annual Meeting in Chicago and the ECPR General Conference in Bordeaux, this might actually come as a surprise. The reason for my assertion is that the field is still wrestling with proper definitions and common understandings of many, not to say all, key terms.

This is also an insight I gained at the ECPR Joint Sessions of Workshops in Antwerp in 2012, where most participants of a process tracing workshop had different understandings of what a process and what process tracing are, how “mechanism” should be defined, what evidence and observation mean, and so on.

At the APSA, I discovered that Andy Bennett and I have different understandings of most-likely and least-likely cases and of how they figure in the basic variant of Bayes’ theorem (two exclusive hypotheses, only one body or piece of evidence). I argued that the conditional likelihood of finding evidence given a hypothesis, p(E|H), is a formalization of the meaning of most-likely and least-likely cases. The rationale is that the evidence is tied to what we derive from the analysis of a case, hence the linkage of p(E|H) to most-likely/least-likely cases. Andy Bennett maintained that most-likely/least-likely cases refer to the prior in Bayes’ theorem, i.e., p(H). Ultimately, we agreed that this disagreement can be resolved, but I was surprised that we had different opinions on how to link the established types of most-likely and least-likely cases to Bayes’ theorem (which is itself becoming established in qualitative methodology).
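To make the locus of the disagreement concrete, here is a minimal sketch of the basic variant of Bayes’ theorem mentioned above (two exclusive hypotheses, one piece of evidence). The numerical values are purely illustrative assumptions, not drawn from any actual case study; the point is only that the two readings attach “most-likely/least-likely” to different terms of the same formula.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior p(H|E) for two exclusive hypotheses and one piece of evidence E."""
    # Total probability of the evidence: p(E) = p(E|H)p(H) + p(E|~H)p(~H)
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Reading 1 (likelihood): most-/least-likely is formalized via p(E|H),
# here a case where the evidence is fairly unlikely even if H holds.
print(round(posterior(0.5, 0.2, 0.02), 3))  # → 0.909

# Reading 2 (prior): most-/least-likely is formalized via p(H),
# the same likelihoods but a skeptical prior about H in this case.
print(round(posterior(0.1, 0.2, 0.02), 3))  # → 0.526
```

Both readings feed the same arithmetic; the disagreement is over which quantity the case-selection labels are supposed to describe.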

At the ECPR General Conference, I learned that Gary Goertz does not like these types of case studies at all. Moreover, it became apparent that he takes a different view on counterfactual causal inference and case selection for the analysis of necessity than Carsten Schneider and I do (we were co-discussants at the ECPR panel).

Of course, it is not bad for a field to witness conceptual disagreements, because discussions about how to resolve them can contribute to its advancement. At present, however, it sometimes seems to me that this is an obstacle: in my view, the variety of understandings of key terms means that we are stuck in debates about these issues, which in turn impedes the development of tools that are useful for empirical research. For teaching, moreover, it means that one always has to spend at least one session on conceptual clarifications (“My definition of mechanism is… XY, on the other hand, defines it like this…”).

To be clear, this is not necessarily bad, because it testifies to a certain degree of self-reflection in the field. However, I admit I sometimes envy quantitative methods, which have reached a shared understanding of basic issues. (This does not automatically imply that I fully subscribe to this understanding in every respect.) On the downside, the shared conceptions in quantitative research imply that, since many issues are taken for granted, they are sometimes not taught in statistics courses, meaning that students lack knowledge about the foundations of (mostly frequentist) statistics. Nevertheless, I think it would be good for the field of qualitative methods and multi-method research if we got a little closer to where quantitative methods are right now. I just do not know how to achieve this; I guess it simply takes time.