Comparative politics and the choice of methods

The current APSA Comparative Politics Newsletter is dedicated to “Doing Comparative Politics Elsewhere” (i.e., outside of the US). Thomas Plümper contributes a discussion of Comparative Politics in Europe. In brief, Plümper argues that, until recently, the field of Comparative Politics (CP) in Europe was dominated by qualitative methods. Only during the last ten years or so has he seen the advent of the kind of sophisticated quantitative research that has been practiced for decades in the United States.

Plümper attributes the prevalence of small-n research to Lijphart’s 1971 article in the APSR (Lijphart, Arend (1971): Comparative Politics and the Comparative Method. American Political Science Review 65 (3): 682-693). The article was widely received (about 360 citations according to the SSCI as of today) and distinguished between the experimental, statistical, comparative, and case study methods in the social sciences. The comparative method is effectively used synonymously with qualitative comparative case studies, while the case study method is confined to single-case studies. The statistical method became the mainstream in the US, while Europe followed the comparative trail (with experiments only recently becoming more popular on both sides of the Atlantic).

Plümper does not find Lijphart’s distinction convincing and does not understand why the comparative method became so dominant. He argues that the number of cases is not exogenously set, but depends on the theory and hypotheses at hand. If theory is the guide, one should sometimes do qualitative research and at other times quantitative research. The (almost) exclusive reliance on case studies in Europe indicates that theory did not always drive the design and method; rather, theory was fitted to the need to do qualitative research. Hard figures on whether US research has been largely quantitative since the 1970s and European research largely qualitative are unavailable, but it seems to me that Plümper’s perception is widely shared.

Accepting this (now narrowing) divide across the Atlantic, I would add three observations concerning Plümper’s diagnosis. The first is related to Lijphart’s article and his four-fold distinction; the second to the methods in the US; and the third to the theory-design link in empirical research.

Lijphart and the comparative method

There are many different ways to describe the field of social science methods. My hunch is that, at present, most people see the major line of division as running between experimental and non-experimental, i.e., observational research (this presumes that we, like Lijphart, discuss only “neo-positivism,” without any intention of disparaging interpretivist research). Within the observational camp, one major line of division is between quantitative/large-n and qualitative/small-n research. Against this backdrop, it is peculiar to put the experimental, statistical, and comparative methods side by side, because the former two are also comparative and it is hard not to be comparative at all (Lees, Charles (2006): We Are All Comparativists Now: Why and How Single-Country Scholarship Must Adapt and Incorporate the Comparative Politics Approach. Comparative Political Studies 39 (9): 1084-1108). In retrospect, it is also odd how much debate there was about the comparative method in the 1970s and 1980s (e.g., DeFelice, E. Gene (1980): Comparison Misconceived – Common Nonsense in Comparative Politics. Comparative Politics 13 (1): 119-126; Frendreis, John (1983): Explanation of Variation and Detection of Covariation: The Purpose and Logic of Comparative Analysis. Comparative Political Studies 16: 255-272).

Nevertheless, Lijphart’s discussion of the comparative method (and the case study method) is seminal because it clearly identifies major problems for causal inference as well as some remedies. However, this should not obscure the fact that Lijphart actually declares the comparative method to be weaker than the experimental and statistical methods (p. 685). If European scholars were indeed influenced by Lijphart’s distinction of methods, they missed the point, because they opted for the weaker method.

The statistical method and the US

Second, the development of CP in the US was more in line with Lijphart because the mainstream method was (and is) the quantitative one. If CP in Europe was one-sidedly qualitative, though, CP in the US was one-sidedly quantitative, which likewise indicates that theory had little influence on the design and choice of method.

Interestingly, we see converging trends on both sides of the Atlantic. In the US, the dominance of quantitative methods prompted the founding of the APSA Section on Qualitative Methods, recently renamed the Section on Qualitative and Multi-Method Research. In Europe, we now have the European Political Science Association (EPSA), which seeks to promote more sophisticated formal and quantitative research on this side of the Atlantic. These are laudable developments that reflect Plümper’s claim that qualitative and quantitative methods both have their place in CP.

The dependence of theory on methods

Third, and related to the previous point, it is correct that the number of cases is not exogenous. The state of the field and of theory should determine whether to do a quantitative or a qualitative analysis involving many or few cases. From a sociology of science perspective, however, this only shifts the problem to the theoretical part of the analysis, because the theoretical goal is not necessarily exogenous to a researcher’s take on the small-n/large-n question. If a researcher is (implicitly or explicitly) predisposed to case studies, she formulates a theoretical goal and hypotheses that are better examined with case studies. The reverse holds for a researcher inclined to do quantitative analyses. From this perspective, theory does not drive the choice of method; rather, methodological predispositions determine the theoretical goal of a study.

In the best of all (social science) worlds, this is not a huge problem as long as researchers who only do quantitative or only qualitative analyses accept that the other method has a role to play as well and acknowledge the results it produces. In the worst of all worlds, researchers attached to one method deny the value of the other. The increasing advocacy of multi-method research is a promising sign that we are closer to the best than to the worst of the worlds one can imagine for the social sciences.
