The COMPASSS statement and QCA solution types

About two weeks ago, COMPASSS issued a Statement on Rejecting Article Submissions because of QCA Solution Type. In short, the reasoning was that methodological work on QCA is still developing and that reviewers and editors should not judge empirical work based on whether one particular solution type is interpreted as causal. (Disclosure: I am a member of the COMPASSS advisory board, but was not involved in this statement in any way.) Dimiter Toshkov picked up on the statement and wrote an interesting blog post on it. Dimiter’s blog then hosted a response by Eva Thomann that led to a, say, heated exchange of arguments in the comments section.

With some delay, I want to raise two points about the debate that largely abstract from the question of which type of QCA solution (conservative, intermediate, parsimonious) is the “correct” one and can be interpreted as causal. I have my own view on this, but my remarks are more general.

The bar for public statements should be high

It is not without precedent that professional associations, which is what COMPASSS is in my view, publish statements on various matters (such as the ASA statement on p-values). I believe that associations have the right to issue statements and should do so. However, the bar should be set high because, otherwise, the association risks interfering with the scientific process and taking sides in a debate (though not necessarily). Most of the time, the association should be a bystander and not an involved party.

The question of how to review QCA submissions to journals certainly is an important topic that potentially qualifies for a statement. In light of Eva Thomann’s post, however, my impression is that the bar has been set too low. She writes that the COMPASSS Steering Committee (SC) had no information about how often papers have been rejected based on arguments that a specific solution type can(not) be interpreted as causal. One might read this as saying that the issue is so important to the SC that the frequency of its occurrence does not matter. However, I am far from convinced that frequency should be irrelevant.

If we take this as the precedent for COMPASSS statements, it would mean that the SC could put out statements at a high rate, because I am sure that reviews of QCA articles often contain arguments the members of the SC consider dubious or wrong, some of which strike me as worse than the issue at stake here. As many QCA researchers surely can tell, one still has to manage reviews that flatly deny any value to QCA. One might respond that the SC saw a problem emerging and wanted to intervene early on, but I do not find this convincing. COMPASSS includes respected QCA researchers and experts, but it would still mean that the only professional association in the field intervenes in the review process and, further, would need to intervene frequently in the future, because reviewer arguments that it deems ill-suited will continue to be made.

Most studies interpret one solution type as causal

The second point concerns the internal coherence of the argument, which dips into the question of which of the three solution types is causal. The COMPASSS statement does not say anything about a specific type, but readers with some knowledge of the field of QCA can reasonably guess that it is about Baumgartner’s and Thiem’s work and their argument that only the parsimonious solution should be interpreted as causal [1, 2]. If you follow their reasoning, the question is: why should reviewers not be allowed to criticize a study if it interprets another type as causal? I agree that the Baumgartner/Thiem article is “only” one study and that more work is needed on this. However, this is not a sufficient reason to discount the work that has been done; it should count for something. If this were the standard, we could pretty much stop doing research, because one could always say “I don’t follow this reasoning because there is not enough research about it.” (We would have to define what “enough” means, but this is probably even more subjective than the question of QCA solution types.)

Now to the internal consistency of the argument: there seems to be consensus that only one of the three solution types can be interpreted as causal (this should be consensus). This means that as soon as an empirical researcher interprets any solution type as causal (as many do), they are implicitly saying that the other two types are not causal. You might, like Eva, say that the other types are insightful or useful, but they cannot be causal (another point on which Eva seems to agree). The Baumgartner/Thiem argument that only one solution type is causal is therefore not new at all (its content is, I’d say, because most empirical researchers now seem to prefer the intermediate solution).

This means you can criticize reviewers for rejecting papers because they interpret the wrong type as causal, “wrong” from the reviewer’s perspective. For the sake of consistency, you should then also criticize empirical researchers if they interpret only one solution type as causal, because they are thereby designating the other solution types as non-causal. The liberal attitude that shines through the COMPASSS statement, suggesting that any solution could be interpreted as causal, is not sustainable, because the causal interpretation of any one solution type is exclusive. Either reviewers should be allowed to settle on the causal interpretation of one solution type, or reviewers and authors should not interpret any type as causal.

A way out

Avoiding causal terminology might not be the worst of ideas, but I think there is a middle way between rejecting papers because of the wrong solution type and not using any causal terminology at all. I agree with Dimiter and Eva that rejecting a paper based solely on the question of solution types should be discouraged. This has nothing to do with the topic as such, but with the understanding of what constitutes good reviewer practice. If you think the paper is good, but also that the wrong solution type is interpreted as causal, then ask the author to clarify why this solution type is taken as causal. Perhaps the author has a good answer for why this type and not another, and the issue can be settled. Or the author starts thinking, accepts that this solution might not be interpreted as causal, and switches to non-causal terminology. I believe that most QCA researchers want to make causal claims, but we should keep in mind that good and interesting research can also be non-causal.

About ingorohlfing

I am Professor for Methods of Comparative Political Research at the Cologne Center for Comparative Politics at the University of Cologne. My research interests are social science methods with an emphasis on case studies, multi-method research, and philosophy of science concerned with causation and causal inference. Substantively, I am working on party competition and parties as organizations.

15 Responses to The COMPASSS statement and QCA solution types

  1. Alrik Thiem says:

Hi Ingo, thank you for your contribution to this debate, which I think contains many important points. The main reason why the COMPASSS statement is problematic, at least for me, is that it is a mere value judgment on a scientific argument, while providing no scientific arguments itself. In contrast, statements issued by other associations, such as the American Statistical Association, are based on scientific arguments and concern the misinterpretation or misapplication of such arguments.

Nor do I understand why COMPASSS deems reviewer reports that base their recommendations on the work Michael Baumgartner and I have done to be bad practice, while it has not issued a statement, for example, about the practice of some reviewers to recommend rejection because they argue that QCA is not a good method, or that QCA has been shown to be “a fatal distraction” by researchers such as Lucas and Szatrowski (2014). Apparently, Michael’s and my work is considered more threatening to COMPASSS than work that rejects the method of QCA in toto.

Unfortunately, the organizers of the COMPASSS-sponsored QCA expert workshop that is to take place in December in Zurich have decided not to accept any papers addressing the issue of solution types. I think this is regrettable because Michael Baumgartner and I explicitly invited COMPASSS to explain to us formally why they deem our work erroneous.

    Irrespective of how this issue develops further in the future, reviewers should be free to buy into or out of arguments, and if these reviewers think that our work is convincing, then they should be free to say so, and base their reports on their current state of knowledge and conviction. At least I know of no other professional association that would attempt to restrict this scientific freedom.

  2. Adrian Dusa says:

    Dear Ingo,

Your intervention is welcome, and if I understand you correctly, the whole post revolves around this sentence:

    “There seems to be consensus that only one of the three solution types can be interpreted as causal (this should be consensus).”

This is key, and this is also problematic, as I don’t recall there being any kind of consensus that one solution type is better than any other, for causal arguments or anything else. If the parsimonious solution were the “best”, it would have been correct since the beginnings of QCA, and none of the other developments would have been needed.

But the parsimonious solution has its problems, as it relies on difficult, impossible and untenable counterfactuals. To be clear, I am not referring to the new CNA algorithm, as I believe (and this should indeed be consensual) that no solid conclusion should be derived from a three-month-old algorithm. In fact, it can already be demonstrated that the new CNA is perfectly equivalent to my own newest minimization algorithm, called CCubes, which is still a QCA-style, truth-table-based procedure. A comparison of both algorithms can be found here:

Until a firm conclusion is drawn from these new developments, what I see is that all currently published articles, trying so hard to demonstrate that one solution is better than the others, are based entirely on my (double underlining “my”) algorithm called eQMC.

As the creator of eQMC, I can authoritatively say that this is a pseudo-counterfactual method: although it does not explicitly involve remainders, it does so implicitly. In plain language, this means that it gives exactly the same solutions as the classical Quine-McCluskey minimization, including all those (difficult, impossible and untenable) remainders. This is why I call it a “pseudo”-counterfactual method, with a demonstration (and a short description of CCubes) here:

In the absence of the consensus that you assume, the entire intervention is beside the point, and the COMPASSS statement stands valid.

Finally, there seems to be a huge misunderstanding (I would even say compounded by a huge manipulation) that COMPASSS issued the Statement against someone. That is completely untrue; quite the contrary, it is a statement that defends scientific freedom, because a rejection based on isolated opinions (that are not embraced by the community) impedes the scientific freedom of the author.

    • ingorohlfing says:

      I appreciate the response to my post. Let me make two points in return.
1) As important as this is, we should keep perspective. Scientific freedom is not at stake here. Otherwise, every reviewer criticizing a manuscript and rejecting it would endanger scientific freedom, which clearly is not the case.
2) There is a misunderstanding about my statement “There seems to be consensus that only one of the three solution types can be interpreted as causal (this should be consensus).” I am only interested in which solution type can be interpreted as causal. If you have other criteria in mind, such as “validity” (however you define it), I am fine with that, but it is not my concern here. The argument that only one solution can be causal should be uncontroversial without settling on any specific type here. This argument is independent of which algorithm you use and, more broadly, of which method. What “causal” means is a matter of the preferred theory of causation. You can choose one of the theories of difference-making, of which Thiem’s and Baumgartner’s regularity theory is one variant, or a process theory. Regardless of the theory, if you get three different solutions from a truth table analysis, any theory will tell you that only one can be causal. Say the solutions are ABC, AB and A, and you follow a difference-making theory. If A, B and C make a difference for Y as INUS conditions of ABC, then ABC is causal and AB and A are not. If C does not make a difference, but A and B are INUS conditions of AB, then it is AB. If B does not make a difference either, then it is only A. Your causal inference might be uncertain for reasons such as data quality, but this is another matter. Leaving this aside, it is impossible to say that two different solutions derived from the same truth table analysis are both causal (I mean, it is possible to say it, but it is wrong). This should be uncontroversial; which of the derived solutions is causal (conservative, intermediate, parsimonious or any other) is subject to debate and should be a major line of set-theoretic methods research in the future.
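The exclusivity of the three nested candidate solutions can be made concrete in a few lines of Python. This is only an illustration under an assumed toy structure (I stipulate that Y is generated by A AND B, so AB is the causal solution); it is not QCA software, and the difference-making test is a deliberately simplified one: a conjunction counts as causal only if it is sufficient for Y and no conjunct can be dropped without losing sufficiency.

```python
from itertools import product

# Assumed toy data-generating structure (an illustration only):
# Y is true exactly when A and B are both present.
def outcome(a, b, c):
    return a and b

rows = list(product([0, 1], repeat=3))  # full truth table over A, B, C

def sufficient(conjuncts):
    """A conjunction is sufficient if Y = 1 in every row where it holds."""
    return all(outcome(a, b, c)
               for a, b, c in rows
               if all({"A": a, "B": b, "C": c}[x] for x in conjuncts))

def difference_making(conjuncts):
    """Causal under a simple difference-making reading: sufficient,
    and every conjunct makes a difference (none can be dropped)."""
    return sufficient(conjuncts) and all(
        not sufficient([x for x in conjuncts if x != y]) for y in conjuncts)

for cand in (["A", "B", "C"], ["A", "B"], ["A"]):
    print("".join(cand), difference_making(cand))
# ABC fails (C makes no difference), A fails (not sufficient);
# only AB passes, so exactly one of the nested solutions is causal here.
```

Under a different stipulated structure a different candidate would pass, but by construction never more than one of the three, which is the point made in the paragraph above.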

    • Alrik Thiem says:

      The post by Adrian Dusa misinterprets the situation regarding solution types. None of the recent works that show the conservative and intermediate solution types of QCA to be incorrect couples its conclusion to a specific algorithm (there are literally dozens of different minimization algorithms, several of which are implemented in various QCA software packages). I should know since I have (co-)authored these works.

Instead, the argument presented in these works, against which the COMPASSS Management Team and Steering Committee have sought to intervene through their public call on journal editors and reviewers to simply ignore these works, is purely results-driven. Conservative and intermediate solutions are incorrect because they make inferences (way) beyond the given data, in consequence of which these two solution types often also present inferences to the user that demonstrably contradict the very causal structures that underlie these data, and which users employing QCA seek to discover.

This deficiency of both conservative and intermediate solutions has been revealed not only in millions of elaborate data simulations but also in purely theoretical work. That is it. There are no algorithms, no “untenable assumptions”, no “pseudo-counterfactuals” or anything else of the kind Adrian Dusa suspects involved.

      • Adrian Dușa says:

As much as I dislike so much back-and-forth argumentation, I still believe that further clarifications are needed.

Of course the published pieces refer to a specific algorithm, since the “elaborate” data simulations are performed using the QCApro package (this can be verified in the replication file), which is a fork of my package QCA, and it can also be verified that the QCApro package uses my algorithm eQMC.
And I have to stress again and again: eQMC is pseudo-counterfactual, therefore it absolutely does (implicitly) involve difficult, untenable and impossible counterfactuals. Not to mention that it can be demonstrated that those simulations are nothing but the result of a programming artefact and even contain logical errors.

As for the alleged “correctness” of the parsimonious solution, not everyone agrees. Schneider and Rohlfing actually disagree with Baumgartner and Thiem that the parsimonious solution is the “only one” that can be interpreted causally. Although everyone referring to Mackie’s INUS theory directs attention to a causal interpretation, in fact it cannot be unequivocally demonstrated which of the three solution types is causal. The fact that some published papers “claim” that one is correct does not mean it is.

Finally, the COMPASSS statement is not addressed to the reviewers, but to article authors and, ultimately, journal editors.
Of course, everyone agrees that reviewers have as much scientific freedom as they see fit. No institution (e.g. COMPASSS) can force reviewers to write anything other than what they want, or keep them from asking for any clarification, on causal matters or anything else.
But should a potential author still feel discriminated against (after the normal back-and-forth clarifications) because a reviewer rejected the paper based solely on the conclusions of a certain isolated publication, then (at least this is how I understand it) the author could at least write to the editor and point to the COMPASSS statement. Naturally, authors should make absolutely sure the rejection is unjust, and not abuse the Statement just because they have been rejected.
This way, I believe, respects the scientific freedom of both the reviewers and the authors.

      • Alrik Thiem says:

        I believe that back-and-forth discussions are very important and necessary. How else could a debate that may lead to scientific advance be structured? Should only one party be given a voice? Definitely not. Everyone who feels like contributing something helpful towards the solution of a problem should be invited to join. I leave comments on the COMPASSS Statement aside now, and just reply to Adrian’s comment about the correctness of QCA solution types.

In the 2017 Sociological Methods & Research paper, the nature of counterfactuals is no criterion for evaluating QCA-CS (conservative), QCA-IS (intermediate) and QCA-PS (parsimonious). The exact three criteria we use can be found on page 9. They are in line with criteria used in other areas of method testing, adapted to the logic of configurational research.

That the employed package QCApro uses eQMC is just a consequence of historical developments; it was no deliberate choice for the analysis. Any other algorithm would have done the same job (had the Quine-McCluskey-based fs/QCA software an option for simulations of the type we needed, we could have used it instead, or Tosmana with its graph-based agent algorithm). Also note that the theoretical paper mentioned in my previous post does not resort to algorithmic particularities for making its case. That there are different solution types is a result of the first use of Quine-McCluskey optimization, but Quine-McCluskey optimization in its PS version can equally be shown to be correct.

In other words, we first need to get rid of the confusion that all the literature on “counterfactuals” in QCA is connected to correctness-testing. That confusion, however, sits deep, because many researchers still think that QCA-CS makes no assumptions at all, that QCA-PS often makes untenable assumptions, etc. That this is not true can easily be proven (see QCApro’s documentation for the eQMC function or my referenced paper “Going beyond the facts”).

        Just two weeks ago I also demonstrated it to an audience of more than 30 PhDs and professors (including three mathematicians and statisticians) in Indianapolis. After an elaborate and detailed simulation exercise, where everyone had a different dataset, it became clear that QCA-CS went beyond the data, and most of the time put the researcher at a high risk of drawing demonstrably false conclusions even in otherwise ideal research circumstances. None of these researchers will (hopefully) use QCA-CS anymore.

Again, I would be happy to see a thorough re-evaluation of the results of Michael Baumgartner’s and my 2017 SMR paper, using even more causal structures than the seven (one deliberately chosen and six randomly chosen) we have in our paper, or even more evaluation designs. A theoretical comment on our specified correctness criteria would also be highly welcome. I have every interest in seeing whether we were correct, as we still believe we are, or whether we have made a small or big mistake somewhere along the road. If that should be the case, I would have no reservations whatsoever about revising my position (a positive error culture can only be helpful in science), but so far our conclusion that QCA-CS and QCA-IS should NOT be used for causally-oriented empirical data analysis has survived ALL attempts at refutation.

      • Adrian Dușa says:

With this reply, Alrik Thiem effectively acknowledges that “any other algorithm” (including Quine-McCluskey) would have done the same job. Just to avoid possible misinterpretations, it should be clear to anyone that Quine-McCluskey _can_ be used to derive all solution types: CS, IS and PS.

And this is precisely what I am arguing: all of these algorithms (eQMC, the Graph-Based Engine in Tosmana) are 100% compatible with Quine-McCluskey, which _explicitly_ uses all of those (difficult, untenable, impossible) counterfactuals.
eQMC, on the other hand, uses them _implicitly_: that is the essence of what I call a “pseudo-counterfactual” algorithm; otherwise it cannot be explained why eQMC and Quine-McCluskey give 100% exactly the same solutions. These are strong facts, and they cannot be changed by using different wording to describe them.

What I find really amazing is that Alrik Thiem keeps pointing at the documentation of the QCApro package when referring to the eQMC algorithm, as if the eQMC in that package were different from the eQMC algorithm I created back in 2007.
It is absolutely astonishing how a user of my own algorithm keeps describing my work, to me, as if I didn’t know what I have created. Oh, please…

I’ve stated before that QCA-CS does not make any kind of assumptions about the remainders, dealing exclusively with the empirically observed positive output configurations. Therefore it cannot possibly go “beyond the data”. These kinds of “simulations” are so blatantly illogical that it is curious anyone even stays to listen to all this nonsense.
QCA-CS equals Quine-McCluskey proper, and anyone claiming that QCA-CS is wrong effectively claims that Quine-McCluskey is wrong. It is just not worth my time to say more on this topic.

      • Alrik Thiem says:

        I may be repeating myself, but if that is necessary to contribute to resolving the gridlock in this exchange at some point, so be it.

What we tested was not the correctness of an algorithm, but the correctness of QCA’s three solution types. Using Tosmana, we could have run the same tests (setting remainders to “Exclude”), or fs/QCA (setting remainders to “False”) for testing QCA-CS. In QCApro, setting sol.type = “cs” did this job.

        Now, QCA-CS, irrespective of which software you use, declares all remainders to be insufficient for the outcome (that is the meaning of “False” or “Exclude”). However, under which conditions is the statement that something is not sufficient for something else true? There exists only ONE single configuration of truth value assignments that turns a statement of non-sufficiency into a true one, namely the presence of the antecedent in conjunction with the negation of the analyzed outcome (just scribble a basic two-variable truth table on paper to check this). In other words, QCA-CS, not QCA-PS, introduces cases counterfactually, and QCApro’s documentation contains data experiments that show this to indeed be the case.

Even much more obvious is the data set presented in Ragin (1987:106), where the QCA-CS result is presented as F = aC + bC. While combinations of rows [1,2], [3,4] and [5,6] do indeed show “C” to be causally relevant to F, neither “a” nor “b” receives any such support, at least not from the empirical data. You need to counterfactually introduce at least ABC in conjunction with not-F to claim “a” to be causally relevant, and respectively for “b”. If you do this, however, you will violate two out of four possible data-generating structures that could potentially underlie the data in this table (see my “Going beyond the facts” paper). These violations turn QCA-CS into an incorrect method.

        Introducing cases counterfactually would be unproblematic if QCA-CS had strong predictive powers or whatever property that would render the introduction of non-empirical cases unproblematic. But these cases are often in straight contradiction to the very structure that generated the empirical cases, as all our simulations have shown. Maybe you should run some simulations yourself (a replication script is available for the SMR paper) to verify this. If you can conduct a single data experiment where QCA-PS turns out to be incorrect, or where QCA-CS turns out to perform better than QCA-PS, please let me know.

        Ok, I stop here. Maybe it would be best if you submitted a comment on the SMR paper. This would undergo a first quality check through review, and then I would have the chance to present an official and formal reply to your claims that all we did was wrong.

      • Adrian Dușa says:

        I have the terrible feeling this is a deaf discussion.
Where in the world does the description of Quine-McCluskey contain this claim: “…QCA-CS, irrespective of which software you use, declares all remainders to be insufficient for the outcome”…?

        That is not only blatantly false but also illogical, as the conservative solution has absolutely nothing to do with the remainders: it only deals with the observed positive configurations. Period.

Setting remainders to “Exclude” means exactly what it says: they are excluded from (hence not part of) the input to the minimization procedure. The same with “False” in fs/QCA: false meaning they are _not_ included in the minimization. It is astonishing how such people, who did not create anything of their own, all of a sudden start twisting the true meaning of others’ work, despite direct feedback from the original author(s).

        For the readers of this post, here is an example, involving the simplest possible function:
        f(p) = p + 1
        Set p = 3, then:
        f(3) = 4
        Set another object r = 4, and yet another object n = 5
        f(3) = 4 (again)
        Set r = 6, and n = 9 (any values, basically)
        f(3) = 4 (again, and again, for all eternity).
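The arithmetic example above can be run literally in Python; this is just a transcription of the analogy (the variable names mirror it), not QCA code:

```python
# A function whose input contains only p ("positive configurations").
def f(p):
    return p + 1

p = 3
print(f(p))   # 4

r = 4         # "remainders" in the analogy: never passed to f
n = 5         # "negative configurations": never passed to f
print(f(p))   # 4, unchanged

r, n = 6, 9   # any values at all
print(f(p))   # 4, again and for all eternity
```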

How can r (like, remainders) or n (like, negative configurations) ever have an influence on the output of this function, when the input does not change, because it only contains p (like, positive configurations)?

This would be really interesting; perhaps Alrik Thiem could provide any kind of mathematical, philosophical and/or formal logical proof to answer this direct question: how can something which is _not_ part of the input (ever) have any influence on the output?

        Unless this question is answered, or some other solid theoretical evidence is brought forward from respectable sources, this is nothing but an absurd claim.

Not to mention that this has nothing to do with the SMR paper, which bears the absolute weakness (itself enough to invalidate the results) that it uses a Quine-McCluskey-compatible algorithm, thus implicitly including the difficult, untenable and impossible counterfactuals to obtain the parsimonious solution.

      • Alrik Thiem says:

        To be honest, I start losing track of what it actually is you’re criticizing since you’re jumping back and forth and from side to side. Is it the claim that QCA operationalizes Mackie’s INUS theory, is it the fact that the SMR paper shows QCA-CS and QCA-IS to be incorrect methods of causal inference, is it a minimization algorithm or something else? If you were more specific, it would be much easier to get a handle on the issue.

        A last attempt from my side; perhaps this gets the discussion going again in a constructive direction: Your central question seems to be “how something which is _not_ part of the input, can (ever) have any influence on the output?”

The mistake already occurs in your premise: setting a remainder to FALSE (or “excluded”), which QCA-CS does [see also Schneider and Wagemann’s QCA textbook (2012:162-163), since you ask for a “respectable source”], does not mean that it is not part of the input. It means that the remainder, before an algorithm begins finding minimally sufficient conditions in its first phase, is declared to be NOT sufficient for the outcome. However, what does it mean for an implication to be FALSE? Here is the corresponding truth table (check any introductory book on Boolean algebra if you don’t believe this):

        A B | A -> B
        0 0 | 1
        1 0 | 0*
        0 1 | 1
        1 1 | 1

        The only possibility for receiving a FALSE on a statement of sufficiency (0* in the table) is to declare the antecedent (the remainder) to be TRUE and the consequent (the outcome) to be FALSE. Thus, you need to assume that the remainder exists and occurs in conjunction with the negation of the outcome. In other words, QCA-CS, by setting remainders to FALSE prior to minimization proper, in fact introduces non-empirical cases through the back door, to the maximum extent possible. QCA-CS is anything but CONSERVATIVE!
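The truth table can be checked mechanically. This minimal Python sketch (an illustration only, not anyone's QCA code) encodes material implication as `(not A) or B` and enumerates all four assignments:

```python
from itertools import product

# Material implication: "A -> B" is logically equivalent to "(not A) or B".
def implies(a, b):
    return (not a) or b

# Enumerate the full two-variable truth table.
for a, b in product([0, 1], repeat=2):
    print(a, b, "|", int(implies(a, b)))

# Collect the assignments that make the implication FALSE.
falsifiers = [(a, b) for a, b in product([0, 1], repeat=2) if not implies(a, b)]
print(falsifiers)   # [(1, 0)]: antecedent true, consequent false
```

The single falsifying row, antecedent true with consequent false, is the 0* row of the table above.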

Once you’ve understood this, it is not a big step anymore to understanding why QCA-CS and QCA-IS so often fail tests for correctness and frequently output results that contradict the very data-generating structure that researchers using QCA seek to uncover.

  3. Adrian Dușa says:

    I have been in touch with Schneider & Wagemann about precisely this example, and we have already agreed this is erroneous. More details here:

It is important to understand the meaning of the words “input” and “output”, which is obvious to anyone who has ever implemented a programming exercise.
    I rest my case here.

    • Alrik Thiem says:

      That just tells me that Schneider and Wagemann disagree with Michael Baumgartner and me, hardly anything that should surprise anyone who knows our works. So what’s your argument now before you leave the debate? That QCA-CS declares remainders to not be sufficient for the outcome (my argument), or that it declares them to be sufficient, or nothing of that (but what else then)? If that’s a question you cannot answer here, at some point you should make up your mind about it when pursuing your line of criticism further (however, even then you can’t explain yet why QCA-CS and QCA-IS fail correctness tests). As I said, it would be good if you produced some kind of working paper with a formal argument that could be evaluated. That would help this debate much more than scattered and patchy blog posts.

      • Adrian Dușa says:

        I’m sorry, but I fail to comprehend what the question is.
Schneider & Wagemann (2012) could not have possibly disagreed with Baumgartner & Thiem (2017). The disagreement comes from Schneider & Rohlfing.

The link from my previous comment is related to the example offered by Schneider & Wagemann (2012:162-163), to demonstrate why they are mistaken:
        (and both Schneider and Wagemann have replied privately, acknowledging this error).

Therefore QCA-CS absolutely does _not_ make any assumption about the remainders; the example above, far from supporting Baumgartner and Thiem, actually reinforces my arguments: since the input to QCA-CS (to the Quine-McCluskey algorithm) does not change, QCA-CS is 100% correct and it never goes “beyond the data” as implied. The rest are just isolated allegations, illogical speculations that it serves no one to spread around.

        The analysis in my link above is crystal clear to anyone involved in programming the Quine-McCluskey algorithm, and even to casual readers. Theoretical work is one thing, but programming doesn’t lie: it is unambiguous and cannot be so easily obfuscated.

        I have no further comments to make. All other readers are free to believe whomever they want. As the person who actually programmed the Quine-McCluskey algorithm, I happen to know what I am talking about.

  4. Adrian Dușa says:

    I’m sorry, but I fail to comprehend what the question is.
    Schneider & Wagemann (2012) could not have possibly disagreed with Baumgartner & Thiem (2017). The disagreement comes from Schneider & Rohlfing (

    The link from my previous comment is related to the example indicated by Alrik Thiem himself in the previous reply:
    “…see also Schneider and Wagemann’s QCA textbook (2012:162-163) if you believe this to be a respectable source as you call for…”
    as supporting their claim, and the same example one reply later is dismissed as:
    “…that just tells me that Schneider and Wagemann disagree with Michael Baumgartner and me…”
    Kind of ambiguous.

    The analysis of the logical mistake by Schneider & Wagemann (i.e. of the example offered by Alrik Thiem) just reinforces my argument that QCA-CS absolutely does _not_ make any assumptions about the remainders: since the input to QCA-CS (to the Quine-McCluskey algorithm) does not change, QCA-CS is 100% correct and never goes “beyond the data”, as implied. The rest are just isolated allegations and illogical speculations that serve no one.

    The analysis in my link above is crystal clear to anyone involved in programming the Quine-McCluskey algorithm, and even to casual readers. Theoretical work is one thing, but programming doesn’t lie: it is unambiguous and cannot be so easily obfuscated.

    I have no further comments to make. All other readers are free to believe whomever they want. As the person who actually programmed the Quine-McCluskey algorithm, I happen to know what I am talking about.
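[Editor's note: the following sketch is not part of the original exchange. It illustrates, on a hypothetical toy truth table, the one mechanical point both sides refer to: what is handed to the Quine-McCluskey algorithm under the conservative versus the parsimonious solution type. It uses sympy's `SOPform` as a generic Quine-McCluskey implementation rather than any actual QCA package, and it does not adjudicate which solution type may be interpreted causally.]

```python
# Toy illustration of the input difference between the conservative (QCA-CS)
# and parsimonious (QCA-PS) solution types, using sympy's SOPform
# (a Quine-McCluskey implementation) as a stand-in minimizer.
from sympy import symbols
from sympy.logic import SOPform

a, b, c = symbols("a b c")  # three hypothetical causal conditions

# Hypothetical truth table: one observed positive configuration (a=1, b=1, c=1),
# one observed negative configuration (a=1, b=1, c=0); all six remaining rows
# are unobserved "remainders".
observed_positive = [[1, 1, 1]]
remainders = [[0, 0, 0], [0, 0, 1], [0, 1, 0],
              [0, 1, 1], [1, 0, 0], [1, 0, 1]]

# Conservative solution: minimize only the observed positive configurations.
conservative = SOPform([a, b, c], observed_positive)

# Parsimonious solution: additionally pass the remainders as don't-cares,
# which the algorithm may use for further minimization.
parsimonious = SOPform([a, b, c], observed_positive, dontcares=remainders)

print(conservative)  # a & b & c
print(parsimonious)  # c
```

On this toy table the two calls return different solutions precisely because the inputs differ: the conservative call sees only the observed positive row, while the parsimonious call also receives the remainder rows as don't-cares.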

  5. Tobias says:

    The whole issue seems to be part of a larger social problem.

    Numbers are still seen as something very “sacred”, to the extent that many academics (probably a majority) across all fields feel certain emotions even when they merely see statistical tables or formulas – emotions that lie somewhere between fear and admiration. For many, statisticians and mathematicians are some sort of magicians, which gives them an incredible amount of power.

    The above problem leads to an imbalance of perception between what people infer from text and what they infer from numbers. Inference from numbers is seen as carrying much greater weight, whereas most people in academia are able to take inference from text with a grain of salt.

    In some scholarly papers, this imbalance appears to be reflected by a complementary one: reviewers seem to be increasingly strict about what authors infer from the numbers and comparatively relaxed about inference from words – or even about inference from old literature, which is often based on outdated methodology. This is problematic because the postulates of an old study, based on outdated methodology, might thus “survive” in the academic discourse even though they could be partially refuted by a newer study using a more advanced, though still flawed, method.

    The argument about the rejection of papers can be seen in that light, so I would indeed say that quite a bit is at stake. The underlying question is how we deal with the disproportionate power of numbers. There are two apparent possibilities: either we become more aware that numbers have to be taken with a grain of salt, or we become extremely strict. The first approach seems to lie in the rather distant future – and also seems rather dissatisfying, because it devalues the purity of the discipline. The latter approach is thus certainly important, but probably not sufficient either, especially since methods we consider appropriate today might turn out to be fallacious tomorrow.
