Different tools, shared standards: The debate about DA-RT

Let’s imagine you are attending a presentation and the presenter does not follow the standard intro-literature-theory-design-results-conclusion structure. Instead, she only presents the results of her research. What would be the first question you would ask? I assume at least 90 percent of the audience would ask about how she arrived at her findings. This hypothetical example puts us right in the middle of the idea behind the Data Access and Research Transparency Initiative (DA-RT).

The RT in DA-RT

One might disagree with the view that science is all about method rather than substance, but no one can reasonably claim that method and design issues are irrelevant to science and to determining what we can make of new empirical findings. It is therefore safe to argue that transparency about every step of the analysis that potentially influences what we find and conclude in empirical research is mandatory; this accounts for the “RT” in DA-RT.

The DA in DA-RT

If we want to determine whether the results are robust to different design and modeling decisions, we also need access to the data, accounting for the “DA”. As Rick Wilson pointed out, this concerns access to the raw data and not only to the final, processed dataset. Data access is also crucial for cross-checking for mistakes in data preparation and data analysis; we all want to avoid mistakes, but they can happen, and making data available can be considered a means of crowdsourcing the task of checking for errors. Beyond others scrutinizing our work, easy data access and transparency facilitate replication attempts and allow building on existing data and studies for the accumulation of knowledge. (In quantitative research, one should also not forget the opportunity to learn something from how others processed the data and the syntax they used.)

Taken together, DA-RT makes for better science because we can understand and cross-validate what others did and build on their work. Still, with editors of more than two dozen journals aiming to enforce DA-RT by the beginning of next year, there is now a vibrant debate about whether DA-RT is meaningful across methods and, if so, how.

As should be clear by now, I am sympathetic to the DA-RT initiative. However, I also see merit in some of the arguments that are floating around, while I find others weaker or misleading. Here is my view on some issues that are currently being discussed.

An implementation delay needs a firm timetable

A petition to delay the implementation of DA-RT has been signed by 1,173 scholars. It has received support and has provoked several counterstatements and criticisms. I believe a different framing of the petition would have increased mutual understanding. It centers on the request for a delay. Literally, delaying something implies that whatever is delayed will eventually come about. However, we all know that delays sometimes serve as an instrument for bringing something to an end, and my reading of some criticisms is that they interpret the petition in this way. Such misunderstandings could have been defused by making clearer that the petition and its signatories want to see DA-RT implemented, but that more discussion is needed (e.g., perhaps titling it “Toward a More Integrative DA-RT Initiative”).

Some might also think that there is no need for a delay because the DA-RT Initiative started in 2010. I do not know whether relevant groups were able to participate in the work on DA-RT and some just recognized the implications of DA-RT enforcement too late. But if DA-RT simply moves forward as it stands, it has the potential to drive a wedge into our discipline. I do not see this as a risk worth taking and would prefer a delay over risking that.

For a delay to be productive, one would need to remedy a second shortcoming of the petition: it does not lay out a roadmap. If implementation is delayed, there should be a scheduled process through which everyone can submit their views, potentially resulting in a revised DA-RT statement. The current DA-RT statement could serve as the default option if no revision is agreed upon, ensuring that some improvement takes place either way.

The DA-RT idea is not driven by quantitative thinking

Some researchers criticize DA-RT for following a quantitative template because, for quantitative methodologists, it is easier to ensure data access and transparency. This might be so (but see below), yet a misunderstanding is at work here that is not uncommon in the methods field when quantitative and qualitative methods are compared.

The general idea behind DA-RT is not in any sense quantitative in nature. Quantitative research implements DA-RT in a specific way – making data and code available – that cannot be transferred to the qualitative domain, but who is demanding this? In terms of a famous book title, we are talking about “diverse tools, shared standards”. There is nothing quantitative about doing science transparently, but the tools for transparency required for process tracing, Comparative Historical Analysis, ethnography, etc. might well differ and this is what needs to be worked out (as stated in the delay petition).

DA-RT means a great deal of work, regardless of the applied method

As an example of the implications, qualitative researchers ask whether it would become necessary to transcribe all their interviews, because this is a lot of work (it has been pointed out that ethical issues are taken care of by DA-RT). The answer should be, in principle, “yes”. Other researchers should be able to assess how you did your interviews, i.e., what questions were asked, when, and what was not asked. In processing the interview, did you overlook interviewee statements that would alter the conclusions? Did you correctly assign codes when doing a document analysis? Seen from the other side, when you only present parts of the interviews and they are not fully accessible, you create uncertainty about how you did your research, which undermines confidence in the validity of your conclusions.

Someone who has never done quantitative research might actually underestimate the effort it takes to be transparent in quantitative studies. You should not just upload some data and some syntax, but all the data used to build your final dataset and an annotated syntax file. Depending on the study, this can easily amount to more than 1,000 lines of code. Making sure that the code runs, matches the results in the publication, and is properly annotated is also a lot of work (even when you use Markdown, etc.).

I do not intend to launch a beauty contest between methods to decide how hard or easy it is to achieve DA-RT when using one method or the other. But it would help to agree that achieving DA-RT is always a lot of work, regardless of what method you use, that failing to do the required work weakens your research, and that achieving DA-RT is worth the effort.

It is not about philosophy of science

Some discount DA-RT because it is inspired by neo-positivism. I might not know enough about philosophy of science, but I do not see a deeper problem here (concurring with Thomas Leeper). Giving access to data and observations for the sake of reproducibility might be the neo-positivist element, because it presumes a duality of mind and world.

Regardless of whether my reading is correct, I believe we should not overload the discussion with philosophical issues. Many methods, including QCA, process tracing, and Comparative Historical Analysis (the latter two being mentioned in the delay petition), have become more formalized and standardized in recent years. I do not see inherent challenges in achieving DA-RT for them, whether they are labeled neo-positivist or not.

For interpretivist methods, ethnography, etc., the unease might be greater, if only because “data” might ring too quantitative in qualitative ears. But we can easily replace “data” with “material” without changing anything else about DA-RT (call it MA-RT, if you like). Researchers doing ethnography can and should still explain what they did and how they did it. If they do not use software, do not assign codes to parts of their material, and cannot pinpoint individual observations because this is incompatible with their approach, that is fine with me. Nonetheless, thinking about transparency and the accessibility of their analysis should be useful for disciplining qualitative research and the writing process. Above all, a uniform standard matters, and the tool for meeting it needs to fit the research approach.

Relevance versus rigor is a false dichotomy

Some argue that DA-RT requirements stifle innovation because the pursuit of rigorous research trumps relevant research. I do not see relevance and rigor as an inherent trade-off that denies relevance to methodologically rigorous research and excuses relevant research from pursuing rigor. I do not see any reason why, for example, exploratory qualitative research should meet lower standards than confirmatory research using whatever method. The value of exploratory research lies in producing new insights, sometimes groundbreaking ones. Still, I would like to know how interviewees were selected, what was discussed in the interviews, how archival sources were selected and processed, etc. There is better and worse relevant research, with the difference lying in the degree of transparency.

How DA-RT do you go?

Publishing only fully transparent research, however this is defined, represents the highest standard currently pursued. An alternative would be a checklist for different methods – quantitative, QCA, process tracing, etc. – with boxes to tick. When interviews are not fully transcribed, the corresponding box does not get ticked. When you only offer a processed dataset for a quantitative analysis, the corresponding box gets ticked, but the box “Accessibility of raw data” goes unticked.

Information about the degree of transparency would have to be placed right above the abstract so that everybody can see the checklist. It would then be up to the researcher to decide how transparent she wants to be and up to the reader to decide how much faith she wants to put in the analysis. This might address some of the concerns that access to certain journals will be impossible if the highest DA-RT standards are not met.
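To make the checklist idea concrete, here is a minimal sketch in Python of how such per-method transparency checklists could be represented and rendered as a block above an abstract. The item names and method labels are illustrative assumptions on my part, not part of any actual DA-RT proposal.

```python
# Hypothetical sketch of the checklist idea: each method has its own
# transparency items, each either ticked or unticked. Item names and
# method labels below are invented for illustration only.

CHECKLISTS = {
    "quantitative": ["Processed dataset", "Raw data", "Annotated syntax"],
    "interviews": ["Interview guide", "Full transcripts", "Coding scheme"],
}

def render_checklist(method, ticked):
    """Return one line per item: [x] if fulfilled, [ ] if not."""
    lines = [f"Transparency checklist ({method}):"]
    for item in CHECKLISTS[method]:
        mark = "x" if item in ticked else " "
        lines.append(f"  [{mark}] {item}")
    return "\n".join(lines)

# Example: processed dataset and syntax shared, raw data not accessible,
# so the "Raw data" box goes unticked.
print(render_checklist("quantitative", {"Processed dataset", "Annotated syntax"}))
```

The point of the sketch is only that the checklist is a fixed, method-specific list whose unticked boxes are as visible as the ticked ones, so readers can see at a glance what was and was not made accessible.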

This might not be the ideal and there might be other ways to move forward, but it should be clear that the status quo would be a step backward. We need more transparency in political science.


About ingorohlfing

I am Professor for Political Science, Qualitative Methods at the Bremen International Graduate School of Social Sciences (BIGSSS, co-hosted by University of Bremen and Jacobs University) and Associate Editor of the American Political Science Review. My research interests are social science methods with an emphasis on case studies, multi-method research, and philosophy of science concerned with causation and causal inference. Substantively, I am working on party competition and parties as organizations.
