Metascience as a scientific social movement

Not sure if this has already been highlighted, but this is a very interesting take on the reproducibility crisis, open research practices, and metascience: https://osf.io/preprints/socarxiv/4dsqa/.

I’d love to exchange thoughts on it.


Thanks for posting @pcmasuzzo. This is an interesting article. In the last few years, I’ve read fairly deeply in the Philosophy and Sociology of Science (books, papers, etc.), whereas I’ve generally engaged with Open Science more superficially (blogs, guides, talks, etc.). So even though I generally consider myself supportive of Meta/Open Science, I could relate quite readily to the perspective of this paper.


[During] the Metascience Symposium, twenty-two hundred miles away, another group of science studies scholars was sitting in a conference room in New Orleans discussing the same topics. The Society for the Social Studies of Science hosted three consecutive panels on the replication crisis and its implications. What is remarkable about these two events is how little visibility there is between the groups given the enormous overlap in the subject matter.

I agree with the general view of the paper that the current iteration of Metascience should probably have more contact with qualitative research on science done from philosophical and sociological perspectives. On a related note, a researcher in scientometrics told me they were surprised that Metascience doesn’t engage more with their field: the Society for Informetrics and Scientometrics already hosts a conference covering similar topics on the quantification of science, yet they see there as having been little engagement between the two fields.


Put simply, metascience is a scientific social movement that seeks to use the tools of science, especially quantification and experimentation, to diagnose problems in research practice and improve efficiency. It draws together data scientists, experimental and statistical methodologists, and open science activists into a project with both intellectual and policy dimensions. Intellectually, metascience produces quantitative studies meant to describe and evaluate science on a macro scale. Metascientists then use those studies to motivate reforms in scientific practice, including more elaborate reporting requirements, data deposition demands, and more replications.

The history of metascience cannot be understood apart from the anxiety about a possible reproducibility crisis. Metascience has produced the tools that supposedly uncovered the crisis through statistical critiques, meta-analyses, and mass replications. Together, these provided the evidence that activists have used to demonstrate systemic problems across fields. And, metascientists have positioned themselves as key players in solving the crisis by pushing for interventions in everything from scientific training to reporting.

This is interesting; I hadn’t noticed before that metascience is a field that aims to both diagnose a problem and promote a cure. Generally, I think it’s wise to be cautious about this, as it can be quite a self-serving approach (which is somewhat ironic, given metascience’s thesis that the incentives of science distort research practices into being self-serving rather than science-serving).


statistical debates and replication scandals have occurred before. What has given metascience its potency has been its ability to attract financial and intellectual investment and, then, use that investment to create self-perpetuating cycles of research, critique, and intervention. … What Brian Nosek and other metascientists have done is to funnel the energy and funding pouring into metascience into institutions that will influence science policy long after the news cycle surrounding the crisis narrative fades.

problematically, as metascience institutionalizes—as it creates self-perpetuating institutes, conferences, journals, and departments—it creates the potential for an ongoing gulf between metascience and disciplinary science as metascientists no longer come to metascience through a discipline, but are raised intellectually in the heady mixture of big data, open science, statistical mechanics, and little knowledge [of] how bench science is done.

These quotes also call for caution. If Metascience becomes self-perpetuating without subjecting its own existence to criticism, it seems possible that it could outlive its usefulness and/or encroach on fields where it is unhelpful (as I note below, emerging fields could be one example). Although Metascience provides solutions to the problems it diagnosed, it would be concerning if, once those problems are eventually solved, its core activity became diagnosing new problems to treat with the standard remedies.

(To paraphrase some common start-up advice: it’s not good to build a solution and then search for a problem; it’s better to find a problem that needs solving and build a solution for it.)


The desire for “hard-nosed empiricism” meant that no qualitative studies of science were presented at the Metascience Symposium. There was a session which several respondents called the “skeptics panel,” but the critiques were focused on technical issues in metascience rather than the theoretical challenges that are common features of debates in STS. Jan Walleczek, Director of Research at the Fetzer Franklin Fund, helped organize the conference. He told us that avoiding these sorts of theoretical debates was a key decision regarding invitations to the symposium.

I wonder if Metascience’s focus on quantitative analysis may have grown out of the quantitative practices currently used to assess researchers for promotions and grants (e.g. the h-index and journal impact factors): a move from quantifying productivity/quantity to also trying to quantify quality (perhaps in a more nuanced way). In that sense, it’s also interesting that the recent UNESCO Open Science panel clearly called for a shift towards more qualitative assessment of research and researcher quality (see my notes here).
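
As a tangent on how reductive those metrics are, here’s a minimal sketch of the h-index calculation (my own illustrative Python, not anything from the paper): a researcher has index h if h of their papers have at least h citations each.

```python
def h_index(citations: list[int]) -> int:
    """Compute the h-index: the largest h such that the researcher
    has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3
# (three papers with at least 3 citations each, but not four with >= 4).
print(h_index([10, 8, 3, 1, 0]))  # -> 3
```

Seeing how much a single number like this throws away (field norms, author contribution, why the work was cited at all) makes the call for more qualitative assessment easier to understand.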


Metascience poses a deep dilemma. On one hand, they are engaged in a seemingly commonsensical and uncontestable mission to make science more transparent and efficient. They attempt to formalize the loosely defined “norms” of science. … On the other hand, the project itself seems to be founded on a set of ideas that find few supporters in the philosophy or social studies of science: that “science” is a coherent entity on which to intervene, that there is a singular method for science, that “efficiency” is a meaningful concept in the area of basic research. The fundamental stance of metascience would seem to be at odds with one of the central findings of science studies, that “science is not one, indivisible, and unified, but that the sciences are many, diverse, and disunified.”

I mostly do interdisciplinary work in what might be considered emerging or pre-paradigmatic fields. I often find myself thinking: damn, it’s tough to approach things like Open Data when there are no standardised formats or metadata, or even repositories, for what you are doing. I feel that the quantitative aspects of metascience are going to fit best in established fields, while the qualitative approaches are likely to work best for emerging fields on the frontiers of knowledge.

Relatedly, Feyerabend’s Against Method is on my reading list; I believe it basically makes the case that there is no universal scientific method (as in the quote above).


Will metascientific reforms lead us out of the replication crisis? Will they deliver us from our epistemic crisis? These questions cannot be answered. But they suggest others, equally necessary and urgent: What is a replication crisis? What is the relationship of transparency to trust? How can we measure efficiency when basic science often does not have a known objective? Metascience has been a frantically productive empirical field devoid of theory. A final question: how much longer can that last?

I think Metascience and Open Science will help with the replication crisis in established fields. Something I’ve become interested in is Scientific Prioritization, which seems to lend itself to quantitative predictions (given finite resources, how can we get the most value from our research?) but quickly becomes difficult and qualitative as soon as you try to pin down how you actually value different outcomes. I think that addressing this point will probably require more reflection and theory.
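
To make that concrete, here is a minimal sketch of the quantitative half of the problem (my own illustrative example; the project names and numbers are hypothetical). If each project had a known expected value and cost, selecting under a fixed budget would be a small knapsack-style optimization; the genuinely hard, qualitative step is producing the value column that the code simply takes as given.

```python
from itertools import combinations

# Hypothetical projects: (name, expected value, cost).
# The costs are easy to state; the "value" figures are exactly the
# qualitative judgement that this quantitative framing hides.
projects = [
    ("replication study", 4.0, 2.0),
    ("new method", 7.0, 5.0),
    ("exploratory fieldwork", 5.0, 4.0),
    ("data infrastructure", 3.0, 1.0),
]
budget = 7.0

# Brute-force knapsack: pick the subset with the highest total value
# that fits within the budget (fine for a handful of projects).
best = max(
    (subset for r in range(len(projects) + 1)
     for subset in combinations(projects, r)
     if sum(p[2] for p in subset) <= budget),
    key=lambda subset: sum(p[1] for p in subset),
)
print([p[0] for p in best])
# -> ['replication study', 'exploratory fieldwork', 'data infrastructure']
```

The optimization itself is trivial; all the contentious judgement hides in those value numbers, which is where I suspect the reflection and theory will be needed.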


[I didn’t get around to reading the section on the Institutionalization of Metascience, so I won’t comment on that. Is IGDORE part of that process? I guess so…]

Note for others: the Metascience Symposium referred to in the paper is the Metascience 2019 Symposium, held at Stanford University.


I made a similar comment on a fairly recent paper on preregistration by a couple of metascientists from psychology: they consistently talked about preregistration as a new thing, proposed by and mainly employed by psychologists, but this is incorrect. They had completely disregarded the whole clinical trials field, where preregistration has been the default since 2005; in clinical trials it just isn’t called ‘preregistration’, it’s called a ‘protocol’. And they are not alone: I’ve met other metascientists at the SIPS conference (again, psychology) who are not at all familiar with the use of protocols/preregistration in clinical trials.

Similarly, I’ve talked to several rather prominent metascientists at SIPS who are completely unaware of the history of open science (its open source origins) and of the difference in focus between early open science proponents (open access) and the current new wave (replicability and open practices beyond open access). And then of course we have the mathematicians, who have struggled for over a decade to make empirical scientists aware of their terrible misuse of statistics. I touched upon these different origins in a blog post last year on the history of open and replicable science.
