Poor-quality studies are polluting the literature — a group will study the businesses that produce them to stem the flow of bogus research.
I’ve been very frustrated seeing bad, sometimes blatantly plagiarised work (some of which was uncomfortably close to home…) manage to get published in predatory journals. The idea that paper mills are churning out bad work that gets published in apparently reputable journals is also very frustrating. So I was happy to see that there is a fairly serious effort by major publishers and funders to investigate and perhaps reduce the problems caused by paper mills.
However, I also recently came across: Science is a strong-link problem - by Adam Mastroianni (I’ve seen similar perspectives elsewhere but I think this is quite well articulated). In short: “Weak-link problems are problems where the overall quality depends on how good the worst stuff is. … Science is a strong-link problem. In the long run, the best stuff is basically all that matters, and the bad stuff doesn’t matter at all.” Which could be reassuring, because then:
CHEATERS SOMETIMES WIN AND THAT’S OKAY
… I’ve talked to a lot of folks since I published The rise and fall of peer review and got a lot of comments, and I’ve realized that when scientists tell me, “We need to prevent bad research from being published!” they often mean, “We need to prevent people from gaining academic status that they don’t deserve!” That is, to them, the problem with bad research isn’t really that it distorts the scientific record. The problem with bad research is that it’s cheating.
I get that. It’s maddening to watch someone get ahead using shady tactics, and it might seem like the solution is to tighten the rules so we catch more of the cheaters. But that’s weak-link thinking. The real solution is to care less about the hierarchy. If you spend your life yelling at bad scientists, you’ll make yourself hoarse. If you spend your life trying to do great science, you might forever change the world for the better, which seems like a better use of time.
My previous experiences seeing bad work getting published in predatory journals had often led me to think about science reform from a weak-link perspective (e.g. using tools like iThenticate for third-party screening to flag published papers with blatant cut-and-paste plagiarism, even if predatory publishers won’t do it themselves). I still think that enabling high-quality research is very important, but I’m drifting away from the idea of structuring scientific reform so that only high-quality research can be done. If there are researchers who want to do bad research, and they manage to find a funder willing to pay for it and a journal willing to publish it, then maybe it’s better to just let them be and focus on supporting the researchers doing good work (although I’d rather the bad research doesn’t get done with an IGDORE affiliation!).
As an aside, the effectiveness of peer review has been criticised a lot lately (not least by the author of the strong link article I quoted from above, in his articles The rise and fall of peer review and The dance of the naked emperors). But the creator of The Papermill Alarm (a tool which flags papers for signs of research fraud) writes:
We don’t actually prescribe how editors use the output from the Papermill Alarm, but one thing we’ve learned: peer-review is effective against papermills. So applying the best standard of peer-review is an effective control.
- I’m yet to see a situation where the rejection rate for papers which receive ‘red’ alerts on a journal isn’t significantly higher than the rejection rate for other papers.
- We know that papermills often try to manipulate the peer-review process (or circumvent it entirely — e.g. by abusing special issue programmes). They wouldn’t do that if peer-review wasn’t a problem for them.
That’s nice, isn’t it? It turns out that the thing we have always been good at — peer-review — is already the best tool for the job. The Papermill Alarm raises the alarm so that editors know when to take care.
At the journal level, when the alert-rate is high, we always find something. There was another thing that was important here. Some publishers had almost no signal at all. I’d talk to them and they would tell me that they didn’t get papermills submitting because they were tough on peer-review and they thought that tough peer-review put the mills off.
So despite the many other criticisms that can be levelled against it, good peer review does at least seem to be effective at deterring submissions from paper mills…