A toast to error detectors

I find this article quite interesting, as it portrays both the tendency toward in-group blaming of researchers dedicated to exposing errors in scientific works (which represents, in my opinion, one of the finest ways to be a researcher and one of the best features of scientific knowledge production), and the general call for “kindness” between researchers (i.e., not criticizing each other’s work). This last issue is very much present in my research field.


It is certainly true that kindness and respect should be more present in our scientific community, especially among those trying to expose questionable research practices and promote open science. But I would not mistake kindness for compliance!


When I’ve corrected errors in the engineering literature, I’ve consistently been ignored. To me, hostility would be preferable to being ignored. At least if they’re hostile, they’re engaging with me.

One particularly disappointing example I summarized on PubPeer:


The issues with the part of the article that I criticize are so basic that I’m disappointed it passed review. You do not need to understand much about this field to see that something is off.

I published a conference paper on this error because the original paper was highly cited. (I’m redoing a part of my paper to improve it before submitting to a journal.) I contacted the first author twice before I published my conference paper and never received a reply. Later, I contacted someone else who recently published something based on the flawed work, and they later published a review article that favorably cited the flawed work with no mention of my criticism.

Perhaps the culture in engineering is different, and ignoring criticism is more common than attacking the critics.

Edit: Random browsing of the internet led me to this link, which suggests ignoring critics is common in psychology too: https://twitter.com/hardsci/status/1072228408489168896


I think the “ignoring” aspect is very present in every field. I would add that even those articles and reports that do reach high visibility may result in “much ado about nothing”, as people carry on as if nothing happened. This is, in my opinion, due to the high-pressure environment, limited time for critical thinking, and general sense of resignation (or inability to act) that characterizes the scientific community, but also our society in general.

I also think, though, that the ignoring aspect does not stem from malevolence or “cowardice” alone. Sometimes an article or communication is ignored because it is quickly lost in the ever-growing mass of scientific articles, blogs, news, and other forms of information. This raises, to me, an important issue that is not really addressed in science in the digital era: How do we filter, group, and monitor all this information? How do we cluster it, reduce it, and synthesize it? How do we make sure that what should be seen is actually seen and seriously “digested”?


This raises, to me, an important issue that is not really addressed in science in the digital era: How do we filter, group, and monitor all this information? How do we cluster it, reduce it, and synthesize it? How do we make sure that what should be seen is actually seen and seriously “digested”?

PubPeer works quite nicely in my view, though roughly 0% of engineering researchers seem aware of it. I entered email addresses for two of the authors of the paper I criticized, so they should have received notice as well.

I don’t know what other people’s personal policies/behaviors are. But if I get an email that I know I won’t have time to answer right away, I’ll send a reply saying that I’ve received their email and will reply when I get the chance to examine it more closely. And if someone found a major error in my work, unless I had deadlines soon, I’d probably drop most of what I was doing to investigate. No response at all after two emails separated by a long time doesn’t make much sense to me.

In terms of being aware of the literature as a whole, I think (at least in the fields I’m familiar with) this is done extremely poorly in general. While reviews and books aren’t everything, it’s important that they be actually up-to-date, as many people learn from them. Most reviews appear to mirror previous reviews. It’s not uncommon for reviewers to add new things that they are familiar with, but there’s generally a lack of depth. This seems to me a major problem holding back the progress of science. I don’t know how to solve it in general, though I try hard to be aware of all the literature on certain problems. I’m just one person, so the scope of what I can do is limited. Fortunately, I don’t think a subfield needs many people trying to be comprehensive to see large benefits. One problem is that people like me rarely seem to be in a position to be invited to write review articles.

If some billionaire wants to accelerate the progress of science, they might do well to fund researchers to specifically do in-depth reviews. I’d jump at such an opportunity.

I recall watching a video where Nick Brown (@sTeamTraen) said that his article criticizing the critical positivity ratio is cited at a lower rate than the paper it debunks. That’s amazing to me given the media coverage his article got. I’m not even a psychologist and I heard about it.

There are existing group methods as well. I emailed Nick Brown before, and he recommended that I get on Twitter, as he hears about problematic studies there. But I’m not aware of anyone in my field on Twitter who posts about problematic studies. Twitter is mostly used for self-promotion in my field. I think psychology is much better organized than engineering in this regard, though it’s unlikely to be optimal.

An online journal club platform might be better suited than Twitter for this.


Definitely what I would do as well (unless I find the concern to not be serious).

I think @antonio.schettino once proposed a journal club here on the forum. I don’t think it was aimed particularly at error detection, but it may be worthwhile to bring the idea up again.


PubPeer is interesting, I hadn’t seen it before - thanks for linking @btrettel. I had a quick look at a few articles on the front page and saw that most of the biology papers listed had been flagged for figure manipulation, but I assume the type of comments varies a lot by field. I have at times been very frustrated after being misled by numerical or mathematical errors in papers, and will consider posting them on PubPeer in the future!

I think one difficulty with posting about errors, and also publishing replications, is that they usually aren’t connected back to the original article by anything more than a link or citation. The notable exception here is eLife, where public comments and annotations can be made directly to papers on their site:

I don’t think comments are included in PDF downloads, but I have seen authors respond to comments on several occasions, and I assume they are more likely to do so in this context than if feedback comes through a third-party site. People also seem to reply to comments on ResearchGate, although the comments are not particularly visible, so I don’t know how much attention they get.

The extreme case for correcting the scientific record seems to be a retraction. I recall hearing about the authors of a widely cited cell biology paper retracting it after discovering a methodological artefact. They felt that retracting the paper was the only way to alert other people using their method to the problem: https://www.nature.com/articles/nj7492-389a

While retractions should be a last resort for correcting an error, it seems they work: people do quickly stop citing the retracted paper, and even other work in the same field (maybe that’s not always beneficial):

I’m not sure if corrections, errata, or corrigenda are similarly effective.

It would be nice if there were a way for original articles or their metadata to link forward to comments in places like PubPeer and ResearchGate, something like the ‘Cited by’ information shown on databases and most publishers’ sites. I expect that original authors might take more responsibility for errors in their work if anybody who found their paper, say on Google Scholar, was also linked to the PubPeer comments!

I think Scite.ai is taking on the related problem of determining the context in which citations are made and linking this data back to the original article on a publisher’s page; maybe something similar could be done to aggregate and overlay comments. Any thoughts, @Josh_Nicholson?


I was also unaware of the existence of PubPeer! It looks very interesting, and they are also beta-testing self-edited journals at Peeriodicals. @btrettel, maybe that would be useful for keeping an updated literature review of a specific subfield.

I personally would like to see more and more commitment to this kind of open and broad peer review. It is, in my opinion, the ultimate way to assess the scientific validity and rigour of a work before it enters the record (i.e. is published), and also after its publication, to update the community on a work’s significance and falsifiability.

However, in my opinion, the problem remains the same. The digital era is providing us with tools to drastically improve scientific research at all levels, but only a few of these initiatives, breakthroughs, and tools have a real impact. Is this because there is an overload of information? Is it because, while the digital world progresses rapidly, scientists are still bound to the traditional academic environment and mentality? Is it because, no matter how the internet connects us, we are scattered and individualistic? I think addressing these questions is a very big deal and could lead to possible solutions.


Well, no love for billionaires from me. :slight_smile:

I came across this article by our late @jon_tennant which touches on a few of the points @btrettel raised about barriers to post-publication peer review on sites such as PubPeer.

The last barrier is most relevant to this discussion, but the whole article is worth reading.

Barrier: No-one reads or uses PPPRs

Further questions arise as to the actual readership of PPPR comments. What if substantial issues are raised, and the authors just ignore them? Are they going to go back and address comments on research that might be years old, and funding has completely run out on? Commitment to perform a PPPR on an article might be difficult if a guaranteed reciprocal commitment to address any issues raised is not given. One problem here, as pointed out by others like Lenny Teytelman before, is that a lack of version control over the vast majority of the research literature makes actually “adapting” papers to include post-publication comments impossible. This is because, despite the functionality that the web provides us, the vast majority of research articles are still published as static “papers”, with single versions that are considered final and uneditable. What we have then is not evidence of a problem with uptake of or demand for PPPR, in this case, but evidence for lack of an incentive to do PPPR as there are very little real consequences of doing so.

Is anyone even going to read PPPR reports, or are they just going to gather dust as footnotes? How many researchers even have heard of PPPR, or know what it is or what it is for? So, the question then becomes are PPPRs really even that useful, academically? Why should anyone spend their time trying to improve a research paper if the authors won’t or can’t actually then improve it? PPPR therefore becomes a communications issue, based around cultural norms and practices within academia.

Potential solutions

Possible ways to resolve this include trying to maximise PPPR reports’ visibility and reusability. At ScienceOpen, we are similar to other journals like F1000 Research in that PPPRs are presented on the same pages as the articles themselves. Reviews are clearly presented with summary statistics, names, graphs, data tables, and DOIs to make them as visible as the research articles themselves.

Another solution is for publishers to start using version control with peer review, and provide updated versions of papers with successive rounds of peer review. This is what we do at ScienceOpen Research, and also at other journals such as F1000 Research – my personal experience is that this is a far superior method of publishing than any traditional model. However, for now, the vast majority of the research literature requires overlay (e.g. PubPeer) or aggregation (e.g. ScienceOpen) features to directly link PPPR to papers so that readers can easily see them, until the immense value of version control is recognised and it becomes more widespread.

N.b., I came across this in Jon’s self-published book (which is a compilation of posts on Open Science he published since starting his PhD). There are quite a few thought-provoking gems like this in there that would otherwise be hard to stumble across.


With respect to the issue of no one reading PPPR comments, I can think of a few reasons to write them even if the uptake is low:

  • Some people (like myself) do actively check these, though it might not be said frequently enough. I have the PubPeer Firefox add-on installed, which informs me when an article I’m looking at has comments on PubPeer. I’ve been pleasantly surprised by which ones do.
  • If the authors don’t respond, that just makes them look bad in my view. This is particularly true if the reviewer is polite and brings up what appears to be a serious issue.
  • If someone is going to look for PPPR comments, there are only a few places they might check, so you have a high probability of someone finding your comment.
  • You might find your own PPPR comments more accessible to you than your notes on specific papers.
  • By posting a comment about a problem, you put it “on the record” that there is a problem. If someone takes a closer look later and notices issues, it’ll be harder for them to say that no one was paying attention. I think many problems are noticed, but because no one publishes their concerns, it seems like no one noticed.

Jon Tennant’s article on PPPR is interesting. His comment that a PPPR need not be comprehensive has made me want to post a few more comments on PubPeer. I’ve started a TODO list that I’ll handle in bulk later, when I have the time.