To me this raises an important issue that is not really addressed in the science of the digital era: How do we filter, group and monitor all this information? How do we cluster it, reduce it and synthesize it? How do we make sure that what should be seen is actually seen and seriously “digested”?
PubPeer works quite nicely in my view, though roughly 0% of engineering researchers seem to be aware of it. I entered email addresses for two of the authors of the paper I criticized, so they should have received notice as well.
I don’t know what other people’s personal policies or habits are. But if I get an email that I know I won’t have time to answer right away, I’ll send a reply saying that I’ve received it and will respond when I get the chance to examine it more closely. And if someone found a major error in my work, unless I had an imminent deadline, I’d probably drop most of what I was doing to investigate. No response at all to two emails sent a long time apart doesn’t make much sense to me.
In terms of being aware of the literature as a whole, I think (at least in the fields I’m familiar with) this is done extremely poorly in general. While reviews and books aren’t everything, it’s important that they are actually up to date, as many people learn from them. Most reviews appear to mirror previous reviews. It’s not uncommon for reviewers to add new material that they are familiar with, but there’s generally a lack of depth. To me, this seems to be a major problem holding back the progress of science. I don’t know how to solve it in general, though I try hard to be aware of all the literature on certain problems. I’m just one person, so the scope of what I can do is limited. Fortunately, I don’t think a subfield needs many people trying to be comprehensive to see large benefits. One problem is that people like me rarely seem to be in a position to be invited to write review articles.
If some billionaire wants to accelerate the progress of science, they might do well to fund researchers specifically to write in-depth reviews. I’d jump at such an opportunity.
I recall watching a video in which Nick Brown (@sTeamTraen) said that his article criticizing the critical positivity ratio is cited less often than the paper it debunks. That’s amazing to me, given the media coverage his article got. I’m not even a psychologist and I heard about it.
There are existing group approaches as well. I emailed Nick Brown once, and he recommended that I get on Twitter, since that’s where he hears about problematic studies. But I’m not aware of anyone in my field on Twitter who posts about problematic studies; in my field, Twitter is mostly used for self-promotion. I think psychology is much better organized than engineering in this regard, though it is unlikely to be optimal.
An online journal club platform might be better suited for this than Twitter.