I only caught the first few sessions on both days of AIMOS, but I thought the quality of the lectures was great and they covered a wide breadth of interesting topics. Here are a few links and notes from the conference that might be of general interest:
Paul Glasziou mentioned a few interesting tools/links in his talk:
- Template for Intervention Description and Replication: http://www.tidierguide.org/
- Systematic Review Accelerator (partially automated systematic review, apparently turns a task of years into a task of weeks!): https://sr-accelerator.com
- The Experimental Design Assistant: https://www.nc3rs.org.uk/experimental-design-assistant-eda
- The COS’s strategy/pyramid for academic culture change towards OS: https://www.cos.io/blog/strategy-for-culture-change
A few other notes from Paul’s talk:
- Apparently it's rare to do systematic reviews of animal studies before moving to human trials (after which systematic reviews are expected).
- Mandating the reporting of clinical trial outcomes in the US saw a 10% increase in reports being submitted, but no increase in publications (still considered a positive outcome for reducing publication bias).
- Overly bureaucratic ethical review can be a large source of research waste - it is estimated that adopting best-practice for efficient ethical review would save Australia $160 million annually with negligible increase in subject risk.
Sandy Onie had a great talk about OS for the developing world (it had a lot of similar themes to the recent UNESCO webinar on OS). The key idea was that a locally tailored approach would usually be needed rather than scaling out the OS model from Western academia. See his recent article for more details: https://www.nature.com/articles/d41586-020-03052-3
Tatsuya Amano discussed non-English publishing and, from a case study on biodiversity conservation, found that: 1) around 36% of the literature was not in English, 2) the non-English literature was growing at a similar rate to the English literature, and 3) different types of literature were published in English vs. non-English journals - so the English literature is not simply a random subset of all literature. I thought the last point was particularly interesting: Tatsuya found that publications in different languages tended to have different biases in statistical results (IIRC, non-English journals were more likely to publish negative results, and positive results with small effect sizes but strong significance) and in study characteristics (again IIRC, non-English journals were more likely to be descriptive than experimental).
Jennifer Byrne's presentation showed that (particularly in her field of cancer genetics) there are both large numbers of errors in the literature (including outright fabrication from paper mills) and limited means of correcting them in the scientific record. She found that it is difficult to get journals to take action on reported errors, and that the outcomes varied a lot when they did choose to act. She is preparing a standardised error reporting template to assist with this in future. She has also developed Seek&Blastn, a tool for automated fact-checking of nucleotide sequences in papers (apparently this is a good way to spot errors and fraud in genetics papers): https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0213266
She also mentioned this checklist to follow when evaluating research integrity: https://www.nature.com/articles/d41586-019-03959-6
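Out of curiosity, here's a minimal sketch of what automated nucleotide checking can involve - this is not Seek&Blastn itself, just my own illustration using Biopython's interface to NCBI BLAST, with a made-up sequence standing in for one copied from a paper:

```python
# Minimal sketch (not Seek&Blastn itself): check a nucleotide sequence reported
# in a paper against NCBI's nucleotide database and list the top matches.
# Assumes Biopython is installed (pip install biopython) and network access.
from Bio.Blast import NCBIWWW, NCBIXML

# Hypothetical sequence copied from a paper's methods section
claimed_sequence = "AGCTGACCTGAGGAGTTAAGGTCA"

# Submit a blastn query against the 'nt' database (this can take a while)
result_handle = NCBIWWW.qblast("blastn", "nt", claimed_sequence)
record = NCBIXML.read(result_handle)

# Print the top hits so a human can judge whether the sequence actually
# targets the gene the paper claims it does
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = hsp.identities / hsp.align_length
    print(f"{alignment.title[:70]}... identity={identity:.0%}")
```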
The lightning talks were a blast; here are a few highlights (see the slides link below for more details):
- @cooper had some connection difficulties, but still managed to present Free Our Knowledge: https://www.youtube.com/watch?v=vzB7Vh_gkLs&feature=emb_logo
- Lee Jones, a PhD student at QUT, is recruiting statisticians and data scientists for an empirical assessment of statistics in health and biomedical publications. Volunteer here: https://github.com/Lee-V-Jones/statistical-quality
- Rob Heirene noted that Questionable Preregistration Practices (QPPs) are now becoming common…
- Rob Ross found that meta-analyses tend to produce results with poor validity compared to replication projects, but that restricting meta-analyses to pre-registered studies seems to improve their validity.
- Max Primbs demonstrated how the choice between suitable data pre-processing strategies for a reaction time assay can substantially change the strength of the statistical conclusions drawn from the same raw data (a toy illustration of this is sketched just below).
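As a toy illustration of the kind of flexibility Max described (my own sketch, not his actual analysis), here's how different "reasonable" outlier cut-offs applied to the same simulated reaction-time data can noticeably change the resulting test statistic and p value:

```python
# Toy sketch (mine, not Max's analysis): the choice of reaction-time outlier
# cut-off applied before testing can change the apparent strength of a result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate two groups of skewed reaction times (in ms) with a small true difference
group_a = rng.lognormal(mean=6.00, sigma=0.35, size=60)
group_b = rng.lognormal(mean=6.08, sigma=0.35, size=60)

# Apply several defensible exclusion rules to the same raw data
for cutoff in (None, 1000, 800, 600):
    a = group_a if cutoff is None else group_a[group_a < cutoff]
    b = group_b if cutoff is None else group_b[group_b < cutoff]
    t, p = stats.ttest_ind(a, b, equal_var=False)
    print(f"cutoff={cutoff}: n={len(a)}+{len(b)}, t={t:.2f}, p={p:.3f}")
```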
Matt Makel presented the results of self-surveys on the attitudes of educational researchers towards QRPs and Open Science. There is little consensus about whether Questionable Research Practices are acceptable (a minority admit to regularly engaging in QRPs and believe they are necessary for research, while another minority believe they are unacceptable…) - thankfully, most researchers agreed that Open Science practices are both acceptable and important.
Rachael Brown's workshop on contemporary philosophy of science was a lot of fun! It mostly focused on how 'non-objective' values are present in many aspects of scientific practice, particularly with regard to the choice of what is studied. Values can be considered constitutive (related to understanding the goals of science) or contextual (related to determining the acceptable practice of science) - the latter is usually more influenced by personal/societal preferences and is considered to be somewhat problematic.
I found the discussion of whether scientists should take moral responsibility for inductive risk (namely, the societal consequences of incorrectly accepting [or rejecting] a theory) quite thought-provoking. The case for this is that scientists are better positioned than the policymakers who use their research to understand not only the evidence for a hypothesis, but also the likelihood of error and (possibly) the consequences that would follow from acting on it. While some argue this is too demanding of scientists, it was a good lead-in to discussing whether researchers should at least consider using a stricter statistical significance threshold (i.e. a smaller alpha for judging p values) than 0.05 for some types of studies (see more in the in/famous Justify your alpha article).
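As a rough back-of-envelope illustration of what that trade-off costs (my own calculation, not from the workshop): tightening the threshold from 0.05 to 0.005 substantially increases the sample size needed to keep the same statistical power.

```python
# Back-of-envelope sketch (my own, not from the workshop): sample size needed
# per group for 80% power at a medium effect size, under two alpha thresholds.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.005):
    n = analysis.solve_power(effect_size=0.5, alpha=alpha, power=0.8)
    print(f"alpha={alpha}: ~{n:.0f} participants per group")
```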
A fun exercise was crowd-sourcing answers to what differentiates good science from bad.
Recordings of the talks may be posted on YouTube and I’ll link here if they are. Until then, slides from some talks and the posters are on OSF: https://osf.io/meetings/AIMOS2020/
I also stumbled onto this quite comprehensive list of Open Science Literature from COS on the AIMOS website: https://osf.io/kgnva/wiki/Open%20Science%20Literature/
Finally, congratulations to @jason.chin who was elected AIMOS president for 2021!