The Society for Open, Reliable, and Transparent Ecology and Evolutionary Biology (SORTEE) is holding its first conference on July 12th-14th. Apparently, it will run continuously to cover all time zones, and both membership in the society and conference participation are currently free.
Proposals for sessions or presentations are open until June 1st. I’ll comment again when registration is open.
As an aside, SORTEE apparently got started during an unconference session at AIMOS 2020. It’s nice to see Open Science building up more momentum in biology and ecology.
The inaugural conference of the Society for Open, Reliable, and Transparent Ecology and Evolutionary Biology (SORTEE) will be held July 12-14*. The conference organizers are seeking content submissions for four types of sessions:
This was a fun conference. For anybody who missed it, most of the talks are available on OSF:
Many of SORTEE’s founders also contribute to this opinion article:
Unreliable research programmes waste funds, time, and even the lives of the organisms we seek to help and understand. Reducing this waste and increasing the value of scientific evidence require changing the actions of both individual researchers and the institutions they depend on for employment and promotion. While ecologists and evolutionary biologists have somewhat improved research transparency over the past decade (e.g. more data sharing), major obstacles remain. In this commentary, we lift our gaze to the horizon to imagine how researchers and institutions can clear the path towards more credible and effective research programmes.
I appreciate that they acknowledge transparency is necessary but not sufficient:
The information afforded by greater transparency only helps us discriminate between studies if we care to look (Fig. 2). Transparency alone does not prevent errors, nor does it guarantee that research helps to build and test strong theories. For example, methods might not measure what the authors claim to be measuring, and authors might not specify their claims precisely enough to be falsifiable. If preregistrations and supplementary materials are not read, data are not examined, analyses are not reproduced, and, crucially, close replications are not conducted or published, then our mistakes will not be identified. Researchers will always make mistakes, but changed incentives could encourage errors to be corrected and dissuade researchers from rushing into hypothesis testing, cutting corners, or fabricating results. Common wisdom within the scientific community is that fraud is so rare as to be ignorable, but we cannot really know, as we do not really check; mechanisms to detect, investigate, and prosecute cases of fraud and research misconduct are under-resourced and not standardised across institutions. The dearth of formalised error detection in ecology and evolutionary biology suggests that we do not live up to the scientific ideal of organised scepticism.