I’d love your feedback, help, corrections, and support with this proposal and discussion I’m calling the ‘Unjournal’ for now. The title and blurb are:
The “Evaluated Project Repo” (aka ‘the Unjournal’):
- a proposal for peer review/rating, archiving, and open science, aimed at…
- avoiding rent-extracting publishers,
- reducing careerist gamesmanship in academia,
- and making research more effective.
- It also discusses whether/why this particularly aligns with global-priorities and effective-altruism research organizations.
I’m looking for feedback, help, and allies in finding a practical, near-term route to:
a credible open-access ‘peer evaluation and rating’ process that is given weight in economics (and social science and beyond) as an alternative to conventional all-or-nothing (‘0/1’) journal publishing…
To have this become the default place for researchers to ‘submit’ their work: at my organization (Rethink Priorities), at other places funded by Open Philanthropy and the Global Priorities Institute, across the effective-altruism and public-interest research space, and at innovative research institutions in general.
I realize there is a lot going on in this space, and (I’ve learned) many of the ideas proposed in the doc are already somewhat in the works, thanks to people like Cooper Smout (Free Our Knowledge), the PREreview initiative, OSD, and others.
I would like to leverage the best existing tools, and start making this work at my org and beyond!
E.g., here’s a route I’m tentatively proposing for Rethink Priorities and friends (Open Philanthropy, FHI, etc.):
- Host the article (or dynamic research project) on OSF or another platform that allows time-stamping and DOIs.
- Link this to PREreview (or similar tools/sites) to solicit feedback and evaluation without requiring exclusive publication rights.
- Also directly solicit feedback from EA-adjacent partners in academia and from other EA research orgs.
- We need to build our own systems (assigning ‘editors’) to do this without bias and with appropriate incentives,
- building standard metrics for interpreting these reviews (possibly incorporating prediction markets),
- and encouraging reviewers to leave their feedback through PREreview or another platform.
Also: committing to publish academic reviews, or to ‘share them in our internal group’, for further evaluation and reassessment/benchmarking of the ‘PRE’-type reviews above. (Perhaps taking the FOK pledge relating to this.)
Academic publishers extract rents and discourage progress, but there is a coordination problem in ‘escaping’ this. Funders like Open Philanthropy and EA-affiliated researchers are not stuck; we can facilitate an exit.
The traditional binary ‘publish or reject’ system wastes resources (duplicated effort and gamesmanship) and adds unnecessary risk. I propose an alternative, the “Evaluated Project Repo”: a system of credible evaluations, ratings, and published reviews, linked to an open research archive/curation. This will also enable more readable, reliable, and replicable research formats, such as dynamic documents, and allow research projects to continue to improve without “paper bloat”. (I also propose some ‘escape bridges’ from the current system.)
Global-priorities and EA research organizations are looking for ‘feedback and quality control’, dissemination, and external credibility. We would gain substantial benefits from supporting and working with the Evaluated Project Repo (or related peer-evaluation systems), rather than (only) submitting our work to traditional journals. We should also place some direct value on open science and open access themselves, and on the strong impact we may have in supporting them.
I’d love your feedback and thoughts on this in/on the unjournal proposal doc (linked above), and/or this presentation I’m giving at the CoS MetaScience conference (5 minute lightning talk) tomorrow evening.
One thing I’m particularly interested in is ‘which tools and which horses to back’:
How can we get a DOI (and any other necessary links) for dynamic documents hosted as HTML (R bookdown and Jupyter notebooks in particular)?
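On the DOI question, one possible route (a minimal sketch under stated assumptions, not a recommendation among the tools above) is Zenodo’s REST deposition API: it pre-reserves a DOI when a deposition is created and finalizes it on publish, and a rendered bookdown/Jupyter HTML export can be uploaded as the deposition file. The token, file path, and metadata values below are placeholders.

```python
import json
import urllib.request

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR_ZENODO_TOKEN"  # placeholder: a personal access token with deposit scope


def build_metadata(title, creators, description):
    """Assemble the metadata payload Zenodo expects for a deposition."""
    return {
        "metadata": {
            "title": title,
            "upload_type": "publication",
            "publication_type": "preprint",
            "description": description,
            "creators": [{"name": name} for name in creators],
        }
    }


def _call(url, method, payload=None, content_type="application/json"):
    """Helper: send an authenticated request and decode the JSON reply."""
    data = None
    if payload is not None:
        data = payload if isinstance(payload, bytes) else json.dumps(payload).encode()
    req = urllib.request.Request(
        f"{url}?access_token={TOKEN}",
        data=data,
        method=method,
        headers={"Content-Type": content_type},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def deposit_html(html_path, meta):
    """Create a deposition, upload the rendered HTML, publish, and return the DOI."""
    dep = _call(ZENODO_API, "POST", {})  # Zenodo pre-reserves a DOI at creation
    with open(html_path, "rb") as f:     # upload via the deposition's file bucket
        _call(f"{dep['links']['bucket']}/{html_path}", "PUT", f.read(),
              content_type="application/octet-stream")
    _call(dep["links"]["self"], "PUT", meta)              # attach the metadata
    return _call(dep["links"]["publish"], "POST")["doi"]  # finalize the DOI
```

Here `deposit_html("report.html", build_metadata(...))` would run the three API steps against a live account. The other common route is Crossref DOI registration via an institutional membership, which some repositories handle on the author’s behalf.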
What would be the best ‘layer on top’ tool/tools/interface for reviewing and feedback that allows a quantitative component, measuring the credibility, interestingness, and clarity of each paper or project in a way that universities, policymakers, and grantmakers could value? Can PREreview be adapted to this in the ways I propose? What do you think of ResearchHub as an alternative for this? (Please also see my ‘Airtable of evaluated tools’ linked in the doc)
How can we get support and funding (or ally with people already doing this) to put together ‘pseudo-editorial processes’: assigning and compensating referees, and aggregating the ratings?
Thank you for your help and advice. Please don’t hesitate to reach out to me directly for a chat.
- David Reinstein