Moving science beyond closed, binary, static journals; a proposed alternative; how the 'Effective Altruist' and nontraditional nonprofit sector can help make this happen


I’d love your feedback, help, corrections, and support with this proposal and discussion I’m calling the ‘Unjournal’ for now. The title and blurb are:

The “Evaluated Project Repo” (aka ‘the Unjournal’):

  • a proposal for peer review/rating, archiving, and open science, aimed at…
    • avoiding rent-extracting publishers,
    • reducing careerist gamesmanship in academia,
    • and making research more effective.
  • Whether/why this particularly aligns with global priorities/effective-altruism research organizations.

Help us find a route to make something happen

I’m looking for feedback, help, and allies in finding a practical, near-term route towards:

  1. a credible open-access ‘peer evaluation and rating’ process that is given weight in Economics (and social science and more) as an alternative to conventional 0/1 publishing…

  2. To have this become the default place for researchers to ‘submit’ their work … at my organization (Rethink Priorities), as well as other places funded by Open Philanthropy, the Global Priorities Initiative, and in the Effective Altruism and public-interest space, and for innovative research institutions in general

I realize there is a lot going on in this space, and many of the ideas proposed in the doc (I’ve learned) are already somewhat in the works, thanks to people like Cooper Smout/Free Our Knowledge, the PREreview initiative, OSD, and others.

I would like to leverage the best existing tools, and start making this work at my org and beyond!

E.g., here’s a route I’m tentatively proposing for Rethink Priorities and friends (Open Philanthropy, FHI, etc.):

Proposal for our work aiming at rigorous academic-level credibility:

  1. Host the article (or dynamic research project) on OSF or another platform that allows time-stamping and DOIs.
  2. Link this to PREreview (or similar) tools/sites that solicit feedback and evaluation without requiring exclusive publication rights.
  3. Also directly solicit feedback from EA-adjacent partners in academia and other EA research orgs:
    • We need to build our own systems (assigning ‘editors’) to do this without bias and with incentives,
    • build standard metrics for interpreting these reviews (possibly incorporating prediction markets), and
    • encourage reviewers to leave their feedback through PREreview or another platform.

Also: committing to publish academic reviews, or ‘share them in our internal group’, for further evaluation and reassessment/benchmarking of the ‘PRE’-type reviews above. (Perhaps taking the FOK pledge relating to this.)

Back to my proposal, my ‘key points’ … which may be old hat for you

  1. Academic publishers extract rents and discourage progress. But there is a coordination problem in ‘escaping’ this. Funders like Open Philanthropy and EA-affiliated researchers are not stuck; we can facilitate an exit.

  2. The traditional binary ‘publish or reject’ system wastes resources (through wasted effort and gamesmanship) and adds unnecessary risk. I propose an alternative, the “Evaluated Project Repo”: a system of credible evaluations, ratings, and published reviews (linked to an open research archive/curation). This will also enable more readable, reliable, and replicable research formats, such as dynamic documents, and allow research projects to continue to improve without “paper bloat”. (I also propose some ‘escape bridges’ from the current system.)

  3. Global priorities and EA research organizations are looking for ‘feedback and quality control’, dissemination, and external credibility. We would gain substantial benefits from supporting, and working with the Evaluated Project Repo (or with related peer-evaluation systems), rather than (only) submitting our work to traditional journals. We should also put some direct value on results of open science and open access, and the strong impact we may have in supporting this.

Wrapping up

I’d love your feedback and thoughts on this, in/on the Unjournal proposal doc (linked above) and/or on this presentation I’m giving at the CoS MetaScience conference (a 5-minute lightning talk) tomorrow evening.

One thing I’m particularly interested in is ‘which tools and which horses to back’:

  1. How can we get a DOI (and any other necessary links) for dynamic documents hosted as HTML (R bookdown and Jupyter notebooks in particular)?

  2. What would be the best ‘layer on top’ tool/tools/interface for reviewing and feedback that allows a quantitative component, measuring the credibility, interestingness, and clarity of each paper or project in a way that universities, policymakers, and grantmakers could value? Can PREreview be adapted to this in the ways I propose? What do you think of ResearchHub as an alternative for this? (Please also see my ‘Airtable of evaluated tools’ linked in the doc)

  3. How can we get support and funding (or ally with existing people doing this) for putting together ‘pseudo-editorial processes’ … assigning and compensating referees and putting together the ratings?

Thank you for your help and advice. Please don’t hesitate to reach out to me directly for a chat.

5 Likes

Welcome to the forum @daaronr :slight_smile: Glad to see you moving forwards with this (n.b. I’ve given David some advice on this previously)

@cooper, @antonio.schettino, @pcmasuzzo, @sivashchenko I think you might know this space better than I do, any suggestions? I know there are a lot of preprint review initiatives but am not sure of any that allow for quantitative rankings.

Not sure about where to get funding, but the editorial management of preprint reviewing reminds me of what James Fraser did recently:

James has also taken this idea one step further: when asked to guest edit a paper for a journal, he instead organized an independent review process, inviting his own reviewers. With the consent of all reviewers, he then posted the entire package of reviews as a comment on bioRxiv.

4 Likes

Yes, what James and those guys did was great! My idea is essentially ‘making that the norm rather than the exception’.

1 Like

Hi David,

Just found this post after replying to your comment on the FOK forum. Love that you’re focusing on quantitative ratings, as I also think ratings will be key to freeing researchers from the trap of commercial journals. I still can’t seem to find your proposal though (the link above under ‘this proposal and discussion’ doesn’t seem to work for me); could you please repost it below so I can read it properly?

I’m flat out at the moment preparing for a new project at the eLife Innovation Sprint next week, called MERITS, which is highly related to this discussion, but thought I’d chime in quickly and return to the conversation when I have more time (probably next week).

FYI there are a number of platforms that collect quantitative ratings, in different forms. E.g., PREreview collects about 10 ratings via their ‘Rapid PREreview’ form using a limited scale (Yes/No/Unsure/NA). I’ve discussed adapting these ratings with Daniela from PREreview (as per your second last question), and she’s open to the idea, pending funding/resources.

This table shows some other rating sources we’re looking to import to the MERITS database. My vision for this project is to become a tertiary layer on top of existing infrastructure, so that we can pull in all the various ratings and use them for additional purposes, e.g., metaresearch and innovation.
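
To give a flavour of what ‘pulling in’ those ratings might involve, here’s a rough sketch of normalising Rapid-PREreview-style categorical answers into numeric scores. To be clear, the Yes = 1 / Unsure = 0.5 / No = 0 mapping and the question labels below are just my illustrative assumptions, not PREreview’s or MERITS’ actual schema:

```python
# Hypothetical normalisation of Rapid-PREreview-style categorical ratings
# into per-question numeric scores, e.g. for import into a ratings database.
# The mapping (yes=1, unsure=0.5, no=0, n/a dropped) is an assumption for
# illustration, not PREreview's or MERITS' actual schema.

from statistics import mean

SCORE_MAP = {"yes": 1.0, "unsure": 0.5, "no": 0.0}  # "n/a" is simply dropped


def normalise_reviews(reviews):
    """Average each question's categorical answers across reviewers.

    `reviews` is a list of dicts mapping question id -> "yes"/"no"/"unsure"/"n/a".
    Returns question id -> mean score in [0, 1], skipping "n/a" answers.
    """
    by_question = {}
    for review in reviews:
        for question, answer in review.items():
            score = SCORE_MAP.get(answer.lower())
            if score is not None:  # ignore "n/a" and anything unrecognised
                by_question.setdefault(question, []).append(score)
    return {q: round(mean(scores), 2) for q, scores in by_question.items()}


# Made-up question ids and answers, purely for illustration.
example = [
    {"coherent": "yes", "data_available": "no", "novel": "unsure"},
    {"coherent": "yes", "data_available": "n/a", "novel": "yes"},
]
print(normalise_reviews(example))
# {'coherent': 1.0, 'data_available': 0.0, 'novel': 0.75}
```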

I agree that we need to be building our own systems, and personally think the key element will be to develop a new journal system that directly incentivises use. To me, this means a journal system that can develop prestige, because without such a mechanism, I struggle to see why researchers would actually use it. We’ve built plenty of PPPR (post-publication peer review) systems over the last 20 years, but they remain underutilised because they don’t offer researchers the prestige they need to survive.

So I’m working toward a model in which we have (1) preprint servers, (2) preprint review platforms (including ratings), and (3) a researcher-owned journal system that categorises articles into different ‘quality’ tiers, using algorithms trained on the ratings (e.g. those collected in MERITS). Provided we can predict quality accurately (which, in the short term, will mean using citations as a proxy for quality), the articles in the top tiers should attract more citations, boosting the prestige of the top journals and tapping into the ‘prestige cycle’ that maintains commercial journals today. See here for slides and a video on these ideas.
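
To make step (3) concrete, here’s a toy sketch of what one of those ‘algorithms’ could look like: fit a simple least-squares model from article-level ratings to a citation proxy, then cut predicted quality into tiers by percentile. Everything here – the rating dimensions, the fake data, and the 10%/30%/60% cut-offs – is invented purely for illustration, not a worked-out design:

```python
# Toy illustration of "ratings in, tier labels out": train a model to
# predict a quality proxy (citations) from article-level ratings, then
# cut the predictions into journal-style tiers by percentile.
# All data, dimensions, and thresholds below are made up.

import numpy as np

rng = np.random.default_rng(0)

# Fake data: rows = articles, columns = rating dimensions
# (e.g. novelty, reliability, transparency).
ratings = rng.uniform(0, 1, size=(200, 3))
citations = 5 * ratings[:, 0] + 3 * ratings[:, 1] + rng.normal(0, 1, 200)

# Plain least-squares model: ratings -> citation proxy.
X = np.column_stack([np.ones(len(ratings)), ratings])
coefs, *_ = np.linalg.lstsq(X, citations, rcond=None)
predicted_quality = X @ coefs

# Assign tiers by percentile of predicted quality: top 10% -> tier 1,
# next 30% -> tier 2, the rest -> tier 3. The cut-offs are arbitrary.
cuts = np.percentile(predicted_quality, [90, 60])
tiers = np.where(predicted_quality >= cuts[0], 1,
                 np.where(predicted_quality >= cuts[1], 2, 3))

for t in (1, 2, 3):
    print(f"tier {t}: {np.sum(tiers == t)} articles")
```

In practice the model could be anything from this kind of simple regression to something much richer and community-governed; the basic shape – ratings in, tier labels out – is the point.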

I’ve also got some funding to support these ideas now – just working out the best way to put it to use. Various ideas here, e.g. paying ‘participants’ to contribute ratings as part of a study, paying pledgees on a FOK campaign to rate articles, hiring a research assistant, etc.

Also looking for allies on this journey, and sounds like we have a lot of aligned interests! Keen to chat more after I’ve seen your proposal. Chat soon!

3 Likes

Hi David, all,

I just posted a new blog post that is the first in a new series detailing my proposal for a ‘disruptive’ scholarly publishing model. I’ve been thinking about this model for years now, as a way to ‘bootstrap’ an alternative rating system under the current system, which cares only for journals and journal impact factors. It ties into a lot of what’s been discussed above, so I felt it was worth sharing here.

Relating my proposal to the above discussion:

I’m looking for feedback, help, and allies in finding a practical, near-term route towards:

  1. a credible open-access ‘peer evaluation and rating’ process that is given weight in Economics (and social science and more) as an alternative to conventional 0/1 publishing…
  2. To have this become the default place for researchers to ‘submit’ their work … at my organization (Rethink Priorities), as well as other places funded by Open Philanthropy, the Global Priorities Initiative, and in the Effective Altruism and public-interest space, and for innovative research institutions in general

My take on (1) is that if we want to build a rich, credible dataset of alternative ratings, we’ll need to leverage the existing incentive structures to get people to actually provide ratings. This means we need to develop a system that can generate prestige at the journal-level (via journal impact factors), so that researchers feel comfortable (a) publishing their articles there, and (b) spending time reviewing/rating articles that are submitted. But in the background, this tiered journal system could be based on a much richer dataset of ratings, covering whatever dimensions of interest we build into the model (e.g., novelty, reliability, transparency, etc.). I think of this model as a Trojan horse, which we can use to smuggle in a whole new range of article-level metrics under the guise of a more traditional journal (see the blog post if this is unclear).

The model I’m proposing could publish any piece of research (subject to some minimum level of rigour) in a relatively short period of time, so I think it would be attractive to the average researcher. And if my assumptions prove correct (that the model can generate prestige at the journal level), then I think (2) will happen naturally – people will begin to see the model as a viable alternative to the traditional journal system and start to publish their full range of research there (especially given the time savings). If this were coupled with a pledge on FOK, I think it could be particularly powerful (e.g., “I pledge to publish all of my work on this system, on the condition that X other researchers do the same”).

One thing I’m particularly interested in is ‘which tools and which horses to back’:

  1. How can we get a DOI (and any other necessary links) for dynamic documents hosted as HTML (R bookdown and Jupyter notebooks in particular)?
  2. What would be the best ‘layer on top’ tool/tools/interface for reviewing and feedback that allows a quantitative component, measuring the credibility, interestingness, and clarity of each paper or project in a way that universities, policymakers, and grantmakers could value? Can PREreview be adapted to this in the ways I propose? What do you think of ResearchHub as an alternative for this? (Please also see my ‘Airtable of evaluated tools’ linked in the doc)
  3. How can we get support and funding (or ally with existing people doing this) for putting together ‘pseudo-editorial processes’ … assigning and compensating referees and putting together the ratings?
  1. Personally, I think we should leverage existing infrastructures and processes wherever possible, to minimise friction and maximise adoption. I completely agree that the future is dynamic documents, but think that the fastest route to get there is to create a researcher-controlled system based on existing infrastructure – and this means preprints. Once we have a critical mass of people on board, we can do whatever we want, but the challenge is in getting the critical mass in the first place. As for your specific question though, I don’t know any way to get a DOI for those outputs, sorry.
  2. Similarly, I think it makes sense to leverage existing review platforms (PREreview) and communities (e.g., PCI) where possible. PREreview is a great example of a secondary layer that sits on top of preprints. The model I’m proposing could act as a tertiary layer on top of PREreview and other review platforms, pulling in these ratings and then using them to categorise articles according to whatever ruleset we decide upon (i.e., the ‘algorithms’). This would require some back and forth to make PREreview ratings align with those in our model, but I think it is entirely doable and would ensure that we had some resilience in the system and enough flexibility (via the different review platforms) to cater for different communities.

  3. This is the million dollar question :slight_smile: I can see a few paths forward re funding:

  • apply for a grant to conduct a meta-research study and pay researchers/editors as ‘participants’ who provide reviews and ratings to the proposed system
  • apply for philanthropic funding
  • apply for social enterprise funding
  • crowd-fund
  • create a cooperative organisation and charge members a fee to join

Aside from that, I’m hoping to leverage FOK to get ratings, e.g., with a campaign asking peer-reviewers to copy over any ratings that journals ask them to provide into our new MERITS database (but in the end I suspect it would be more efficient to win funding and just pay people directly).

Keen for feedback on any and all of this. I’m now working on this (plus FOK) full time, so am hoping to make some good progress in the coming year, but of course keen to partner with allies and like-minded people who think these ideas could work. And of course also very keen to hear from people who think these ideas won’t work, in case it saves me a whole bunch of time :slight_smile:

2 Likes

I’ll need some time to digest this and read your blog in detail. You have the experience and knowledge in this space, and the support to work full time on this…

So I hope to follow your lead (and help make it happen, and help bring EA research orgs and related researchers onboard).

Some quick thoughts and impressions:

This means we need to develop a system that can generate prestige at the journal-level (via journal impact factors), so that researchers feel comfortable (a) publishing their articles there, and (b) spending time reviewing/rating articles that are submitted.

“Having a rated/impact-factored journal”: this is one route, and there are good arguments for it. But I still think there are ways of getting (a) and (b) (submissions and reviews to a feedback and rating platform) if we:

A. Make submission to this non-exclusive (so you are not ‘burning’ an opportunity to submit to a trad journal later).
B. Pay people and otherwise provide incentives for doing reviewing activities.

My concern with your proposal (although I see its strengths) is that (if I understand correctly) it’s for a journal with ‘exclusivity’ … you can’t also submit the work later to a prestigious traditional journal. I think this makes it more difficult to get submissions, as there is an ‘opportunity cost’.

DOI for dyndocs: I’ve spoken to some people and I’m pretty sure that you could simply deposit all the HTML and associated files with Zenodo and get a DOI for that, as a time-stamped piece of work.

And then these files could be cross-linked with the work that is actually hosted on the WWW.
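
For concreteness, here’s roughly what that Zenodo route could look like via their REST API. This is an untested sketch on my part: the file name and metadata are placeholders, and anyone doing this should check Zenodo’s current API docs rather than trusting my exact endpoints:

```python
# Rough sketch of depositing a rendered bookdown/Jupyter site on Zenodo to
# get a DOI. Endpoints follow Zenodo's documented REST API, but treat this
# as an untested sketch and check their current docs before relying on it.

import os
import requests

TOKEN = os.environ["ZENODO_TOKEN"]  # personal access token
BASE = "https://zenodo.org/api"
params = {"access_token": TOKEN}

# 1. Create an empty deposition.
dep = requests.post(f"{BASE}/deposit/depositions", params=params, json={}).json()

# 2. Upload the zipped HTML output (e.g. bookdown's _book/ folder, zipped).
bucket = dep["links"]["bucket"]
with open("project_book.zip", "rb") as fh:  # hypothetical archive name
    requests.put(f"{bucket}/project_book.zip", data=fh, params=params)

# 3. Add minimal metadata (check Zenodo's vocabulary for valid values).
metadata = {
    "metadata": {
        "title": "My dynamic research document",  # placeholder
        "upload_type": "publication",
        "publication_type": "workingpaper",
        "description": "Archived HTML build of a dynamic document.",
        "creators": [{"name": "Surname, Givenname"}],  # placeholder
    }
}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}", params=params, json=metadata)

# 4. Publish; the response includes the minted DOI, which the live
#    HTML version on the web can then link back to.
published = requests.post(
    f"{BASE}/deposit/depositions/{dep['id']}/actions/publish", params=params
).json()
print(published["doi"])
```

Zenodo also supports DOI versioning, which seems a good fit for dynamic documents that keep improving: each new build can get its own version DOI under a single ‘concept’ DOI.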

I agree in a general sense

I love the idea, as it allows interoperability across different reviewing platforms. (Minor concern: will you be the only tertiary layer? If there are multiple, we’ll need a quaternary layer :frowning: )

FWIW I’ve had some chats with PREreview and they seemed open to giving “us” (EA-aligned researchers, researchers interested in moving beyond static journals?) an ‘experimental space’ with more flexibility.

Thanks for your thoughts @daaronr!

Would fully support this approach, but the main obstacle would be finding a sustainable funding source to pay for all those reviews (I’ve heard that funders won’t touch this idea with a 10-foot pole, but hopefully I’m wrong on this!)

Correct, you couldn’t subsequently publish in a rival traditional journal, because they all abide by the Ingelfinger rule. I can imagine some scenarios where authors might see this as an opportunity cost for a single paper (e.g., if it gets classified into a lower tier than they expected), but this ignores the fact that they only had to submit their article once for it to be published. In contrast, to get published in a higher-JIF traditional journal it might take multiple attempts and months of work before finding one that will publish you, which to me seems like a much greater opportunity cost than a couple of impact-factor points here or there.

Good idea, I didn’t think of that.

Thanks :slight_smile: our code would be open source, so others would be welcome to fork it and create a new product if they think they can do better. In this way the system might evolve in a similar way to an open-source project (e.g., with proposed upgrades that get voted on by the community before merging with the code-base), or perhaps similar to a blockchain project if we go that direction, where someone can fork a project but whether that fork survives depends on whether people actually support it or not. It’s possible that quaternary layers might become useful at some point, but if that’s the case it’s a good problem to have because it means our system has been a raging success :slight_smile:

Me too :slight_smile: looking forward to these upgrades, and keen for this to be a collaborative experiment, wherever possible!

Some quick responses to keep the ball rolling, I hope

I see this as being a bigger obstacle than you think, at least for snooty, risk-averse, authority-following economists. I agree that what you propose seems a better alternative, but I’m worried that many people will think ‘this will never be the same as a top-5 journal, even if they list me as top tier’. If we can get people to take the leap we’ll be in a better world/better equilibrium, but that might be too big a leap … thus I like having some options involving non-exclusivity.

So, going forward, I’d love to propose a “yes-and” approach.

  1. Yes, we work to gain support and credibility for a journal-independent review and evaluation platform and system,
  2. and we "develop a system that can generate prestige at the journal level… via journal impact factors".

And we encourage people interested in (2) to consider doing (1) first, and encourage those who did (1) but still need a ‘journal with an impact factor’ to submit their work to (2) … perhaps with some fast-tracking.

1 Like

@daaronr: B. Pay people and otherwise provide incentives for doing reviewing activities.

@cooper: Would fully support this approach, but the main obstacle would be finding a sustainable funding source to pay for all those reviews (I’ve heard that funders won’t touch this idea with a 10-foot pole, but hopefully I’m wrong on this!)

Where have you heard this?

I’m wondering if you might be looking at ‘traditional funders’ …

… when in fact

  • ‘new thinking’ funders like Open Philanthropy
  • open science/metascience-interested funders like Fetzer or Arnold Ventures
  • iconoclast funders like Peter Thiel

might be all over this!

1 Like

Hi David,

Sorry, seems I failed to keep the ball rolling!

Certainly hope you’re right about alternative funders supporting this idea. If so, I agree it could be a shortcut to winning adoption. I’ve previously contacted Open Philanthropy seeking funding for Project Free Our Knowledge, but didn’t receive a reply. Am yet to try again with these newer ideas. Same story with Arnold, though Fetzer is a new one to me.

Re: Peter Thiel, agree he might be interested, as I’ve heard a couple of interviews with him where he’s particularly disparaging of the academic establishment. I agree with his analysis that innovation has slowed greatly over the past 50 years, largely due to universities failing to deliver, and would also suggest it’s no coincidence that this slowdown started to emerge around the same time as the Journal Impact Factor.

3 Likes

The model I propose is scalable, so I think in time it could potentially compete with top-5 journals, even surpass them. I’m proposing that no article get rejected, except for those below a minimum level of rigour (e.g., PLOS One level). So this means that as more submissions start to flow in we could accommodate them by creating more tiers within the hierarchy. E.g., we might start with 2 tiers only (high vs low), but then add a third tier when we start to get enough submissions to justify its creation. This new tier could sit on top of the previous two tiers, piggybacking their reputation. Hopefully the community would see it as an even better version of the previous two journals, and trust the model since all of the data will be available and people can see exactly what’s going on with the review/selection process. If the model proves successful, and more and more people start to use it, I can imagine some point in the future when we’re publishing so many journals that the very top tiers in this system could generate enough prestige to rival even Nature etc. Of course all of this is theoretical at this stage, but I think it should also be possible to model this concept if we can collect some preliminary rating data.
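
As a toy sketch of that ‘open a new tier once the volume justifies it’ rule (the thresholds and the equal-slice split below are arbitrary choices for illustration, not a worked-out design):

```python
# Toy rule for growing the tier hierarchy with submission volume.
# min_per_tier and max_tiers are arbitrary illustrative parameters.

def tier_count(n_accepted, min_per_tier=50, max_tiers=6):
    """Number of tiers the hierarchy can support, given accepted volume."""
    return max(1, min(max_tiers, n_accepted // min_per_tier))


def top_tier_cutoff(scores, min_per_tier=50):
    """Predicted-quality score needed to land in the (possibly newly
    opened) top tier, i.e. the top 1/k slice of ranked articles."""
    k = tier_count(len(scores), min_per_tier)
    ranked = sorted(scores, reverse=True)
    return ranked[len(ranked) // k - 1]


# With 120 accepted articles we stay at 2 tiers; by 180 a third tier has
# opened on top of them, so the bar for "top tier" rises accordingly.
print(tier_count(120), tier_count(180))  # -> 2 3
```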

Of course, working with Peter Thiel might bring costs in terms of being polarizing?