Consultation on Researcher Assessment Framework

Hello everyone! I wanted to share with you all an invitation to review and discuss the new Framework for Researcher Assessment.

This Framework has been developed within the Open and Universal Science (OPUS) Project. It consists of a wide range of indicators for assessing researchers in grant evaluations and career progression, and it goes beyond research publications to include other research, education, leadership, and societal impact activities, as well as Open Science.

The workshop will run for two hours on Tuesday 10 June at 10:00 CEST to review and provide feedback on the draft of the framework. It is kindly hosted by the Marie Curie Alumni Association and will be held on Zoom.

Link to register: Meeting Registration - Zoom

Not required for participation in the event, but for more information see the OPUS project website (https://opusproject.eu) and the Researcher Assessment Framework itself: O’Neill, G. (2024). OPUS Deliverable 3.1: Indicators and Metrics to Test in the Pilots. Zenodo.


@AGM_Fox @daaronr @garymcdowell @safieh.shah


Thank you - I will attend for sure. I appreciate you remembering to tag me.


Oof - 4 am. I registered, but I really doubt that I will be there. However, I will try to post comments to the Zenodo doc. In the meantime, I’d like to share two excellent articles that give very important background and context to the US system of funding and review. Buck & Marcum discuss the struggle to get negative data from funders, and the exquisite piece by Laird documents the role that peer review plays in setting national priorities. The former is in the current issue and the latter is five years old and a bit of a read (20 pages), but both are really worth a close look because together they explain everything you need to know about the critical gatekeeper: government research sponsors. Though both are about the US, by the time you are done reading you will understand clearly why that matters to researchers around the world.


Thanks for the links to the articles! I will have a look.

I am not sure you can leave comments on Zenodo, but you could probably send comments directly to the main author, Gareth O’Neill (his email is easily found on the web).

Hi Mayya - sorry, I wasn’t clear at all. I meant that I would post my comments on that doc in this thread. I have developed the Collaboratory Cultures framework (CCf), which is designed to capture research metrics from within the research community. In contrast to the top-down evaluation models that combine legal, fiscal, and administrative considerations to measure ROI, CCf looks to work performed at the project level to measure progress and/or success in innovation, serendipity, motivation/incentive, JEDI+A dynamics, and other metrics that institutions fail to grasp, or worse, have failed to achieve due to bad metrics set by counterproductive policy. For example, university policies are strongly influenced by legal departments, whose job is to interpret the law and recommend the policies and enforcement mechanisms that will ensure compliance. This is often where a good idea to solve one problem turns into a new problem.
In the US, JEDI+A policies in academia have been shown in court to result in discrimination against successful plaintiffs, and defendants could produce very little counterevidence in response. This has all avalanched quickly into broad sanctions from the White House on all universities. The CCf allows the people who actually do the research to claim success on their own terms, which is great for an increasingly beleaguered community of research professors and also very good for the university. But to my knowledge, there is nothing similar to it, and based on the pilot study that I led to test and refine it, the approach might be viewed as a threat to institutional policy and decision-makers. In my opinion, culture is always the limiting factor on innovation.


Hello! Thanks, that is very interesting. Can you please share where I can read more about CCf? The risk of being viewed as a threat is indeed real. Earlier today we ran the consultation, and I have just sent an email with links to the slides, which contain the most recent version of the framework - it is simpler than the one on Zenodo. From discussions at the meeting I learned that the RAF in particular is designed as an ingredient list/shopping list/Lego bricks for institutions to choose from and adapt based on their priorities and values. This can be criticised, of course - e.g. there is no teamwork, collaboration, or community among the indicators. Another finding is that there are now other frameworks around, e.g. a not fully aligned framework on research competencies. Two hours were definitely not enough!

Thanks for tagging people to bring this to their attention. And thanks to those who could join and contribute! I hope we can continue on another occasion.