New peer-review trial lets grant applicants evaluate each other’s proposals
One of Germany’s biggest research-funding organizations is hoping ‘distributed peer review’ can help to tackle the reviewer shortage.
https://www.nature.com/articles/d41586-024-03106-w
Here's a story about our @RoRInstitute project with the Volkswagen Foundation, by @Dalmeet
@tomstafford @RoRInstitute @Dalmeet
Nice, I think this is a great way forward. The Dutch science funder NWO has implemented this for their smallest grant round (NWO-XS, 50k€, open for anyone with a PhD). I've done it once and found it a nice and noticeably quick process, even though my proposal got rejected.
One interesting consequence is that the applications have to be anonymous (you're not allowed to cite your own papers, for instance), which takes reputation out of the equation.
@tomstafford @RoRInstitute @Dalmeet does it "let" them do this or does it force them to do it?
An interesting and, I think, worthwhile follow-up would be to let applicants decide the format of the application themselves, by voting on whether each section should be included and how long it should be.
@neuralreckoning @RoRInstitute @Dalmeet everyone who applies for funding has to do reviews as a condition of eligibility, so it forces them, if you'd like to see it that way. I agree that having reviewers vote on the components of the application would be interesting.
@tomstafford @RoRInstitute @Dalmeet I think "forces" (or "requires") is a more accurate word in this case than "lets", which implies choice. And I'm not just trying to be annoying or score points here: I think it's important that we describe these things honestly, because there's a bit of a culture of imposing more work on academics who are already way over capacity, who have to find time for this extra work by sacrificing their spare time (with their children, for example), and who are then told it's for their own good.
A useful analysis to do, assuming you have access to the data, would be a linguistic analysis of these compelled reviews versus the voluntarily provided reviews from previous rounds. I suspect you would see a much larger fraction of (valueless) LLM-written reviews, which should pop out in a word-frequency analysis, for example.
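As a minimal sketch of what that comparison could look like (everything here is hypothetical: the example review texts, the tokeniser, and the +1 smoothing are just reasonable defaults, not anything from the actual funder data):

```python
# Sketch: compare word frequencies between two corpora of review texts
# using smoothed log-odds. Large positive scores = overrepresented in
# the compelled reviews relative to the voluntary ones.
from collections import Counter
import math
import re

def word_counts(texts):
    """Lowercase, tokenise on letters/apostrophes, and count."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

def log_odds(counts_a, counts_b, smoothing=1.0):
    """Smoothed log-odds of each word in corpus A versus corpus B."""
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    vocab = set(counts_a) | set(counts_b)
    v = len(vocab)
    return {
        w: math.log((counts_a[w] + smoothing) / (total_a + smoothing * v))
           - math.log((counts_b[w] + smoothing) / (total_b + smoothing * v))
        for w in vocab
    }

# Invented stand-ins; in practice these would be the two sets of review texts.
compelled = ["This proposal delves into a rich tapestry of interconnected themes."]
voluntary = ["The power analysis in section 2 is underspecified."]

scores = log_odds(word_counts(compelled), word_counts(voluntary))
for word, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(f"{word:>15} {score:+.2f}")
```

With enough reviews per corpus, the words at the top of that list would show whether the compelled reviews lean on the boilerplate vocabulary typical of LLM output.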
@neuralreckoning this is a good idea!
@neuralreckoning although I will say that we also asked people for their component criteria scores as well as their overall scores, and the two relate very tightly, which suggests people are doing more than just responding at random...
The word-frequency analysis is a nice way of checking the review text too though, thanks!
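On the scores point, a minimal sketch of that consistency check (the scores below are invented; real data would have one row per submitted review):

```python
# Sketch: do component criteria scores track overall scores?
# Each entry is (component scores, overall score) for one review; all invented.
import statistics

reviews = [
    ((4, 5, 3), 4),
    ((2, 3, 2), 2),
    ((5, 4, 5), 5),
    ((3, 3, 4), 3),
    ((1, 2, 2), 1),
]

mean_components = [statistics.mean(parts) for parts, _ in reviews]
overall = [score for _, score in reviews]

# Pearson correlation (statistics.correlation needs Python 3.10+).
r = statistics.correlation(mean_components, overall)
print(f"Pearson r = {r:.2f}")  # high r = reviewers not just answering at random
```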
@tomstafford @RoRInstitute @Dalmeet
Interesting idea.
There is an obvious incentive to criticise other applications, but at least the referees will be motivated to read the proposals carefully. You would probably still want a committee to sift out the factual criticisms.
Perhaps unfortunately, referee anonymity will be weakened.
A hidden benefit is that it would favour a bottom-up approach that might ignore political "impact" directives.