In the spirit of "betting is a tax on bullshit", this seems like an interesting opportunity to try it out.
Why not find some way to make panelist rankings public (or some function of panelist rankings)? If you serve as a panelist and the grant proposals you rank highly ultimately don't meet success criteria down the road (spawning high-profile publications, generating citations, coming in under-budget, whatever...), that should reflect badly upon you as a panelist, in a public way. And over time, your "score" as a panelist would modulate how much weight your rankings carry in a panel discussion. This is similar to how UEFA assigns each major European football league a coefficient, and the coefficients decide how many direct and playoff slots each league gets in the Champions League tournament.
Imagine if academics who want to serve on a panel had to agree to have such a 'coefficient' published about them. Now, their ranking is something they must openly bet their reputation upon, and over time those who place correct bets will be given more weight when panels select grants.
Obviously this isn't perfect. The coefficient scheme could be manipulated in the same way that network connections allow manipulation of the rankings as they stand now. And if we don't trust the central body measuring and adjusting the coefficients, that's a problem (cough... FIFA).
But still, wouldn't some version of this idea -- making academics pay a public reputation price in order to vote for their preferred funding recipients -- be better than letting people rank and vote without reputation effects? In theory, it should also mean that only those with a real stake in the decision will risk the reputation price to vote, and you could imagine even beginning to open up grant funding decisions to much wider voting bodies. Instead of a small panel, just drop all of the proposals onto a site like arXiv and allow absolutely anyone at all to vote, so long as the weight of their vote corresponds to their public score and their future score is affected by the success/failure of whatever they vote for. No more small, closed-off committees, just open speculative voting like a prediction market.
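To make the mechanic concrete, here's a minimal sketch of how such a coefficient could be updated, assuming a simple multiplicative-weights rule. The names, the [0, 1] outcome scale, and the learning rate are all illustrative choices, not a description of any real funding agency's process:

```python
from dataclasses import dataclass, field

ETA = 0.3  # learning rate: how strongly one outcome moves a coefficient (illustrative)

@dataclass
class Panelist:
    name: str
    weight: float = 1.0                          # the public "coefficient"
    history: list = field(default_factory=list)

def record_outcome(panelist, ranked_highly, outcome):
    """Update a panelist's coefficient once a proposal's fate is known.

    outcome is in [0, 1]: how well the funded work met its success
    criteria (publications, citations, budget... all illustrative).
    """
    # How well did the panelist's call match what actually happened?
    correct = outcome if ranked_highly else 1.0 - outcome
    # Multiplicative update around the 0.5 coin-flip baseline: better than
    # chance grows the coefficient, worse than chance shrinks it.
    panelist.weight *= 1.0 + ETA * (correct - 0.5)
    panelist.history.append((ranked_highly, outcome))

def weighted_rank(votes):
    """Aggregate one proposal's votes: a list of (Panelist, rank in [0, 1])."""
    total = sum(p.weight for p, _ in votes)
    return sum(p.weight * rank for p, rank in votes) / total
```

The design choice here is symmetry: a panelist who does no better than a coin flip keeps a weight near 1.0, while consistently correct bets compound. A real scheme would also have to handle the long lag between a funding decision and any measurable outcome.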
>"success criteria down the road (spawning high-profile publications, generating citations, coming in under-budget, whatever...)"
I wouldn't handwave this aspect. Creating incentives to optimize the wrong thing can be worse than using an arbitrary filter. Your first two sound like they encourage hype, popularity contests, and quantity over quality.
How about something combining reproducibility of analysis, availability/sharing of the data, precision of theoretical prediction, and consistency of quantitative estimates from independent replications?
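For what it's worth, a composite along those lines is easy to state, even if scoring each ingredient well is the hard part. A hypothetical sketch, where the weights and the assumption that each sub-score is normalised to [0, 1] are mine, not an established metric:

```python
# Hypothetical composite success score; criteria and weights are illustrative.
CRITERIA_WEIGHTS = {
    "reproducibility": 0.30,          # can the analysis be re-run end to end?
    "data_sharing": 0.20,             # is the underlying data openly available?
    "prediction_precision": 0.25,     # how sharp were the theoretical predictions?
    "replication_consistency": 0.25,  # do independent replications agree?
}

def composite_success(scores: dict) -> float:
    """Weighted average of sub-scores, each assumed to lie in [0, 1]."""
    return sum(w * scores[name] for name, w in CRITERIA_WEIGHTS.items())

# e.g.:
# composite_success({"reproducibility": 0.9, "data_sharing": 1.0,
#                    "prediction_precision": 0.4,
#                    "replication_consistency": 0.6})  # -> 0.72
```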
It sounds about as hopeless as many things in science. There are too many researchers and no good objective measurements of success. Even the author essentially said "I prefer to fund people according to whether they have recently published in a luxury closed access journal".
I had the same thought as danieltillet while reading it - if they're all equally good, then choose randomly instead of being consistently biased towards an arbitrary criterion that's going to skew the system.
Disclaimer: I have never been on a grant committee myself.
I see two problems with grant committees.
1. Some of us are theorists, others engineers. Some of us want to solve mysteries, others want to solve practical problems.
This means that if the panel is composed primarily of theorists, applications likely to yield highly cited papers (those tackling general problems rather than specific ones) are more likely to get funded. It also creates an incentive to split the work up into as many papers as possible.
Likewise, a panel composed primarily of engineers is likely to undervalue deep research into fundamental problems with no physical deliverables. I have absolutely no idea how to adjust for either of these cases, though.
2. Another problem is relevance/urgency. Some problems/proposals stay relevant indefinitely and can be polished and reapplied every year. Others can quickly become practically irrelevant, or a competing technology becomes the de facto standard -- think terrestrial analogue television, or the magnetic cassette (proposals to solve the practical problems of technologies that were then phased out). Do we rank one kind above the other, or keep them in the same pool? I have no idea.
I have actually been on grant committees before, and the basic problem is that you only have enough funding for 10% to 15% of the proposals. Almost all of the applications are really good and from good people. The reason is that it takes so much work (at least 6 to 8 weeks full time) to write a grant application that only the really good people put them in. On top of this, the universities will “pre-review” all the grant applications internally before they go in, to make sure that all the weak proposals are weeded out and all the obvious flaws removed.
The end result of this process is that almost every grant is really, really strong, making it impossible to rank them consistently. The committee ends up ranking on trivia, or in the worst case on “old-boy” connections. We are asking these committees to do something that we know is impossible.
If we can’t rank grant applications by quality then let’s stop pretending we can. Just decide whether they are strong, put all the strong ones into a lottery, and fund as many as we can.
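Mechanically, the lottery is trivial. A sketch, assuming a binary strong/weak screen has already happened and grants are roughly equal in size (both assumptions mine):

```python
import random

def lottery_fund(strong_proposals, n_fundable, seed=None):
    """Uniformly draw winners from the pool that passed the quality screen."""
    rng = random.Random(seed)  # publishing the seed makes the draw auditable
    n = min(n_fundable, len(strong_proposals))
    return rng.sample(strong_proposals, n)

# e.g. fund 12 of ~100 strong proposals (a 10-15% success rate):
# winners = lottery_fund(strong_proposals, 12, seed=2024)
```

One nice side effect: if the agency publishes the seed and the pool, anyone can verify the draw, which removes the “old-boy” channel entirely at the final step.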
What I find most frustrating about this story is that the panelist recognises that the whole process cannot determine which grants should be funded (basically they are all good), yet he still continues to try. If you can’t tell which of the good grants is better than another, just put all the good grants into a lottery and fund as many as you can -- every other option is worse.
Edit. I should add that my experience of being on committees like this is that excluding yourself when you have a conflict is a highly effective way of getting your grant funded. The people left in the room can hardly decide to not fund you to your face when you come back into the room. This is why the competition to get onto these committees is so great despite them being very laborious.