I’d be curious to see more analysis here. If it is the case that a very large fraction of grants are useless, and very few produce huge wins, then I agree that that would definitely be concerning.
In particular, I’d like to see analysis of a fair[1] sample.
I don’t think we would necessarily need to see a “very large fraction” be “useless” for us to have some serious concerns here. I take Nicolae to raise two distinct concerns about the video-game grant: that it resulted in no deliverable product at all, and that it wouldn’t have been a good use of funds even if it had. I think the quoted analysis addresses the second concern better than the first.
If there are “numerous other cases, many even worse, . . . involving digital creators with barely any content produced during their funding period,” then that points to a potential vetting problem. I can better see the hits-based philanthropy argument for career change, or for research that ultimately didn’t produce any output,[2] but producing ~no digital output that the grantee was paid to create should be a rare occurrence. It’s hard to predict whether any digital content will go viral / have impact, but the content coming into existence at all shouldn’t be a big roll of the dice.
[1] I used “fair” rather than “random” to remain agnostic on weighting by grant size, etc. The idea is a sample that is representative and not cherry-picked (in either direction).
[2] These are the other two grant types from Nicolae’s sentence that I partially quoted in the sentence before this one.