A meta-level issue is ensuring consistency in this “high risk, high reward” approach.
For example, some grantmakers in EA indicate they take this approach and will support relevant projects. Which is great!
But if they then decide against funding a project merely because they think it’s unlikely to succeed, this implies they actually aren’t taking such an approach. Ideally they would provide feedback such as “well you think this project has a 10% chance of succeeding, but we think it’s actually more like 1% because you haven’t considered X, Y, Z, and this now means the expected value is below other projects we have chosen to fund instead”.
If grantmakers fail to do this, they aren't even giving people the chance to fail. Being rejected obviously doesn't have the same consequences as a project failing, but it does mean coping with a rejection that feels unjustified and inconsistent with the purported approach, and it could discourage ambition.
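The expected-value comparison in the feedback example above can be made concrete with a toy calculation. All of the numbers below are illustrative assumptions, not figures from this post:

```python
# Toy sketch of the "10% vs 1%" feedback example.
# Every figure here is an illustrative assumption.

def expected_value(p_success: float, value_if_success: float) -> float:
    """Expected value of a project with a single success/failure outcome."""
    return p_success * value_if_success

# Applicant's own estimate: 10% chance of a large payoff.
applicant_ev = expected_value(0.10, 1_000_000)

# Grantmaker's estimate after considering X, Y, Z: more like 1%.
grantmaker_ev = expected_value(0.01, 1_000_000)

# A competing project the grantmaker funded instead (assumed figures).
alternative_ev = expected_value(0.50, 50_000)

print(applicant_ev)    # 100000.0
print(grantmaker_ev)   # 10000.0
print(alternative_ev)  # 25000.0
```

Under the grantmaker's probability the project's expected value falls below the alternative's, so rejecting it is still consistent with a "high risk, high reward" approach; the point is that the disagreement is over the probability estimate, and communicating that is what makes the rejection legible.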