“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.
On potential risk factors:
I agree that (1) and (2) above are very unlikely for most grants (and are correlated with being unusually successful at getting things implemented).
I feel less in agreement about (3): my sense is that people who want to interact with policymakers will often succeed in taking up the attention of someone in the space, and the people interacting with them form impressions of them based on those interactions, whether or not they make progress on pushing that policy through.
I think (4) indeed isn’t specific to the policy space, but it is a real downside that I’ve observed affecting other EA projects. I don’t expect the main factor to be that there’s only one channel for interacting with policymakers, but rather that other long-term-focused actors will perceive the space to be taken, or will feel some sense of obligation to work with existing projects / awkwardness around not doing so.
Caveating a lot of the above: as I said before, my views on specific grants have been informed heavily by others I’ve consulted, rather than coming purely from some inside view.
Thanks for the response!

> I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside
That’s fair, and I should also be clear that I’m less familiar with LTFF’s grantmaking than some others in the EA universe.
> It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.
Oh, I totally agree that the kind of risk analysis I mentioned is not costless, and for EA Funds in particular it seems like too much to expect. My main point is that in its absence, it’s not necessarily optimal to substitute an extreme version of the precautionary principle.
Overall, I agree that judging policy/institution-focused projects primarily based on upside makes sense.