EA isn’t a superorganism where everyone (and every organization) shares the same empirical assumptions (or normative views). This complicates the analysis, because money held by people whose grantmaking priorities one doesn’t consider important should arguably already be discounted.
edit: In the current funding landscape, my point isn’t very relevant, because it seems relatively easy to get funding if someone’s project looks strong under some plausible set of assumptions. In the future, however, we could imagine scenarios where large sums of money are deployed for ambitious strategies that some people think are good and others think might be too risky. Example: buying compute for an AI company, where some funders think the company has enough of a safety mindset while other funders would prefer to wait and evaluate.
That’s a great point about disagreement over the effectiveness of the interventions themselves (rather than the investments). I’m not entirely sure how to think about that. I think we already have a process for figuring out the allocation: hedging against other people’s “intervention portfolios” in the same way as is suggested for investment portfolios below.
For example, if I think the LTFF is overemphasizing AI risk, I can donate to (or offer to fund) biorisk work directly, instead of donating via the LTFF.
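To make the hedging idea concrete, here is a minimal sketch (with entirely made-up numbers and cause labels, not anyone’s actual funding data) of how a donor who disagrees with the community’s current split might allocate a marginal donation to pull the overall portfolio toward their preferred weights:

```python
# Hypothetical illustration of hedging against the community's "intervention portfolio".
# All figures are invented for the example.

community = {"AI risk": 80.0, "biorisk": 20.0}   # current community funding (in $M), assumed
my_budget = 10.0                                  # my marginal donation (in $M), assumed
my_target = {"AI risk": 0.6, "biorisk": 0.4}      # the split I personally think is right

total_after = sum(community.values()) + my_budget

# Donate so the post-donation totals move as close as possible to my target split.
# I can only add money, not "un-fund" causes, so top-ups are floored at zero.
desired_top_up = {
    cause: max(0.0, share * total_after - community[cause])
    for cause, share in my_target.items()
}

# Scale down proportionally if the desired top-ups exceed my budget.
needed = sum(desired_top_up.values())
scale = min(1.0, my_budget / needed) if needed > 0 else 0.0
allocation = {cause: round(amount * scale, 2) for cause, amount in desired_top_up.items()}

print(allocation)  # -> {'AI risk': 0.0, 'biorisk': 10.0}
```

In this toy example the community is already “overweight” on AI risk relative to my preferred 60/40 split, so the entire marginal donation goes to biorisk, which is just the LTFF example above expressed in arithmetic.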