Personally, I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health. Coming up with an actual number is difficult – I certainly don’t think they’re overwhelmingly better.
Not to pick nits but what would you consider “overwhelmingly better?” 1000x? I’d have said 10x so curious to understand how differently we’re calibrated / the scales we think on.
There isn’t a hard cutoff, but one relevant boundary is when you can ignore the other issue for practical purposes. At a 10-100x difference, other factors like personal fit or finding an unusually good opportunity can offset differences in cause effectiveness. At, say, 10,000x, they can’t.
Sometimes people also suggest that e.g. existential risk reduction is ‘astronomically’ more effective than other causes (e.g. 10^10 times), but I don’t agree with that for a lot of reasons.
Got it—thanks for taking the time to respond!