I agree that thinking x-risk reduction is the top priority likely depends on caring significantly about future people (e.g. thinking the value of future generations is at least 10-100x that of the present).
A key issue I don’t see discussed very much is diminishing returns to x-risk reduction. The first $1bn spent on x-risk reduction is (I’d guess) very cost-effective, but over the next few decades it’s likely that at least tens of billions will be spent on it, maybe hundreds. Additional donations only add value at that margin, where the returns are probably 10-100x lower than for the first billion. So a strict neartermist could easily think AMF is more cost-effective.
That said, I think it’s fair to say it doesn’t depend on something like “strong longtermism”. Common-sense ethics cares about future generations, and I think it suggests we should do far more about x-risk and GCR reduction than we do today.
I wrote about this in an 80k newsletter last autumn:
So, if a 1 percentage point reduction in existential risk can be achieved for under $16.5 trillion, it would pass a government cost-benefit analysis.
If you can reduce existential risk by 1 percentage point for under $165 billion, the cost-benefit ratio would be over 100 — no longtermism or cosmopolitanism needed.
Taking a global perspective, if you can reduce existential risk by 1 percentage point for under $234 billion, you would save lives more cheaply than GiveWell’s top recommended charities — again, regardless of whether you attach any value to future generations or not.
Toby Ord, author of The Precipice, thinks there’s a 16% chance of existential risk before 2100. Could we get that down to 15%, if we invested $234 billion?
I think yes. Less than $300 million is spent each year on today’s top priorities for reducing risk, so $200 billion would be a massive expansion.
The issue is marginal returns, and where the margin will end up. While it might be possible to reduce existential risk by 1 percentage point now for $10 billion — saving lives 20 times more cheaply than GiveWell’s top charities — reducing it by another percentage point might take $100 billion+, which would be under 2x as cost-effective as GiveWell top charities.
I don’t know how much is going to be spent on existential risk reduction over the coming decades, or how quickly returns will diminish. [Edit: But it seems plausible to me it’ll be over $100bn and it’ll be more expensive to reduce x-risk than these estimates.] Overall I think reducing existential risk is a competitor for the top issue even just considering the cost of saving the life of someone in the present generation, though it’s not clear it’s the top issue.
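As a rough sanity check, the thresholds above can be reproduced with some simple arithmetic (a sketch only: the $16.5 trillion, $165 billion, and $234 billion figures are the estimates quoted in the newsletter, and the helper names are mine):

```python
# Rough check of the cost-effectiveness thresholds quoted above.
# All dollar figures are the thread's estimates, not independent numbers.

BENEFIT_PER_PP = 16.5e12       # $ value of a 1pp x-risk reduction (quoted above)
GIVEWELL_PARITY_COST = 234e9   # $ per pp at which lives are saved as cheaply
                               # as GiveWell's top charities (quoted above)

def cost_benefit_ratio(cost_per_pp: float) -> float:
    """Benefit-cost ratio of buying one percentage point of x-risk reduction."""
    return BENEFIT_PER_PP / cost_per_pp

def multiple_of_givewell(cost_per_pp: float) -> float:
    """How many times more cost-effective than GiveWell top charities
    a given cost per percentage point of x-risk reduction would be."""
    return GIVEWELL_PARITY_COST / cost_per_pp

print(cost_benefit_ratio(165e9))     # $165bn per pp -> ratio of 100.0
print(multiple_of_givewell(10e9))    # first pp at $10bn -> ~23x GiveWell
print(multiple_of_givewell(100e9))   # next pp at exactly $100bn -> ~2.3x
```

At exactly $100bn the multiple is about 2.3x; since the figure quoted is “$100 billion+”, the realised multiple could easily fall to around 2x or below.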
My bottom line is that you only need to put moderate weight on longtermism to make reducing existential risk seem like the top priority.
(Note: I made some edits to the above in response to Eli’s comment.)
I agree with most of this, thanks for pointing to the relevant newsletter!
A few specific reactions:
The first $1bn spent on xrisk reduction is very cost-effective
This seems plausible to me but not obvious. In particular, for AI risk the field seems pre-paradigmatic, so there isn’t necessarily “low-hanging fruit” to be plucked; and it’s unclear whether previous efforts besides field-building have even been net positive in total.
That said, I think it’s fair to say it doesn’t depend on something like “strong longtermism”. Common sense ethics cares about future generations, and I think suggests we should do far more about xrisk and GCR reduction than we do today.
Agree with this, though I think “strong longtermism” might make the case easier for those who aren’t sure about the expected length of the long-term future.
Taking a global perspective, if you can reduce existential risk by 1 percentage point for under $234 billion, you would save lives more cheaply than GiveWell’s top recommended charities — again, regardless of whether you attach any value to future generations or not.
reducing it by another percentage point might take $100 billion+, which would be only 20% as cost-effective as GiveWell top charities.
Seems like there’s a typo somewhere; reducing x-risk by a percentage point for $100 billion would be more effective than $234 billion, not 20% as cost-effective?
This seems plausible to me but not obvious. In particular, for AI risk the field seems pre-paradigmatic, so there isn’t necessarily “low-hanging fruit” to be plucked; and it’s unclear whether previous efforts besides field-building have even been net positive in total.
Agree, though my best guess is something like diminishing log returns the whole way down. (Or maybe even a bit of increasing returns within the first $100m / 100 people.)
I think log returns is reasonable; that’s what we generally assumed in the cost-effectiveness analyses that estimated that AGI safety, resilient foods, and interventions for loss of electricity/industry catastrophes would generally be lower cost per life saved in the present generation than GiveWell. But that was only for the first ~$3 billion for AGI safety and the first few hundred million dollars for the other interventions.
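The log-returns assumption can be made concrete in a toy model (a sketch only: the scale constant `k` and the $1bn reference point are illustrative assumptions, not figures from the thread):

```python
import math

# Toy model of "diminishing log returns" to x-risk spending: each
# e-folding of cumulative spending buys a fixed increment of risk
# reduction. k and ref are illustrative, not estimates from the thread.

def risk_reduction(spend: float, k: float = 1.0, ref: float = 1e9) -> float:
    """Percentage points of x-risk reduced by cumulative spending `spend`,
    under log returns starting from a $1bn reference scale."""
    return k * math.log(1 + spend / ref)

def marginal_cost_per_pp(spend: float, k: float = 1.0, ref: float = 1e9) -> float:
    """Marginal cost of one more percentage point at cumulative spend.

    d(reduction)/d(spend) = k / (ref + spend), so the marginal cost of a
    percentage point is the reciprocal: (ref + spend) / k.
    """
    return (ref + spend) / k

# Under log returns, the marginal cost of a percentage point grows
# roughly linearly with cumulative spending:
for s in [1e9, 10e9, 100e9]:
    print(f"${s/1e9:.0f}bn spent -> marginal cost "
          f"${marginal_cost_per_pp(s)/1e9:.0f}bn per pp")
```

This is one way to reconcile the claims above: a very cheap first billion is consistent with returns 10-100x lower once cumulative spending reaches the tens or hundreds of billions.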
Thanks, I made some edits!