I agree with most of this, thanks for pointing to the relevant newsletter!
A few specific reactions:
The first $1bn spent on xrisk reduction is very cost-effective
This seems plausible to me but not obvious. In particular, for AI risk the field seems pre-paradigmatic, such that there isn’t necessarily “low-hanging fruit” to be plucked; and it’s unclear whether previous efforts, besides field-building, have even been net positive in total.
That said, I think it’s fair to say it doesn’t depend on something like “strong longtermism”. Common sense ethics cares about future generations, and I think suggests we should do far more about xrisk and GCR reduction than we do today.
Agree with this, though I think “strong longtermism” might make the case easier for those who aren’t sure about the expected length of the long-term future.
Taking a global perspective, if you can reduce existential risk by 1 percentage point for under $234 billion, you would save lives more cheaply than GiveWell’s top recommended charities — again, regardless of whether you attach any value to future generations or not.
reducing it by another percentage point might take $100 billion+, which would be only 20% as cost-effective as GiveWell top charities.
Seems like there’s a typo somewhere; reducing x-risk by a percentage point for $100 billion would be more cost-effective than doing so for $234 billion, not only 20% as cost-effective?
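To make the arithmetic behind this explicit, here is a minimal sketch. The only inputs are the two figures quoted above ($234 billion per percentage point as the GiveWell-equivalent breakeven, and $100 billion for the next percentage point); everything else is just division, not anything claimed in the original exchange.

```python
# Figures quoted in the thread above; the $234bn is taken as the breakeven
# cost per percentage point of x-risk reduction vs. GiveWell top charities.
breakeven = 234e9    # $ per percentage point (quoted)
quoted_cost = 100e9  # $ for the next percentage point (quoted)

# Cheaper per percentage point means MORE cost-effective, not less:
relative_effectiveness = breakeven / quoted_cost
print(f"{relative_effectiveness:.2f}x as cost-effective")  # 2.34x, not 20%

# For the next point to be only 20% as cost-effective, it would have to cost:
implied_cost = breakeven / 0.2
print(f"${implied_cost / 1e9:.0f}bn")  # $1170bn, not $100bn
```

So a $100 billion price tag would be about 2.34x as cost-effective as the breakeven; “20% as cost-effective” would instead imply a cost of roughly $1.17 trillion per percentage point.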
This seems plausible to me but not obvious. In particular, for AI risk the field seems pre-paradigmatic, such that there isn’t necessarily “low-hanging fruit” to be plucked; and it’s unclear whether previous efforts, besides field-building, have even been net positive in total.
Agree though my best guess is something like diminishing log returns the whole way down. (Or maybe even a bit of increasing returns within the first $100m / 100 people.)
I think log returns is reasonable; that’s what we generally assumed in the cost-effectiveness analyses that estimated that AGI safety, resilient foods, and interventions for loss of electricity/industry catastrophes would generally have a lower cost per life saved in the present generation than GiveWell top charities. But that was only for the first ~$3 billion for AGI safety and the first few hundred million dollars for the other interventions.
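The diminishing-log-returns assumption discussed above can be sketched as follows. The functional form (each doubling of cumulative spending buys a fixed additional amount of risk reduction) is what the comments describe; the `scale` and `coeff` parameters here are made-up illustrative values, not figures from the analyses mentioned.

```python
import math

def risk_reduction(spend, scale=1e8, coeff=0.5):
    """Percentage points of x-risk reduced by `spend` dollars, under a
    logarithmic returns model: each doubling of spending past `scale`
    buys roughly `coeff` additional percentage points.
    (Parameters are illustrative, not estimates.)"""
    return coeff * math.log2(1 + spend / scale)

# Marginal returns diminish: the same absolute increment of spending
# buys less risk reduction the further down the curve you are.
early = risk_reduction(2e8) - risk_reduction(1e8)    # $100m more, early on
late = risk_reduction(1.1e9) - risk_reduction(1e9)   # $100m more, later
print(f"early: +{early:.3f} points, late: +{late:.3f} points")
```

Under log returns, cost per unit of risk reduction rises roughly in proportion to cumulative spending, which is why the estimates above only beat GiveWell up to some finite spending level (~$3 billion for AGI safety in the analyses mentioned).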
Thanks I made some edits!