This seems plausible to me but not obvious. In particular, for AI risk the field seems pre-paradigmatic, such that there isn't necessarily "low-hanging fruit" to be plucked; and it's unclear whether previous efforts, besides field-building, have even been net positive in total.
Agree, though my best guess is something like diminishing log returns the whole way down. (Or maybe even a bit of increasing returns within the first $100m / 100 people.)
I think log returns are reasonable: that's what we generally assumed in the cost-effectiveness analyses which estimated that AGI safety, resilient foods, and interventions for loss-of-electricity/industry catastrophes would generally have a lower cost per life saved in the present generation than GiveWell. But that was only for the first ~$3 billion for AGI safety, and the first few hundred million dollars for the other interventions.
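The "diminishing log returns" assumption can be sketched with a toy model. Under log returns, total value grows with the logarithm of cumulative spending, so the marginal cost of each additional unit of value rises roughly in proportion to how much has already been spent. All numbers below, and the `scale` normalization, are illustrative assumptions, not figures from the analyses mentioned:

```python
import math

def log_returns_value(spend, scale=1e6):
    """Toy model: total value grows with the log of cumulative spend.
    `scale` is an arbitrary normalization chosen for illustration."""
    return math.log(1 + spend / scale)

def marginal_cost_per_unit(spend, delta=1e6, scale=1e6):
    """Approximate dollars needed for one more unit of value at a given
    cumulative spend level: delta / (value gained from delta more)."""
    gain = log_returns_value(spend + delta, scale) - log_returns_value(spend, scale)
    return delta / gain

# Each extra dollar buys less: marginal cost per unit of value
# rises roughly linearly with cumulative spending.
for spend in (1e7, 1e8, 1e9, 3e9):
    print(f"${spend:>16,.0f} spent -> ~${marginal_cost_per_unit(spend):,.0f} per unit of value")
```

This is why the cost-per-life-saved estimates above only hold up to some spending level: under this functional form, marginal cost-effectiveness eventually falls below any fixed benchmark as cumulative spending grows.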
Thanks, I made some edits!