I don’t think it’s generally useful, but at the least it gives us a floor on the value of longtermist interventions. $20B per basis point is far from optimal, but it still blows e.g. GiveWell out of the water, so this number at least tells us that our marginal spending should go to longtermism over GiveWell.
Can you elaborate on your reasoning here?
~$20 billion × 10,000 / ~8 billion ≈ $25,000 per life saved. Very naively, that seems ~5x worse than AMF (iirc) if we only care about present lives. Now of course x-risk reduction efforts save older people, happier(?) people in richer countries, etc., so the case that AMF is better is not clear-cut. But they’re plausibly within the same OOM?
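For concreteness, here’s that arithmetic as a minimal Python sketch. The only inputs are the assumptions already in play: ~$20B per basis point and ~8 billion people alive today.

```python
# Back-of-envelope cost per expected present life saved, assuming the
# ~$20B-per-basis-point price and ~8 billion people alive today.
cost_per_basis_point = 20e9   # dollars to buy 0.01% absolute x-risk reduction
basis_point = 1e-4            # one basis point, as a probability
present_population = 8e9

expected_lives_saved = basis_point * present_population       # ~800,000
cost_per_life = cost_per_basis_point / expected_lives_saved
print(f"~${cost_per_life:,.0f} per expected life saved")      # ~$25,000
```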
(Though this very naive model would probably point to x-risk reduction being slightly better than GiveDirectly, even at $20B/basis point, and even if you only care about present people, assuming you share Cotra’s empirical beliefs and don’t have intrinsic discounting for uncertainty.)
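(The comparison logic in that parenthetical reduces to a single threshold; a sketch below, with GiveDirectly’s cost-effectiveness left as a free parameter since I’m not sourcing a figure here:)

```python
# Under the naive model, x-risk reduction at $20B/basis point beats
# GiveDirectly iff GiveDirectly's cost per life-saved equivalent exceeds
# ~$25,000. The input is deliberately a placeholder, NOT a sourced estimate.
XRISK_COST_PER_LIFE = 25_000  # from the calculation above

def xrisk_beats_givedirectly(gd_cost_per_life_equivalent: float) -> bool:
    return XRISK_COST_PER_LIFE < gd_cost_per_life_equivalent
```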
Now, I will probably still prefer $20B/basis point over AMF, because I like to take ideas seriously and longtermism is one of the ideas I take seriously. But that seems like a values claim, not an empirical one.
Hmm, fair. I guess the kind of people who are unpersuaded by speculative AI stuff might also be unswayed by the scope of the cosmic endowment.
So I amend my takeaway from the OpenPhil number to: people who buy that the long-term future matters a lot (mostly normative) should also buy that longtermism can absorb at least $10B highly effectively (mostly empirical).