The goal of this project was just to reduce uncertainty on whether we could [effectively spend lots more money on longtermism]. Like, say the longtermist bucket had all of the money, could it actually spend that? We felt much more confident that if we gave all the money to the near-termist side, they could spend it on stuff that broadly seemed quite good, and not like a Pascal's mugging. We wanted to see what would happen if all the money had gone to the longtermist side.
Cotra seems to be specifically referring to the cost-effectiveness of “meta R&D to make responses to new pathogens faster.” She/OpenPhil sees this as “conservative,” a “lower bound” on the possible impact of marginal funds, and believes “AI risk is something that we think has a currently higher cost effectiveness.” I think they studied this intervention just because it felt robust and could absorb a lot of money.
So I expect Cotra is substantially more optimistic than $20B per basis point. That number is potentially useful as a super-robust minimum-marginal-value-of-longtermist-funds to compare to short-term interventions, and that’s apparently what OpenPhil wanted it for.
This is correct.
Thanks! Do you or others have any insight on why a "lower bound" is useful for a "last dollar" estimation? Naively, a tight upper bound would be much more useful (then we'd definitely be willing to spend $$s on any intervention that's more cost-effective than that).
I don't think it's generally useful, but at the least it gives us a floor for the value of longtermist interventions. $20B per basis point is far from optimal but still blows, e.g., GiveWell out of the water, so this number at least tells us that our marginal spending should go to longtermism over GiveWell.
Can you elaborate on your reasoning here?
~$20 billion × 10,000 / ~8 billion people ≈ $25,000 per life saved. Done very naively, this seems ~5x worse than AMF (IIRC) if we only care about present lives. Now of course x-risk reduction efforts also save older people, happier(?) people in richer countries, etc., so the case is not clear-cut that AMF is better. But it's like, maybe the same OOM?
(Though this very naive model would probably point to x-risk reduction being slightly better than GiveDirectly, even at $20B per basis point, even if you only care about present people, assuming you share Cotra's empirical beliefs and don't intrinsically discount for uncertainty.)
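The back-of-the-envelope above can be sketched as a few lines of arithmetic. Note that the AMF figure below is an assumed ballpark for illustration, not a sourced GiveWell estimate:

```python
# Naive cost-per-present-life arithmetic for the "$20B per basis point" figure.
cost_per_basis_point = 20e9   # dollars per 0.01 percentage point of x-risk reduced
world_population = 8e9        # present people
basis_point = 1e-4            # one basis point = 0.01% as a fraction

# Expected present lives saved by one basis point of x-risk reduction
lives_saved = world_population * basis_point          # 800,000 lives
cost_per_life = cost_per_basis_point / lives_saved
print(f"${cost_per_life:,.0f} per present life saved")  # $25,000

# Assumed ballpark cost per life saved for AMF (hypothetical, for comparison only)
amf_cost_per_life = 5_000
print(f"~{cost_per_life / amf_cost_per_life:.0f}x more expensive than AMF")
```

This is the "only present lives, no future generations" framing; any weight on the long-term future pushes the comparison further toward x-risk reduction.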
Now, I will probably prefer $20B per basis point over AMF, because I like to take ideas seriously and longtermism is one of the ideas I take seriously. But this seems like a values claim, not an empirical one.
Hmm, fair. I guess the kind of people who are unpersuaded by speculative AI stuff might also be unswayed by the scope of the cosmic endowment.
So I amend my takeaway from the OpenPhil number to: people who buy that the long-term future matters a lot (mostly normative) should also buy that longtermism can absorb at least $10B highly effectively (mostly empirical).