In Ajeya Cotra's interview with 80,000 Hours, she says:

This estimate is roughly $200 trillion per world saved, in expectation. So, it's actually like billions of dollars for some small fraction of the world saved, and dividing that out gets you to $200 trillion per world saved.
This suggests a funding bar of roughly $200T / 10,000 ≈ $20B for every 0.01% (one basis point) of existential risk.
These numbers seem very far from my own estimates for what marginal $s can do or the (AFAICT) apparent revealed preferences of Open Phil's philanthropic spending, so I find myself very confused about the discrepancy.
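A minimal sketch of the arithmetic behind that bar, using only the $200T-per-world figure from the quote above (the variable names are mine):

```python
# Back-of-the-envelope: willingness to pay per basis point of existential risk,
# given a ~$200T expected value per world saved (from the Cotra quote above).

value_per_world_saved = 200e12   # $200 trillion, in expectation
basis_point = 1e-4               # 0.01% of existential risk

funding_bar = value_per_world_saved * basis_point
print(f"Implied funding bar: ${funding_bar:,.0f} per basis point")
# -> Implied funding bar: $20,000,000,000 per basis point (~$20B per 0.01%)
```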
Cotra seems to be specifically referring to the cost-effectiveness of “meta R&D to make responses to new pathogens faster.” She/OpenPhil sees this as “conservative,” a “lower bound” on the possible impact of marginal funds, and believes “AI risk is something that we think has a currently higher cost effectiveness.” I think they studied this intervention just because it felt robust and could absorb a lot of money.
The goal of this project was just to reduce uncertainty on whether we could [effectively spend lots more money on longtermism]. Like, say the longtermist bucket had all of the money, could it actually spend that? We felt much more confident that if we gave all the money to the near-termist side, they could spend it on stuff that broadly seemed quite good, and not like a Pascal’s mugging. We wanted to see what would happen if all the money had gone to the longtermist side.
So I expect Cotra is substantially more optimistic than $20B per basis point. That number is potentially useful as a super-robust minimum-marginal-value-of-longtermist-funds to compare to short-term interventions, and that's apparently what OpenPhil wanted it for.

This is correct.
Thanks! Do you or others have any insight into why having a "lower bound" is useful for a "last dollar" estimation? Naively, having a tight upper bound is much more useful (so we're definitely willing to spend $s on any intervention that's more cost-effective than that).
I don't think it's generally useful, but at the least it gives us a floor for the value of longtermist interventions. $20B per basis point is far from optimal but still blows e.g. GiveWell out of the water, so this number at least tells us that our marginal spending should go to longtermism over GiveWell.

Can you elaborate on your reasoning here?
~$20 billion × 10,000 / ~8 billion ≈ $25,000 per life saved. This seems ~5x worse than AMF (iirc) if we only care about present lives, done very naively. Now of course xrisk reduction efforts save older people, happier(?) people in richer countries, etc., so the case is not clearcut that AMF is better. But it's like, maybe similar OOM?
(Though this very naive model would probably point to xrisk reduction being slightly better than GiveDirectly, even at 20B/basis point, even if you only care about present people, assuming you share Cotra’s empirical beliefs and you don’t have intrinsic discounting for uncertainty)
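To spell out the naive arithmetic above, here's a small sketch; the ~$5,000-per-life figure used for AMF is a rough placeholder of mine, not a GiveWell estimate:

```python
# Naive cost per present life saved, if $20B buys one basis point (0.01%) of
# existential risk reduction and a catastrophe would kill everyone alive today.

cost_per_basis_point = 20e9           # $20B per 0.01% of x-risk averted
present_population = 8e9              # ~8 billion people alive today
lives_saved_per_bp = present_population * 1e-4   # expected present lives saved per bp

cost_per_life = cost_per_basis_point / lives_saved_per_bp
print(f"Cost per present life saved: ${cost_per_life:,.0f}")   # -> $25,000

amf_cost_per_life = 5_000             # placeholder assumption, not a GiveWell figure
print(f"Roughly {cost_per_life / amf_cost_per_life:.0f}x worse than AMF on this naive view")
```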
Now I will probably prefer 20B/basis point over AMF, because I like to take ideas seriously and longtermism is one of the ideas I take seriously. But this seems like a values claim, not an empirical one.
Hmm, fair. I guess the kind of people who are unpersuaded by speculative AI stuff might also be unswayed by the scope of the cosmic endowment.
So I amend my takeaway from the OpenPhil number to: people who buy that the long-term future matters a lot (mostly normative) should also buy that longtermism can absorb at least $10B highly effectively (mostly empirical).
These numbers seem very far from my own estimates for what marginal $s can do or the (AFAICT) apparent revealed preferences of Open Phil's philanthropic spending, so I find myself very confused about the discrepancy.
I don't understand. Total willingness to pay and beliefs about the marginal impact of $s are different things.
Also, when buying things, we often spend only a small fraction of our total willingness to pay (e.g. imagine having to pay even 10% of your maximum willingness to pay for transportation or healthcare every time).
We are accustomed to paying a fraction of our willingness to pay, and that's usually how things work out. For preventive measures like x-risk reduction, we might also expect this fraction to be low, because such work benefits from planning and optimization.
Your own comment here, and Ben Todd's messaging, say that EA spending is limited by top talent who can build and deploy new large-scale institutions in x-risk.
I feel like I’m missing something or talking past you?
Total willingness to pay and beliefs about the marginal impact of $s are different things.
Maybe I'm missing something, but I'm assuming a smooth-ish curve between things people are just barely willing to fund and things they're just barely unwilling to fund.

My impression is that longtermist things just below that threshold have substantially higher naive cost-effectiveness than $20B per 0.01% of x-risk.
It's easier for me to see this for longtermist interventions other than AI risk, since AI risk is very confusing. A possible explanation for the discrepancy is that maybe grantmakers are much less optimistic (like by 2 orders of magnitude on average) about non-AI x-risk reduction measures than I am. For example, one answer that I sometimes hear (~never from grantmakers) is that people don't really think of x-risks outside of AI as a thing. If this is the true rejection, I'd a) consider it something of a communications failure in longtermism and b) note that we are then allocating human capital pretty inefficiently.
Also, when buying things, we often spend only a small fraction of our total willingness to pay (e.g. imagine having to pay even 10% of your maximum willingness to pay for transportation or healthcare every time).
You're the economist here, but my understanding is that the standard argument is that people spend $s on a good until its marginal cost equals its marginal utility. So if we value the world at $200T of EA dollars, we absolutely should spend $s until it costs $20B in $s (or human-capital equivalents) per basis point of x-risk averted.
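A toy illustration of that equalization logic, with an entirely made-up list of marginal projects (none of these numbers come from the thread except the $20B-per-basis-point bar):

```python
# Toy model of "spend until the marginal cost per basis point equals the value of a
# basis point": fund hypothetical projects in order of cost-effectiveness and stop
# once the next one costs more than ~$20B per basis point of x-risk averted.

willingness_to_pay_per_bp = 20e9

# (cost in $, basis points of x-risk reduced) -- purely hypothetical projects
projects = [
    (50e6, 1.0),    # $50M per basis point
    (200e6, 0.5),   # $400M per basis point
    (1e9, 0.5),     # $2B per basis point
    (30e9, 1.0),    # $30B per basis point -- past the bar
]

total_spend = 0.0
for cost, bps in sorted(projects, key=lambda p: p[0] / p[1]):
    if cost / bps > willingness_to_pay_per_bp:
        break       # marginal cost per basis point now exceeds the bar, so stop
    total_spend += cost

print(f"Total spend implied by the bar: ${total_spend:,.0f}")  # -> $1,250,000,000
```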
Maybe I'm missing something, but I'm assuming a smooth-ish curve between things people are just barely willing to fund and things they're just barely unwilling to fund.

My impression is that longtermist things just below that threshold have substantially higher naive cost-effectiveness than $20B per 0.01% of x-risk.
Ok, I am confused and I don’t immediately know where the above reply fits in.
Resetting things and zooming out to the start:
Your question seems to be talking about two different things:
1. Willingness to pay to save the world (e.g. 0.01% of $200T).
2. The actual spending and near-term plans of 2021 EA — its willingness to spend on x-risk-related programs, e.g. the "revealed preferences of Open Phil's philanthropic spending" and other grantmakers you mentioned.
Clearly these are different things because EA is limited by money. Importantly, we believe EA might have $50B (and as little as $8B before Ben Todd’s 2021 update). So that’s not $200T and not overly large next to the world. For example, I think annual bank overdraft fees in the US alone are like ~$10B or something.
Would the following help to parse your discussion?
1. If you want to talk about spending priorities right now with $50B (or maybe much more, as you speculate, but still less than $200T), that makes sense.
2. If you want to talk about what we would spend if Linch or Holden were the central planner of the world, allocating money to x-risk reduction, that makes sense too.
I was referring to #1. I don’t think fantasizing about being a central planner of the world makes much sense. I also thought #1 was what Open Phil was referring to when they talk about the “last dollar project”, though it’s certainly possible that I misunderstood (for starters I only skim podcasts and never read them in detail).
Ok, this makes perfect sense. This is also my understanding of "last dollar".
My very quick response, which may be misinformed, is that Open Phil is solving some constrained spending problem with between $4B and $50B of funds (the lower number being half of EA funds before Ben Todd's update, the higher number being an estimate of current funds).
Basically, in many models, the best path is going to be some fraction, say 3-5% of the total endowment each year (and there are reasons why it might be lower than 3%).
There’s no reason why this fraction or $ amount rises with the total value of the earth, e.g. even if we add the entire galaxy, we would spend the same amount.
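A minimal sketch of that point, assuming a constant-fraction spending rule and the rough $50B endowment figure from earlier in the thread (the 4% rate is just a point inside the 3-5% range):

```python
# Under a constant-fraction spending rule, the annual budget scales with the size of
# the endowment, not with the value of the thing being protected.

endowment = 50e9        # rough current EA-aligned funds, from the discussion above
spend_rate = 0.04       # a point inside the 3-5%/year range mentioned above

annual_spend = endowment * spend_rate
print(f"Annual spend: ${annual_spend:,.0f}")   # -> $2,000,000,000

# Raising the value of the world (or adding the galaxy) changes total willingness
# to pay, but under this rule it does not change annual_spend at all.
```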
Is this getting at your top level comment "I find myself very confused about the discrepancy"?

I may have missed something else.