Total willingness to pay, and beliefs about the marginal impact of $s, are different things.
Maybe I’m missing something, but I’m assuming a smooth-ish curve between things people are just barely willing to fund and things they are just barely unwilling to fund.
My impression is that longtermist things just below that threshold have substantially higher naive cost-effectiveness than $20B per 0.01% of x-risk averted.
It’s easier for me to see this for longtermist interventions other than AI risk, since AI risk is very confusing. A possible explanation for the discrepancy is that grantmakers are much less optimistic (by something like two orders of magnitude on average) about non-AI x-risk reduction measures than I am. For example, one answer that I sometimes hear (~never from grantmakers) is that people don’t really think of x-risks outside of AI as a thing. If this is the true rejection, I’d a) consider it something of a communications failure in longtermism and b) note that we are then allocating human capital pretty inefficiently.
Also, when buying things, we often spend only a small fraction of our total willingness to pay (e.g. imagine paying even 10% of the maximum value you place on transportation or healthcare each time).
You’re the economist here, but my understanding is that the standard argument here is that people keep spending on a good until its marginal cost equals its marginal utility. So if we value the world at 200T EA dollars, we absolutely should keep spending until it costs $20B in $s (or human-capital equivalents) per basis point of x-risk averted.
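For concreteness, the arithmetic behind that $20B-per-basis-point figure, using only the numbers already in this thread:

$$0.01\% \times \$200\mathrm{T} = 10^{-4} \times \$2 \times 10^{14} = \$2 \times 10^{10} = \$20\mathrm{B}.$$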
Ok, I am confused and I don’t immediately know where the above reply fits in.
Resetting things and zooming out to the start:
Your question seems to be talking about two different things:
1. Willingness to pay to save the world (“E.g. 0.01% of 200T”)
2. The actual spending and near-term plans of 2021 EA for x-risk-related programs, e.g. the “revealed preferences of Open Phil’s philanthropic spending” and of the other grantmakers you mentioned
Clearly these are different things, because EA is limited by money. Importantly, we believe EA might have around $50B (and as little as $8B before Ben Todd’s 2021 update). So that’s not $200T, and it’s not overly large next to the world; for example, I think annual bank overdraft fees in the US alone are something like ~$10B.
Would the following help to parse your discussion?
1. If you want to talk about spending priorities right now with $50B (or maybe much more as you speculate, but still less than 200T), that makes sense.
2. If you want to talk about what we would spend if Linch or Holden were the central planner of the world and allocating money to x-risk reduction, that makes sense too.
I was referring to #1. I don’t think fantasizing about being a central planner of the world makes much sense. I also thought #1 was what Open Phil was referring to when they talk about the “last dollar project”, though it’s certainly possible that I misunderstood (for starters I only skim podcasts and never read them in detail).
Ok, this makes perfect sense. This is also my understanding of “last dollar”.
My very quick response, which may be misinformed, is that Open Phil is solving a constrained spending problem with between $4B and $50B of funds (the lower number being half of EA funds before Ben Todd’s update, the higher number being an estimate of current funds).
Basically, in many models, the best path is going to be spending some fraction, say 3-5%, of the total endowment each year (and there are reasons why it might be lower than 3%).
There’s no reason why this fraction or $ amount should rise with the total value of the earth; e.g. even if we added the entire galaxy, we would spend the same amount each year.
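As a toy sketch of that point (my own illustration with a made-up 4% payout rate, not any grantmaker’s actual model): annual spending is a function of the endowment, and the value of the world never enters the calculation.

```python
# Toy sketch: with a fixed payout fraction, annual spending scales with the
# endowment, not with how much we value the world. The 4% rate is made up.

def annual_spend(endowment_usd: float, payout_fraction: float = 0.04) -> float:
    """Spend a fixed fraction of the endowment each year."""
    return payout_fraction * endowment_usd

for endowment in (4e9, 50e9):            # the $4B and $50B scenarios above
    for world_value in (200e12, 1e30):   # the earth vs. "the entire galaxy"
        # world_value is deliberately unused -- that's the point.
        print(f"endowment=${endowment:,.0f}, world value=${world_value:.0e}"
              f" -> annual spend=${annual_spend(endowment):,.0f}")
```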
Is this getting at your top level comment “I find myself very confused about the discrepancy”?
I may have missed something else.