I’m having trouble figuring out how to respond to this. I understand that it’s kind of an academic exercise to see how cause prioritization might work out if you got very very rough numbers and took utilitarianism very seriously without allowing any emotional considerations to creep in. But I feel like that potentially makes it irrelevant to any possible question.
If we’re talking about how normal people should prioritize...well, the only near-term cause close to x-risk here is animal welfare. If you tell a normal person “You can either work to prevent you and everyone you love from dying, or work to give chickens bigger cages, which do you prefer?”, their response is not going to depend on QALYs.
If we’re talking about how the EA movement should prioritize, the EA movement currently spends more on global health than on animal welfare and AI risk combined. It clearly isn’t even following near-termist ideas to their logical conclusion, let alone long-termist ones.
If we’re talking about how a hypothetical perfect philosopher would prioritize, I think there would be many other things they worry about before they get to long-termism. For example, does your estimate for the badness of AI risk include that it would end all animal suffering forever? And all animal pleasure? Doesn’t that maybe flip the sign, or multiply its badness an order of magnitude? You very reasonably didn’t include that because it’s an annoying question that’s pretty far from our normal moral intuitions, but I think there are a dozen annoying questions like that, and that long-termism could be thought of as just one of that set, no more fundamental or crux-y than the others for most people.
I’m not even sure how to think about what these numbers imply. Should the movement put 100% of money and energy into AI risk, the cause ranked most efficient here? Or do that up until the point where the low-hanging fruit have been picked and something else is most effective? Are we sure we’re not already at that point, given how much trouble LTF charities report finding new things to fund? Does long-termism change this, because astronomical waste is so vast that we should be desperate for even the highest fruit? Is this just Pascal’s Wager? These all seem like questions we have to have opinions on before concluding that long-termism and near-termism have different implications.
I find that instead of having good answers to any of these questions, my long-termism (such as it is) hinges on an idea like “I think the human race going extinct would be extra bad, even compared to many billions of deaths”. If you want to go beyond this kind of intuitive reasoning into real long-termism, I feel like you need extra work to answer the questions above that in general isn’t being done.
But I feel like that potentially makes it irrelevant to any possible question.
I see what you mean, and I think I didn’t do a good job of specifying this in the post; my impression is that one question your post and the other posts I’m responding to are trying to answer is “How should we pitch x-risks to people who we want to {contribute to them via work, donations, policy, etc.}?” So my post was (primarily) attempting to contribute to answering that question.
In your post, my understanding of part of your argument was: thoughtful short-termism usually leads to the same conclusion as longtermism, so when pitching x-risks we can just focus on the bad short-term effects without getting into debates about whether future people matter and how much, etc. My argument is that it’s very unclear whether this claim is true[1], so making this pitch feels intellectually dishonest to some extent. It feels important that the people we want doing direct work on x-risks are working on it for coherent reasons, so intellectual honesty feels especially important when pitching there; I’m less sure about donors, and even less sure about policymakers, but in general trying to be as intellectually honest as possible while maintaining similar first-order effectiveness feels good to me.
It feels less intellectually dishonest if we’re clear that a substantial portion of the reason we care about x-risks so much is that extinction is extra bad, which you mentioned here but which wasn’t in the original post:
I find that instead of having good answers to any of these questions, my long-termism (such as it is) hinges on an idea like “I think the human race going extinct would be extra bad, even compared to many billions of deaths”.
A few reactions to the other parts of your comment:
If we’re talking about how normal people should prioritize...well, the only near-term cause close to x-risk here is animal welfare. If you tell a normal person “You can either work to prevent you and everyone you love from dying, or work to give chickens bigger cages, which do you prefer?”, their response is not going to depend on QALYs.
I agree, but it feels like the target audience matters here; in particular, as I mentioned above, I think the type of person I’d want to successfully pitch on doing direct x-risk work should care about the philosophical arguments to a substantial extent.
If we’re talking about how the EA movement should prioritize, the EA movement currently spends more on global health than on animal welfare and AI risk combined. It clearly isn’t even following near-termist ideas to their logical conclusion, let alone long-termist ones.
Agreed; I’m not arguing to change the behavior/prioritization of the leaders/big funders of the EA movement (who I think are fairly bought into longtermism, with some worldview diversification, but are constrained by the supply of good funding opportunities).
If we’re talking about how a hypothetical perfect philosopher would prioritize, I think there would be many other things they worry about before they get to long-termism. For example, does your estimate for the badness of AI risk include that it would end all animal suffering forever? And all animal pleasure? Doesn’t that maybe flip the sign, or multiply its badness an order of magnitude? You very reasonably didn’t include that because it’s an annoying question that’s pretty far from our normal moral intuitions, but I think there are a dozen annoying questions like that, and that long-termism could be thought of as just one of that set, no more fundamental or crux-y than the others for most people.
I agree with much of this except the argument against emphasizing longtermism; I think there are lots of annoying questions, but longtermism is a particularly important one given the large expected value of the future. (Also, in the first sentence you say “a hypothetical perfect philosopher”, but in the last sentence you say “most people”?)
If there are lots of annoying questions that could flip the conclusion when looking only at the short term, this feels like an argument for mentioning something like longtermism, since it could more robustly overwhelm the other considerations.
I’m not even sure how to think about what these numbers imply. Should the movement put 100% of money and energy into AI risk, the cause ranked most efficient here? Or do that up until the point where the low-hanging fruit have been picked and something else is most effective? Are we sure we’re not already at that point, given how much trouble LTF charities report finding new things to fund? Does long-termism change this, because astronomical waste is so vast that we should be desperate for even the highest fruit? Is this just Pascal’s Wager? These all seem like questions we have to have opinions on before concluding that long-termism and near-termism have different implications.
These rough numbers should definitely not be taken so seriously as to imply that we should put all of our resources into AI risk! Plus, diminishing marginal returns / funding opportunities are a real consideration. I think Ben’s comment does a good job of describing some of the practical funding considerations here. I do think we should be very surprised if taking longtermism into account doesn’t change the funding bar at all; it’s a huge consideration, and yes, I think it should make us willing to pick substantially “higher fruit”.
[1] Even if it’s true in the majority of cases, in a substantial minority thoughtful short-termism would likely make different recommendations, due to personal fit etc.
I think once you take account of diminishing returns and the non-robustness of the x-risk estimates, there’s a good chance you’d end up estimating that GiveWell’s cost per present life saved is lower than that of donating to x-risk. So the claim ‘neartermists should donate to x-risk’ seems likely wrong.
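To make this concrete, here is a minimal back-of-envelope sketch of that comparison; the $100M cost, the 0.01% risk reduction, and the ~$5,000 GiveWell figure are all illustrative assumptions, not estimates anyone in this thread has made:

```python
# Back-of-envelope: cost per *present* life saved from x-risk reduction,
# compared against GiveWell. Every number here is an illustrative assumption.

population = 8e9        # present people alive today (rough)
spend = 100e6           # assumed cost ($) of buying the risk reduction below
risk_reduction = 1e-4   # assumed 0.01% absolute cut in extinction risk

expected_lives = risk_reduction * population   # 800,000 lives in expectation
cost_per_life_xrisk = spend / expected_lives   # $125 per present life

givewell_cost_per_life = 5_000                 # commonly cited rough figure ($)

print(f"x-risk:   ~${cost_per_life_xrisk:,.0f} per present life saved")
print(f"GiveWell: ~${givewell_cost_per_life:,.0f} per present life saved")
```

Under these assumptions x-risk wins easily, but the result is driven almost entirely by the assumed spend: make it 100x higher (diminishing returns, non-robust estimates) and the x-risk figure becomes ~$12,500 per life, flipping the comparison in GiveWell’s favour, which is exactly the point above.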
I agree with Carl that the US govt should spend more on x-risk, even just to protect its own citizens.
I think the typical person is not a neartermist, so they might well end up thinking x-risk is more cost-effective than GiveWell if they thought it through. Though it would depend a lot on which considerations you include.
From a pure messaging point of view, I agree we should default to opening with “there might be an x-risk soon” rather than “there might be trillions of future generations”, since it’s the most important message and is more likely to be well-received. I see that as the strategy of The Precipice, or of pieces directly pitching AI x-risk. But I think it’s also important to promote longtermism independently, and/or to mention it as an additional reason to prioritise x-risk a few steps after opening with it.
The question is, I think, “how should FTX/SBF spend its billions?”