From an altruistic cause prioritization perspective, existential risk seems to require longtermism
No it doesn’t! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.
When I’m talking to non-philosophers, I prefer an “existential risk” framework to a “long-termism” framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it’s non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we’re all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)
Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities probably look more cost-effective as well.
I’m not sure GiveWell top charities do? Preventing extinction buys a lot of QALYs, and funding Pause efforts might not cost more than a few $B per year of extra time bought (~$1/QALY!?)
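For what it’s worth, here is a minimal back-of-the-envelope sketch of where a number like ~$1/QALY could come from, assuming roughly 8 billion people alive, an average quality weight per extra life-year, a guess at how likely the bought year would otherwise be lost, and a few $B per year of Pause funding (all of these inputs are illustrative assumptions, not figures from the thread):

```python
# Rough back-of-the-envelope sketch; every input below is an illustrative assumption.
population = 8e9               # people alive today
qaly_per_person_year = 0.8     # assumed average quality weight per extra life-year
p_year_otherwise_lost = 0.5    # assumed chance the bought year would otherwise be lost to extinction
cost_per_year_bought = 3e9     # assumed annual cost (USD) of Pause advocacy that buys one extra year

qalys_gained = population * qaly_per_person_year * p_year_otherwise_lost
cost_per_qaly = cost_per_year_bought / qalys_gained
print(f"~${cost_per_qaly:.2f} per QALY")  # roughly $1/QALY under these assumptions
```

The result is obviously dominated by the probability and cost guesses, so treat it as an order-of-magnitude illustration rather than an estimate.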
By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it’s also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.
I included the qualifier “From an altruistic cause prioritization perspective” because I think that from an impartial cause prioritization perspective, the case is different. If you’re comparing existential risk to animal welfare and global health, I think the links in my comment make the case pretty persuasively that you need longtermism.
It’s not “longtermist” or “fanatical” at all (or even altruistic) to try to prevent yourself and everyone else on the planet (humans and animals) from being killed in the near future by uncontrollable ASI[1] (quite possibly in a horrible, painful[2] way[3]).
[1] Indeed, there are many non-EAs who care a great deal about this issue now.
[2] I mention this as it’s a welfarist consideration, even if one doesn’t care about death in and of itself.
[3] Ripped apart by self-replicating computronium-building nanobots, anyone?
Strongly endorsing Greg Colbourn’s reply here.
When ordinary folks think seriously about AGI risks, they don’t need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.
They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.
I’m not that surprised that the above comment has been downvoted to −4 without any replies (and this one will probably be buried by an even bigger avalanche of downvotes!), but it still makes me sad. EA will be ivory-tower-ing until the bitter end, it seems. It’s a form of avoidance. These things aren’t nice to think about. But it’s close now, so it’s reasonable for it to feel viscerally real. I guess it won’t be EA that saves us (from the mess it helped accelerate), if we do end up saved.
The comment you replied to:
1. acknowledges the value of x-risk reduction in general from a non-longtermist perspective
2. clarifies that it is making a point about the marginal altruistic value of x-risk vs AW or GHW work and points to a post making this argument in more detail
Your response merely reiterates that x-risk prevention has substantial altruistic (and non-altruistic) value. This isn’t responsive to the claim about whether, under non-longtermist assumptions, that value is greater on the margin than AW or GHW work.
So even though I actually agree with the claims in your comment, I downvoted it (along with this one complaining about the downvotes) for being off-topic and not embodying the type of discourse I think the EA Forum should strive for.
Thanks for the explanation.
Whilst zdgroff’s comment “acknowledges the value of x-risk reduction in general from a non-longtermist perspective”, it downplays it quite heavily imo (and the OP comment does so even more, using the pejorative “fanatical”).
I don’t think the linked post makes the point very persuasively. Looking at the table, at best there is an equivalence.
I think a rough estimate of the cost-effectiveness of pushing for a Pause is orders of magnitude higher.
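To make “orders of magnitude” concrete: if Pause advocacy comes out near ~$1/QALY as in the sketch above, and GiveWell top charities land somewhere around $100 per DALY-equivalent (that benchmark is my assumed ballpark for illustration, not a figure from the linked post), the gap is roughly two orders of magnitude:

```python
import math

# Both inputs are illustrative assumptions, not figures from the linked post.
pause_cost_per_qaly = 1.0        # USD, from the rough sketch earlier in the thread
givewell_cost_per_daly = 100.0   # USD, assumed ballpark for GiveWell top charities

ratio = givewell_cost_per_daly / pause_cost_per_qaly
print(f"~{math.log10(ratio):.0f} orders of magnitude")  # ~2 under these assumptions
```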
You don’t need EAs, Greg: you’ve got the general public!