From an altruistic cause prioritization perspective, existential risk seems to require longtermism
No it doesn't! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.
When I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)
Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities probably look more cost-effective too.
I'm not sure if GiveWell top charities do? Preventing extinction is a lot of QALYs, and it might not cost more than a few $B per year of extra time bought in terms of funding Pause efforts (~$1/QALY!?)
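A minimal back-of-envelope sketch of the arithmetic behind that "~$1/QALY" guess, assuming a present population of about 8 billion, one QALY per person-year, and a $3B price tag for each year of delay bought. All of these numbers are illustrative assumptions, not precise estimates, and nothing here beyond "a few $B per year of extra time bought" comes from the comment itself:

```python
# Hedged back-of-envelope sketch of the "~$1/QALY" claim above.
# Every input is an illustrative assumption; adjust freely.

world_population = 8e9          # people alive today (rough)
qaly_per_person_year = 1.0      # assume one life-year ~= one QALY (crude)
p_doom_averted = 1.0            # assume the bought year would otherwise be lost
                                # (lower this to discount for uncertainty)
cost_per_year_bought = 3e9      # "a few $B per year of extra time" (assumption)

qalys_bought = world_population * qaly_per_person_year * p_doom_averted
cost_per_qaly = cost_per_year_bought / qalys_bought

print(f"QALYs per year of delay: {qalys_bought:.2e}")
print(f"Cost per QALY: ${cost_per_qaly:.2f}")   # ~$0.38/QALY with these inputs
```

Discounting for the probability that the funding actually buys the extra year, or that extinction would otherwise occur, scales the result linearly, so even a heavy discount keeps it within a couple of orders of magnitude of $1/QALY under these assumptions.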
This looks incorrect to me. Factory farming interventions winning over x-risk interventions requires both thinking (1) that animals have moral weight not too far from that of humans, and (2) that the amount of suffering in factory farming is more morally important than increasing the chances of humanity, and life in general, surviving at all. These assumptions are not shared by everyone in EA.
By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it's also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.
I included the qualifier "From an altruistic cause prioritization perspective" because I think that from an impartial cause prioritization perspective, the case is different. If you're comparing existential risk to animal welfare and global health, I think the links in my comment make the case pretty persuasively that you need longtermism.
It's not "longtermist" or "fanatical" at all (or even altruistic) to try and prevent yourself and everyone else on the planet (humans and animals) being killed in the near future by uncontrollable ASI[1] (quite possibly in a horrible, painful[2], way[3]).
Indeed, there are many non-EAs who care a great deal about this issue now.
I mention this as it's a welfarist consideration, even if one doesn't care about death in and of itself.
Ripped apart by self-replicating computronium-building nanobots, anyone?
Strongly endorsing Greg Colbourn's reply here.
When ordinary folks think seriously about AGI risks, they don't need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.
They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.
I'm not that surprised that the above comment has been downvoted to -4 without any replies (and this one will probably be buried by an even bigger avalanche of downvotes!), but it still makes me sad. EA will be ivory-tower-ing until the bitter end, it seems. It's a form of avoidance. These things aren't nice to think about. But it's close now, so it's reasonable for it to feel viscerally real. I guess it won't be EA that saves us (from the mess it helped accelerate), if we do end up saved.
The comment you replied to:
- acknowledges the value of x-risk reduction in general from a non-longtermist perspective
- clarifies that it is making a point about the marginal altruistic value of x-risk vs AW or GHW work, and points to a post making this argument in more detail
Your response merely reiterates that x-risk prevention has substantial altruistic (and non-altruistic) value. This isn't responsive to the claim about whether, under non-longtermist assumptions, that value is greater on the margin than AW or GHW work.
So even though I actually agree with the claims in your comment, I downvoted it (along with this one complaining about the downvotes) for being off-topic and not embodying the type of discourse I think the EA Forum should strive for.
Thanks for the explanation.
Whilst zdgroff's comment "acknowledges the value of x-risk reduction in general from a non-longtermist perspective", it downplays it quite heavily imo (and the OP comment does even more, using the pejorative "fanatical").
I don't think the linked post makes the point very persuasively. Looking at the table, at best there is an equivalence.
I think a rough estimate of the cost-effectiveness of pushing for a Pause is orders of magnitude higher.
You don't need EAs, Greg - you've got the general public!