It’s not “longtermist” or “fanatical” at all (or even altruistic) to try to prevent yourself and everyone else on the planet (humans and animals) from being killed in the near future by uncontrollable ASI[1] (quite possibly in a horrible, painful[2] way[3]).
Indeed, there are many non-EAs who care a great deal about this issue now.
I mention this as it’s a welfarist consideration, even if one doesn’t care about death in and of itself.
Ripped apart by self-replicating computronium-building nanobots, anyone?
Strongly endorsing Greg Colbourn’s reply here.
When ordinary folks think seriously about AGI risks, they don’t need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.
They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.
I’m not that surprised that the above comment has been downvoted to −4 without any replies (and this one will probably be buried by an even bigger avalanche of downvotes!), but it still makes me sad. EA will be ivory-tower-ing until the bitter end, it seems. It’s a form of avoidance. These things aren’t nice to think about. But it’s close now, so it’s reasonable for it to feel viscerally real. I guess it won’t be EA that saves us (from the mess it helped accelerate), if we do end up saved.
The comment you replied to:
- acknowledges the value of x-risk reduction in general from a non-longtermist perspective
- clarifies that it is making a point about the marginal altruistic value of x-risk vs AW or GHW work, and points to a post making this argument in more detail
Your response merely reiterates that x-risk prevention has substantial altruistic (and non-altruistic) value. This isn’t responsive to the claim about whether, under non-longtermist assumptions, that value is greater on the margin than AW or GHW work.
So even though I actually agree with the claims in your comment, I downvoted it (along with this one complaining about the downvotes) for being off-topic and not embodying the type of discourse I think the EA Forum should strive for.
Thanks for the explanation.
Whilst zdgroff’s comment “acknowledges the value of x-risk reduction in general from a non-longtermist perspective”, it downplays that value quite heavily imo (and the OP comment does so even more, using the pejorative “fanatical”).
I don’t think the linked post makes the point very persuasively. Looking at the table, at best there is an equivalence.
I think a rough estimate of the cost-effectiveness of pushing for a Pause is orders of magnitude higher.
You don’t need EAs Greg—you’ve got the general public!