Longtermism ≠ existential risk, though the community seems to have more or less decided they mean similar things (at least at our current point in history).
Here is an argument to the contrary, call it "the civilization dice roll": current human society becoming grabby could be worse for the future of our lightcone than a counterfactual society that might exist, and might eventually become grabby, if we die out or our civilization collapses.
Now, to directly answer your point on x-risk vs. longtermism: yes, you are correct. Fear-mongering will always trump empathy-mongering in terms of getting people to care. We might worry, though, that in a society already full of fear-mongering, we actually need people to build their thoughtful-empathy muscles, not their thoughtful-fear muscles. That is to say, we want people to care about x-risk because they care about other people, not because they care about themselves.
So, turning back to the dice-roll argument: we may prefer to survive because we became more empathetic and expanded our moral circle, and as a result cared about x-risk, rather than because we just really, really didn't want to die in the short term. Once (if) we pass the hinge of history, or at least the peak of existential risk, we still have to decide what the fate of our ecosystem will be. Personally, I would prefer we decide with maximal moral circles.
Some potential gaps in my argument: (1) There might be reasons to believe our lightcone will be better off with current human society becoming grabby, in which case we really should just be optimizing almost exclusively for reducing x-risk (probably). (2) Fear-mongering about x-risk rather than empathy-mongering about x-risk might not decrease the likelihood of people expanding their moral circles; it might even increase moral-circle expansion, because it gets people to actually grapple with these issues. (3) Moral-circle expansion won't actually make the future go better. (4) AI will be uncorrelated with human culture, so this whole argument is somewhat irrelevant if the AI does the grabbing.