What are your thoughts on the position of existential risk, and more specifically AI x-risk, as a key concern of Effective Altruism?
My sense is that while it is a significant and important concern, I'm not sure it falls into the category of "altruism" so much as "self-preservation." And given its current popularity, is there a risk of this concern crowding out other core altruistic causes focused on the immediate well-being of those less fortunate or less empowered?