I’m concerned that, given the nearness of AI x-risk and the high likelihood that we all die in the near future, you are to some extent seeking comfort in complex cluelessness and moral uncertainty. If we go extinct (along with all the rest of known life in the universe), maybe it would be for the best? I don’t know; I think I would rather live to find out, and help steer the future toward more positive paths (we can end factory farming before space colonisation happens in earnest). I also can’t help thinking, “what’s the point in all these other EA interventions if the world just ends in a few years?” Sure, there is some near-term benefit to those helped here and now, but everyone still ends up dead.
To be clear, given the situation we now find ourselves in, I also think it’s perfectly reasonable to work on AI x-risk reduction for selfish reasons, or simply because you want your family and friends to live.