Agree with this, with the caveat that the more selfish framing (“I don’t want to die or my family to die”) seems to be helpfully motivating to some productive AI alignment researchers.
The way I would put it: on reflection, it's only rational to work on x-risk for altruistic reasons rather than selfish ones. But if more selfish reasoning helps with day-to-day motivation, even if it's irrational, that seems likely okay (see also Dark Arts of Rationality).