You don't seem to be considering global catastrophic risk. Such a catastrophe would generally not cause extinction, but could cause a collapse of civilization from which we may never recover. Even if we do recover, we may end up losing a significant fraction of long-term value. And even if civilization doesn't collapse, a global catastrophe could make global totalitarianism more likely, or result in worse values ending up in AI. At least some of these outcomes could be considered existential risks in the sense that much of the long-term value is lost. And yet preventing or mitigating them can generally be justified on the basis of saving lives in the present generation.