Great point. Ideally “existential risk” should be an entirely empirical thing that we can talk about independent of our values / moral beliefs about what future is optimal.
This is impossible if you consider “unrecoverable dystopia”, “stable totalitarianism”, etc. as existential risks, since those categories are implicitly value judgments.
Though I’m open to the idea that we should maybe talk about extinction risks instead of existential risks, given that this is empirically most of what xrisk people work on.
(Though I think some AI risk people believe, as an empirical matter, that some AI catastrophes would entail humanity surviving while completely losing control of the lightcone, and both they and I would consider this basically as bad as all of our descendants dying.)
It’s also possible that different people have different views of what “humanity’s potential” really means!