I don’t think this is as clear a dichotomy as people assume. A lot of global catastrophic risk doesn’t come from literal extinction, because actually driving humanity extinct is very hard. Much of the day-to-day work on GCR policy involves a wide variety of threat models that go well beyond extinction.
What about the threat of strongly superhuman artificial intelligence?