David addresses many of the arguments for a ‘Time of Perils’ in his ‘Existential Risk, Pessimism and the Time of Perils’ paper, to which this moral mathematics paper is a follow-up.
It seems like David agrees that once humanity is spread across many star systems, this could reduce existential risk a great deal (a rough sketch of the arithmetic is below).
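As a hedged sketch of why, with illustrative numbers that are my own assumptions rather than anything from David's paper: if each of $n$ self-sustaining settlements faces an independent per-century extinction probability $p$, the probability that all of them are destroyed in a given century is

$$P(\text{total extinction in a century}) = p^{n},$$

so with, say, $p = 0.1$ and $n = 10$, the per-century risk of total extinction falls to $10^{-10}$. The independence assumption is doing all the work here; risks that are correlated across settlements (a misaligned AI, for instance) would not be diluted this way.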
The other line of argument would be that at some point AI advances will either cause extinction or a massive drop in extinction risk.
The literature on a ‘singleton’ is in part an attempt to address this issue.
Given how much uncertainty there is about all of this, it seems overly confident to claim that extinction risk is extremely unlikely to drop near zero within the next 100 or 200 years.
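To see why that confidence level matters morally, here is a rough expected-value sketch; the numbers are illustrative assumptions, not figures from either paper. Under a constant per-century extinction risk $r$, the expected number of future centuries is

$$\mathbb{E}[T] = \sum_{t=1}^{\infty} (1-r)^{t} = \frac{1-r}{r} \approx \frac{1}{r},$$

so $r = 0.01$ gives only about 100 centuries in expectation. But if there is even a probability $q$ that risk drops to near zero after a perils period survived with probability $s$, the expectation gains a term of roughly $q \cdot s \cdot T_{\max}$, where $T_{\max}$ (the maximum lifespan of a spacefaring civilisation) could be billions of years, so even a small $q$ can dominate the calculation.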