Seems like David agrees that once humanity were spread across many star systems, this could reduce existential risk a great deal.
Another line of argument is that at some point AI advances will either cause extinction or produce a massive drop in extinction risk.
The literature on a 'singleton' addresses this issue in part.
Given how much uncertainty there is about all this, it seems overconfident to claim that it's extremely unlikely for extinction risk to drop to near zero within the next 100 or 200 years.