Unfortunately, I think falling empires and species decline involve a different mechanism of decline than the one in many AI risk models.
If you give credence to "foom" or "ASI/AGI" scenarios, those involve almost immediate and total destruction, or a permanent loss of agency.
A closer analogy or model might be nuclear weapons (although those don't result in total destruction).
These ideas seem highly relevant to slow takeoff, and maybe to related scenarios like lock-in and multipolar conflict. Those aren't talked about much (but are probably conjunctive with most "AI safety interventions" anyway, besides yelling that we are all doomed).
As an additional potential analogy, some scenarios people discuss resemble coups. If that's a good analogy, I think it suggests that things would be quick.