This passage from The Precipice:
It is even possible to have situations where we might be best off with actions that pose their own immediate risk if they make up for it in how much they lower longterm risk. Potential examples include developing advanced artificial intelligence or centralising control of global security.
This idea seems somewhat related to:
The idea of state risks vs transition risks, as discussed in Superintelligence and Chapter 7 of The Precipice
This passage from The Precipice: