I would add the (in my view far more likely) possibility of Yudkowskian* paperclipping via non-sentient AI. Given our currently incredibly low level of control over AI systems, and the fact that we don't know how to create sentience, this seems like the most likely default outcome.
*) Specifically, the view that paperclipping occurs by default from any complex, non-satiable implicit utility function, rather than the Bostromian risk of accidentally giving a smart AI a dumb goal.