My uninformed guess is that an automated system doesn't need to be superintelligent to cause trouble; it only needs certain specific capabilities, which ones depending on the kind of trouble. For example, the machine doesn't need to be agentic if there is a human agent deciding to make the bad thing happen.
So I think this would be an important point to discuss, and maybe someone has already done so.