Some people seem to think the risk from AI comes from AIs gaining dangerous capabilities, like situational awareness. I don’t really agree. I view the main risk as simply arising from the fact that AIs will be increasingly integrated into our world, diminishing human control.
In my view, the most important question is whether AIs will be capable of automating economically valuable tasks, since this is what will prompt people to adopt AIs widely to automate labor. If AIs have situational awareness but aren’t economically important, that’s not as concerning.
The risk is not so much that AIs will suddenly and unexpectedly take control of the world. It’s that we will voluntarily hand over control to them anyway, and we want to make sure this handoff is handled responsibly.
An untimely coup, while possible, is not necessary.