I think that by far the least intuitive thing about AI X-risk is why AIs would want to kill us instead of doing what they were “programmed” to do.
I would give more weight to that part of the argument than to the “intelligence is really powerful” part.
Noted. I find many are stuck on the ‘how’. That said, some polls show two-thirds to three-quarters of people consider that AI might harm humanity, so it isn’t entirely clear who needs to hear which arguments/analysis.