Looking at your profile I think you already have a good idea of the answers, but for the benefit of everyone else who upvoted this question looking for an answer, here’s my take:
Are there AI risk scenarios which involve narrow AIs?
Yes, a notable one being military AI, i.e. autonomous weapons (there are plenty of related posts on the EA Forum). There are also multipolar failure modes, where risk arises from multiple AI-enabled superpowers rather than a single superintelligent AGI.
Why does most AI risk research and writing focus on artificial general intelligence?
A misaligned AGI is a very direct pathway to x-risk: an AGI that pursues some goal in an extremely powerful way, without any notion of human values, could easily lead to human extinction. The core question is how to make an AI that’s more powerful than us do what we want it to do. Many other failure modes, like bad actors using tool (narrow) AIs, seem less likely to lead directly to x-risk, and they are also more of a coordination problem than a technical one.