Why does most AI risk research and writing focus on artificial general intelligence? Are there AI risk scenarios which involve narrow AIs?
Looking at your profile I think you have a good idea of answers already, but for the benefit of everyone else who upvoted this question looking for an answer, here’s my take:
Are there AI risk scenarios which involve narrow AIs?

Yes, a notable one being military AI, i.e. autonomous weapons (there are plenty of related posts on the EA forum). There are also multipolar failure modes, where the risk comes from multiple AI-enabled superpowers rather than from a single superintelligent AGI.

Why does most AI risk research and writing focus on artificial general intelligence?

A misaligned AGI is a very direct pathway to x-risk: an AGI that pursues some goal in an extremely powerful way, without any notion of human values, could easily lead to human extinction. The core question is how to make an AI that's more powerful than us do what we want it to do. Many other failure modes, such as bad actors using tool (narrow) AIs, seem less likely to lead directly to x-risk, and they are also more of a coordination problem than a technical problem.
What happens when we create AI companions for children that are more “engaging” than humans? Would children stop making friends and prefer AI companions?
What happens when we create AI avatars of mothers that are as or more “engaging” to babies than real mothers, and people start using them to babysit? How might that affect a baby’s development?
What happens when AI becomes as good as an average judge at examining evidence, arguments, and reaching a verdict?
“AGI” is largely an imprecisely-used initialism: when people talk about AGI, they usually don’t care about generality and instead just mean human-level AI. It’s usually correct to implicitly substitute “human-level AI” for “AGI” outside of discussions of generality. (Caveat: “AGI” carries some connotations of agency.)
There are risk scenarios with narrow AI, including catastrophic misuse, conflict (caused or exacerbated by narrow AI), and alignment failure. On alignment failure, there are some good stories. Each of these possibilities is considered reasonably likely by AI safety & governance researchers.