“AGI” is largely an imprecisely-used initialism: when people talk about AGI, we usually don’t care about generality and instead just mean human-level AI. It’s usually safe to implicitly substitute “human-level AI” for “AGI” outside of discussions of generality. (Caveat: “AGI” carries some connotations of agency.)
There are risk scenarios involving narrow AI, including catastrophic misuse, conflict (caused or exacerbated by narrow AI), and alignment failure. On alignment failure, there are some good stories. Each of these possibilities is considered reasonably likely by AI safety & governance researchers.