I’m going to repeat my question from the “Ask EA anything” thread. Why do people talk about artificial general intelligence, rather than something like advanced AI? For some AI risk scenarios, it doesn’t seem necessary that the AI be “generally” intelligent.
There are some AI risks which don’t require generality, but the ones which have the potential to be an x-risk will likely involve fairly general capabilities. In particular, capabilities to automate innovation, like OpenPhil’s Process for Automating Scientific and Technological Advancement.
Several other overlapping terms have been used, such as Transformative AI, AI existential safety, AI alignment, AGI safety. We’re planning to have a question on Stampy which covers these different terms.
This Rob Miles video covers why the risks from this class of AI are likely the most important:
the ones which have the potential to be an x-risk will likely involve fairly general capabilities
I think there are also (unfortunately) some likely AI x-risks that don’t involve general-purpose reasoning.
For instance, so much of our lives already involves automated systems that determine what we read, how we travel, who we date, etc., and this dependence will only increase with more advanced AI. These systems will probably pursue easy-to-measure goals like “maximize the user’s time on screen” and “maximize reported well-being,” and these goals won’t be perfectly aligned with “promote human flourishing.” One doesn’t need to be especially creative to imagine how this situation could create worlds in which most humans live unhappy lives (and are powerless to change their situation). Some of these scenarios would be worse than human extinction.
There are more scenarios in “What failure looks like” and “What multipolar failure looks like” that don’t require AGI. A counterargument is that we might eventually build AGI in these worlds anyway, at which point the concerns in Rob’s talk become relevant. (Side note: from my perspective, Rob’s talk says very little about why x-risk from AGI could be more pressing than x-risk from narrow AI.)