the ones which have the potential to be an x-risk will likely involve fairly general capabilities
I think there are also (unfortunately) some likely AI x-risks that don’t involve general-purpose reasoning.
For instance, so much of our lives already involves automated systems that determine what we read, how we travel, who we date, etc., and this dependence will only deepen with more advanced AI. These systems will probably pursue easy-to-measure goals like “maximize the user’s time on screen” or “maximize reported well-being,” and those goals won’t be perfectly aligned with “promote human flourishing.” One doesn’t need to be especially creative to imagine how this could produce worlds in which most humans live unhappy lives (and are powerless to change their situation). Some of these scenarios would be worse than human extinction.
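To make the proxy-goal worry concrete, here is a minimal toy sketch (entirely my own illustration, with made-up functional forms, not anything from the post or talk): a system that maximizes a measurable proxy such as screen time can land far from the setting that would actually be best for the user, because the proxy and the true goal only track each other up to a point.

```python
# Toy illustration (assumed, illustrative numbers): optimizing an easy-to-measure
# proxy ("time on screen") diverges from the hard-to-measure goal it was meant
# to track ("flourishing") once optimization pressure gets strong enough.
import numpy as np

hours = np.linspace(0, 12, 200)            # hypothetical daily screen time
proxy = hours                              # what the system measures and maximizes
flourishing = 2 * hours - 0.4 * hours**2   # assumed true value: helpful at first, harmful in excess

best_for_proxy = hours[np.argmax(proxy)]          # proxy is maximized at the extreme
best_for_humans = hours[np.argmax(flourishing)]   # true optimum is much lower

print(f"proxy-optimal screen time:    {best_for_proxy:.1f} h/day")
print(f"actually-optimal screen time: {best_for_humans:.1f} h/day")
print(f"flourishing at the proxy optimum: {flourishing[np.argmax(proxy)]:.1f}")
```

Nothing in this sketch depends on general-purpose reasoning; the failure comes purely from heavy optimization of an imperfect, easy-to-measure objective.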
“What failure looks like” and “What multipolar failure looks like” describe further scenarios that don’t require AGI. A counterargument is that we might eventually build AGI in these worlds anyway, at which point the concerns in Rob’s talk become relevant. (Side note: from my perspective, Rob’s talk says very little about why x-risk from AGI would be more pressing than x-risk from narrow AI.)