My guess: the point at which “AI took my job” changes from a low-status complaint into an influential rallying cry is the point when a critical mass of people “wake up” to the fact that AGI is going to take their jobs (in fact, everyone’s) and that this will happen in the near future.
My fear is that there won’t be enough time in the window between a critical mass of people waking up to the fact that AGI is going to take their jobs (in fact, everyone’s) and AGI/ASI actually becoming capable of doing so (which would nullify human social/economic power). To be slightly cynical about it, I feel like the focus on doom/foom outcomes ends up preventing a societal immune response from ever getting started.
In the public eye, AI work that aims at human-level and beyond-human-level capabilities currently seems to live in the same category as Elon’s Starship/Super Heavy adventures: an ambitious, eccentric project that could cause some very serious damage if it goes wrong, except with far more at stake than a launchpad. All the current discourse is downstream of this: opposition to AGI work thus gets described as pro-stagnation / anti-progress / pro-[euro]sclerosis / anti-tech / anti-freedom and put in the same slot as anti-nuclear-power environmentalism, anti-cryptocurrency/anti-encryption efforts, etc.
There’s growing public realization that ambitious, eccentric billionaires and corporations are working on a project which might be Really Dangerous If Things Go Wrong (“AI researchers believe that super-powerful AI might kill us all, we should make sure it doesn’t” is entering the Overton window), but this ignores the cataclysmic human consequences even if things go right, even if the mythical “alignment with human values” is reached (which human values? which humans? how is this supposed to be durable against AI systems creating AIs, or against economic selection pressures to extract more profit and resources?).
Even today, “work towards AGI explicitly and openly seeks to make the overwhelming majority of human work economically unviable” is still not in the Overton window of what it’s acceptable/high-status to express, fear, and coordinate around, even though “nobody finds your work to be valuable, it’s not worth it to train you to be better, and there are better replacements for what you used to do” is something which:
most people can easily understand the implications of (people in SF can literally go outside and see what happens to humans that are rendered economically unviable by society)
is openly desired by the AGI labs: they’re not just trying to create better protein-folding AIs, or better warfighting or missile-guidance AIs. They’re trying to make “highly autonomous systems that outperform humans at most economically valuable work”. It says so right on OpenAI’s website.
is not something that the supposed “alignment” work is even pretending to be able to prevent.
@havequick
Your comment is valuable because it’s a very pointed criticism of how (a lot of) EAs think about this topic, but, unlike most things in that genre, it’s expressed in a way that will make intuitive sense to most EAs (I think). You should turn it into a post of your own if you have time.