I think a concerted effort to make the public aware of some of the underlying motivations, consequences, and goals of AGI research would likely trigger public backlash:
the singularity-flavored motivations of many AGI researchers: creation of superior successor beings to realize quasi-religious promises of a heavenly future, etc
the economic-flavored motivations of AGI labs: “highly autonomous systems that outperform humans at most economically valuable work”. This is literally on OpenAI’s website
increasing the likelihood of total human extinction is regarded as acceptable collateral damage in pursuit of the above goals. Some are even fine with human extinction if it’s replaced by sufficiently-advanced computers that are deemed to be able to experience more happiness than humans can.
human economic/social disempowerment is sought out in pursuit of these goals, is realistically achievable well within our lifetime, and is likely to occur even with “aligned” AGI
AI “alignment” is theater: even the rosiest visions of futures with AGI are ones in which humans are rendered obsolete and powerless, at the mercy of AGI systems: at best an abyssal, inescapable, perpetual pseudo-childhood with no real work to do, no agency, no meaningful pursuits or purpose—with economic and social decision-making and bargaining power stripped away
post-scarcity of human-level intelligence directly translates to human work being worthless, and “human work output is valuable” underpins whatever stability and beneficence has existed in every social and economic system we have ever had. AGI shreds these assumptions and worsens the situation: it leaves us at the mercy of powerful systems against which we are entirely powerless.
specifically, the promises of AGI-granted universal basic income used to legitimate AGI-granted economic cataclysm are unlikely to be upheld: with human labor output being worthless, there’s nothing humans can do if the promise is reneged on. What are we going to do, go on strike? AGI’s doing all the work. Wage an armed insurrection? AGI’s a powerful military tool. Vote the AGI overlords out of office? AGI shreds the assumptions that make democracy a workable system (and not solely a box-ticking theater like it is in totalitarian states).
the accelerationist and fundamentalist-utilitarian ideologies driving and legitimating this work place vanishingly little value on human power/agency—or even continued human existence. Some regard human desires for power/agency/meaningful work/existence as pathological, outdated, and to be eliminated by psychotechnological means, in order for humans to better cope with being relegated to the status of pets or zoo animals.
many of those working towards AGI openly revel in the thought of humanity’s obsolescence and powerlessness in the face of the systems they’re building, cheer that AI progress is so fast that political systems cannot react fast enough before AGI happens, and are stricken by a lack of faith in humanity’s ability to cope with the world and its problems—to be solved by domination and/or replacement by AGI.
the outcomes of enduring fruitful cooperation between AGI and humans (because of alleged comparative advantage) are laughably implausible: at a certain level of AGI capabilities, the costs of human involvement in AGI systems will be greater than the value of human involvement, and it will be more efficient to simply remove humans from the loop. By analogy, there’s no horse-attachment point on even the first automobile, because there are no gains to be had from having a horse in the loop. The same will be true of humans and AGI systems. To put it crudely, “centaurs” get sent to the glue factory too.
strategic deceit is used to obscure both the “total technological unemployment” cluster of motivations and the “singularity” cluster of motivations: for instance, the arguments that accelerating AGI is necessary for surviving problems which don’t actually need AI: climate change, pandemics, asteroid impact, etc (and thus the risks of AGI are justified). “we need to build AGI so I can become immortal in a really powerful computer because I don’t want to die” doesn’t quite have the same ring to it
RLHF’ing language models is not even “alignment”: it’s mostly meant to evade society cracking down on the AGI industry by making it unlikely that their language models will say politically sensitive things or obscenities. This matters because it is remarkably easier to crystallize political crackdowns on entities whose products wantonly say racial slurs than it is to crystallize political crackdowns on entities that are working towards the creation of systems that will render humans obsolete and powerless. The Lovecraftian absurdity is withering.
“AI took my job” is low-status (“can’t adapt? skill issue”) to admit seriously thinking about, but even in the dream AGI/ASI-is-aligned scenarios, the catastrophic consequences of AGI/ASI will likely look like “AI took my job” extrapolated to the entire human species: full-spectrum human obsolescence, total technological unemployment, loss of human socioeconomic bargaining power, loss of purpose, loss of human role in keeping civilization running, and a degradation of humanity to perma-NEETs tranquilized via AI-generated AR-media/games/pornography, etc.
To put it very bluntly, the overwhelming majority of humanity doesn’t want to be aboard a metaphorical Flight 93 piloted by nihilists with visions of a paradisiacal post-scarcity post-human future dancing in their heads as they make the final turns towards the singularity.
Thank you so much for this extremely insightful comment! I strongly agree with all of your points.
“‘AI took my job’ is low-status (‘can’t adapt? skill issue’) to admit seriously thinking about, but even in the dream AGI/ASI-is-aligned scenarios, the catastrophic consequences of AGI/ASI will likely look like ‘AI took my job’ extrapolated to the entire human species…”
My guess: The point at which “AI took my job” changes from low-status to an influential rallying cry is the point when a critical mass of people “wake up” to the fact that AGI is going to take their jobs (in fact, everyone’s) and that this will happen in the near future.
My fear is that there won’t be enough time in the window between “critical mass of people ‘wake up’ to the fact that AGI is going to take their jobs (in fact, everyone’s)” and AGI/ASI actually being capable of doing so (which would nullify human social/economic power). To be slightly cynical about it, I feel like the focus on doom/foom outcomes ends up preventing the start of a societal immune response.
In the public eye, AI work that attempts to reach human-level and beyond-human-level capabilities currently seems to live in the same category as Elon’s Starship/Super Heavy adventures: an ambitious eccentric project that could cause some very serious damage if it goes wrong—except with more at stake than a launchpad. All the current discourse is downstream of this: opposition towards AGI work thus gets described as pro-stagnation / anti-progress / pro-[euro]sclerosis / anti-tech / anti-freedom and put in the same slot as anti-nuclear-power environmentalists, anti-cryptocurrency/anti-encryption efforts, etc.
There’s growing public realization that there are ambitious eccentric billionaires/corporations working on a project which might be Really Dangerous If Things Go Wrong — “AI researchers believe that super-powerful AI might kill us all, we should make sure it doesn’t” is entering the Overton window — but this ignores the cataclysmic human consequences even if things go right, even if the mythical (which human values? which humans? how is this supposed to be durable against AI systems creating AIs? how is this supposed to be durable against economic selection pressures to extract more profit and resources?) “alignment with human values” is reached.
Even today, “work towards AGI explicitly and openly seeks to make the overwhelming majority of human work economically unviable” is still not in the Overton window of what it’s acceptable/high-status to express, fear, and coordinate around, even though “nobody finds your work to be valuable, it’s not worth it to train you to be better, and there are better replacements for what you used to do” is something which:
most people can easily understand the implications of (people in SF can literally go outside and see what happens to humans that are rendered economically unviable by society)
is openly desired by the AGI labs: they’re not just trying to create better protein-folding AIs, they’re not trying to create better warfighting or missile-guidance AIs. They’re trying to make “highly autonomous systems that outperform humans at most economically valuable work”. Says it right on OpenAI’s website.
is not something that the supposed “alignment” work is even pretending to be able to prevent.
@havequick
Your comment is valuable, because it’s a very pointed criticism of how (a lot of) EAs think about this topic, but, unlike most things in that genre, expressed in a way that will make intuitive sense to most EAs (I think). You should turn it into a post of your own if you have time.
havequick—thanks for an excellent reply.
You add some crucial additional context about some factors that will probably amplify the fury of an anti-AI backlash. Most centrally, the posthumanist ambitions of the Singularity enthusiasts who look forward to humans being replaced by machine intelligences represent an ideology that’s profoundly repulsive, outrageous, & treasonous to most people. Once those people are widely seen as traitors to our species, and their profound and pervasive influence on the AI industry is understood, an anti-AI backlash seems highly likely.
The main challenge at that point will be to keep the anti-AI backlash peaceful and effective, rather than counter-productively violent. This will basically require ‘just the right amount’ of moral stigmatization of AI: enough to pause it for a few decades, but not so much that AI researchers or AI business leaders get physically hurt.
I agree with keeping it peaceful and effective; but I don’t think that trying to calibrate “just the right amount” is really feasible or desirable. At least not in the way that EA/LW-style approaches to AGI risk have handled this, which feels like it very easily leads to totalizing fear/avoidance/endless rumination/scrupulosity around doing anything which might be too unilateral or rash. The exact same sentiment is expressed in this post and the post it’s in reply to, so I am confident this is very much a real problem: https://twitter.com/AISafetyMemes/status/1661853454074105858
“Too unilateral or rash” is not a euphemism for “non-peaceful”: I really do specifically mean that in these EA/LW/etc circles there’s a tendency to have a pathological fear (that can only be discharged by fully assuaging the scrupulosity of oneself and one’s peers) of taking decisive impactful action.
To get a bit more object-level, I believe it is cowardice and pathological scrupulosity to not take a strong assertive line against very dangerous work (GoF virology on pandemic pathogens, AGI/ASI work, etc) because of fears that some unrelated psychotic wacko might use it as an excuse to do something violent. Frankly, if unstable wackos do violent things then the guilt falls quite squarely on them, not on whatever internet posts they might have been reading.
“Don’t upset the AGI labs” and “don’t do PR to normies about how the efforts towards AGI/ASI explicitly seek out human obsolescence/replacement/disempowerment” and the like feel to me like outgrowths of the same dynamic that led to the pathological (and continuing!) aversion to being open about the probable lab origin of Covid because of fears that people (normies, wackos, political leaders, funding agencies alike) might react in bad ways. I don’t think this is good at all. I think that contorting truth, misrepresenting, and dissembling in order to subtly steer/manipulate society and public/elite sentiment leads to far worse outcomes than the consequences of just telling the truth.
To get very object-level, the AGI-accelerationist side of things does not hold themselves to any sort of scrupulosity or rumination about consequences. If anything, that side of things is defined by reckless and aggressive unilateral action; both in terms of discourse and capabilities development. Unilateral to the point that there is a nontrivial contingent of those working towards AGI who openly cheer that AI progress is so fast that political systems cannot react fast enough before AGI happens.
pmarca’s piece about AI (published while I was writing this) is called “Why AI Will Save the World”; its thesis is that fears about AI are the outgrowth of “irrational” “hysterical” “moral panic” from Ludditic economic innumerates, millenarianist doom-cultists, and ever-present hall-monitors, and that accelerating AI “as fast and aggressively” as possible is a “moral obligation”; it strongly insinuates that those not on board act as enablers of the PRC’s dreams of AI-fueled world domination.
There is, of course, no counterargument to the contention that at a certain level of AGI capabilities, the costs of human involvement will be greater than the value of human involvement, and it will be more efficient (gains from trade / comparative advantage / productivity growth didn’t save draught horses from the glue factory when tractors/trucks/cars rolled around) to simply remove humans from the loop—which leads to near-total human technological unemployment/disempowerment. There is no mention that the work towards AGI explicitly and openly seeks to make the overwhelming majority of human work economically unviable. There is no mention of any of the openly-admitted motivations, consequences, and goals of AGI research which are grievously opposed to most of humanity (those which I had highlighted in my previous comment).
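The arithmetic behind this contention can be sketched with toy numbers (all figures hypothetical, purely to illustrate the structure of the argument, not estimates of anything): comparative advantage guarantees gains from trade only when the overhead of trading is negligible; once the cost of keeping a human in the loop exceeds the human’s marginal contribution, the output-maximizing choice is to drop the human.

```python
# All numbers are hypothetical, chosen only to illustrate the shape of the argument.
def total_output(ai_rate: float, human_rate: float,
                 coordination_cost: float, include_human: bool) -> float:
    """Output per hour of an AI system, with or without a human in the loop.

    coordination_cost: AI-side output forgone per hour to interface with the
    human (hand-offs, review delays, explanation overhead, etc.).
    """
    if include_human:
        return ai_rate - coordination_cost + human_rate
    return ai_rate

# Early on, the human contributes more than coordination costs: keep the human.
print(total_output(ai_rate=10, human_rate=5, coordination_cost=1, include_human=True))    # 14
print(total_output(ai_rate=10, human_rate=5, coordination_cost=1, include_human=False))   # 10

# At higher capability, overhead dwarfs the human's contribution: drop the human.
print(total_output(ai_rate=1000, human_rate=5, coordination_cost=50, include_human=True))   # 955
print(total_output(ai_rate=1000, human_rate=5, coordination_cost=50, include_human=False))  # 1000
```

The point is that the comparison flips purely as a function of relative magnitudes, without anyone bearing the human any ill will.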
But that sort of piece doesn’t need any of that, since the point isn’t to steelman a case against AI doomers, but to degrade the credibility of AI doomers and shatter their ability to coordinate by depicting them as innumerates, Luddites, cultists, disingenuous, or otherwise useful idiots of the PRC.
I cannot help but see the AGI-accelerationist side of things winning decisively, soon, and irreversibly if those who are opposed continue to be so self-limitingly scrupulous about taking action because of incredibly nebulous fears. At some point those who aren’t on-board with acceleration towards AGI/ASI have to start assertively taking the initiative if they don’t want to lose by default.
Again, to be totally explicit and clear, “assertively taking the initiative” does not mean “violence”. I agree with keeping it peaceful and effective. But it does mean things like “start being unflinchingly open about the true motivations, goals, and likely/intended/acceptable consequences of AGI/ASI research” and “stop torturing yourselves with scrupulosity and fears of what might possibly conceivably go wrong”.
Respect for this comment.
In the original conception of the unilateralist’s curse, the problem arose from epistemically diverse actors/groups having different assessments of how risky an action was.
The mistake was in the people with the rosiest assessment of an action’s risk taking the action by themselves, in disregard of others’ assessments.
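The statistical core of the unilateralist’s curse can be shown with a toy Monte Carlo sketch (all parameters hypothetical): even when every actor’s estimate of an action’s value is unbiased, the single rosiest estimate is biased upward, so a “whoever judges it positive acts alone” rule takes a genuinely harmful action far more often than a majority-rule would.

```python
import random

random.seed(0)

TRUE_VALUE = -1.0   # the action is in fact mildly harmful (assumed for illustration)
NOISE = 2.0         # spread of each actor's independent, unbiased estimate
N_ACTORS = 8
TRIALS = 10_000

unilateral = 0  # action happens if ANY actor estimates its value as positive
majority = 0    # action happens only if MOST actors estimate it as positive

for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE) for _ in range(N_ACTORS)]
    if max(estimates) > 0:          # the rosiest assessor acts alone
        unilateral += 1
    if sum(e > 0 for e in estimates) > N_ACTORS / 2:
        majority += 1

print(f"harmful action taken under unilateral rule: {unilateral / TRIALS:.0%}")
print(f"harmful action taken under majority rule:   {majority / TRIALS:.0%}")
```

With these numbers the unilateral rule takes the harmful action in the vast majority of trials while majority-rule almost never does, which is exactly the asymmetry the original formulation points at.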
What I want more people in AI Safety to be aware of is that there are many other communities out there who think that what “AGI” labs are doing is super harmful and destabilising.
We’re not the only community concerned. Many epistemically diverse communities are looking at the actions of the “AGI” labs and saying that this has got to stop.
Unfortunately, in the past, core people in our community have inadvertently supported the start-up of these labs. These were actions they chose to take by themselves.
If anything, unilateralist actions were taken by the accelerationists, as tacitly supported by core AI Safety folks who gave labs like DeepMind, OpenAI and Anthropic leeway to take these actions.
Remmelt—I agree. I think EA funders have been way too naive in thinking that, if they just support the right sort of AI development, with due concern for ‘alignment’ issues, they could steer the AI industry away from catastrophe.
In hindsight, this seems to have been a huge strategic blunder—and the big mistake was under-estimating the corporate incentives and individual hubris that drives unsafe AI development despite any good intentions of funders and founders.
“Too unilateral or rash” is not a euphemism for “non-peaceful”: I really do specifically mean that in these EA/LW/etc circles there’s a tendency to have a pathological fear (that can only be discharged by fully assuaging the scrupulosity of oneself and one’s peers) of taking decisive impactful action.
I cannot help but see the AGI-accelerationist side of things winning decisively, soon, and irreversibly if those who are opposed continue to be so self-limitingly scrupulous about taking action because of incredibly nebulous fears.
I second this. I further think there are a lot of image and tribe concerns that go into these sentiments. Many people in EA, and especially in AI Safety, sort of see themselves as being in the same tribe as the AGI companies, whether because they are working toward the singularity or just because they are tech people who understand that tech progress improves humanity and guides history. Another aspect of this is being drawn to technocracy and disdaining traditional advocacy (very not grey tribe). Some EAs actually work for AGI companies, and others feel pressure to cooperate and not “defect” on those around them who have made alliances with AGI companies.
Your comments are very blunt and challenging for EAs, but they are, I think, very accurate in many cases.
The AGI-accelerationists are at the very center of AGI X-risk—not least because many of them see human extinction as a positively good thing. In a very real sense, they are the X-risk—and the ASI they crave is just the tool they want to be able to use to make humanity obsolete (and then gone entirely).
And, as you point out, the e/acc enthusiasts often have no epistemic standards at all, and are willing to use any rhetoric they think will be useful (e.g. ‘If you don’t support American AI hegemony, you want Chinese AI hegemony’; ‘If you don’t support AGI, you want everyone to die without the longevity drugs AGI could discover’; ‘If you oppose AGI, you’re a historically & economically ignorant luddite’, etc.)
I think a concerted effort to make the public aware of some of the underlying motivations, consequences, and goals of AGI research would likely trigger public backlash:
the singularity-flavored motivations of many AGI researchers: creation of superior successor beings to realize quasi-religious promises of a heavenly future, etc
the economic-flavored motivations of AGI labs: “highly autonomous systems that outperform humans at most economically valuable work”. This is literally on OpenAI’s website
increasing the likelihood of total human extinction is regarded as acceptable collateral damage in pursuit of the above goals. Some are even fine with human extinction if it’s replaced by sufficiently-advanced computers that are deemed to be able to experience more happiness than humans can.
human economic/social disempowerment is sought out in pursuit of these goals, is realistically achievable well within our lifetime, and is likely to occur even with “aligned” AGI
AI “alignment” is theater: even the rosiest visions of futures with AGI are ones in which humans are rendered obsolete and powerless, and at the mercy of AGI systems: at best an abyssal unescapable perpetual pseudo-childhood with no real work to do, no agency, no meaningful pursuits nor purpose—with economic and social decision-making and bargaining power stripped away
post-scarcity of human-level intelligence directly translates to human work being worthless, and “human work output is valuable” underpins whatever stability and beneficence has existed in every social and economic system we have and have ever had. AGI shreds these assumptions and worsens the situation: at the mercy of powerful systems at whose hands we are entirely powerless.
specifically, the promises of AGI-granted universal basic income used to legitimate AGI-granted economic cataclysm are unlikely to be upheld: with human labor output being worthless, there’s nothing humans can do if the promise is reneged on. What are we going to do, go on strike? AGI’s doing all the work. Wage an armed insurrection? AGI’s a powerful military tool. Vote the AGI overlords out of office? AGI shreds the assumptions that make democracy a workable system (and not solely a box-ticking theater like it is in totalitarian states).
the accelerationist and fundamentalist-utilitarian ideologies driving and legitimating this work place vanishingly little value on human power/agency—or even continued human existence. Some regard human desires for power/agency/meaningful work/existence as pathological, outdated, and to be eliminated by psychotechnological means, in order for humans to better cope with being relegated to the status of pets or zoo animals.
many of those working towards AGI openly revel in the thought of humanity’s obsolescence and powerlessness in the face of the systems they’re building, cheer that AI progress is so fast that political systems cannot react fast enough before AGI happens, and are stricken by a lack of faith in humanity’s ability to cope with the world and its problems—to be solved by domination and/or replacement by AGI.
the outcomes of enduring fruitful cooperation between AGI and humans (because of alleged comparative advantage) are laughably implausible: at a certain level of AGI capabilities, the costs of human involvement in AGI systems will be greater than the value of human involvement, and it will be more efficient to simply remove humans from the loop: by analogy, there’s no horse-attachment point even on the first automobile because there’s no gains to be had from having a horse in the loop. The same will be true of humans and AGI systems. To put it crudely, “centaurs” get sent to the glue factory too.
strategic deceit is used to obscure both the “total technological unemployment” cluster of motivations and the “singularity” cluster of motivations: for instance, the arguments that accelerating AGI is necessary for surviving problems which don’t actually need AI: climate change, pandemics, asteroid impact, etc (and thus the risks of AGI are justified). “we need to build AGI so I can become immortal in a really powerful computer because I don’t want to die” doesn’t quite have the same ring to it
RLHF’ing language models is not even “alignment”: it’s mostly meant to evade society cracking down on the AGI industry by making it unlikely that their language models will say politically sensitive things or obscenities. This matters because it is remarkably easier to crystallize political crackdowns on entities whose products wantonly say racial slurs than it is to crystallize political crackdowns on entities that are working towards the creation of systems that will render humans obsolete and powerless. The Lovecraftian absurdity is withering.
“AI took my job” is low-status (“can’t adapt? skill issue”) to admit seriously thinking about, but even in the dream AGI/ASI-is-aligned scenarios, the catastrophic consequences of AGI/ASI will likely look like “AI took my job” extrapolated to the entire human species: full-spectrum human obsolescence, total technological unemployment, loss of human socioeconomic bargaining power, loss of purpose, loss of human role in keeping civilization running, and a degradation of humanity to perma-NEETs tranquilized via AI-generated AR-media/games/pornography, etc.
To put it very bluntly, the overwhelming majority of humanity doesn’t want to be aboard a metaphorical Flight 93 piloted by nihilists with visions of a paradisiacal post-scarcity post-human future dancing in their heads as they make the final turns towards the singularity.
Thank you so much for this extremely insightful comment! I strongly agree with all of your points.
“‘AI took my job’ is low-status (‘can’t adapt? skill issue’) to admit seriously thinking about, but even in the dream AGI/ASI-is-aligned scenarios, the catastrophic consequences of AGI/ASI will likely look like ‘AI took my job’ extrapolated to the entire human species…”
My guess: The point at which “AI took my job” changes from low-status to an influential rallying cry is the point when a critical mass of people “wake up” to the fact that AGI is going to take their jobs (in fact, everyone’s) and that this will happen in the near future.
My fear is that there won’t be enough time in the window between “critical mass of people “wake up” to the fact that AGI is going to take their jobs (in fact, everyone’s)” and AGI/ASI actually being capable of doing so (which would nullify human social/economic power). To be slightly cynical about it, I feel like the focus on doom/foom outcomes ends up preventing the start of a societal immune response.
In the public eye, AI work that attempts to reach human-level and beyond-human-level capabilities currently seems to live in the same category as Elon’s Starship/Super Heavy adventures: an ambitious eccentric project that could cause some very serious damage if it goes wrong—except with more at stake than a launchpad. All the current discourse is downstream of this: opposition towards AGI work thus gets described as pro-stagnation / anti-progress / pro-[euro]sclerosis / pro-stagnation / anti-tech / anti-freedom and put in the same slot as anti-nuclear-power environmentalists, as anti-cryptocurrency/anti-encryption efforts, etc.
There’s growing public realization that there’s ambitious eccentric billionaires/corporations working on a project which might be Really Dangerous If It Things Go Wrong — “AI researchers believe that super-powerful AI might kill us all, we should make sure it doesn’t” is entering the Overton window — but this ignores the cataclysmic human consequences even if things go right, even if the mythical (which human values? which humans? how is this supposed to be durable against AI systems creating AIs, how is this supposed to be durable against economic selection pressures to extract more profit and resources?) “alignment with human values” is reached.
Even today, “work towards AGI is explicitly and openly seeks to make the overwhelming majority of human work economically unviable” is still not in the Overton window of what it’s acceptable/high-status to express, fear, and coordinate around, even though “nobody finds your work to be valuable, it’s not worth it to train you to be better, and there’s better replacements for what you used to do” is something which:
most people can easily understand the implications of (people in SF can literally go outside and see what happens to humans that are rendered economically unviable by society)
is openly desired by the AGI labs: they’re not just trying to create better protein-folding AIs, they’re not trying to create better warfighting or missile-guidance AIs. They’re trying to make “highly autonomous systems that outperform humans at most economically valuable work”. Says it right on OpenAI’s website.
is not something that the supposed “alignment” work is even pretending to be able to prevent.
@havequick
Your comment is valuable, because it’s a very pointed criticism of how (a lot of) EAs think about this topic, but, unlike most things in that genre, expressed in a way that will make intuitive sense to most EAs (I think). You should turn it into a post of your own if you have time.
havequick—thanks for an excellent reply.
You add some crucial additional context about some factors that will probably amplify the fury of an anti-AI backlash. Most centrally, the posthumanist ambitions of the Singularity enthusiasts who look forward to humans being replaced by machine intelligences represent an ideology that’s profoundly repulsive, outrageous, & treasonous to most people. Once those people are widely seen as traitors to our species, and their profound and pervasive influence on the AI industry is understood, an anti-AI backlash seems highly likely.
The main challenge at that point will be to keep the anti-AI backlash peaceful and effective, rather than counter-productively violent. This will basically require ‘just the right amount’ of moral stigmatization of AI: enough to pause it for a few decades, but not so much that AI researchers or AI business leaders get physically hurt.
I agree with keeping it peaceful and effective; but I don’t think that trying to calibrate “just the right amount” is really feasible or desirable. At least not the way that EA/LW-style approaches to AGI risk has approached this; which feels like it very easily leads to totalizing fear/avoidance/endless rumination/scrupulosity around doing anything which might be too unilateral or rash. The exact same sentiment is expressed in this post and the post it’s in reply to; so I am confident this is very much a real problem: https://twitter.com/AISafetyMemes/status/1661853454074105858
“Too unilateral or rash” is not a euphemism for “non-peaceful”: I really do specifically mean that in these EA/LW/etc circles there’s a tendency to have a pathological fear (that can only be discharged by fully assuaging the scrupulosity of oneself and one’s peers) of taking decisive impactful action.
To get a bit more object-level, I believe it is cowardice and pathological scrupulosity to not take a strong assertive line against very dangerous work (GoF virology on pandemic pathogens, AGI/ASI work, etc) because of fears that some unrelated psychotic wacko might use it as an excuse to do something violent. Frankly, if unstable wackos do violent things then the guilt falls quite squarely on them, not on whatever internet posts they might have been reading.
“Don’t upset the AGI labs” and “don’t do PR to normies about how the efforts towards AGI/ASI explicitly seek out human obsolescence/replacement/disempowerment” and the like feel to me like outgrowths of the same dynamic that led to the pathological (and continuing!) aversion to being open about the probable lab origin of Covid because of fears that people (normies, wackos, political leaders, funding agencies alike) might react in bad ways. I don’t think this is good at all. I think that contorting truth, misrepresenting, and dissembling in order to subtly steer/manipulate society and public/elite sentiment leads to far worse outcomes than the consequences of just telling the truth.
To get very object-level, the AGI-accelerationist side of things does not hold themselves to any sort of scrupulosity or rumination about consequences. If anything, that side of things is defined by reckless and aggressive unilateral action; both in terms of discourse and capabilities development. Unilateral to the point that there is a nontrivial contingent of those working towards AGI who openly cheer that AI progress is so fast that political systems cannot react fast enough before AGI happens.
pmarca’s piece about AI (published when I was writing this) is called “Why AI Will Save the World”, the thesis is that fears about AI are the outgrowth of “irrational” “hysterical” “moral panic” from Ludditic economic innumerates, millenarianist doom-cultists, and ever-present hall-monitors, that accelerating AI “as fast and aggressively” as possible is a “moral obligation”, and strongly insinuates that those not on-board act as enablers to the PRC’s dreams of AI-fueled world domination.
There is, of course, no counterargument to the contention that at a certain level of AGI capabilities, the costs of human involvement will exceed the value of human involvement, and it will be more efficient to simply remove humans from the loop (gains from trade / comparative advantage / productivity growth didn’t save draught horses from the glue factory when tractors/trucks/cars rolled around)—which leads to near-total human technological unemployment/disempowerment. There is no mention that the work towards AGI explicitly and openly seeks to make the overwhelming majority of human work economically unviable. There is no mention of any of the openly-admitted motivations, consequences, and goals of AGI research which are grievously opposed to most of humanity (those which I had highlighted in my previous comment).
But that sort of piece doesn’t need any of that, since the point isn’t to steelman a case against AI doomers, but to degrade the credibility of AI doomers and shatter their ability to coordinate by depicting them as innumerates, Luddites, cultists, disingenuous, or otherwise useful idiots of the PRC.
I cannot help but see the AGI-accelerationist side of things winning decisively, soon, and irreversibly if those who are opposed continue to be so self-limitingly scrupulous about taking action because of incredibly nebulous fears. At some point those who aren’t on-board with acceleration towards AGI/ASI have to start assertively taking the initiative if they don’t want to lose by default.
Again, to be totally explicit and clear, “assertively taking the initiative” does not mean “violence”. I agree with keeping it peaceful and effective. But it does mean things like “start being unflinchingly open about the true motivations, goals, and likely/intended/acceptable consequences of AGI/ASI research” and “stop torturing yourselves with scrupulosity and fears of what might possibly conceivably go wrong”.
Respect for this comment.
In the original conception of the unilateralist’s curse, the problem arose from epistemically diverse actors/groups having different assessments of how risky an action was.
The mistake was in the people with the rosiest assessment of the risk of an action taking the action by themselves – in disregard of others’ assessments.
What I want more people in AI Safety to be aware of is that there are many other communities out there who think that what “AGI” labs are doing is super harmful and destabilising.
We’re not the only community concerned. Many epistemically diverse communities are looking at the actions of “AGI” labs and saying that this has to stop.
Unfortunately, in the past, core people in our community inadvertently supported the start-up of these labs. These were actions they chose to take by themselves.
If anything, unilateralist actions were taken by the accelerationists, as tacitly supported by core AI Safety folks who gave labs like DeepMind, OpenAI and Anthropic leeway to take these actions.
The rest of the world did not consent to this.
Remmelt—I agree. I think EA funders have been way too naive in thinking that, if they just support the right sort of AI development, with due concern for ‘alignment’ issues, they could steer the AI industry away from catastrophe.
In hindsight, this seems to have been a huge strategic blunder—and the big mistake was underestimating the corporate incentives and individual hubris that drive unsafe AI development despite any good intentions of funders and founders.
This is an incisive description, Geoff. I couldn’t put it better.
I’m confused what the two crosses are doing on your comment.
Maybe the people who disagreed can clarify.
I second this. I further think a lot of image and tribal concerns go into these sentiments. Many people in EA, and especially in AI Safety, see themselves as belonging to the same tribe as the AGI companies—whether they are working toward the singularity or just generally being tech people who understand that tech progress improves humanity and guides history. Another aspect of this is being drawn to technocracy and disdaining traditional advocacy (very not grey tribe). Some EAs actually work for AGI companies, and others feel pressure to cooperate and not “defect” on those around them who have made alliances with AGI companies.
Your comments are very blunt and challenging for EAs, but they are, I think, very accurate in many cases.
The AGI-accelerationists are at the very center of AGI X-risk—not least because many of them see human extinction as a positively good thing. In a very real sense, they are the X risk—and the ASI they crave is just the tool they want to be able to use to make humanity obsolete (and then gone entirely).
And, as you point out, the e/acc enthusiasts often have no epistemic standards at all, and are willing to use any rhetoric they think will be useful (e.g. ‘If you don’t support American AI hegemony, you want Chinese AI hegemony’; ‘If you don’t support AGI, you want everyone to die without the longevity drugs AGI could discover’; ‘If you oppose AGI, you’re a historically & economically ignorant luddite’, etc.)