havequick—thanks for an excellent reply.
You add some crucial additional context about some factors that will probably amplify the fury of an anti-AI backlash. Most centrally, the posthumanist ambitions of the Singularity enthusiasts who look forward to humans being replaced by machine intelligences represent an ideology that’s profoundly repulsive, outrageous, & treasonous to most people. Once those people are widely seen as traitors to our species, and their profound and pervasive influence on the AI industry is understood, an anti-AI backlash seems highly likely.
The main challenge at that point will be to keep the anti-AI backlash peaceful and effective, rather than counter-productively violent. This will basically require ‘just the right amount’ of moral stigmatization of AI: enough to pause it for a few decades, but not so much that AI researchers or AI business leaders get physically hurt.
I agree with keeping it peaceful and effective, but I don’t think that trying to calibrate “just the right amount” is really feasible or desirable. At least not in the way that EA/LW-style approaches to AGI risk have gone about it, which feels like it very easily leads to totalizing fear/avoidance/endless rumination/scrupulosity around doing anything which might be too unilateral or rash. The exact same sentiment is expressed in this post and the post it’s in reply to, so I am confident this is very much a real problem: https://twitter.com/AISafetyMemes/status/1661853454074105858
“Too unilateral or rash” is not a euphemism for “non-peaceful”: I really do specifically mean that in these EA/LW/etc circles there’s a tendency to have a pathological fear (that can only be discharged by fully assuaging the scrupulosity of oneself and one’s peers) of taking decisive impactful action.
To get a bit more object-level: I believe it is cowardice and pathological scrupulosity not to take a strong, assertive line against very dangerous work (GoF virology on pandemic pathogens, AGI/ASI work, etc.) because of fears that some unrelated psychotic wacko might use it as an excuse to do something violent. Frankly, if unstable wackos do violent things, then the guilt falls squarely on them, not on whatever internet posts they might have been reading.
“Don’t upset the AGI labs” and “don’t do PR to normies about how the efforts towards AGI/ASI explicitly aim at human obsolescence/replacement/disempowerment” and the like feel to me like outgrowths of the same dynamic that led to the pathological (and continuing!) aversion to being open about the probable lab origin of Covid, because of fears that people (normies, wackos, political leaders, funding agencies alike) might react in bad ways. I don’t think this is good at all. I think that contorting the truth, misrepresenting, and dissembling in order to subtly steer/manipulate society and public/elite sentiment leads to far worse outcomes than the consequences of just telling the truth.
To get very object-level: the AGI-accelerationist side of things does not hold itself to any sort of scrupulosity or rumination about consequences. If anything, that side is defined by reckless and aggressive unilateral action, both in terms of discourse and capabilities development. Unilateral to the point that there is a nontrivial contingent of those working towards AGI who openly cheer that AI progress is so fast that political systems cannot react before AGI happens.
pmarca’s piece about AI (published while I was writing this) is called “Why AI Will Save the World”. Its thesis is that fears about AI are the outgrowth of an “irrational”, “hysterical” “moral panic” driven by Luddite economic innumerates, millenarian doom-cultists, and ever-present hall-monitors, and that accelerating AI “as fast and aggressively” as possible is a “moral obligation”; it also strongly insinuates that those not on board act as enablers of the PRC’s dreams of AI-fueled world domination.
There is, of course, no counterargument to the contention that, at a certain level of AGI capability, the costs of human involvement will be greater than its value, and that it will be more efficient to simply remove humans from the loop (gains from trade / comparative advantage / productivity growth didn’t save draught horses from the glue factory when tractors/trucks/cars rolled around), which leads to near-total human technological unemployment/disempowerment. There is no mention that the work towards AGI explicitly and openly seeks to make the overwhelming majority of human work economically unviable. There is no mention of any of the openly admitted motivations, consequences, and goals of AGI research that are grievously opposed to most of humanity (those I highlighted in my previous comment).
But that sort of piece doesn’t need any of that, since the point isn’t to make a steelmanned case against AI doomers, but to degrade their credibility and shatter their ability to coordinate by depicting them as innumerates, Luddites, cultists, disingenuous actors, or useful idiots of the PRC.
I cannot help but see the AGI-accelerationist side winning decisively, soon, and irreversibly if those who are opposed continue to be so self-limitingly scrupulous about taking action because of incredibly nebulous fears. At some point, those who aren’t on board with acceleration towards AGI/ASI have to start assertively taking the initiative if they don’t want to lose by default.
Again, to be totally explicit and clear, “assertively taking the initiative” does not mean “violence”. I agree with keeping it peaceful and effective. But it does mean things like “start being unflinchingly open about the true motivations, goals, and likely/intended/acceptable consequences of AGI/ASI research” and “stop torturing yourselves with scrupulosity and fears of what might possibly conceivably go wrong”.
Respect for this comment.
In the original conception of the unilateralist’s curse, the problem arose from epistemically diverse actors/groups having different assessments of how risky an action was.
The mistake was the people with the rosiest assessment of an action’s risk going ahead and taking the action by themselves, in disregard of others’ assessments.
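To make that concrete, here is a minimal Monte Carlo sketch of the standard setup (the Gaussian noise model, the function name, and the specific numbers are purely illustrative assumptions, not anyone’s actual estimates): each actor forms an independent noisy estimate of an action’s net value, and the action gets taken whenever the single most optimistic actor judges it positive.

```python
import random

def unilateralist_sketch(n_actors: int, true_value: float = -1.0,
                         noise_sd: float = 2.0, trials: int = 100_000) -> float:
    """Fraction of trials in which at least one actor acts unilaterally.

    Each actor gets an independent noisy estimate of the action's true net
    value; the action is taken if the single most optimistic estimate is
    positive, regardless of what everyone else concluded.
    """
    acted = 0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_actors)]
        if max(estimates) > 0:  # the rosiest assessment decides alone
            acted += 1
    return acted / trials

if __name__ == "__main__":
    # A genuinely harmful action (true value < 0) gets taken more and more
    # often as the number of independent would-be unilateralists grows.
    for n in (1, 3, 10, 30):
        print(f"{n:>2} actors -> acted in {unilateralist_sketch(n):.0%} of runs")
```

Even with a clearly negative true value, the chance that somebody acts climbs quickly with the number of independent actors; the usual remedy in that literature is to defer to some aggregate of the group’s assessments rather than to one’s own outlier estimate.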
What I want more people in AI Safety to be aware of is that there are many other communities out there who think that what “AGI” labs are doing is super harmful and destabilising.
We’re not the only community concerned. Many epistemically diverse communities are looking at the actions of “AGI” labs and saying that this has got to stop.
Unfortunately, in the past, core people in our community have inadvertently supported the start-up of these labs. These were actions they chose to take by themselves.
If anything, the unilateralist actions were taken by the accelerationists, tacitly supported by core AI Safety folks who gave labs like DeepMind, OpenAI and Anthropic the leeway to take these actions.
The rest of the world did not consent to this.
Remmelt—I agree. I think EA funders have been way too naive in thinking that, if they just support the right sort of AI development, with due concern for ‘alignment’ issues, they could steer the AI industry away from catastrophe.
In hindsight, this seems to have been a huge strategic blunder—and the big mistake was underestimating the corporate incentives and individual hubris that drive unsafe AI development despite any good intentions of funders and founders.
This is an incisive description, Geoff. I couldn’t put it better.
I’m confused what the two crosses are doing on your comment.
Maybe the people who disagreed can clarify.
I second this. I further think there are a lot of image and tribal concerns that go into these sentiments. Many people in EA, and especially in AI Safety, sort of see themselves as being in the same tribe as the AGI companies, whether they are working toward the singularity or just generally see themselves as tech people who understand that tech progress improves humanity and guides history. Another aspect of this is being drawn to technocracy and disdaining traditional advocacy (very not grey tribe). Some EAs actually work for AGI companies, and others feel pressure to cooperate and not “defect” on people around them who have made alliances with AGI companies.
Your comments are very blunt and challenging for EAs, but they are, I think, very accurate in many cases.
The AGI-accelerationists are at the very center of AGI X-risk—not least because many of them see human extinction as a positively good thing. In a very real sense, they are the X-risk—and the ASI they crave is just the tool they want to use to make humanity obsolete (and then gone entirely).
And, as you point out, the e/acc enthusiasts often have no epistemic standards at all, and are willing to use any rhetoric they think will be useful (e.g. ‘If you don’t support American AI hegemony, you want Chinese AI hegemony’; ‘If you don’t support AGI, you want everyone to die without the longevity drugs AGI could discover’; ‘If you oppose AGI, you’re a historically & economically ignorant luddite’, etc.)