If an anti-AI backlash gets formalized into strong laws and regulations against AGI development, leading governments could make it prohibitively difficult, costly, and risky to develop AGI. This doesn’t necessarily require a global totalitarian government panopticon monitoring all computer research. Instead, the moral stigmatization automatically imposes the panopticon. If most people in the world agree that AGI development is evil, they will be motivated to monitor their friends, family, colleagues, neighbors, and everybody else who might be involved in AI. They become the eyes and ears ensuring compliance. They can report evil-doers (AGI developers) to the relevant authorities – just as they would be motivated to report human traffickers or terrorists. And, unlike traffickers and terrorists, AI researchers are unlikely to have the capacity or willingness to use violence to deter whistle-blowers from whistle-blowing.
Something to add: this sort of outcome can be augmented and bootstrapped into reality with economic incentives that make it risky to work on developing AGI-like systems while simultaneously rewarding those who report such work. And again, this requires nothing like a nightmare global totalitarian thought-police panopticon (the spectre of which certain AI accelerationists commonly invoke as a reason not to regulate or halt work towards AGI).
These two posts (by the same author, I think) give an example of such a scheme (ironically, inspired by Hanson's writings on fine-insured bounties): https://andrew-quinn.me/ai-bounties/ and https://www.lesswrong.com/posts/AAueKp9TcBBhRYe3K/fine-insured-bounties-as-ai-deterrent
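For concreteness, here is a toy sketch of the fine-insured-bounty mechanic as I understand it from those posts. Everything specific in it (the class name, the fine amount, the 90% bounty share) is my own illustrative assumption, not something the authors specify:

```python
# Toy model of a fine-insured bounty as an AGI-development deterrent.
# The idea: a fine levied on a proven violator is paid out by their
# mandatory liability insurer, and most of it goes to whoever reported
# the violation. All numbers and names below are illustrative.

from dataclasses import dataclass

@dataclass
class FineInsuredBounty:
    fine: float          # fine owed on a proven violation
    bounty_share: float  # fraction of the fine paid to the reporter

    def payout(self, violation_proven: bool) -> dict:
        """Split the fine between the reporter and the enforcing authority."""
        if not violation_proven:
            return {"reporter": 0.0, "authority": 0.0, "insurer_pays": 0.0}
        to_reporter = self.fine * self.bounty_share
        return {
            "reporter": to_reporter,
            "authority": self.fine - to_reporter,
            "insurer_pays": self.fine,  # the insurer fronts the whole fine
        }

scheme = FineInsuredBounty(fine=10_000_000.0, bounty_share=0.9)
print(scheme.payout(violation_proven=True))
# {'reporter': 9000000.0, 'authority': 1000000.0, 'insurer_pays': 10000000.0}
```

The incentive property doing the work: the expected value of reporting is large and positive, while the expected cost of being reported scales with the fine, so would-be violators are deterred without any mass-surveillance apparatus.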
Things worth noting that are not in either of those posts (though possibly in other writings by the author[s]):
- The technical capabilities allowing for decentralized, robust coordination that creates and responds to real-world money incentives have drastically improved in the past decade. It is an incredibly hackneyed phrase, but cryptocurrency does provide a scaffold onto which such systems can be built (see the sketch after this list).
- Even putting aside the extinction/x-risk concerns, the median person has financial incentives to support systems that can peaceably yet robustly deter the creation of AI systems that would take any job they could get ("AGI") and thereby leave them in an abyssal state of dependence, without income and without a stake or meaningful role in society, for the rest of their life.
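On that first point, here is a minimal sketch of what such a scaffold might look like. This is ordinary Python standing in for on-chain contract logic, and every rule in it (the verifier set, the two-of-three quorum, the slashable reporter stake) is my own assumption rather than anything from the posts:

```python
# Toy decentralized bounty pool: many small contributors escrow funds,
# and a reporter is paid only if a quorum of designated verifiers
# attests that the reported AGI development is real. In practice this
# logic would live in a smart contract; all names here are assumptions.

class BountyPool:
    def __init__(self, verifiers: set[str], quorum: int, reporter_stake: float):
        self.balance = 0.0
        self.verifiers = verifiers
        self.quorum = quorum
        self.reporter_stake = reporter_stake   # slashed on unverified reports
        self.claims: dict[str, set[str]] = {}  # claim_id -> attesting verifiers

    def contribute(self, amount: float) -> None:
        self.balance += amount

    def report(self, claim_id: str) -> None:
        # Reporter locks a stake so spam and false reports are costly.
        self.balance += self.reporter_stake
        self.claims[claim_id] = set()

    def attest(self, claim_id: str, verifier: str) -> None:
        if verifier in self.verifiers:
            self.claims[claim_id].add(verifier)

    def settle(self, claim_id: str, payout: float) -> float:
        """Return stake plus bounty if the quorum attested; else slash the stake."""
        if len(self.claims.pop(claim_id)) < self.quorum:
            return 0.0  # stake stays in the pool
        paid = self.reporter_stake + min(payout, self.balance - self.reporter_stake)
        self.balance -= paid
        return paid

pool = BountyPool(verifiers={"v1", "v2", "v3"}, quorum=2, reporter_stake=100.0)
pool.contribute(5_000.0)       # crowdfunded deterrence budget
pool.report("lab-x-training-run")
pool.attest("lab-x-training-run", "v1")
pool.attest("lab-x-training-run", "v2")
print(pool.settle("lab-x-training-run", payout=1_000.0))  # 1100.0
```

The design choice worth flagging: the quorum of verifiers replaces any central enforcement authority, and the slashable stake is what keeps the mechanism from being flooded with false reports. Both are standard patterns in existing crypto systems, which is the sense in which the scaffold already exists.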