Google cofounder Larry Page thinks superintelligent AI is “just the next step in evolution.” In fact, Page, who’s worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are “speciesist” and “sentimental nonsense.”
In July, former Google DeepMind senior scientist Richard Sutton — one of the pioneers of reinforcement learning, a major subfield of AI — said that the technology “could displace us from existence,” and that “we should not resist succession.” In a 2015 talk, Sutton told his audience to suppose “everything fails” and AI “kill[s] us all,” then asked, “Is it so bad that humans are not the final form of intelligent life in the universe?”
This is how I begin the cover story for Jacobin’s winter issue on AI. Some very influential people openly welcome an AI-driven future, even if humans aren’t part of it.
Whether you’re new to the topic or work in the field, I think you’ll get something out of it.
I spent five months digging into the AI existential risk debates and the economic forces driving AI development. This was the most ambitious story of my career — it was informed by interviews and written conversations with three dozen people — and I’m thrilled to see it out in the world. They include:
Deep learning pioneer and Turing Award winner Yoshua Bengio
Pathbreaking AI ethics researchers Joy Buolamwini and Inioluwa Deborah Raji
Reinforcement learning pioneer Richard Sutton
Cofounder of the AI safety field Eliezer Yudkowsky
Renowned philosopher of mind David Chalmers
Santa Fe Institute complexity professor Melanie Mitchell
Researchers from leading AI labs
Some of the most powerful industrialists and companies are plowing enormous amounts of money and effort into increasing the capabilities and autonomy of AI systems, all while acknowledging that superhuman AI could literally wipe out humanity:
Bizarrely, many of the people actively advancing AI capabilities think there’s a significant chance that doing so will ultimately cause the apocalypse. A 2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to “human extinction or [a] similarly permanent and severe disempowerment” of humanity. Just months before he cofounded OpenAI, Altman said, “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
This is a pretty crazy situation!
But not everyone agrees that AI could cause human extinction. Some think that the idea itself causes more harm than good:
Some fear not the “sci-fi” scenario where AI models get so capable they wrest control from our feeble grasp, but instead that we will entrust biased, brittle, and confabulating systems with too much responsibility, opening a more pedestrian Pandora’s box full of awful but familiar problems that scale with the algorithms causing them. This community of researchers and advocates — often labeled “AI ethics” — tends to focus on the immediate harms being wrought by AI, exploring solutions involving model accountability, algorithmic transparency, and machine learning fairness.
Others buy the idea of transformative AI, but think it’s going to be great:
A third camp worries that when it comes to AI, we’re not actually moving fast enough. Prominent capitalists like billionaire Marc Andreessen agree with safety folks that AGI is possible but argue that, rather than killing us all, it will usher in an indefinite golden age of radical abundance and borderline magical technologies. This group, largely coming from Silicon Valley and commonly referred to as AI boosters, tends to worry far more that regulatory overreaction to AI will smother a transformative, world-saving technology in its crib, dooming humanity to economic stagnation.
Billionaire venture capitalist Marc Andreessen (who blocked me long ago) writes that slowing down AI is akin to murder! He may be the most famous proponent of effective accelerationism (e/acc):
In June, Andreessen published an essay called “Why AI Will Save the World,” where he explains how AI will make “everything we care about better,” as long as we don’t regulate it to death. He followed it up in October with his “Techno-Optimist Manifesto,” which, in addition to praising a founder of Italian fascism, named as enemies of progress ideas like “existential risk,” “sustainability,” “trust and safety,” and “tech ethics.” Andreessen does not mince words, writing, “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing [are] a form of murder.”
While this debate plays out, the vast majority of the money spent on AI is going into making it more capable, autonomous, and profitable. A compliant artificial general intelligence (AGI) would be the worker capitalists dream of — no need for bathroom breaks, no risk of unionizing, and no wages — just the cost of the computation.
But many AI researchers expect that building a true AGI (the goal of leading AI labs) will lead to an explosion in capabilities, ultimately resulting in systems far more powerful than humans:
The October “Managing AI Risks” paper states:
There is no fundamental reason why AI progress would slow or halt when it reaches human-level abilities. . . . Compared to humans, AI systems can act faster, absorb more knowledge, and communicate at a far higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions.
Even systems that remain at human level would likely be wildly profitable to run.
Here’s a stylized version of the idea of “population” growth spurring an intelligence explosion: if AI systems rival human scientists at research and development, the systems will quickly proliferate, leading to the equivalent of an enormous number of new, highly productive workers entering the economy. Put another way, if GPT-7 can perform most of the tasks of a human worker and it only costs a few bucks to put the trained model to work on a day’s worth of tasks, each instance of the model would be wildly profitable, kicking off a positive feedback loop. This could lead to a virtual “population” of billions or more digital workers, each worth much more than the cost of the energy it takes to run them. OpenAI’s Ilya Sutskever thinks it’s likely that “the entire surface of the earth will be covered with solar panels and data centers.”
(Where would we live? Unclear.)
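To see how fast that feedback loop compounds, here’s a toy model in Python. Every number in it — revenue per instance, compute costs, the price of adding capacity — is a hypothetical placeholder chosen to illustrate the dynamic, not an estimate from the article:

```python
# Toy compound-growth model of the "digital worker" feedback loop.
# All numbers below are hypothetical illustrations, not estimates.

daily_revenue_per_instance = 500.0  # value of a day's worth of tasks ($)
daily_cost_per_instance = 5.0       # compute cost to run one instance ("a few bucks")
cost_to_add_instance = 10_000.0     # capital cost of compute for one more instance ($)

instances = 1_000.0
for day in range(365):
    # Each instance earns far more than it costs to run...
    profit = instances * (daily_revenue_per_instance - daily_cost_per_instance)
    # ...and all profit is reinvested in more compute capacity.
    instances += profit / cost_to_add_instance

print(f"Instances after one year: {instances:,.0f}")
```

With these made-up parameters, the population compounds at roughly 5 percent per day and reaches tens of billions of instances within a year — the point being not the specific numbers, but how quickly any loop of this shape runs away once each instance earns more than it costs.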
As AI systems become more valuable, it will be harder to rein in their developers. Many have theorized about how a superintelligence could resist efforts to turn it off, but corporations are already plenty good at continuing to do risky things that we’d really rather they didn’t:
“Just unplug it,” goes the common objection. But once an AI model is powerful enough to threaten humanity, it will probably be the most valuable thing in existence. You might have an easier time “unplugging” the New York Stock Exchange or Amazon Web Services.
So why do some people think superintelligent AI would pose a threat to humanity?
The fear that keeps many x-risk people up at night is not that an advanced AI would “wake up,” “turn evil,” and decide to kill everyone out of malice, but rather that it comes to see us as an obstacle to whatever goals it does have. In his final book, Brief Answers to the Big Questions, Stephen Hawking articulated this, saying, “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.”
By and large, the left isn’t seriously engaging with AI, and by sitting it out, we’re giving up a chance to shape a technology that could unmake (or remake) society:
After years of inaction, the world’s governments are finally turning their attention to AI. But by not seriously engaging with what future systems could do, socialists are ceding their seat at the table.
In no small part because of the types of people who became attracted to AI, many of the earliest serious adopters of the x-risk idea either engaged in extremely theoretical research on how to control advanced AI or started AI companies. But for a different type of person, the response to believing that AI could end the world is to try to get people to stop building it.
We may be entering a critical period akin to the drafting of the constitution for a new country with the potential to be more powerful than any that came before. Right now, that constitution is being drafted by unelected techno-capitalists:
Governments are complex systems that wield enormous power. The foundation upon which they’re established can influence the lives of millions now and in the future. Americans live under the yoke of dead men who were so afraid of the public that they built antidemocratic measures that continue to plague our political system more than two centuries later.
It’s ironic given how similar the problem is to another thing that leftists tend to think A LOT about! When my lefty friends point out that capitalism is the real misaligned superintelligence, it’s not exactly reassuring:
We may not need to wait to find superintelligent systems that don’t prioritize humanity. Superhuman agents ruthlessly optimize for a reward at the expense of anything else we might care about. The more capable the agent and the more ruthless the optimizer, the more extreme the results.
I found that the vitriolic debate between the people worried about extinction and those worried about AI’s existing harms hides the more meaningful divide — between those trying to make AI more profitable and those trying to make it more human.
There’s so much more in the final piece, so please do check it out and consider subscribing to Jacobin to support this kind of writing.
If you’d like to stay up to date with my work, subscribe to my Substack.
Thank you for writing this. I thought it was an interesting article. I want to push back a bit against the claim that AI risk should primarily or even significantly be seen as a problem of capitalism. You write:
I do think it is true that the drive to advance AI technology faster makes AI safety harder, and that competition under the capitalist system is one thing generating this drive. But I don’t think this is unique to capitalism, or that things would be much better under some other economic system.
The Soviet Union was not capitalist, and yet it developed dangerous nuclear weapons and bioweapons. It put tremendous resources into technological development, e.g., space technology, missile systems, military aircraft, etc. I couldn’t find figures for the Cold War as a whole, but in 1980 the USSR outspent the US (and Japan, and Germany) on R&D, in terms of % of GDP. And it did not seem to do better at developing these technologies in a safe way than capitalist countries did (cf. Sverdlovsk, Chernobyl).
If you look at what is probably the second most capable country when it comes to AI, China, you see an AI industry driven largely by priorities set by the state, and investment partly directed or provided by the state. China also has markets, but the Chinese state is highly interested in advancing AI progress, and I see no reason why this would be different under a non-market-based system. This is pretty clear from, e.g., its AI development plan and Made in China 2025, and it has much more to do with national priorities (security, control, strategic competition, and economic growth) than free market competition.
One intuitive argument for why capitalism should be expected to advance AI faster than competing economic systems is that capitalist institutions incentivize capital accumulation, and AI progress is mainly driven by the accumulation of computer capital.
This is a straightforward argument: it is traditionally considered that a core element of capitalist institutions is the ability to own physical capital and receive income from that ownership. AI progress and AI-driven growth require physical computer capital, both for training and for inference. Right now, all the major tech companies, including Microsoft, Meta, and Google, are spending large sums to amass a stockpile of compute to train larger, more capable models and serve customers AI services via cloud APIs. The obvious reason these companies are taking these actions is that they expect to profit from their ownership of AI capital.
While it’s true that competing economic systems also have mechanisms to accumulate capital, the capitalist system is practically synonymous with this motive. For example, while a centrally planned government could theoretically decide to spend 20% of GDP to purchase computer capital, the politicians and bureaucrats within such a system might only have weak incentives to pursue such a strategy, since they may not directly profit from the decision over and above the gains received by the general population. By contrast, a decentralized system of property and prices makes such a decision extremely natural if one expects huge returns from investments in physical capital.
One can interpret this argument as a positive argument in favor of capitalist institutions (as I mostly do), or as an argument for reining in these institutions if you think that rapid AI progress is bad.
That makes sense. I agree that capitalism likely advances AI faster than other economic systems. I just don’t think the difference is large enough for the economic system to be a very useful frame of analysis (or point of intervention) when it comes to existential risk, let alone the primary frame.
Thanks for your thoughtful engagement! Chalmers made a similar point during our interview (that socialist societies would also experience strong pressures to build AGI).
I tried to describe the landscape as it exists right now, without making many claims about what would likely be true under a totally different economic/political system. That being said, I do think it’s interesting that the leading labs are all corporations.
If you look at firms in a market economy as profit-maximizing agents and governments as agents trying to balance many interests, such as stability, economic growth, geopolitical/military advantage, popular support, international respect, etc., then I think it’s easier to see why firms are pursuing AGI far more aggressively (by decreasing the cost of labor via automation, you can dramatically increase your profitability). For a government, AGI may boost economic growth and geopolitical/military advantage at the expense of stability and popular support.
And if you look at existential risk from AI as an externality, governments are more likely to take on the costs of mitigating that kind of risk whereas firms are more likely to pass them on to the broader society.
I’ve seen some claims that the CCP is less interested in AGI and more interested in narrow applications, like machine vision, facial recognition, and natural language processing, which can all help shore up its power long term. I haven’t gone deep into this yet. I’ll dig into the China links you sent later.
Thanks for sharing this, and more importantly, for writing it. From my perspective, this is the best reporting on AI that I’ve seen. I’ve shared it with previously ultra-sceptical friends, and had an uncharacteristically positive response.
Such a good piece! It baffles me too that Marxists, socialists, labor-focused academics, etc. are not poring over the issues around AI—if ever there was a threat to the proletariat (and especially the Western, bourgeois, left-leaning section), this is it! The trajectory seems clearly to be toward replacing human labor with machines, and in so doing concentrating wealth and power in a way and at a speed I think we have never seen before.
Edit/addition: The Hollywood writers’ strike is actually a promising example that some labor-focused people and organizations are on top of this.
Seems really good, though I didn’t read it fully. I liked it even before I realised you’d written it.
Minor disagreement:
I guess I’d prefer something like “it’s optimisation versus humanity” or “it’s unfettered capitalism versus humanity”. Capitalism is a good servant but a bad master and I agree that if we just optimise for GDP we’ll probably end in real trouble. But capitalism as commonly understood is really good.
Also this felt a little out of tone with the article and like you wanted some lefty credentials. Would you actually defend this point?
To the extent this is an empirical claim about superhuman agents we are likely to build and not merely a definition, it needs to be argued for, not merely assumed. “Ruthless” optimization could indeed be bad for us, but current AIs don’t seem well-described as ruthless optimizers.
Instead, LLMs appear corrigible more-or-less by default, and there don’t appear to be strong incentives to purposely make AIs that are ruthless agents if doing so predictably harmed us.
(There’s a more plausible argument that we have strong incentives to build non-ruthless agents, but these agents, by virtue of not being ruthless, seem much less risky.)
To the extent superhuman agents are simply ruthless by definition, I’d argue that this statement is largely irrelevant, since we don’t seem likely to want to build ruthless agents that would predictably harm us. It’s possible such agents could come about by accident, but again, this premise needs to be argued for, not merely assumed.
Thanks for sharing, Garrison! For balance, readers may want to check Matthew Barnett’s quick take on pro-AI-acceleration. Here is the 1st paragraph:
Executive summary: The article discusses debates around advanced AI systems, including fears they could replace or threaten humanity, optimism they will improve the world, and issues of control and alignment given their potential power.
Key points:
Some influential figures welcome superintelligent AI even if it replaces humans, while some AI researchers think advanced systems could threaten human existence.
Another view sees AI’s current harms as more pressing than speculative future risks.
Capitalists want to accelerate AI capabilities for profit, while critics worry this cedes control to unelected corporations.
Advanced AI systems optimizing narrow goals could come to see humans as an obstacle, intentionally or not.
Leftists have largely ceded this debate to techno-capitalists, missing a chance to shape a transformative technology.
The divide is less between AI alarmists and skeptics than between those seeking profit versus human interests.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.