Pausing AI is Progress


This post was written by PauseAI US Executive Director, Holly Elmore, and Organizing Director, Felix De Simone, as part of PauseAI’s Substack newsletter.

Remember when the door plug blew off that Boeing 737 MAX in midair? Did that feel like “progress”? The 737 MAX is certainly a recent aircraft model, with nearly unprecedented complexity at the controls– we could call those forms of progress. But what we want from progress in our airplanes is, first and foremost, safety. When you think of “progress” in technology, you probably think of technology becoming more accessible, more reliable, and safer before you think of it coming to market faster. Progress does not mean accepting cut corners and untested prototypes as the price of innovation. We are excited for new technological developments to make our lives better, but we don’t want new and dangerous airplanes– we want airplanes so safe we don’t even have to think about them. Progress takes work, foresight, and careful planning.

“Move fast and break things” has long been an informal motto of Silicon Valley. But sometimes, what is needed to build good technology and make real progress, rather than simply moving ahead in any direction, is a Pause to give the development process the time it requires.

Pausing AI avoids potential catastrophe. It’s possible that the entire idea of building a machine with greater intelligence than our own is misguided and leads not to progress but to danger. Surveys of thousands of AI experts have repeatedly suggested a significant chance that AI could lead to human extinction. If this is the case, then pausing AI will allow us to keep civilization safe from a new danger. What is this, if not progress?

A pause would give us time to build AI properly, free from the market pressures that today’s unregulated AI companies exert on each other by racing to make superhuman-level AI first. Under a Pause, we can make the technology right instead of just fast.

And pausing AI can do much more than just buy us time. A Pause can lead us to a better world– not merely a world with temporary “stagnation” before superintelligent AI, but a world where we exercise foresight and wisdom in determining the shape of the future we want to have.

A pause allows society to process the implications of AI so that its deployment is not merely accommodated but proactively guided, according to the preferences of everyone, not just a handful of tech billionaires. Even if we manage to solve the problem of aligning AI with human values, the question will remain– whose values? Pausing could be the difference between beneficial AI in a free society and AI that doesn’t kill us but nonetheless leads to hypercentralization of power, degradation of shared social reality, or massive disenfranchisement.

Pausing AI is part of a smart development process. Pausing AI is progress.


One popular idea of progress today is that progress simply means moving quickly, or “accelerating”. On this way of thinking, pausing to take time for deliberate and safe choices with AI is “anti-progress” because it slows the pace of product releases. True, there would be no new frontier models during a Pause, but a Pause would be filled with research, learning, social reflection, and innovation, and it would open up far better choices for the technology– far more progress– than simply barreling along the path of least resistance.

The accelerationist narrative of progress contains two critical errors:

  1. Lumping all technology together, regardless of the specific effects of that technology (a category error)

  2. Looking back only at successful technologies and extrapolating that future technologies will be safe and effective (hindsight bias)

Proponents of this narrative will point to history and note that technology has improved the lives of billions around the globe. In doing so, they are implicitly lumping AI with past technologies. They believe that AI belongs in the same “bucket” as smartphones, the internet, or the steam engine– technologies which may have done some harm, but which have ultimately left our species better off.

This is a flawed comparison.

[Image caption: An example of beneficial technological progress? Not quite.]

If experts in the field are right, superintelligent AI may be categorically different. If you want a comparison to existing technology, nuclear weapons are nearer the mark, but even this analogy falls short. Every technology thus far has been an extension of human brains and human hands. Existing technology lacks agency. But we may soon face entities with goals of their own that conflict with our interests– entities far more capable than we are of planning, mobilizing resources, and achieving their aims on a global scale.

As we approach this point, appeals to past technologies break down. A more apt comparison might be found in biology (multiple species competing in the same niche) or in human prehistory, when Homo sapiens entered Ice Age Europe and outcompeted the hominids already living there. How did that work out for the Neanderthals?

To write off Pause advocates as enemies of technological progress is to fundamentally misunderstand our situation. At best, it is like accusing nuclear disarmament advocates of being anti-progress; at worst, it is like a Neanderthal chieftain seeing a new species on the horizon and telling his tribe not to worry.

We are not medieval serfs. Of course technological progress has done incalculable good, freeing us from lives that many today would find unbearable. But instead of placing everything in the same “technological progress” bucket, we should examine the specifics of superintelligent AI and act accordingly.

The narrative of technological acceleration as inherently progressive can even influence those of us who recognize the dangers of AI and support a Pause. Some people believe that Pausing is a grim necessity to safeguard against human extinction or comparably severe risks– but only against these risks. As soon as we’re confident it won’t kill us, they argue, we should resume “progress as usual” and proceed with developing superintelligent AI.

Embedded within this idea is the assumption that superintelligent AI, as long as it is aligned with human interests, will lead us to a flourishing future by default. This might well be the case: intelligence is, roughly, the ability to solve problems, so more intelligence means more problems solved (including problems, like aging and disease, that have plagued our species from the beginning). But we should not rush headlong into this brave new world. Even an “aligned” superintelligence could be devastating to the kind of future we want to have, if we desire a future with human agency at its core.

One plausible scenario might proceed as follows. We build superintelligent AI aligned with human interests. Because this superintelligence clearly outclasses our decision-making ability, we defer to it when making difficult decisions. We defer when writing laws, planning for the future, answering the kind of questions that shape our society. Over time, we grow dependent on it– until it makes all the big decisions for us. We are still alive, perhaps even happy, but we have become a domesticated species, living aimlessly in the shadow of our own creation.

Whether you find this particular scenario plausible is beside the point. There are many scenarios like it, some of which nobody has thought of yet. What matters is that we should not resume developing superintelligence without carefully thinking through what could result from this unprecedented move.

In other words, solving the alignment problem is necessary but not sufficient to resume developing superintelligent AI. We should take the time to cultivate the future we want, instead of settling for what happens most readily “by default.”

Other conditions that might need to be met before we even consider developing superintelligence include:

  • National and global institutions have guardrails ensuring the responsible use of this technology– and we have strong reason to trust these guardrails.

  • We’ve thought extremely carefully about the role that superintelligent AI might play in our civilization, and the domains in which it will operate. We have plans to maintain human decision-making in domains such as law and politics where human opinions are crucial, and to prevent loss of agency to nonhuman systems.

  • We have weighed the set of plausible outcomes for our civilization if we develop superintelligent AI, and have determined the value of these outcomes versus the alternative of an indefinite pause.

  • We have plans to adapt our society to the effects of superintelligent AI—such as global job loss—instead of flying by the seat of our pants.

  • We have plans to ensure that powerful AI is controlled neither by private companies soon to be worth trillions of dollars, nor by individual countries advancing nationalist aims, even at the expense of their citizens.

  • We have obtained broad consensus (e.g., through a global democratic referendum) to build superintelligent AI, so that the world is not at the whim of a few tech companies.

  • We have achieved a cultural shift, in which people in general think more deeply about the future, and our “moral circle” has expanded to include future generations who will be radically affected by this technology.

In his 2020 book The Precipice: Existential Risk and the Future of Humanity, Toby Ord described the concept of a “Long Reflection”:

“If we steer humanity to a place of safety, we will have time to think. Time to ensure that our choices are wisely made; that we will do the very best we can with our piece of the cosmos [...] there may come a time, not too far away, when we mostly have our house in order and can look in earnest at where we might go from here. Where we might address this vast question about our ultimate values.”

We need just such a period of reflection when it comes to AI. Speeding ahead without steering may seem like progress in the short term, but it carries us toward a suboptimal future– and our descendants, locked into a future over which they had no say, might have a different opinion.

Discretion is the key to true progress with AI. If we pause AI, we can take the time we need to think. In a universe filled with uncertainty, and with potential technologies more uncertain still, every moment of reflection counts. The more time we have, the better our chances of progressing to a flourishing human future, avoiding the pits and perils along the way.