“If we go extinct due to misaligned AI, at least nature will continue, right? … right?”


[memetic status: stating directly despite it being a clear consequence of core AI risk knowledge because many people have “but nature will survive us” antibodies to other classes of doom and misapply them here.]

Unfortunately, no.[1]

Technically, “Nature”, meaning the fundamental physical laws, will continue. However, people usually mean forests, oceans, fungi, bacteria, and generally biological life when they say “nature”, and those would not have much chance competing against a misaligned superintelligence for resources like sunlight and atoms, which are useful to both biological and artificial systems.

There’s a thought that comforts many people when they imagine humanity going extinct due to a nuclear catastrophe or runaway global warming: once the mushroom clouds or CO2 levels have settled, nature will reclaim the cities. Maybe humanity in its hubris will have wounded Mother Earth and paid the price itself, but she’ll recover in time, and she has all the time in the world.

AI is different. It would not simply destroy human civilization with brute force, leaving the flows of energy and other life-sustaining resources open for nature to make a resurgence. Instead, the AI would still exist after wiping humans out, and it would feed on the same resources nature needs, but far more capably.

You can draw strong parallels to the way humanity has captured huge parts of the biosphere for itself. Except, in the case of AI, we’re the slow-moving process that is unable to keep up.

A misaligned superintelligence would have many cognitive superpowers, including the ability to develop advanced technology. For almost any objective it might have, it would require basic physical resources: atoms to construct things that further its goals, and energy (such as sunlight) to power those things. These resources are also essential to existing life forms, and, just as humans drove many species extinct by hunting or outcompeting them, AI could do the same to all life, and to the planet itself.

Planets are not a particularly efficient use of atoms for most goals, and many goals an AI may arrive at can demand an unbounded amount of resources. For each square meter of usable surface, there are millions of tons of magma and other materials locked up beneath it. Rearranging these into a more efficient configuration could look like strip-mining the entire planet with self-replicating factories, firing the extracted materials into space, and using them to build megastructures that harness a large fraction of the sun’s output. Looking further out, the sun and other stars are themselves huge piles of resources spilling unused energy out into space, and no law of physics renders them invulnerable to sufficiently advanced technology.
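(As a back-of-envelope sanity check of that “millions of tons” figure, here is a quick calculation; the values for Earth’s mass and surface area are standard approximations, not from the original article.)

```python
# Rough sanity check: average mass of material beneath each
# square meter of Earth's surface.

EARTH_MASS_KG = 5.97e24      # approximate mass of Earth
EARTH_SURFACE_M2 = 5.1e14    # approximate surface area of Earth

kg_per_m2 = EARTH_MASS_KG / EARTH_SURFACE_M2
tons_per_m2 = kg_per_m2 / 1_000  # metric tons

print(f"~{tons_per_m2:.1e} metric tons per square meter")
# Prints ~1.2e+07: on the order of ten million metric tons
# of rock, magma, and metal under every square meter.
```

If anything, “millions of tons” understates it: the average works out to roughly twelve million metric tons per square meter.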

Some time after a misaligned, optimizing AI wipes out humanity, it is likely that there will be no Earth and no biological life, but only a rapidly expanding sphere of darkness eating through the Milky Way as the AI reaches and extinguishes or envelops nearby stars.

This is generally considered a less comforting thought.

This is an experiment in sharing highlighted content from aisafety.info. Browse around to view some of the other 300 articles that are live, or explore related questions!

  1. ^

    There are some scenarios where nature might survive: especially extreme cases of misuse rather than agentic misaligned systems, or edge cases where a system is misaligned with respect to humanity but terminally values keeping nature around. But this is not the mainline way things go.
