In my view, the extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely over the next several decades. While a longtermist utilitarian framework takes even a 0.01 percentage point reduction in extinction risk quite seriously, there appear to be very few plausible ways that all intelligent life originating from Earth could go extinct in the next century. Ensuring a positive transition to artificial life seems more useful on current margins.
The extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely.
Extremely unlikely to happen… when? Surely all Earth-originating intelligent life will eventually go extinct, because the universe’s resources are finite.
In my comment, I later specified “in [the] next century,” though it’s quite understandable if you missed that. I agree that eventual extinction of Earth-originating intelligent life (including AIs) is likely; however, I don’t currently see a plausible mechanism for this to occur over time horizons that are brief by cosmological standards.
(I just edited the original comment to make this slightly clearer.)
Thanks for the clarification. I didn’t mean to be pedantic: I think these discussions are often unclear about the relevant time horizon. Even Bostrom admits (somewhere) that his earlier writing about existential risk left the timeframe unspecified (vaguely talking about “premature” extinction).
On the substantive question, I’m interested in learning more about your reasoning. To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973rd century AD (conditional on survival up to that century). This is because it seems plausible that humanity (or its descendants) will soon develop technology with enough destructive potential to actually kill all intelligence. Then the question becomes whether they will also successfully develop the technology to protect intelligence from being so destroyed. But I don’t think there are decisive arguments for expecting the offense-defense balance to favor either defense or offense (the strongest argument for pessimism, in my view, is stated in the first paragraph of this book review). Do you deny that this technology will be developed “over time horizons that are brief by cosmological standards”? Or are you confident that our capacity to destroy will be outpaced by our capacity to protect against destruction?
I tentatively agree with your statement that,

To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973rd century AD.
That said, I still suspect the absolute probability of total extinction of intelligent life during the 21st century is very low. To be more precise, I’d put this probability at around 1% (to be clear, I recognize others may not agree that this credence should count as “extremely low” or “very low” in this context; a rough annualized version of the figure is sketched at the end of this comment). To justify this estimate, I would highlight several key factors:
Over hundreds of millions of years, complex life has demonstrated remarkable resilience. Since the first vertebrates colonized land during the late Devonian period (approximately 375–360 million years ago), no extinction event has ever eradicated all species capable of complex cognition. Even after the most catastrophic mass extinctions, such as the end-Permian extinction and the K-Pg extinction, vertebrates not only recovered but surpassed their previous levels of ecological dominance and cognitive complexity, as seen in the increasing brain size and adaptability of many lineages over time.
Unlike non-intelligent organisms, intelligent life—starting with humans—possesses advanced planning abilities and an exceptional capacity to adapt to changing environments. Humans have successfully settled in nearly every climate and terrestrial habitat on Earth, from tropical jungles to arid deserts and even Antarctica. This extreme adaptability suggests that intelligent life is less vulnerable to complete extinction than other complex life forms.
As human civilization has advanced, our species has become increasingly robust against most types of extinction events rather than more fragile. Technological progress has expanded our ability to mitigate threats, whether they come from natural disasters or disease. Our massive global population further reduces the likelihood that any single event could exterminate every last human, while our growing capacity to detect and neutralize threats makes us better equipped to survive crises.
History shows that even in cases of large-scale violence and genocide, the goal has almost always been the destruction of rival groups—not the annihilation of all life, including the perpetrators themselves. This suggests that intelligent beings have strong instrumental reasons to avoid total extinction events. Even in scenarios involving genocidal warfare, the likelihood of all intelligent beings willingly or accidentally destroying all life—including their own—seems very low.
I have yet to see compelling evidence that near-term or medium-term technological advances will introduce a weapon or catastrophe capable of wiping out all forms of intelligent life. While there are certainly near-term technological risks that threaten human life, none currently appears to pose a credible risk of total extinction of intelligent life.
Some of the most destructive long-term technologies—such as asteroid manipulation for planetary bombardment—are likely to develop alongside technologies that enhance our ability to survive and expand into space. As our capacity for destruction grows, so too will our ability to establish off-world colonies and secure alternative survival strategies. This suggests that the overall trajectory of intelligent life seems to be toward increasing resilience, not increasing vulnerability.
Artificial life could rapidly evolve to become highly resilient to environmental shocks. Future AIs could be designed to be at least as robust as insects—able to survive in a wide range of extreme and unpredictable conditions. Similar to plant seeds, artificial hardware could be engineered to efficiently store and execute complex self-replicating instructions in a highly compact form, enabling them to autonomously colonize diverse environments by utilizing various energy sources, such as solar and thermal energy. Having been engineered rather than evolved naturally, these artificial systems could take advantage of design principles that surpass biological organisms in adaptability. By leveraging a vast array of energy sources and survival strategies, they could likely colonize some of the most extreme and inhospitable environments in our solar system—places that even the most resilient biological life forms on Earth could never inhabit.
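As a footnote on the ~1% figure above: treating the extinction hazard as constant across years (a simplifying assumption made purely for illustration), a 1% chance over the century corresponds to an annual rate of roughly one in ten thousand:

$$1 - (1 - h)^{100} = 0.01 \;\Rightarrow\; h = 1 - 0.99^{1/100} \approx 1.0 \times 10^{-4}$$

In reality the hazard would presumably be concentrated around particular technological transitions rather than spread uniformly, so this is only meant to convey the order of magnitude of the per-year risk being claimed.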