I realize my position can be confusing, so let me clarify it as plainly as I can: I do not regard the extinction of humanity as anything close to “fine.” In fact, I think it would be a devastating tragedy if every human being died. I have repeatedly emphasized that a major upside of advanced AI lies in its potential to accelerate medical breakthroughs—breakthroughs that might save countless human lives, including potentially my own. Clearly, I value human lives, as otherwise I would not have made this particular point so frequently.
What seems to cause confusion is that I also argue a more subtle point: while human extinction would be unbelievably bad, it would likely not be astronomically bad in the strict sense used by the “astronomical waste” argument. The standard “astronomical waste” argument says that if humanity disappears, then all possibility for a valuable, advanced civilization vanishes forever. But in a scenario where humans die out because of AI, civilization would continue—just not with humans. That means a valuable intergalactic civilization could still arise, populated by AI rather than by humans. From a purely utilitarian perspective that counts the existence of a future civilization as extremely valuable—whether human or AI—this difference downgrades the catastrophe from “astronomically, supremely, world-endingly awful” to “still incredibly awful, but not on a cosmic scale.”
In other words, my position remains that human extinction is very bad indeed—it entails the loss of eight billion individual human lives, which would be horrifying. I don’t want to be forcibly replaced by an AI. Nor do I want you, or anyone else, to be forcibly replaced by one. I am simply pushing back on the idea that such an event would constitute the absolute destruction of all future value in the universe. There is a meaningful distinction between “an unimaginable tragedy we should try very hard to avoid” and “a total collapse of all potential for a flourishing future civilization of any kind.” My stance falls firmly in the former category.
This distinction is essential to my argument because it fundamentally shapes how we evaluate trade-offs, particularly when considering policies that aim to slow or restrict AI research. If we assume that human extinction due to AI would erase all future value, then virtually any present-day sacrifice—no matter how extreme—might seem justified to reduce that risk. However, if advanced AI could continue to sustain its own value-generating civilization, even in the absence of humans, then extinction would not represent the absolute end of valuable life. That scenario would still be catastrophic for humanity, but the value of reducing its probability might not outweigh certain immediate benefits of AI, such as its potential to save lives through advanced technology.
In other words, there could easily be situations where accelerating AI development—rather than pausing it—ends up being the better choice for saving human lives, even if doing so slightly increases the risk of human extinction. This does not mean we should be indifferent to extinction; rather, it means we should stop treating extinction as a near-infinitely overriding concern, where even the smallest reduction in its probability is always worth immense near-term costs to actual people living today.
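To make the structure of this trade-off explicit, here is a rough expected-value sketch; the symbols are illustrative placeholders I am introducing for this comparison, not estimates. Let V stand for the long-run value of a flourishing future civilization, H for the value of the roughly eight billion human lives alive today, L for the near-term value gained by accelerating AI (for example, lives saved by faster medical progress), and Δp for the increase in extinction risk from accelerating rather than pausing. If extinction erases all future value, pausing looks better whenever

\[ \Delta p \,(V + H) > L, \]

and because V is taken to be astronomical, even a tiny Δp swamps any finite L. But if an AI civilization would still realize most of V, the comparison becomes roughly

\[ \Delta p \, H > L, \]

a finite trade-off that near-term benefits like lifesaving medicine can sometimes win.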
For a moment, I’d like to reverse the criticism you leveled at me. From where I stand, it is often those who strongly advocate pausing AI development, not me, who can appear to undervalue the lives of humans. I know they don’t see themselves this way, and they would certainly never phrase it in those terms. Nevertheless, this is my reading of the deeper implications of their position.
A common proposition that many AI pause advocates have affirmed to me is that it could well be worth pausing AI even if this led to billions of humans dying prematurely because they missed out on accelerated medical progress that could otherwise have saved their lives. Therefore, while these advocates care deeply about human extinction (something I do not deny), their concern does not seem rooted in the intrinsic worth of the people who are alive today. Instead, their primary focus often seems to be on the loss of potential future human lives—lives that do not yet exist and that, in my view, are unlikely to exist in the far future under basically any scenario, since humanity is unlikely to be preserved as a fixed, static concept over the long run.
In my view, this philosophy neither prioritizes the well-being of actual individuals nor is it grounded in the utilitarian value that humanity actively generates. If this philosophy were purely about impartial utilitarian value, then I ask: why are its proponents not more open to my perspective? My philosophy takes an impartial utilitarian approach—one that counts not just human-generated value, but also the potential value that AI itself could create—so it should appeal to anyone who takes a strictly utilitarian view and does not discriminate against artificial life arbitrarily. Yet it largely does not appeal to those who express this view, which suggests the presence of other, non-utilitarian concerns.
Thanks, that is very helpful to me in clarifying your position.