I don’t subscribe to moral realism. My own ethical outlook is a blend of personal attachments—my own life, my family, my friends, and other living humans—as well as a broader utilitarian concern for overall well-being. In this post, I focused on impartial utilitarianism because that’s the framework most often used by effective altruists.
However, to the extent that I also have non-utilitarian concerns (like caring about specific people I know), those concerns incline me away from supporting a pause on AI. If AI can accelerate technologies that save and improve the lives of people who exist right now, then slowing it down would cost lives in the near term. A more complete and rigorous version of this argument was outlined in the post.
What I find confusing about other EAs’ views, including yours, is why we would assign such great importance to “human values” as something specifically tied to the human species as an abstract concept, rather than merely being partial to actual individuals who exist. This perspective is neither utilitarian nor individualistic. It seems to value the concept of the human species over and above the actual individuals who comprise the species, much like how an ideological nationalist might view the survival of their nation as more important than the welfare of all the individuals who actually reside within it.
Let’s define “shumanity” as the set of all humans who are currently alive. Under this definition, every living person today is a “shuman,” but our future children may not be, since they do not yet exist. Now, let’s define “humanity” as the set of all humans who could ever exist, including future generations. Under this broader definition, both we and our future children are part of humanity.
If all currently living humans (shumanity) were to die, this would be a catastrophic loss from the perspective of shuman values—the values held by the people who are alive today. However, it would not necessarily be a catastrophic loss from the perspective of human values—the values of humanity as a whole, across time. This distinction is crucial. In the normal course of events, every generation eventually grows old, dies, and is replaced by the next. When this happens, shumanity, as defined, ceases to exist, and as such, shuman values are lost. However, humanity continues, carried forward by the new generation. Thus, human values are preserved, but not shuman values.
Now, consider this in the context of AI. Would the extinction of shumanity at the hands of AIs be much worse than the natural generational cycle of human replacement? In my view, it is not obvious that being replaced by AIs would be much worse than being replaced by future generations of humans. Both scenarios involve the complete loss of the individual values held by currently living people, which is undeniably a major loss. To be very clear, I am not saying that it would be fine if everyone died. But in both cases, something new takes our place and continues some form of value, mitigating part of the loss. This is the perspective I apply to AI: its rise would not necessarily be far worse than the inevitable generational turnover of humans, which equally involves everyone dying (which I see as a bad thing!). Perhaps “human values” would die in this scenario, but that would not necessarily entail the end of impartial utilitarian value in the broader sense. This is precisely my point.