I think it’s extremely careless and condemnable to impose this risk on humanity just because you have personally deemed it acceptable.
I’m not sure I fully understand this criticism. From a moral subjectivist perspective, all moral decisions are ultimately based on what individuals personally deem acceptable. If you’re suggesting that there is an objective moral standard—something external to individual preferences—that we are obligated to follow, then I would understand your point.
That said, I’m personally skeptical that such an objective morality exists. And even if it did, I don’t see why I should necessarily follow it if I could instead act according to my own moral preferences—especially if I find my own preferences to be more humane and sensible than the objective morality.
This would be a deontological nightmare. Who gave AI labs the right to risk the lives of 8 billion people?
I see why a deontologist might find accelerating AI troubling, especially given their emphasis on act-omission asymmetry—the idea that actively causing harm is worse than merely allowing harm to happen. However, I don’t personally find that distinction very compelling, especially in this context.
I’m also not a deontologist: I approach these questions from a consequentialist perspective. My personal ethics can be described as a mix of personal attachments and broader utilitarian concerns. In other words, I care both about people who currently exist and, more generally, about all morally relevant beings. So while I understand why this argument might resonate with others, it doesn’t carry much weight for me.
That makes sense. For what it’s worth, I’m also not convinced that delaying AI is the right choice from a purely utilitarian perspective. I think there are reasonable arguments on both sides. My most recent post touches on this topic, so it might be worth reading for a better understanding of where I stand.
Right now, my stance is to withhold strong judgment on whether accelerating AI is harmful on net from a utilitarian point of view. It’s not that I think a case can’t be made; it’s just that I don’t think the existing arguments are decisive enough to justify a firm position. In contrast, the argument that accelerating AI benefits people who currently exist seems significantly more straightforward and compelling to me.
This combination of views leads me to see accelerating AI as a morally acceptable choice (as long as it’s paired with adequate safety measures). Put simply:
- When I consider the well-being of people who currently exist, the case for acceleration appears fairly strong and compelling.
- When I take an impartial utilitarian perspective—one that prioritizes long-term outcomes for all sentient beings—the arguments for delaying AI seem weak and highly uncertain.
- Since I give substantial weight to both perspectives, the stronger and clearer case for acceleration (based on the interests of people alive today) outweighs the much weaker and more uncertain case for delay (based on speculative long-term utilitarian concerns) in my view.
Of course, my analysis here doesn’t apply to someone who gives almost no moral weight to the well-being of people alive today—someone who, for instance, would be fine with everyone dying horribly if it meant even a tiny increase in the probability of a better outcome for the galaxy a billion years from now. But in my view, this type of moral calculus, if taken very seriously, seems highly unstable and untethered from practical considerations.
Since I think we have very little reliable insight into what actions today will lead to a genuinely better world millions of years down the line, it seems wise to exercise caution and try to avoid overconfidence about whether delaying AI is good or bad on the basis of its very long-term effects.