That makes sense. For what it’s worth, I’m also not convinced that delaying AI is the right choice from a purely utilitarian perspective. I think there are reasonable arguments on both sides. My most recent post touches on this topic, so it might be worth reading for a better understanding of where I stand.
Right now, my stance is to withhold strong judgment on whether accelerating AI is harmful on net from a utilitarian point of view. It’s not that I think a case can’t be made; it’s just that I don’t think the existing arguments are decisive enough to justify a firm position. In contrast, the argument that accelerating AI benefits people who currently exist seems significantly more straightforward and compelling to me.
This combination of views leads me to see accelerating AI as a morally acceptable choice (as long as it’s paired with adequate safety measures). Put simply:
When I consider the well-being of people who currently exist, the case for acceleration appears strong and compelling.
When I take an impartial utilitarian perspective—one that prioritizes long-term outcomes for all sentient beings—the arguments for delaying AI seem weak and highly uncertain.
Since I give substantial weight to both perspectives, the stronger and clearer case for acceleration (based on the interests of people alive today) outweighs the much weaker and more uncertain case for delay (based on speculative long-term utilitarian concerns) in my view.
Of course, my analysis here doesn’t apply to someone who gives almost no moral weight to the well-being of people alive today—someone who, for instance, would be fine with everyone dying horribly if it meant even a tiny increase in the probability of a better outcome for the galaxy a billion years from now. But in my view, this type of moral calculus, if taken very seriously, seems highly unstable and untethered from practical considerations.
Since I think we have very little reliable insight into what actions today will lead to a genuinely better world millions of years down the line, it seems wise to exercise caution and try to avoid overconfidence about whether delaying AI is good or bad on the basis of its very long-term effects.