I’m not super sure what you mean by individualistic. I was modelling this as utilitarian but assigning literally zero value to future people. From a purely selfish perspective, I’m in my mid-20s and my chances of dying from natural causes in the next, say, 20 years are pretty damn low, and this means that, given my background beliefs about doom and timelines, slowing down AI is a great deal from my perspective. Whereas if I expected to die of old age in the next 5 years, I would be a lot more opposed.
A typical 25-year-old man in the United States has around a 4.3% chance of dying before he turns 45, according to these actuarial statistics from 2019 (the most recent non-pandemic year in the data). I wouldn’t exactly call that “pretty damn low”, though opinions on these things differ. This is comparable to my personal credence that AIs will kill me in the next 20 years. And if AI goes well, it will probably make life really awesome. So from this narrowly selfish point of view, I’m still not really convinced pausing is worth it.
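(For anyone who wants to sanity-check a figure like this: a 20-year death probability comes straight out of a period life table’s survivorship column. The sketch below shows the standard calculation; the l_x values in it are illustrative placeholders, not the actual 2019 SSA numbers, so you’d need to plug in the real column to reproduce the ~4.3%.)

```python
# Sketch: reading a 20-year death probability off a period life table.
# The l_x values below are illustrative placeholders, NOT the actual 2019
# SSA life-table figures cited above.

def prob_death_between(l_start: float, l_end: float) -> float:
    """P(die before the end age | alive at the start age) = 1 - l_end / l_start,
    where l_x is the number of survivors to exact age x in the life table."""
    return 1.0 - l_end / l_start

# Hypothetical survivor counts out of an initial cohort of 100,000:
l_25 = 98_000   # placeholder: survivors to age 25
l_45 = 93_800   # placeholder: survivors to age 45

print(f"{prob_death_between(l_25, l_45):.1%}")  # ~4.3% with these placeholder values
```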
Perhaps more importantly: do you not have any old family members that you care about?
I don’t think there’s any moral view that’s objectively more “reasonable” than any other moral view (as I’m a moral anti-realist). However, I personally don’t have a significant moral preference for humans beyond the fact that I am partial to my family, friends, and a lot of other people who are currently alive. When I think about potential future generations who don’t exist yet, I tend to adopt a more impartial, utilitarian framework.
In other words, my moral views can be summarized as a combination of personal attachments and broader utilitarian moral concerns. My personal attachments are not impartial: for example, I care about my family more than I care about random strangers. However, beyond my personal attachments, I tend to take an impartial utilitarian approach that doesn’t assign any special value to the human species.
Put differently, to the extent I care about humans specifically, this concern merely arises from the fact that I’m attached to some currently living individuals who happen to be human, rather than because I think the human species is particularly important.
Does that make sense?
I agree this is an open question, but I think the case that future AIs will have complex and meaningful preferences is much clearer than the analogous case for a thermostat or a plant. I think we can be pretty confident about this prediction given the strong economic pressures that will push AIs towards being person-like and agentic. (Note, however, that I’m not making a strong claim here that all AIs will be moral patients in the future. It’s sufficient for my argument if merely a large number of them are.)
In fact, a lot of arguments for AI risk rest on the premise that AI agents will exist in the future, and that they’ll have certain preferences (at least in a functional sense). If we were to learn that future AIs won’t have preferences, that would undermine both these arguments for AI risk and many of my moral arguments for valuing AIs. Therefore, to the extent you think AIs will lack the cognitive prerequisites for moral patienthood (under my functionalist and preference utilitarian views), this doesn’t necessarily translate into a stronger case for worrying about AI takeover.
However, I want to note that the view I have just described is broader than the thesis I gave in the post. If you read my post carefully, you’ll see that I hedged quite a bit by acknowledging that logically consistent utilitarian arguments could be made in favor of pausing AI. My thesis was not that such an argument couldn’t be given. It was a fairly narrow thesis, and I didn’t make a strong claim that AI-controlled futures would create about as much utilitarian moral value as human-controlled futures in expectation (even though I personally think this claim is plausible).