Huh? This argument only goes through if you have a sufficiently low probability of existential risk or an extremely low change in your probability of existential risk, conditioned on things moving slower.
This claim seems false, though its truth hinges on what exactly you mean by a “sufficiently low probability of existential risk” and “an extremely low change in your probability of existential risk”.
To illustrate why I think your claim is false, I’ll perform a quick calculation. I don’t know your p(doom), but in a post from three years ago, you stated,
If you believe the key claims of “there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime” this is enough to justify the core action relevant points of EA.
Let’s assume that there’s a 2% chance of AI causing existential risk, and that, optimistically, pausing for a decade would cut this risk in half (rather than barely decreasing it, or even increasing it). This would imply that the total risk would diminish from 2% to 1%.
According to Our World in Data (OWID), approximately 63 million people die every year, and this rate is expected to increase, rising to around 74 million per year by 2035. If we assume that around 68 million people will die per year during the relevant time period, and that they could have been saved by AI-enabled medical progress, then pausing AI for a decade would kill around 680 million people.
This figure is around 8.3% of the current global population, and would constitute a death toll higher than the combined tolls of World War I, World War II, the Mongol conquests, the Taiping Rebellion, the transition from Ming to Qing, and the Three Kingdoms war.
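The arithmetic behind these figures is simple; here is a minimal sketch (the ~68 million deaths per year and the ~8.2 billion population are the rough assumptions from the text above, not precise data):

```python
def pause_mortality_cost(pause_years, deaths_per_year=68e6, population=8.2e9):
    """Deaths attributable to a pause, under the strong assumption that
    AI-enabled medical progress could have prevented every one of them."""
    total_deaths = deaths_per_year * pause_years
    return total_deaths, total_deaths / population

deaths, share = pause_mortality_cost(10)
print(f"{deaths:,.0f} deaths, about {share:.1%} of the current population")
# 680,000,000 deaths, about 8.3% of the current population
```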
(Note that, although we are counting deaths from old age in this case, these deaths are comparable to deaths in war from a years-of-life-lost perspective, if you assume that AI-accelerated medical breakthroughs would likely greatly increase human lifespan.)
From the perspective of an individual human life, a 1% chance of death from AI is significantly lower than an 8.3% chance of death from aging, though obviously the former risk would apply independently of age, while the latter would be concentrated heavily among people who are currently elderly.
Even a briefer pause lasting just two years would fail this basic cost-benefit test, granting the generous assumption that it still cuts the risk in half. Of course, it is admittedly difficult to directly compare the individual costs of AI existential risk with those of the diseases of old age. For example, death from AI existential risk could be briefer and less agonizing, which, all else being equal, should make it the less bad of the two. On the other hand, most people might consider death from old age preferable, since it is more natural and allows the human species to continue.
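To make the two-year comparison concrete, here is the same back-of-the-envelope calculation (again using the assumed figures from the text: ~68 million preventable deaths per year, ~8.2 billion people, and the risk optimistically falling from 2% to 1%):

```python
deaths_per_year = 68e6        # assumed annual deaths preventable by AI-enabled medicine
population = 8.2e9            # assumed current world population
risk_reduction = 0.02 - 0.01  # assumed halving of a 2% existential risk

pause_years = 2
mortality_share = pause_years * deaths_per_year / population
print(f"{mortality_share:.1%} chance of dying during the pause vs {risk_reduction:.0%} risk averted")
# 1.7% chance of dying during the pause vs 1% risk averted
```

Even on these pause-favorable assumptions, the expected mortality cost to currently existing people exceeds the expected risk reduction.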
Nonetheless, despite these nuances, I think the basic picture I'm presenting holds up: under typical assumptions (such as the ones you gave three years ago), a purely individualistic framing of the costs and benefits of an AI pause does not clearly favor pausing, from the perspective of people who currently exist. This point was noted in Nick Bostrom's original essay on Astronomical Waste, and more recently by Chad Jones in his paper on the tradeoffs involved in stopping AI development.
I’m not talking about “arbitrary AI entities” in this context, but instead, the AI entities who will actually exist in the future, who will presumably be shaped by our training data, as well as our training methods. From this perspective, it’s not clear to me that your claim is true. But even if your claim is true, I was actually making a different point. My point was instead that it isn’t clear that future generations of AIs would be much worse than future generations of humans from an impartial utilitarian point of view.
(That said, it sounds like the real crux between us might instead be about whether pausing AI would be very costly to people who currently exist. If indeed you disagree with me about this point, I’d prefer you reply to my other comment rather than replying to this one, as I perceive that discussion as likely to be more productive.)