TL;DR: If you believe the key claims of “there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime”, this is enough to justify the core action relevant points of EA. This clearly matters under most reasonable moral views, and the common discussion of longtermism, future generations, and other details of moral philosophy in intro materials is an unnecessary distraction.
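As a minimal back-of-envelope sketch of the expected-value reasoning the TL;DR leans on (my own illustration, assuming a current population of roughly 8.2 billion and crudely treating "x-risk" as the death of everyone currently alive):

```python
# Back-of-envelope illustration of the TL;DR's headline probabilities.
# All figures are rough assumptions, not estimates from the post.
POPULATION = 8_200_000_000   # approximate number of people alive today

p_ai_xrisk = 0.01    # >= 1% chance of AI-caused existential catastrophe
p_bio_xrisk = 0.001  # >= 0.1% chance of bio-caused existential catastrophe

print(f"AI:  {p_ai_xrisk * POPULATION / 1e6:.0f} million expected deaths")
print(f"Bio: {p_bio_xrisk * POPULATION / 1e6:.0f} million expected deaths")
```

Even counting only people alive today, the expected losses are on the scale of the deadliest events in history, which is the sense in which the post argues these claims matter without any appeal to longtermism.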
I think the central thesis of this post—as I understand it—is false, for the reasons I provided in this comment. [Edit: to be clear, I think this post was perhaps true at the time, but in my view, has since become false if one counts pausing AI as a “core action relevant point” of EA]. To quote myself:
Let’s assume that there’s a 2% chance of AI causing existential risk, and that, optimistically, pausing [AI progress] for a decade would cut this risk in half (rather than barely decreasing it, or even increasing it). This would imply that the total risk would diminish from 2% to 1%.
According to OWID, approximately 63 million people die every year, although this rate is expected to increase, rising to around 74 million in 2035. If we assume that around 68 million people will die per year during the relevant time period, and that they could have been saved by AI-enabled medical progress, then pausing AI for a decade would kill around 680 million people.
This figure is around 8.3% of the current global population, and would constitute a death count higher than the combined death toll from World War 1, World War 2, the Mongol Conquests, the Taiping Rebellion, the transition from Ming to Qing, and the Three Kingdoms civil war.
(Note that, although we are counting deaths from old age in this case, these deaths are comparable to deaths in war from a years of life lost perspective, if you assume that AI-accelerated medical breakthroughs will likely greatly increase human lifespan.)
From the perspective of an individual human life, a 1% chance of death from AI is significantly lower than an 8.3% chance of death from aging—though obviously in the former case this risk would apply independently of age, and in the latter case, the risk would be concentrated heavily among people who are currently elderly.
Even a briefer pause lasting just two years, while still cutting risk in half, would not survive this basic cost-benefit test. Of course, it’s true that it’s difficult to directly compare the individual personal costs from AI existential risk to the diseases of old age. For example, AI existential risk has the potential to be briefer and less agonizing, which, all else being equal, should push us to favor it. On the other hand, most people might consider death from old age to be preferable since it’s more natural and allows the human species to continue.
Nonetheless, despite these nuances, I think the basic picture that I’m presenting holds up here: under typical assumptions [...] a purely individualistic framing of the costs and benefits of AI pause does not clearly favor pausing, from the perspective of people who currently exist. This fact was noted in Nick Bostrom’s original essay on Astronomical Waste, and, more recently, by Chad Jones in his paper on the tradeoffs involved in stopping AI development.
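A rough reconstruction of the arithmetic in the quoted argument, as a sketch rather than a forecast (it uses the quoted figures: a baseline 2% risk optimistically halved by a pause, roughly 68 million otherwise-preventable deaths per year, and a current population of about 8.2 billion):

```python
# Sketch of the quoted cost-benefit arithmetic; all inputs are the illustrative
# figures from the quote above, not forecasts.
POPULATION = 8_200_000_000
DEATHS_PER_YEAR = 68_000_000   # ~63M/year today, rising toward ~74M by 2035
BASELINE_RISK = 0.02           # assumed chance of AI-caused existential catastrophe
RISK_IF_PAUSED = 0.01          # optimistically halved by the pause

def pause_tradeoff(pause_years: int) -> tuple[float, float]:
    """Return (deaths attributed to the pause, expected deaths averted), in millions."""
    deaths_from_pause = DEATHS_PER_YEAR * pause_years
    expected_deaths_averted = (BASELINE_RISK - RISK_IF_PAUSED) * POPULATION
    return deaths_from_pause / 1e6, expected_deaths_averted / 1e6

for years in (10, 2):
    cost, benefit = pause_tradeoff(years)
    print(f"{years:>2}-year pause: ~{cost:.0f}M deaths vs ~{benefit:.0f}M expected deaths averted")
```

On these stylized numbers, even the two-year pause costs more lives in expectation than it saves among people alive today, which is the "basic cost-benefit test" the quote refers to; the conclusion is of course sensitive to the assumed risk reduction and to how much weight one places on future generations.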
I broadly agree that the costs of long pauses look much more expensive if you’re not a longtermist. (When I wrote this post, PauseAI and similar were much less of a thing).
I still stand by this post for a few reasons:
“This clearly matters under most reasonable moral views”: In my opinion, person-affecting views are not that common (though I’m not confident here), and many people, without being total utilitarians, would consider human extinction to matter intrinsically (it affects their future children or grandchildren, their legacy, and future generations) quite a lot more than just the lives of everyone alive today. Most people also aren’t even utilitarians, and may think that death from old age is natural and totally fine. I just think if you told people “there’s this new technology that could cause human extinction, or be a really big deal and save many lives and cause an age of wonders, should we be slow and cautious in how we develop it” most people would say yes? Under a specifically scope-sensitive, person-affecting view, I agree that pauses are unusually bad.
I personally don’t even expect pauses to work without way more evidence of imminent risk than we currently have (and probably even then not for more than 6-24 months), and I think most actions that people in the community take here involve far less of a tradeoff: do more safety research, evaluate and monitor things better, actually have any regulation whatsoever, communicate and coordinate with China, model the impact these things will have on the economy, avoid concentrations of power that enable unilateral power grabs, ensure companies can go at an appropriate pace rather than being caught in a mad commercial rush, etc. I also think that, to be effective, a pause must include things like a pause on hardware progress and must affect all key actors, which seems really hard to achieve and very unrealistic without much stronger evidence of imminent risk, at which point the numbers are much more favourable towards pausing, since my risk conditional on not pausing would be higher. I just really don’t expect the world to pause on the basis of a precautionary principle.
For example, I do interpretability work. I think this is just straightforwardly good under most moral frameworks, and the argument in this post is sufficient to support much more investment in technical safety research, one of the major actions called for by the community. I care more about emphasising areas of common ground than about justifying the most extreme and impractical positions.
Personally, my risk figures and timelines are notably beyond the baseline described in this post, so I’m more in favour of extreme actions like pausing, even on person-affecting grounds, but I agree this is a harder sell, requiring stronger arguments than the ones in this post.
When I wrote this post, PauseAI and similar were much less of a thing
I agree, this seems broadly accurate. I suppose I should have clarified that your post was perhaps true at the time, but in my view, has since become false if one counts AI pause as a “core action relevant point” of EA.
I just think if you told people “there’s this new technology that could cause human extinction, or be a really big deal and save many lives and cause an age of wonders, should we be slow and cautious in how we develop it” most people would say yes?
I believe that people’s answers to questions like this are usually highly sensitive to how the issue is framed. If you simply presented them with the exact quote you wrote here, without explaining that “saving many lives” would likely include the lives of their loved ones, such as their elderly relatives, I agree that most would support slowing down development. However, if you instead clarified that continuing development would likely save their own lives and the lives of their family members by curing most types of diseases, and if you also emphasized that the risk of human extinction from continued development is very low (for example, 1-2%), then I think there would be a significantly higher chance that most people would support moving forward with the technology at a reasonably fast pace, though presumably with some form of regulation in place to govern the technology.
One possible response to my argument is to point to survey data showing that most people favor pausing AI. However, while I agree survey data can be useful, I don’t think it provides strong evidence for that claim in this case. Most people, when answering survey questions, lack sufficient context and have not spent much time thinking deeply about these complex issues, so their responses are often made without fully understanding the stakes or the relevant information. In contrast, current legislators and government officials, who are being advised by scientific experts and given roughly this same information, do not currently appear to be strongly in favor of pausing AI development.
I agree that people’s takes in response to surveys are very sensitive to framing and hard to interpret. I was trying to gesture at the hypothesis that many people are skeptical of future technologies, afraid of job loss, don’t trust tech, etc., even if they do sincerely value loved ones. But anyway, that’s not a crux.
I think we basically agree here, overall? I agree that my arguments here are not sufficient to support a large pause for a small reduction in risk. I don’t consider this a core point of EA, though I’m not confident in that, and I don’t think you’re too unreasonable for considering it one.
Though, while I’m skeptical of the type of unilateral pause pushed for in EA, I am much more supportive of not actively pushing capabilities to be faster: I think the arguments that pauses are distortionary and penalise safety-motivated actors don’t apply there, and most acceleration will diffuse across the ecosystem. This makes me guess that Mechanize is net negative, so I imagine this is also a point of disagreement between us.