TL;DR: If you believe the key claims of “there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime”, this is enough to justify the core action-relevant points of EA. This clearly matters under most reasonable moral views, and the common discussion of longtermism, future generations, and other details of moral philosophy in intro materials is an unnecessary distraction.
I think the central thesis of this post—as I understand it—is false, for the reasons I provided in this comment. [Edit: to be clear, I think this post was perhaps true at the time, but in my view, has since become false if one counts pausing AI as a “core action-relevant point” of EA]. To quote myself:
Let’s assume that there’s a 2% chance of AI causing existential risk, and that, optimistically, pausing [AI progress] for a decade would cut this risk in half (rather than barely decreasing it, or even increasing it). This would imply that the total risk would diminish from 2% to 1%.
According to OWID, approximately 63 million people die every year, although this rate is expected to increase, rising to around 74 million in 2035. If we assume that around 68 million people will die per year during the relevant time period, and that they could have been saved by AI-enabled medical progress, then pausing AI for a decade would kill around 680 million people.
This figure is around 8.3% of the current global population, and would constitute a death count higher than the combined death toll from World War 1, World War 2, the Mongol Conquests, the Taiping Rebellion, the transition from Ming to Qing, and the Three Kingdoms war.
(Note that, although we are counting deaths from old age in this case, these deaths are comparable to deaths in war from a years-of-life-lost perspective, if you assume that AI-accelerated medical breakthroughs will likely greatly increase human lifespan.)
From the perspective of an individual human life, a 1% chance of death from AI is significantly lower than an 8.3% chance of death from aging—though obviously, in the former case, this risk would apply independently of age, while in the latter, the risk would be concentrated heavily among people who are currently elderly.
Even a briefer pause lasting just two years, while still cutting risk in half, would not survive this basic cost-benefit test. Of course, it’s true that it’s difficult to directly compare the individual personal costs of AI existential risk to those of the diseases of old age. For example, death from AI existential risk has the potential to be briefer and less agonizing, which, all else being equal, should push us to favor it. On the other hand, most people might consider death from old age to be preferable, since it’s more natural and allows the human species to continue.
Nonetheless, despite these nuances, I think the basic picture that I’m presenting holds up here: under typical assumptions [...] a purely individualistic framing of the costs and benefits of an AI pause does not clearly favor pausing, from the perspective of people who currently exist. This fact was noted in Nick Bostrom’s original essay on Astronomical Waste, and, more recently, by Chad Jones in his paper on the tradeoffs involved in stopping AI development.
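As a rough sketch of the back-of-envelope arithmetic quoted above (the 2% → 1% risk reduction, the ~68 million deaths per year, and a current global population of roughly 8.2 billion are taken as assumed inputs from the quote, not established figures):

```python
# Back-of-envelope sketch of the quoted cost-benefit comparison.
# All inputs are assumptions from the comment above, not established facts.

POPULATION = 8.2e9          # rough current global population (assumed)
DEATHS_PER_YEAR = 68e6      # assumed average annual deaths over the pause period
RISK_NO_PAUSE = 0.02        # assumed existential risk from AI without a pause
RISK_WITH_PAUSE = 0.01      # assumed risk if a pause cuts it in half


def expected_deaths_from_xrisk(risk: float) -> float:
    """Expected deaths among currently existing people, if extinction kills everyone."""
    return risk * POPULATION


def deaths_from_pause(years: float) -> float:
    """Deaths attributed to delaying AI-enabled medical progress, per the quoted assumption."""
    return DEATHS_PER_YEAR * years


for years in (10, 2):
    risk_reduction = RISK_NO_PAUSE - RISK_WITH_PAUSE
    benefit = expected_deaths_from_xrisk(risk_reduction)  # expected lives saved by pausing
    cost = deaths_from_pause(years)                       # lives lost to the delay
    print(f"{years}-year pause: benefit ~{benefit / 1e6:.0f}M expected lives saved, "
          f"cost ~{cost / 1e6:.0f}M lives ({cost / POPULATION:.1%} of population)")
```

On these inputs, a ten-year pause costs about 680 million lives (roughly 8.3% of the population) against about 82 million expected lives saved, and even a two-year pause costs about 136 million against the same 82 million, which is the comparison the quoted argument relies on.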
I agree; this seems broadly accurate. I suppose I should have clarified that your post was perhaps true at the time but, in my view, has since become false if one counts AI pause as a “core action-relevant point” of EA.
I believe that people’s answers to questions like this are usually highly sensitive to how the issue is framed. If you simply presented them with the exact quote you wrote here, without explaining that “saving many lives” would likely include the lives of their loved ones, such as their elderly relatives, I agree that most would support slowing down development. However, suppose you instead clarified that continuing development would likely save their own lives and the lives of their family members by curing most types of disease, and also emphasized that the risk of human extinction from continued development is very low (for example, 1-2%). In that case, I think there would be a significantly higher chance that most people would support moving forward with the technology at a reasonably fast pace, though presumably with some form of regulation in place to govern it.
One possible response to my argument is to point to survey data showing that most people favor pausing AI. However, while I agree survey data can be useful, I don’t think it provides strong evidence for the claim in this case. Most people answering survey questions lack sufficient context and have not spent much time thinking deeply about these complex issues; their responses are often made without fully understanding the stakes or the relevant information. In contrast, current legislators and government officials, who are advised by scientific experts and given roughly this same information, do not currently appear to be strongly in favor of pausing AI development.