I agree with many of the things other people have already mentioned. However, I want to add one additional argument against PauseAI, which I believe is quite important and worth emphasizing clearly:
In general, hastening technological progress tends to be a good thing. For example, if a cure for cancer were to arrive in 5 years instead of 15 years, that would be very good. The earlier arrival of the cure would save many lives and prevent a lot of suffering for people who would otherwise endure unnecessary pain or death during those additional 10 years. The difference in timing matters because every year of delay means avoidable harm continues to occur.
I believe this same principle applies to AI, as I expect its main effects will likely be overwhelmingly positive. AI seems likely to accelerate economic growth, accelerate technological progress, and significantly improve health and well-being for billions of people. These outcomes are all very desirable, and I would strongly prefer for them to arrive sooner rather than later. Delaying these benefits unnecessarily means forgoing better lives, better health, and better opportunities for many people in the interim.
Of course, there are exceptions to this principle, as it’s not always the case that hastening technology is beneficial. Sometimes it is indeed wiser to delay the deployment of a new technology if the delay would substantially increase its safety or reduce risks. I’m not dogmatic about hastening technology and I recognize there are legitimate trade-offs here. However, in the case of AI, I am simply not convinced that delaying its development and deployment is justified on current margins.
To make this concrete, let’s say that delaying AI development by 5 years would reduce existential risk by only 0.001 percentage points. I would not support such a trade-off. From the perspective of any moral framework that incorporates even a slight discounting of future consumption and well-being, such a delay would be highly undesirable. There are pragmatic reasons to include time discounting in a moral framework: the future is inherently uncertain, and the farther out we try to forecast, the less predictable and reliable our expectations about the future become. If we can bring about something very good sooner, without significant costs, we should almost always do so rather than being indifferent to when it happens.
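To put rough numbers on this (purely illustrative figures of my own, not anything established in the thread): even a very small discount rate, justified only by forecast uncertainty, makes the 5-year-delay-for-0.001-percentage-points trade look bad in expectation.

```python
import math

# Hypothetical, purely illustrative numbers (mine, not from the thread):
V = 1.0        # normalize the value of a good long-run future to 1
r = 0.002      # a "pragmatic" discount rate of 0.2%/year from forecast uncertainty
t = 5          # years of delay
dp = 0.00001   # 0.001 percentage points of existential risk reduced

# Cost of delaying the whole benefit stream by t years under exponential discounting,
# versus the expected value gained from the risk reduction.
cost_of_delay = V * (1 - math.exp(-r * t))
gain_from_risk = dp * V

print(f"cost of delay:  {cost_of_delay:.6f}")
print(f"gain from risk: {gain_from_risk:.6f}")
print("delay worth it?", gain_from_risk > cost_of_delay)
```

On these (again, made-up) numbers the delay costs roughly a thousand times more than the risk reduction gains, which is the shape of the trade-off I have in mind.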
However, if the situation were different—if delaying AI by 5 years reduced existential risk by something like 10 percentage points—then I think the case for PauseAI would be much stronger. In such a scenario, I would seriously consider supporting PauseAI and might even advocate for it loudly. That said, I find this kind of large reduction in existential risk from a delay in AI development to be implausible, partly for the reasons others in this thread have already outlined.
This argument is highly dependent on your population ethics. From a longtermist, total positive utilitarian perspective, existential risk is many, many orders of magnitude worse than delaying progress, since it affects many, many orders of magnitude more (potential) people.
I think it would require an unreasonably radical interpretation of longtermism to believe, for example, that delaying something as valuable as a cure for cancer by 10 years (or another comparably significant breakthrough) would be justified, let alone overwhelmingly outweighed, because of an extremely slight and speculative anticipated positive impact on existential risk. Similarly, I think the same is true about AI, if indeed pausing the technology would only have a very slight impact on existential risk in expectation.
I’ve already provided a pragmatic argument for incorporating at least a slight amount of time discounting into one’s moral framework, but I want to reemphasize and elaborate on this point for clarity. Even if you are firmly committed to the idea that we should have no pure rate of time preference—meaning you believe future lives and welfare matter just as much as present ones—you should still account for the fact that the future is inherently uncertain. Our ability to predict the future diminishes significantly the farther we look ahead. This uncertainty should generally lead us to favor not delaying the realization of clearly good outcomes unless there is a strong and concrete justification for why the delay would yield substantial benefits.
Longtermism, as I understand it, is simply the idea that the distant future matters a great deal and should be factored into our decision-making. Longtermism does not—and should not—imply that we should essentially ignore enormous, tangible and clear short-term harms just because we anticipate extremely slight and highly speculative long-term gains that might result from a particular course of action.
I recognize that someone who adheres to an extremely strong and rigid version of longtermism might disagree with the position I’m articulating here. Such a person might argue that even a very small and speculative reduction in existential risk justifies delaying massive and clear near-term benefits. However, I generally believe that people should not adopt this kind of extreme strong longtermism. It leads to moral conclusions that are unreasonably detached from the realities of suffering and flourishing in the present and near future, and I think this approach undermines the pragmatic and balanced principles that arguably drew many of us to longtermism in the first place.
I don’t care about population ethics, so don’t take this as a good faith argument. But doesn’t astronomical waste imply that saving lives earlier can compete on the same order of magnitude as x-risk?
https://nickbostrom.com/papers/astronomical-waste/
In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.
However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
I’m curious how many EAs believe this claim literally, and think a 10 million year pause (assuming it’s feasible in the first place) would be justified if it reduced existential risk by a single percentage point. Given the disagree votes to my other comments, it seems a fair number might in fact agree to the literal claim here.
Given my disagreement that we should take these numbers literally, I think it might be worth writing a post about why we should have a pragmatic non-zero discount rate, even from a purely longtermist perspective.
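To sketch why even a tiny discount rate would change Bostrom's arithmetic (this is my own illustrative model, assuming a constant exponential discount over a constant-rate stream of future value, not anything from the paper): undiscounted, a 1-percentage-point risk reduction is worth ~1% of the future's remaining lifespan, hence the 10-million-plus-year figure. Discounted, the break-even delay t solves 1 − exp(−r·t) = 0.01, which shrinks dramatically.

```python
import math

def break_even_delay(r, dp=0.01):
    """Years of delay whose discounted cost equals a dp reduction in x-risk.

    Assumes value accrues at a constant rate forever and is discounted
    exponentially at annual rate r, so a delay of t years costs a fraction
    (1 - exp(-r*t)) of total discounted future value.
    """
    return -math.log(1 - dp) / r

for r in (0.01, 0.001, 0.0001):
    print(f"r = {r:7.4f}/yr -> break-even delay ~ {break_even_delay(r):8.1f} years")
```

Under these assumptions, even a 0.1%/year discount rate turns "a delay of over 10 million years" into a break-even delay of roughly a decade, which is why I think the literal reading of the claim depends entirely on a zero discount rate.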
if delaying AI by 5 years reduced existential risk by something like 10 percentage points—then I think the case for PauseAI would be much stronger
This is the crux. I think it would reduce existential risk by at least 10% (probably a lot more). And 5 years would just be a start—obviously any Pause should (and in practice will) only be lifted conditionally. I take it your AGI timelines are relatively short? And I don’t think your reasons for expecting the default outcome from AGI to be good are sound (as you even allude to yourself).
I think this is a reasonable point of disagreement. Though, as you allude to, it is separate from the point I was making.
I do think it is generally very important to distinguish between:
1. Advocacy for a policy because you think it would have a tiny impact on x-risk, which thereby outweighs all the other side effects of the policy, including potentially massive near-term effects, because reducing x-risk simply outweighs every other ethical priority by many orders of magnitude.
2. Advocacy for a policy because you think it would have a moderate or large effect on x-risk, and is therefore worth doing because reducing x-risk is an important ethical priority (even if it isn’t, say, one million times more important than every other ethical priority combined).
I’m happy to debate (2) on empirical grounds, and debate (1) on ethical grounds. I think the ethical philosophy behind (1) is quite dubious and resembles the type of logic that is vulnerable to Pascal’s mugging. The ethical philosophy behind (2) seems sound, but the empirical basis is often uncertain.
I do in fact believe that delaying AI by 5 years would reduce existential risk by something like 10 percentage points.
Probably this thread isn’t the best place to hash it out, however.