There are psychological pressures that can lead to motivated reasoning on both sides of this issue. On the pro-acceleration side, individuals may be motivated to downplay or dismiss the potential risks and downsides of rapid AI development. On the other side, those advocating for slowing or pausing AI progress may be motivated to dismiss or undervalue the possible benefits and upsides. Because both the risks and the potential rewards of AI are substantial, I don’t see a compelling reason to assume that one side must be much more prone to denial or bias than the other.
At most, I see a simple selection effect: the people most actively pushing for faster AI development are likely those who are least worried about the risks. This could produce a unilateralist's curse, in which the least concerned actors push capabilities forward despite a high risk of disaster. But the opposite scenario could also occur, if the most concerned actors manage to slow down progress for everyone else, unacceptably delaying the benefits of AI. Whether you should care more about the first or the second scenario depends on your judgment of whether rapid AI progress is good or bad overall.
Ultimately, I think it’s more productive to frame the issue around empirical facts and value judgments: specifically, how much risk rapid AI development actually introduces, and how much value we ought to place on the potential benefits of rapid development. I find this framing more helpful, not only because it identifies the core disagreement between accelerationists and pause advocates, but also because I think it better accounts for the pace of AI development we actually observe in the real world.
Perhaps I overstated some of my claims or was unclear, so let me try to state my basic thesis more clearly. First of all, I agree that in the most basic model of the situation, being slightly ahead of a competitor can be the decisive factor between going bankrupt and making enormous profits. This creates a significant personal incentive to race ahead, even if doing so only marginally increases existential risk overall. As a result, AI labs may end up taking on more risk than they would in the absence of such pressure. More generally, I agree that without competition, whether between states or between AI companies, progress would likely be slower than it currently is.
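To make the shape of that concession concrete, here is a minimal toy calculation. Every number in it (prize, p_win_slow, p_win_fast, added_xrisk, value_at_stake, share_internalized) is a hypothetical assumption of mine, chosen only to illustrate how the individual and collective calculations can come apart, not to estimate their actual magnitudes:

```python
# A purely illustrative sketch of the "basic model" conceded above.
# All numbers are hypothetical assumptions, not estimates.

prize = 100e9               # value to a lab of winning the race (hypothetical)
p_win_slow = 0.40           # lab's chance of winning if it does not race (hypothetical)
p_win_fast = 0.60           # lab's chance of winning if it races (hypothetical)
added_xrisk = 0.001         # extra catastrophe probability from racing (hypothetical)
value_at_stake = 1e15       # stand-in for what a catastrophe destroys (hypothetical)
share_internalized = 0.001  # fraction of that loss the lab itself bears (hypothetical)

# Individually rational calculation: the lab captures the full gain from racing
# but internalizes only a sliver of the added global risk.
private_gain = (p_win_fast - p_win_slow) * prize
private_risk_cost = added_xrisk * share_internalized * value_at_stake

# Collectively rational calculation: racing mostly reshuffles which lab wins,
# while society bears the entire added risk.
social_risk_cost = added_xrisk * value_at_stake

print(f"Lab's gain from racing:        ${private_gain:,.0f}")
print(f"Lab's share of the added risk: ${private_risk_cost:,.0f}")
print(f"Society's added risk cost:     ${social_risk_cost:,.0f}")
```

Under these made-up numbers, racing looks clearly worthwhile to the individual lab while imposing a much larger cost on society, which is the tragedy-of-the-commons shape the post describes. The question, which the rest of my comment addresses, is whether the real-world parameters actually fall in that regime.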
My main point, however, is that these effects are likely not strong enough to justify the conclusion that the socially optimal pace of AI R&D is meaningfully slower than the pace we currently observe. In other words, I'm not convinced that what's rational from an individual actor's perspective diverges greatly from what would be rational from a collective or societal standpoint.
This is the central claim underlying my objection: if there is no meaningful difference between what is individually rational and what is collectively rational, then there is little reason to believe we are facing a tragedy-of-the-commons scenario as suggested in the post.
To sketch a more complete argument here, I would like to make two points:
First, while some forces incentivize speeding up AI development, others push in the opposite direction. Measures like export controls, tariffs, and (potentially) future AI regulations can slow down progress. In these cases, the described dynamic flips: the global costs of slowing down are shared, while the political rewards—such as public credit or influence—are concentrated among the policymakers or lobbyists who implement the slowdown.
Second, as I've mentioned, a large share of both the risks and benefits of AI accrues directly to those driving its development. This alignment of incentives gives them a reason to avoid reckless acceleration that would dramatically increase risk.
As a testable prediction of my view, we could ask whether AI labs are actively lobbying for slower progress internationally. If they truly preferred collective constraint but felt compelled to move forward individually, we would expect them to support measures that slow everyone down—while personally moving forward as fast as they can in the meantime. However, to my knowledge, such lobbying is not happening. This suggests that labs may not, in fact, collectively prefer significantly slower development.