Perhaps I overstated some of my claims or was unclear, so let me try to state my basic thesis more clearly. First of all, I agree that in the most basic model of the situation, being slightly ahead of a competitor can be the difference between going bankrupt and making enormous profits. This creates a significant private incentive to race ahead, even if doing so only marginally increases existential risk overall. As a result, AI labs may end up taking on more risk than they would in the absence of such pressure. More generally, I agree that without competition, whether between states or between AI companies, progress would likely be slower than it currently is.
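To make that basic model concrete, here is a minimal toy sketch in Python. Every number in it (the market value, the baseline risk, the per-racer risk increment, the cost of catastrophe) is an assumption invented purely for illustration and carries no empirical weight:

```python
# Toy sketch of the basic racing model conceded above. All numbers are
# illustrative assumptions, not estimates.

MARKET_VALUE = 100.0       # profit to whichever lab ends up ahead (winner-take-all)
BASE_RISK = 0.01           # assumed baseline probability of catastrophe
RISK_PER_RACER = 0.005     # assumed marginal risk added by each lab that races
CATASTROPHE_COST = 1000.0  # assumed cost to each lab if catastrophe occurs

def expected_payoff(my_pace: str, rival_pace: str) -> float:
    """Expected payoff to one lab given both labs' choices ('fast' or 'slow')."""
    racers = [my_pace, rival_pace].count("fast")
    p_catastrophe = BASE_RISK + RISK_PER_RACER * racers
    if my_pace == "fast" and rival_pace == "slow":
        market_share = MARKET_VALUE          # slightly ahead: take the whole market
    elif my_pace == "slow" and rival_pace == "fast":
        market_share = 0.0                   # slightly behind: bankrupt
    else:
        market_share = MARKET_VALUE / 2      # tie: split the market
    return (1 - p_catastrophe) * market_share - p_catastrophe * CATASTROPHE_COST

for rival in ("slow", "fast"):
    fast = expected_payoff("fast", rival)
    slow = expected_payoff("slow", rival)
    print(f"rival {rival}: fast={fast:.1f}, slow={slow:.1f}")
# With these numbers, 'fast' is the better reply whatever the rival does,
# even though each lab that races adds a small amount of shared risk.
```

With these made-up numbers, racing dominates for each lab individually, which is exactly the incentive I am conceding here.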
My main point, however, is that these effects are likely not strong enough to justify the conclusion that the socially optimal pace of AI R&D is meaningfully slower than the current pace we in fact observe. In other words, I’m not convinced that what’s rational from an individual actor’s perspective diverges greatly from what would be rational from a collective or societal standpoint.
This is the central claim underlying my objection: if there is no meaningful difference between what is individually rational and what is collectively rational, then there is little reason to believe we are facing a tragedy-of-the-commons scenario as suggested in the post.
To sketch a more complete argument here, I would like to make two points:
First, while some forces incentivize speeding up AI development, others push in the opposite direction. Measures like export controls, tariffs, and (potentially) future AI regulations can slow down progress. In these cases, the described dynamic flips: the global costs of slowing down are shared, while the political rewards—such as public credit or influence—are concentrated among the policymakers or lobbyists who implement the slowdown.
Second, as I’ve mentioned, a large share of both the risks and benefits of AI accrue directly to those driving its development. This alignment of incentives gives them a reason to avoid reckless acceleration that would dramatically increase risk.
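To illustrate why that internalization matters, here is a second toy sketch comparing the pace a developer would privately choose with the pace a social planner would choose. The functional forms and the benefit and harm shares are again pure assumptions chosen only for illustration:

```python
# Minimal sketch of the internalization point: if the developer captures a
# fraction of the benefit and bears a comparable fraction of the harm, its
# privately optimal pace sits close to the socially optimal one. All forms
# and numbers below are illustrative assumptions.

import numpy as np

paces = np.linspace(0.0, 6.0, 1201)
total_benefit = 10 * np.sqrt(paces)   # assumed diminishing returns to speed
total_harm = 4 * paces ** 2           # assumed harm/risk growing with speed

def optimal_pace(benefit_share: float, harm_share: float) -> float:
    """Pace maximizing the actor's share of benefit minus its share of harm."""
    utility = benefit_share * total_benefit - harm_share * total_harm
    return float(paces[np.argmax(utility)])

social = optimal_pace(1.0, 1.0)           # planner internalizes everything
private = optimal_pace(0.8, 0.6)          # lab captures much benefit, bears much harm
externalized = optimal_pace(0.8, 0.05)    # lab bears almost none of the harm

print(f"socially optimal pace:      {social:.2f}")
print(f"private, risk internalized: {private:.2f}")
print(f"private, risk externalized: {externalized:.2f}")
# The private optimum only diverges sharply from the social one when the
# developer externalizes nearly all of the harm.
```

The private and social optima converge as the developer's share of the harm approaches its share of the benefit, which is the alignment of incentives I am pointing to.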
As a testable prediction of my view, we could ask whether AI labs are actively lobbying for slower progress internationally. If they truly preferred collective constraint but felt compelled to move forward individually, we would expect them to support measures that slow everyone down—while personally moving forward as fast as they can in the meantime. However, to my knowledge, such lobbying is not happening. This suggests that labs may not, in fact, collectively prefer significantly slower development.