There are psychological pressures that can lead to motivated reasoning on both sides of this issue. On the pro-acceleration side, individuals may be motivated to downplay or dismiss the potential risks and downsides of rapid AI development. On the other side, those advocating for slowing or pausing AI progress may be motivated to dismiss or undervalue the possible benefits and upsides. Because both the risks and the potential rewards of AI are substantial, I don’t see a compelling reason to assume that one side must be much more prone to denial or bias than the other.
At most, I see a simple selection effect: the people most actively pushing for faster AI development are likely those who are least worried about the risks. This could lead to a unilateralist's curse, where the least concerned actors push capabilities forward despite a high risk of disaster. But the opposite scenario could also occur, if the most concerned actors manage to slow down progress for everyone else, unacceptably delaying the benefits of AI. Whether you should care more about the first or second scenario depends on your judgment of whether rapid AI progress is good or bad overall.
Ultimately, I think it’s more productive to frame the issue around empirical facts and value judgments: specifically, how much risk rapid AI development actually introduces, and how much value we ought to place on the potential benefits of rapid development. I find this framing more helpful, not only because it identifies the core disagreement between accelerationists and pause advocates, but also because I think it better accounts for the pace of AI development we actually observe in the real world.
I agree that it seems like a valuable framing, thanks Matthew.