From a purely utilitarian viewpoint, the harm of a short delay is utterly dominated by the scale of possible misalignment risks and missed opportunities for ensuring the best long-term trajectory—whether for humans, other organic species, or digital minds. Consequently, it’s prudent to err on the side of delay if doing so meaningfully improves our chance of securing a safe and maximally valuable future.
Your argument appears to assume that, in the absence of evidence about what goals future AI systems will have, delaying AI development should be the default position for mitigating risk. But why should we accept this assumption? Why not treat acceleration as an equally reasonable default? If we lack meaningful evidence about the values AI will develop, we have no more justification for assuming that delay is preferable than for assuming that acceleration is.
In fact, one could just as easily argue the opposite: that AI might develop moral values superior to those of humans. This claim has about as much empirical support as the assumption that AI values will be worse, and it would justify accelerating AI rather than delaying it. Applying your own logic, one could then make a symmetrical counterargument: any minor harms caused by moving forward are vastly outweighed by the long-term risk of locking in suboptimal values through unnecessary delay. On this view, delaying AI development would risk entrenching human values that are inferior to the values AI would develop by default under acceleration.
You might think that even weak evidence in favor of delaying AI is sufficient to make it the default course of action. But this seems to assume a “knife’s edge” scenario, in which even a slight epistemic advantage—say, a 51% chance that delay is beneficial versus a 49% chance that acceleration is—justifies committing to a pause. If we adopted this kind of reasoning in other domains, we would quickly fall into epistemic paralysis, constantly shifting strategies on the basis of fragile analyses that could easily be overturned.
Given this high level of uncertainty about AI’s future trajectory, I think the best approach is to focus on the most immediate and concrete tradeoffs that we can analyze with some degree of confidence, such as whether delaying or accelerating AI is likely to be more beneficial to the current generation of humans. On that question, the available evidence leads me to believe that accelerating AI, rather than delaying it, is likely the better choice, as I argue in my post.