A tangential point we find interesting: it may fairly often be the case that a technological development initially increases risks but later increases risk by a smaller margin, or even reduces risk overall.
One reason this can happen is that developments may be especially risky in the period before states or other actors have had time to adjust their strategies, doctrine, procedures, etc. in light of the development.
(This seems in some ways reminiscent of the Collingridge dilemma or the “pacing problem”.)
Another possible reason is that a technology may be riskiest in the period when it is just useful enough to be deployed but not yet very reliable.
Geist and Lohn (2018) suggest, for the above two reasons, that this might happen with respect to AI developments and nuclear risk:
“Workshop participants agreed that the riskiest periods will occur immediately after AI enables a new capability, such as tracking and targeting or decision support about escalation. During this break-in period, errors and misunderstandings are relatively likely. With time and increased technological progress, those risks would be expected to diminish. If the main enabling capabilities are developed during peacetime, then it may be reasonable to expect progress to continue beyond the point at which they could be initially fielded, allowing time for them to increase in reliability or for their limitations to become well understood. Eventually, the AI system would develop capabilities that, while fallible, would be less error-prone than their human alternatives and therefore be stabilizing in the long term.”