An AI race is a competition between rival teams to first develop advanced artificial intelligence.
Terminology
The expression “AI race” may be used in a somewhat narrower sense to describe a competition to attain military superiority via AI. The expressions AI arms race,[1] arms race for AI,[2] and military AI arms race[3] are sometimes employed to refer to this specific type of AI race.
AI races and technological races
An AI race is an example of the broader phenomenon of a technology race, characterized by a “winner-take-all” structure in which the team that first develops the technology captures all (or most) of its benefits. This could happen because of various types of feedback loops that magnify the associated benefits. In the case of AI, these benefits are generally believed to be very large, perhaps sufficient to confer a decisive strategic advantage on the winning team.
Significance of AI races
AI races are significant primarily because of their effects on AI risk: a team can plausibly improve its chances of winning the race by relaxing safety precautions, and the payoffs from winning the race are great enough to provide strong incentives for that relaxation. In addition, a race that unfolds between national governments—rather than between private firms—could increase global instability and make great power conflicts more probable.
A model of AI races
Stuart Armstrong, Nick Bostrom and Carl Shulman have developed a model of AI races.[4] (Although the model is focused on artificial intelligence, it is applicable to any technology where the first team to develop it gets a disproportionate share of its benefits and each team can speed up its development by relaxing the safety precautions needed to reduce the dangers associated with the technology.)
The model involves n different teams racing to build AI first. Each team has a given AI-building capability c, as well as a chosen AI safety level s ranging from 0 (no precautions) to 1 (maximum precautions). The team for which c – s is highest wins the race, and the probability of AI disaster is 1 – s, where s is the winning team’s safety level.
Utility is normalized so that, for each team, 0 utility corresponds to an AI disaster and 1 corresponds to winning the AI race. In addition, each team has a degree of enmity e towards the other teams, ranging from 0 to 1, such that it gets utility 1 – e if another team wins the race. The model assumes a constant value of e for all teams.
Each team’s capability is drawn randomly from a uniform distribution over the interval [0, μ], for some fixed parameter μ. Since safety levels range over [0, 1], μ determines how much weight capability carries relative to safety in deciding the winner.
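To make the payoff structure concrete, the following Python sketch simulates a single run of the race under the assumptions above. It is an illustration only, not code from the paper: the function name simulate_race and the choice to fix each team’s safety level exogenously are assumptions made here for clarity (in the model itself, teams choose s strategically).

```python
import random

def simulate_race(safety, mu, enmity, rng=random):
    """One simulated run of the Armstrong-Bostrom-Shulman race model.

    safety: list of safety levels s_i in [0, 1], one per team
    mu:     upper bound of the uniform capability distribution
    enmity: shared enmity parameter e in [0, 1]
    Returns one utility per team.
    """
    n = len(safety)
    capability = [rng.uniform(0, mu) for _ in range(n)]
    # The team with the highest c - s builds AI first and wins the race.
    winner = max(range(n), key=lambda i: capability[i] - safety[i])
    # The winner's precautions fix the disaster probability at 1 - s.
    if rng.random() < 1 - safety[winner]:
        return [0.0] * n  # AI disaster: utility 0 for every team
    # No disaster: the winner gets 1, every other team gets 1 - e.
    return [1.0 if i == winner else 1.0 - enmity for i in range(n)]

# Example: three teams with fixed safety levels, capability range [0, 1.5],
# enmity 0.5; averaging over many runs estimates expected utilities.
runs = [simulate_race([0.9, 0.6, 0.3], mu=1.5, enmity=0.5)
        for _ in range(100_000)]
print([round(sum(u) / len(runs), 3) for u in zip(*runs)])
```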
From this model, a number of implications follow:
As μ increases, capability becomes increasingly important relative to safety in determining the outcome of the race, so skimping on precautions buys less of an advantage and teams become correspondingly less inclined to do so. Conversely, lower values of μ are associated with fewer precautions; in the limiting case of μ = 0, where the winner is determined entirely by who skimps most, teams will take no precautions at all.
As enmity increases, the cost to each team of losing the race increases, and teams accordingly become more inclined to skimp on safety precautions. But whereas the relative importance of capability is largely fixed by the technology itself, and is therefore mostly beyond intervention, there are various interventions that can reasonably be expected to decrease enmity, such as “building trust between nations and groups, sharing technologies or discoveries, merging into joint projects or agreeing to common aims.”[5]
A less intuitive finding of the model concerns how capability and enmity interact with three informational scenarios: (1) no information; (2) private information (each team knows its own capability); and (3) public information (each team knows the capability of every team). No information is always safer than either private or public information. But while public information can decrease risk relative to private information when both capability and enmity are low, the reverse holds for sufficiently high levels of capability or enmity.
Another surprising finding concerns the impact of the number of teams under the different informational scenarios. When there is either no information or public information, risk strictly increases with the number of teams. This effect is also observed for private information when capability is low, but as capability grows it eventually reverses. (A rough numerical illustration of the no-information case follows below.)
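These findings come from the paper’s analytical treatment, but a rough numerical experiment can convey how they arise in the no-information case. The sketch below is not the authors’ method: the discrete safety grid, the Monte Carlo estimate of expected utility, and the best-response iteration (with its hypothetical helper names expected_utility and symmetric_safety) are all simplifying assumptions introduced here.

```python
import random

def expected_utility(s_self, s_other, n, mu, e, trials=5_000, rng=random):
    """Estimate one team's expected utility in the no-information case,
    when it plays safety s_self and the other n - 1 teams play s_other.
    Assumes n >= 2; no team learns its capability before committing."""
    total = 0.0
    for _ in range(trials):
        my_score = rng.uniform(0, mu) - s_self
        best_rival = max(rng.uniform(0, mu) - s_other for _ in range(n - 1))
        i_win = my_score >= best_rival
        winner_safety = s_self if i_win else s_other
        if rng.random() < winner_safety:  # no disaster, probability s
            total += 1.0 if i_win else 1.0 - e
    return total / trials

def symmetric_safety(n, mu, e, steps=20, iters=25):
    """Best-response iteration over a grid of safety levels, looking for
    a symmetric fixed point. Monte Carlo noise means the iteration may
    wander, so it is capped at iters rounds; purely illustrative."""
    grid = [i / steps for i in range(steps + 1)]
    s = 1.0
    for _ in range(iters):
        best = max(grid, key=lambda x: expected_utility(x, s, n, mu, e))
        if best == s:
            break
        s = best
    return s

# Example: compare the safety level reached with 2 vs. 5 teams.
for n in (2, 5):
    print(n, symmetric_safety(n, mu=1.0, e=0.5))
```

Lowering μ or raising e in the example should push the resulting safety level down, in line with the implications above.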
AI races and information hazards
AI races are sometimes cited as an example of an information hazard, i.e. a risk arising from the spread of true information. There are, in fact, several distinct hazards associated with AI races. One is the risk identified by Armstrong, Bostrom and Shulman: moving from a situation of no information to one of either private or public information increases risk. Another, more subtle hazard concerns the sharing of information about the model itself: widespread awareness that no information is safer might encourage teams to adopt a culture of secrecy, which could in turn impede the building of trust among rival teams.[6] More generally, when public leaders and intellectuals frame AI development as a winner-take-all race, the framing may itself be hazardous, insofar as it is likely to hinder cooperation and exacerbate conflict.[7][8][9]
Further reading
Armstrong, Stuart, Nick Bostrom & Carl Shulman (2016) Racing to the precipice: a model of artificial intelligence development, AI and Society, vol. 31, pp. 201–206.
Related entries
AI risk | artificial intelligence | great power conflict | technology race
1. Barnes, Julian E. & Josh Chin (2018) The new arms race in AI, Wall Street Journal, March 2.
2. Fedasiuk, Ryan (2021) We spent a year investigating what the Chinese army is buying. Here’s what we learned, Politico, November 10.
3. Cave, Stephen & Seán ÓhÉigeartaigh (2018) An AI race for strategic advantage: Rhetoric and risks, in Jason Furman et al. (eds.) Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New York: Association for Computing Machinery, pp. 36–40, p. 37.
4. Armstrong, Stuart, Nick Bostrom & Carl Shulman (2016) Racing to the precipice: a model of artificial intelligence development, AI and Society, vol. 31, pp. 201–206.
5. Armstrong, Bostrom & Shulman, Racing to the precipice, p. 204.
6. Armstrong, Bostrom & Shulman, Racing to the precipice, p. 205, fn. 7.
7. Tomasik, Brian (2013) International cooperation vs. AI arms race, Center on Long-Term Risk, December 5 (updated 29 February 2016).
8. Baum, Seth D. (2017) On the promotion of safe and socially beneficial artificial intelligence, AI and Society, vol. 32, pp. 543–551.
9. Cave, Stephen & Seán ÓhÉigeartaigh, An AI race for strategic advantage.