AI race

An AI race is a competition between rival teams to first develop advanced artificial intelligence.

Terminology

The expression “AI race” may be used in a somewhat narrower sense to describe a competition to attain military superiority via AI. The expressions AI arms race,[1] arms race for AI,[2] and military AI arms race[3] are sometimes employed to refer to this specific type of AI race.

AI races and technological races

An AI race is an example of the broader phenomenon of a technology race, characterized by a “winner-take-all” structure in which the team that first develops the technology captures all (or most) of its benefits. This can happen because of various feedback loops that magnify the benefits of being first. In the case of AI, these benefits are generally believed to be very large, perhaps large enough to confer a decisive strategic advantage on the winning team.

Significance of AI races

AI races are significant primarily because of their effects on AI risk: a team can plausibly improve its chances of winning the race by relaxing safety precautions, and the payoffs from winning are great enough to create strong incentives to do so. In addition, a race that unfolds between national governments—rather than between private firms—could increase global instability and make great power conflict more probable.

A model of AI races

Stuart Armstrong, Nick Bostrom and Carl Shulman have developed a model of AI races.[4] Although the model focuses on artificial intelligence, it applies to any technology whose first developer captures a disproportionate share of the benefits, and whose development each team can speed up by relaxing the safety precautions needed to reduce the technology’s dangers.

The model involves n different teams racing to be the first to build AI. Each team has a given AI-building capability c, and chooses an AI safety level s, ranging from 0 (no precautions) to 1 (maximum precautions). The team for which c − s is highest wins the race, so a team can improve its chances of winning by skimping on safety. The probability of an AI disaster is 1 − s, where s is the winning team’s safety level.

Utility is normalized so that, for each team, 0 utility corresponds to an AI disaster and 1 corresponds to winning the AI race. In addition, each team has a degree of enmity e towards the other teams, ranging from 0 to 1, such that it gets utility 1 − e if another team wins the race. The model assumes a constant value of e for all teams.

Each team’s capability is drawn randomly from a uniform distribution over the interval [0, μ], for a given parameter μ shared by all teams, with lower values representing lower capability.
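
To make the model concrete, here is a minimal Monte Carlo sketch in Python. It is illustrative only: the parameter values, and the scenario in which a single team varies its safety level while its rivals take full precautions, are assumptions made for this example; the paper itself derives the teams’ equilibrium behaviour analytically rather than by simulation.

```python
import random

def simulate(n=3, mu=2.0, e=0.5, my_s=1.0, others_s=1.0, trials=100_000):
    """Monte Carlo estimate of team 0's expected utility in the race model.

    Each team draws a capability c uniformly from [0, mu]. Team i's score
    is c_i - s_i, and the team with the highest score builds AI first.
    The winner's AI causes a disaster with probability 1 - s_winner
    (utility 0 for everyone); otherwise the winner gets utility 1 and
    every other team gets 1 - e.
    """
    total = 0.0
    for _ in range(trials):
        safeties = [my_s] + [others_s] * (n - 1)
        scores = [random.uniform(0, mu) - s for s in safeties]
        winner = max(range(n), key=scores.__getitem__)
        if random.random() < 1 - safeties[winner]:
            continue  # AI disaster: every team gets utility 0
        total += 1.0 if winner == 0 else 1.0 - e
    return total / trials

if __name__ == "__main__":
    # Team 0 varies its safety level while its rivals play s = 1.
    for my_s in (1.0, 0.75, 0.5, 0.25, 0.0):
        print(f"team 0 safety {my_s:.2f}: "
              f"expected utility ~ {simulate(my_s=my_s):.3f}")
```

Lowering team 0’s safety level raises its chance of winning but also the chance that the winning AI causes a disaster; the trade-off between these two effects drives the equilibria analyzed in the paper.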

From this model, a number of implications follow:[5]

- Information about capabilities is hazardous: moving from a situation in which no team knows any team’s capability to one in which capabilities are privately or publicly known increases the risk of disaster, because teams can then gauge how much safety they need to sacrifice in order to win.
- Enmity increases risk: the higher the value of e, the less utility a team derives from a rival’s victory, and the more safety it is willing to sacrifice to win itself.
- More teams increase risk: as n grows, competition intensifies and teams choose lower safety levels.
- Greater capability reduces risk: the higher the value of μ, the more the race is decided by capability rather than by forgone precautions, and the safer teams can afford to be.

AI races and information hazards

AI races are sometimes cited as an example of an information hazard, i.e., a risk arising from the spread of true information. There are, in fact, a number of different hazards associated with AI races. One is the risk identified by Armstrong, Bostrom and Shulman: moving from a situation of no information about capabilities to one of either private or public information increases risk. Another, more subtle information hazard concerns the sharing of information about the model itself: widespread awareness that ignorance is safer might encourage teams to adopt a culture of secrecy, which could impede the building of trust among rival teams.[6] More generally, when public leaders and intellectuals frame AI development as a winner-take-all race, the framing may itself be hazardous, insofar as it is likely to hinder cooperation and exacerbate conflict.[7][8][9]

Further reading

Armstrong, Stuart, Nick Bostrom & Carl Shulman (2016) Racing to the precipice: a model of artificial intelligence development, AI and Society, vol. 31, pp. 201–206.

Related entries

AI risk | artificial intelligence | great power conflict | technology race

  1. ^

    Barnes, Julian E. & Josh Chin (2018) The new arms race in AI, Wall Street Journal, March 2.

  2. ^
  3. ^

    Cave, Stephen & Seán Ó hÉigeartaigh (2018) An AI race for strategic advantage: rhetoric and risks, in Jason Furman et al. (eds.) Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New York: Association for Computing Machinery, pp. 36–40, p. 37.

  4. ^

    Armstrong, Stuart, Nick Bostrom & Carl Shulman (2016) Racing to the precipice: a model of artificial intelligence development, AI and Society, vol. 31, pp. 201–206.

  5. ^

    Armstrong, Bostrom & Shulman, Racing to the precipice, p. 204.

  6. ^

    Armstrong, Bostrom & Shulman, Racing to the precipice, p. 205, fn. 7.

  7. ^

    Tomasik, Brian (2013) International cooperation vs. AI arms race, Center on Long-Term Risk, December 5 (updated 29 February 2016).

  8. ^

    Baum, Seth D. (2017) On the promotion of safe and socially beneficial artificial intelligence, AI and Society, vol. 32, pp. 543–551.

  9. ^

    Cave, Stephen & Seán Ó hÉigeartaigh, An AI race for strategic advantage.

Posts tagged AI race

FLI open letter: Pause giant AI experiments (Zach Stein-Perlman, 29 Mar 2023)
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg (HaydnBelfield, 19 May 2022)
NYT: Google will ‘recalibrate’ the risk of releasing AI due to competition with OpenAI (Michael Huang, 22 Jan 2023, www.nytimes.com)
Hooray for stepping out of the limelight (So8res, 1 Apr 2023)
Brian Tomasik on cooperation and peace (Vasco Grilo🔸, 20 May 2024, reducing-suffering.org)
Principles for the AGI Race (William_S, 30 Aug 2024)
How LDT helps reduce the AI arms race (Tamsin Leake, 10 Dec 2023, carado.moe)
Want to win the AGI race? Solve alignment. (leopold, 29 Mar 2023, www.forourposterity.com)
Information in risky technology races (nemeryxu, 2 Aug 2022)
A Windfall Clause for CEO could worsen AI race dynamics (Larks, 9 Mar 2023)
What AI companies can do today to help with the most important century (Holden Karnofsky, 20 Feb 2023, www.cold-takes.com)
How major governments can help with the most important century (Holden Karnofsky, 24 Feb 2023, www.cold-takes.com)
Summary of Situational Awareness—The Decade Ahead (OscarD🔸, 8 Jun 2024)
[Question] What kind of organization should be the first to develop AGI in a potential arms race? (Eevee🔹, 17 Jul 2022)
[Question] Is AI like disk drives? (Tanae, 2 Sep 2023)
Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin (WilliamKiely, 23 Mar 2023)
Reducing profit motivations in AI development (Luke Frymire, 3 Apr 2023)
What he’s learned as an AI policy insider (Tantum Collins on the 80,000 Hours Podcast) (80000_Hours, 13 Oct 2023)
Careless talk on US-China AI competition? (and criticism of CAIS coverage) (Oliver Sourbut, 20 Sep 2023, www.oliversourbut.net)
Aligned Objectives Prize Competition (Prometheus, 15 Jun 2023)
Ambitious Impact launches a for-profit accelerator instead of building the AI Safety space. Let’s talk about this. (yanni kyriacos, 18 Mar 2024)
#176 – The final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models (Nathan Labenz on the 80,000 Hours Podcast) (80000_Hours, 4 Jan 2024)
An AI Race With China Can Be Better Than Not Racing (niplav, 2 Jul 2024)
AISN #34: New Military AI Systems Plus, AI Labs Fail to Uphold Voluntary Commitments to UK AI Safety Institute, and New AI Policy Proposals in the US Senate (Center for AI Safety, 2 May 2024, newsletter.safe.ai)
Max Tegmark — The AGI Entente Delusion (Matrice Jacobine, 13 Oct 2024, www.lesswrong.com)
Comparison of LLM scalability and performance between the U.S. and China based on benchmark (Ivanna_alvarado, 12 Oct 2024)
AISN #30: Investments in Compute and Military AI Plus, Japan and Singapore’s National AI Safety Institutes (Center for AI Safety, 24 Jan 2024, newsletter.safe.ai)
The Compendium, A full argument about extinction risk from AGI (adamShimi, 31 Oct 2024, www.thecompendium.ai)
CNAS report: ‘Artificial Intelligence and Arms Control’ (MMMaas, 13 Oct 2022, www.cnas.org)
Anthropic teams up with Palantir and AWS to sell AI to defense customers (Matrice Jacobine, 9 Nov 2024, techcrunch.com)
Information security considerations for AI and the long term future (Jeffrey Ladish, 2 May 2022)
Military Artificial Intelligence as Contributor to Global Catastrophic Risk (MMMaas, 27 Jun 2022)
Announcing the SPT Model Web App for AI Governance (Paolo Bova, 4 Aug 2022)
Summary of “Technology Favours Tyranny” by Yuval Noah Harari (Madhav Malhotra, 26 Oct 2022)
Racing through a minefield: the AI deployment problem (Holden Karnofsky, 31 Dec 2022, www.cold-takes.com)
Google invests $300mn in artificial intelligence start-up Anthropic | FT (𝕮𝖎𝖓𝖊𝖗𝖆, 3 Feb 2023, www.ft.com)
Dear Anthropic people, please don’t release Claude (Joseph Miller, 8 Feb 2023)
AGI in sight: our look at the game board (Andrea_Miotti, 18 Feb 2023)
A concerning observation from media coverage of AI industry dynamics (Justin Olive, 2 Mar 2023)
Comments on OpenAI’s “Planning for AGI and beyond” (So8res, 3 Mar 2023)