
AI race

Last edit: Mar 20, 2023, 7:02 PM by Pablo

An AI race is a competition between rival teams to first develop advanced artificial intelligence.

Terminology

The expression “AI race” may be used in a somewhat narrower sense to describe a competition to attain military superiority via AI. The expressions AI arms race,[1] arms race for AI,[2] and military AI arms race[3] are sometimes employed to refer to this specific type of AI race.

AI races and technological races

An AI race is an instance of the broader phenomenon of a technology race, characterized by a “winner-take-all” structure in which the team that first develops the technology captures all (or most) of its benefits. This can result from various kinds of feedback loops that magnify the winner’s advantage. In the case of AI, these benefits are generally believed to be very large, perhaps sufficient to confer a decisive strategic advantage on the winning team.

Significance of AI races

AI races are significant primarily because of their effects on AI risk: a team can plausibly improve its chances of winning the race by relaxing safety precautions, and the payoffs from winning the race are great enough to provide strong incentives for that relaxation. In addition, a race that unfolds between national governments—rather than between private firms—could increase global instability and make great power conflicts more probable.

A model of AI races

Stuart Armstrong, Nick Bostrom and Carl Shulman have developed a model of AI races.[4] (Although the model is focused on artificial intelligence, it is applicable to any technology where the first team to develop it gets a disproportionate share of its benefits and each team can speed up its development by relaxing the safety precautions needed to reduce the dangers associated with the technology.)

The model involves n different teams racing to be the first to build AI. Each team has a given AI-building capability c, as well as a chosen AI safety level s ranging from 0 (no precautions) to 1 (maximum precautions). Skimping on safety speeds development: the team for which c – s is highest wins the race, and the probability of AI disaster is then 1 – s, where s is the winning team’s safety level.

Utility is normalized so that, for each team, 0 utility corresponds to an AI disaster and 1 corresponds to winning the AI race. In addition, each team has a degree of enmity e towards the other teams, ranging from 0 to 1, such that it gets utility 1 – e if another team wins the race. The model assumes a constant value of e for all teams.

Each team’s capability is drawn randomly from a uniform distribution ranging over the interval [0, μ], for a single given μ, with lower values representing lower capability.
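The setup above can be sketched as a small Monte Carlo simulation. This is an illustration only, not the paper’s own method (the authors analyze Nash equilibria analytically); the function name and the fixed, non-strategic safety levels are choices made here:

```python
import random

def simulate_race(safety_levels, mu=1.0, trials=200_000, seed=0):
    """Monte Carlo sketch of the race model with fixed strategies.

    Each trial draws every team's capability c uniformly from [0, mu].
    Team i's score is c_i - s_i (skimping on safety speeds development);
    the highest score wins, and a disaster then occurs with probability
    1 - s_winner.  Returns (per-team win frequencies, disaster frequency).
    """
    rng = random.Random(seed)
    n = len(safety_levels)
    wins = [0] * n
    disasters = 0
    for _ in range(trials):
        scores = [rng.uniform(0, mu) - s for s in safety_levels]
        winner = max(range(n), key=scores.__getitem__)
        wins[winner] += 1
        if rng.random() < 1 - safety_levels[winner]:
            disasters += 1
    return [w / trials for w in wins], disasters / trials
```

With μ = 1, a fully cautious team (s = 1) essentially never beats a fully reckless one (s = 0) and disaster is near-certain; with μ = 10, capability dominates the safety penalty, the cautious team wins about 40% of the time, and the overall disaster frequency falls, illustrating how risk declines as capability matters more.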

From this model, a number of implications follow:[5]

- Risk increases with the degree of enmity between the teams.
- Risk increases with the number of competing teams.
- Risk decreases as capability becomes more important relative to safety (that is, as μ grows), since a leading team can then afford greater precautions without losing the race.
- Risk is lowest when teams know nothing about each other’s capabilities; giving them that information, whether privately or publicly, increases risk.

AI races and information hazards

AI races are sometimes cited as an example of an information hazard, i.e. a risk arising from the spread of true information. There are, in fact, several distinct hazards associated with AI races. One is the risk identified by Armstrong, Bostrom and Shulman: moving from a situation of no information to one of either private or public information increases risk. Another, more subtle information hazard concerns the sharing of information about the model itself: widespread awareness that ignorance is safer might encourage teams to adopt a culture of secrecy, which could in turn impede the building of trust among rival teams.[6] More generally, when public leaders and intellectuals frame AI development as a winner-take-all race, the framing may itself be hazardous, insofar as it is likely to hinder cooperation and exacerbate conflict.[7][8][9]

Further reading

Armstrong, Stuart, Nick Bostrom & Carl Shulman (2016) Racing to the precipice: a model of artificial intelligence development, AI and Society, vol. 31, pp. 201–206.

Related entries

AI risk | artificial intelligence | great power conflict | technology race

1. ^ Barnes, Julian E. & Josh Chin (2018) The new arms race in AI, Wall Street Journal, March 2.
2. ^
3. ^ Cave, Stephen & Seán ÓhÉigeartaigh (2018) An AI race for strategic advantage: Rhetoric and risks, in Jason Furman et al. (eds.) Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New York: Association for Computing Machinery, pp. 36–40, p. 37.
4. ^ Armstrong, Stuart, Nick Bostrom & Carl Shulman (2016) Racing to the precipice: a model of artificial intelligence development, AI and Society, vol. 31, pp. 201–206.
5. ^ Armstrong, Bostrom & Shulman, Racing to the precipice, p. 204.
6. ^ Armstrong, Bostrom & Shulman, Racing to the precipice, p. 205, fn. 7.
7. ^ Tomasik, Brian (2013) International cooperation vs. AI arms race, Center on Long-Term Risk, December 5 (updated 29 February 2016).
8. ^ Baum, Seth D. (2017) On the promotion of safe and socially beneficial artificial intelligence, AI and Society, vol. 32, pp. 543–551.
9. ^ Cave, Stephen & Seán ÓhÉigeartaigh, An AI race for strategic advantage.

Tagged posts

FLI open letter: Pause giant AI experiments (Zach Stein-Perlman, Mar 29, 2023)
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg (HaydnBelfield, May 19, 2022)
Gwern on creating your own AI race and China’s Fast Follower strategy (Larks, Nov 25, 2024; lesswrong.com)
Brian Tomasik on cooperation and peace (Vasco Grilo🔸, May 20, 2024; reducing-suffering.org)
Hooray for stepping out of the limelight (So8res, Apr 1, 2023)
NYT: Google will ‘recalibrate’ the risk of releasing AI due to competition with OpenAI (Michael Huang, Jan 22, 2023; nytimes.com)
Principles for the AGI Race (William_S, Aug 30, 2024)
Summary of Situational Awareness—The Decade Ahead (OscarD🔸, Jun 8, 2024)
[Question] What kind of organization should be the first to develop AGI in a potential arms race? (Eevee🔹, Jul 17, 2022)
How LDT helps reduce the AI arms race (Tamsin Leake, Dec 10, 2023; carado.moe)
Information in risky technology races (nemeryxu, Aug 2, 2022)
A Windfall Clause for CEO could worsen AI race dynamics (Larks, Mar 9, 2023)
How major governments can help with the most important century (Holden Karnofsky, Feb 24, 2023; cold-takes.com)
Want to win the AGI race? Solve alignment. (leopold, Mar 29, 2023; forourposterity.com)
Worrisome Trends for Digital Mind Evaluations (Derek Shiller, Feb 20, 2025)
AI 2027: What Superintelligence Looks Like (Linkpost) (Manuel Allgaier, Apr 11, 2025; ai-2027.com)
What AI companies can do today to help with the most important century (Holden Karnofsky, Feb 20, 2023; cold-takes.com)
What he’s learned as an AI policy insider (Tantum Collins on the 80,000 Hours Podcast) (80000_Hours, Oct 13, 2023)
Careless talk on US-China AI competition? (and criticism of CAIS coverage) (Oliver Sourbut, Sep 20, 2023; oliversourbut.net)
Aligned Objectives Prize Competition (Prometheus, Jun 15, 2023)
Ambitious Impact launches a for-profit accelerator instead of building the AI Safety space. Let’s talk about this. (yanni kyriacos, Mar 18, 2024)
The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable (Matrice Jacobine, Nov 24, 2024; eff.org)
#176 – The final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models (Nathan Labenz on the 80,000 Hours Podcast) (80000_Hours, Jan 4, 2024)
An AI Race With China Can Be Better Than Not Racing (niplav, Jul 2, 2024)
AISN #34: New Military AI Systems Plus, AI Labs Fail to Uphold Voluntary Commitments to UK AI Safety Institute, and New AI Policy Proposals in the US Senate (Center for AI Safety, May 2, 2024; newsletter.safe.ai)
Max Tegmark — The AGI Entente Delusion (Matrice Jacobine, Oct 13, 2024; lesswrong.com)
Comparison of LLM scalability and performance between the U.S. and China based on benchmark (Ivanna_alvarado, Oct 12, 2024)
AISN #30: Investments in Compute and Military AI Plus, Japan and Singapore’s National AI Safety Institutes (Center for AI Safety, Jan 24, 2024; newsletter.safe.ai)
Cooperation for AI safety must transcend geopolitical interference (Matrice Jacobine, Feb 16, 2025; scmp.com)
Rolling Thresholds for AGI Scaling Regulation (Larks, Jan 12, 2025)
The Compendium, A full argument about extinction risk from AGI (adamShimi, Oct 31, 2024; thecompendium.ai)
CNAS report: ‘Artificial Intelligence and Arms Control’ (MMMaas, Oct 13, 2022; cnas.org)
Anthropic teams up with Palantir and AWS to sell AI to defense customers (Matrice Jacobine, Nov 9, 2024; techcrunch.com)
Tetherware #2: What every human should know about our most likely AI future (Jáchym Fibír, Feb 28, 2025; tetherware.substack.com)
National Security Is Not International Security: A Critique of AGI Realism (C.K., Feb 2, 2025; conradkunadu.substack.com)
China Hawks are Manufacturing an AI Arms Race (Garrison, Nov 20, 2024; garrisonlovely.substack.com)
[Question] How confident are you that it’s preferable for America to develop AGI before China does? (ScienceMon🔸, Feb 22, 2025)
Information security considerations for AI and the long term future (Jeffrey Ladish, May 2, 2022)
Military Artificial Intelligence as Contributor to Global Catastrophic Risk (MMMaas, Jun 27, 2022)
Announcing the SPT Model Web App for AI Governance (Paolo Bova, Aug 4, 2022)
Summary of “Technology Favours Tyranny” by Yuval Noah Harari (Madhav Malhotra, Oct 26, 2022)
Racing through a minefield: the AI deployment problem (Holden Karnofsky, Dec 31, 2022; cold-takes.com)
Google invests $300mn in artificial intelligence start-up Anthropic | FT (𝕮𝖎𝖓𝖊𝖗𝖆, Feb 3, 2023; ft.com)
Dear Anthropic people, please don’t release Claude (Joseph Miller, Feb 8, 2023)
AGI in sight: our look at the game board (Andrea_Miotti, Feb 18, 2023)
A concerning observation from media coverage of AI industry dynamics (Justin Olive, Mar 2, 2023)
Comments on OpenAI’s “Planning for AGI and beyond” (So8res, Mar 3, 2023)
[Question] Is AI like disk drives? (Tanae, Sep 2, 2023)
Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin (WilliamKiely, Mar 23, 2023)
Reducing profit motivations in AI development (Luke Frymire, Apr 3, 2023)