Warning shot

Last edit: Sep 2, 2023, 2:07 PM by Yadav

A warning shot is a global catastrophe that indirectly reduces existential risk by increasing concern about future catastrophes.

Terminology

The expression warning sign is sometimes used to describe any event that increases concern about a particular category of existential risk, regardless of whether the event itself constitutes a global catastrophe. For example, plausible candidates for an AI warning sign include a catastrophic failure by an AI system, a public outreach campaign, or the publication of an exceptionally persuasive book on AI safety.[1]

A related notion is that of a fire alarm, a warning sign that creates common knowledge that some technology—typically advanced artificial intelligence—actually poses an existential risk.[2]

Note, however, that both “warning shot” and “fire alarm” are sometimes used as synonyms for “warning sign”.[3][4]

Further reading

Beckstead, Nick (2015) The long-term significance of reducing global catastrophic risks, Open Philanthropy, August 13.

Carlsmith, Joseph (2021) Is power-seeking AI an existential risk?, Open Philanthropy, April, section 6.2.

Related entries

artificial intelligence | existential risk | global catastrophic risk

  1. ^
  2. ^ Grace, Katja (2021) Beyond fire alarms: freeing the groupstruck, AI Impacts, September 26.
  3. ^ Kokotajlo, Daniel (2020) What are the most plausible ‘AI Safety warning shot’ scenarios?, AI Alignment Forum, March 26.
  4. ^ McCluskey, Peter (2021) AI fire alarm scenarios, Bayesian Investor, December 23.

“Risk Awareness Moments” (Rams): A concept for thinking about AI governance interventions
oeg, Apr 14, 2023, 5:40 PM · 53 points · 0 comments · 9 min read

Lessons from Three Mile Island for AI Warning Shots
NickGabs, Sep 26, 2022, 2:47 AM · 42 points · 0 comments · 15 min read

Warning Shots Probably Wouldn’t Change The Picture Much
So8res, Oct 6, 2022, 5:15 AM · 93 points · 20 comments · 2 min read

There’s No Fire Alarm for Artificial General Intelligence
EA Forum Archives, Oct 14, 2017, 2:41 AM · 30 points · 1 comment · 25 min read (www.lesswrong.com)

“Slower tech development” can be about ordering, gradualness, or distance from now
MichaelA🔸, Nov 14, 2021, 8:58 PM · 47 points · 3 comments · 4 min read

Beyond fire alarms: freeing the groupstruck
Katja_Grace, Oct 3, 2021, 2:33 AM · 61 points · 6 comments · 49 min read

[Question] Should recent events make us more or less concerned about biorisk?
Linch, Mar 19, 2020, 12:00 AM · 23 points · 7 comments · 1 min read

[Question] Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking?
MichaelA🔸, Mar 19, 2020, 6:07 AM · 12 points · 3 comments · 1 min read

Conditional Trees: Generating Informative Forecasting Questions (FRI) -- AI Risk Case Study
Forecasting Research Institute, Aug 12, 2024, 4:24 PM · 43 points · 2 comments · 8 min read (forecastingresearch.org)

Cause Prioritization in Light of Inspirational Disasters
stecas, Jun 7, 2020, 7:52 PM · 2 points · 15 comments · 3 min read

How we could stumble into AI catastrophe
Holden Karnofsky, Jan 16, 2023, 2:52 PM · 83 points · 0 comments · 31 min read (www.cold-takes.com)

Crucial questions for longtermists
MichaelA🔸, Jul 29, 2020, 9:39 AM · 104 points · 17 comments · 19 min read

‘AI Emergency Eject Criteria’ Survey
tcelferact, Apr 19, 2023, 9:55 PM · 5 points · 3 comments · 1 min read

[Question] How will the world respond to “AI x-risk warning shots” according to reference class forecasting?
Ryan Kidd, Apr 18, 2022, 9:10 AM · 18 points · 1 comment · 1 min read

We’re (surprisingly) more positive about tackling bio risks: outcomes of a survey
Sanjay, Aug 25, 2020, 9:14 AM · 58 points · 5 comments · 11 min read