
Warning shot


A warning shot is a global catastrophe that indirectly reduces existential risk by increasing concern about future catastrophes.

Terminology

The expression warning sign is sometimes used to describe any event that increases concern about a particular category of existential risk, regardless of whether the event itself constitutes a global catastrophe. For example, plausible candidates for an AI warning sign include a catastrophic failure by an AI system, a public outreach campaign, or the publication of an exceptionally persuasive book on AI safety.[1]

A related notion is that of a fire alarm, a warning sign that creates common knowledge that some technology—typically advanced artificial intelligence—actually poses an existential risk.[2]

Note, however, that both “warning shot” and “fire alarm” are sometimes used as synonyms for “warning sign”.[3][4]

Further reading

Beckstead, Nick (2015) The long-term significance of reducing global catastrophic risks, Open Philanthropy, August 13.

Carlsmith, Joseph (2021) Is power-seeking AI an existential risk?, Open Philanthropy, April, section 6.2.

Related entries

artificial intelligence | existential risk | global catastrophic risk

1. ^
2. ^ Grace, Katja (2021) Beyond fire alarms: freeing the groupstruck, AI Impacts, September 26.
3. ^ Kokotajlo, Daniel (2020) What are the most plausible ‘AI Safety warning shot’ scenarios?, AI Alignment Forum, March 26.
4. ^ McCluskey, Peter (2021) AI fire alarm scenarios, Bayesian Investor, December 23.

Posts tagged “Warning shot”

“Risk Awareness Moments” (Rams): A concept for thinking about AI governance interventions (oeg, 14 Apr 2023)
Lessons from Three Mile Island for AI Warning Shots (NickGabs, 26 Sep 2022)
Warning Shots Probably Wouldn’t Change The Picture Much (So8res, 6 Oct 2022)
There’s No Fire Alarm for Artificial General Intelligence (EA Forum Archives, 14 Oct 2017; www.lesswrong.com)
“Slower tech development” can be about ordering, gradualness, or distance from now (MichaelA🔸, 14 Nov 2021)
Beyond fire alarms: freeing the groupstruck (Katja_Grace, 3 Oct 2021)
[Question] Should recent events make us more or less concerned about biorisk? (Linch, 19 Mar 2020)
[Question] Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking? (MichaelA🔸, 19 Mar 2020)
Conditional Trees: Generating Informative Forecasting Questions (FRI) -- AI Risk Case Study (Forecasting Research Institute, 12 Aug 2024; forecastingresearch.org)
Cause Prioritization in Light of Inspirational Disasters (stecas, 7 Jun 2020)
How we could stumble into AI catastrophe (Holden Karnofsky, 16 Jan 2023; www.cold-takes.com)
Crucial questions for longtermists (MichaelA🔸, 29 Jul 2020)
‘AI Emergency Eject Criteria’ Survey (tcelferact, 19 Apr 2023)
[Question] How will the world respond to “AI x-risk warning shots” according to reference class forecasting? (Ryan Kidd, 18 Apr 2022)
We’re (surprisingly) more positive about tackling bio risks: outcomes of a survey (Sanjay, 25 Aug 2020)