
Public communication on AI safety

Last edit: 27 Nov 2023 22:13 UTC by Sarah Cheng

The public communication on AI safety tag covers depictions of AI safety in the media, as well as meta-discussion of how best to present the topic when engaging with journalists, publishing for a broader audience, or otherwise conveying AI safety to different audiences.

Related entries

AI Safety | Building the field of AI safety | AI governance | Slowing down AI

News: Spanish AI image outcry + US AI workforce “regulation”

Ulrik Horn · 26 Sep 2023 7:43 UTC
9 points
0 comments · 1 min read · EA link

Why some people disagree with the CAIS statement on AI

David_Moss · 15 Aug 2023 13:39 UTC
144 points
14 comments · 16 min read · EA link

Careless talk on US-China AI competition? (and criticism of CAIS coverage)

Oliver Sourbut · 20 Sep 2023 12:46 UTC
46 points
19 comments · 1 min read · EA link
(www.oliversourbut.net)

New blog: Planned Obsolescence

Ajeya · 27 Mar 2023 19:46 UTC
198 points
9 comments · 1 min read · EA link
(www.planned-obsolescence.org)

Talking publicly about AI risk

Jan_Kulveit · 24 Apr 2023 9:19 UTC
147 points
13 comments · 1 min read · EA link

Keep Making AI Safety News

RedStateBlueState · 31 Mar 2023 20:11 UTC
67 points
4 comments · 1 min read · EA link

Spreading messages to help with the most important century

Holden Karnofsky · 25 Jan 2023 20:35 UTC
123 points
21 comments · 18 min read · EA link
(www.cold-takes.com)

FLI open letter: Pause giant AI experiments

Zach Stein-Perlman · 29 Mar 2023 4:04 UTC
220 points
38 comments · 1 min read · EA link

Let’s think about slowing down AI

Katja_Grace · 23 Dec 2022 19:56 UTC
334 points
8 comments · 1 min read · EA link

US public opinion of AI policy and risk

Jamie Elsey · 12 May 2023 13:22 UTC
112 points
7 comments · 15 min read · EA link

Max Tegmark’s new Time article on how we’re in a Don’t Look Up scenario [Linkpost]

Jonas Hallgren · 25 Apr 2023 15:47 UTC
41 points
0 comments · 1 min read · EA link

FT: We must slow down the race to God-like AI

Angelina Li · 24 Apr 2023 11:57 UTC
33 points
2 comments · 2 min read · EA link
(www.ft.com)

I designed an AI safety course (for a philosophy department)

Eleni_A · 23 Sep 2023 21:56 UTC
26 points
3 comments · 2 min read · EA link

US public perception of CAIS statement and the risk of extinction

Jamie Elsey · 22 Jun 2023 16:39 UTC
126 points
4 comments · 9 min read · EA link

AI Risk is like Terminator; Stop Saying it’s Not

skluug · 8 Mar 2022 19:17 UTC
188 points
43 comments · 10 min read · EA link
(skluug.substack.com)

AI Alignment in The New Yorker

Eleni_A · 17 May 2023 21:19 UTC
23 points
0 comments · 1 min read · EA link
(www.newyorker.com)

[Linkpost] OpenAI leaders call for regulation of “superintelligence” to reduce existential risk.

Lowe · 25 May 2023 14:14 UTC
5 points
0 comments · 1 min read · EA link

Keep Chasing AI Safety Press Coverage

RedStateBlueState · 4 Apr 2023 20:40 UTC
106 points
16 comments · 5 min read · EA link

The Overton Window widens: Examples of AI risk in the media

Akash · 23 Mar 2023 17:10 UTC
112 points
11 comments · 1 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

EliezerYudkowsky · 9 Apr 2023 15:53 UTC
48 points
3 comments · 1 min read · EA link

Announcing New Beginner-friendly Book on AI Safety and Risk

Darren McKee · 25 Nov 2023 15:57 UTC
105 points
9 comments · 1 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

jacquesthibs · 29 Mar 2023 23:30 UTC
209 points
77 comments · 3 min read · EA link
(time.com)

Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint

Otto · 19 Apr 2023 10:50 UTC
74 points
1 comment · 4 min read · EA link

[US] NTIA: AI Accountability Policy Request for Comment

Kyle J. Lucchese · 13 Apr 2023 16:12 UTC
47 points
4 comments · 1 min read · EA link
(ntia.gov)

Tarbell Fellowship 2024 - Applications Open (AI Journalism)

Cillian_ · 28 Sep 2023 10:38 UTC
58 points
1 comment · 3 min read · EA link

[Linkpost] 538 Politics Podcast on AI risk & politics

jackva · 11 Apr 2023 17:03 UTC
64 points
5 comments · 1 min read · EA link
(fivethirtyeight.com)

AI policy ideas: Reading list

Zach Stein-Perlman · 17 Apr 2023 19:00 UTC
57 points
3 comments · 1 min read · EA link

The Cruel Trade-Off Between AI Misuse and AI X-risk Concerns

simeon_c · 22 Apr 2023 13:49 UTC
21 points
17 comments · 1 min read · EA link

[linkpost] “What Are Reasonable AI Fears?” by Robin Hanson, 2023-04-23

Arjun Panickssery · 14 Apr 2023 23:26 UTC
41 points
3 comments · 4 min read · EA link
(quillette.com)

How bad a future do ML researchers expect?

Katja_Grace · 13 Mar 2023 5:47 UTC
165 points
20 comments · 1 min read · EA link

AI Safety Newsletter #3: AI policy proposals and a new challenger approaches

Oliver Z · 25 Apr 2023 16:15 UTC
35 points
1 comment · 4 min read · EA link
(newsletter.safe.ai)

[Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

DM · 1 May 2023 19:54 UTC
43 points
3 comments · 3 min read · EA link
(www.nytimes.com)

A transcript of the TED talk by Eliezer Yudkowsky

MikhailSamin · 12 Jul 2023 12:12 UTC
39 points
0 comments · 1 min read · EA link

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?

Karl von Wendt · 25 Jun 2023 16:59 UTC
79 points
24 comments · 1 min read · EA link

Excerpts from “Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS”

Chris Leong · 21 Jul 2023 23:15 UTC
19 points
0 comments · 1 min read · EA link
(www.democrats.senate.gov)

Efficacy of AI Activism: Have We Ever Said No?

charlieh943 · 27 Oct 2023 16:52 UTC
76 points
22 comments · 20 min read · EA link

Sam Altman / Open AI Discussion Thread

John Salter · 20 Nov 2023 9:21 UTC
40 points
36 comments · 1 min read · EA link

Ilya: The AI scientist shaping the world

David Varga · 20 Nov 2023 12:43 UTC
6 points
1 comment · 4 min read · EA link

Former Israeli Prime Minister Speaks About AI X-Risk

Yonatan Cale · 20 May 2023 12:09 UTC
73 points
6 comments · 1 min read · EA link

Possible OpenAI’s Q* breakthrough and DeepMind’s AlphaGo-type systems plus LLMs

Burnydelic · 23 Nov 2023 7:02 UTC
13 points
4 comments · 2 min read · EA link

[Linkpost] “Governance of superintelligence” by OpenAI

Daniel_Eth · 22 May 2023 20:15 UTC
51 points
6 comments · 2 min read · EA link
(openai.com)

OpenAI board received letter warning of powerful AI

JordanStone · 23 Nov 2023 0:16 UTC
26 points
2 comments · 1 min read · EA link
(www.reuters.com)

[Question] Would an Anthropic/OpenAI merger be good for AI safety?

M · 22 Nov 2023 20:21 UTC
5 points
0 comments · 1 min read · EA link

Rishi Sunak mentions “existential threats” in talk with OpenAI, DeepMind, Anthropic CEOs

Arjun Panickssery · 24 May 2023 21:06 UTC
44 points
2 comments · 1 min read · EA link

Tim Cook was asked about extinction risks from AI

Saul Munn · 6 Jun 2023 18:46 UTC
8 points
1 comment · 1 min read · EA link

Could AI accelerate economic growth?

Tom_Davidson · 7 Jun 2023 19:07 UTC
21 points
0 comments · 6 min read · EA link

On DeepMind and Trying to Fairly Hear Out Both AI Doomers and Doubters (Rohin Shah on The 80,000 Hours Podcast)

80000_Hours · 12 Jun 2023 12:53 UTC
28 points
1 comment · 15 min read · EA link

UK government to host first global summit on AI Safety

DavidNash · 8 Jun 2023 13:24 UTC
78 points
1 comment · 5 min read · EA link
(www.gov.uk)

Linkpost: Dwarkesh Patel interviewing Carl Shulman

Stefan_Schubert · 14 Jun 2023 15:30 UTC
106 points
5 comments · 1 min read · EA link
(podcastaddict.com)

Google DeepMind releases Gemini

Yarrow Bouchard · 6 Dec 2023 17:39 UTC
21 points
0 comments · 1 min read · EA link
(deepmind.google)

The UK AI Safety Summit tomorrow

SebastianSchmidt · 31 Oct 2023 19:09 UTC
17 points
2 comments · 2 min read · EA link

AISN #25: White House Executive Order on AI, UK AI Safety Summit, and Progress on Voluntary Evaluations of AI Risks

Center for AI Safety · 31 Oct 2023 19:24 UTC
15 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

Tristan Williams · 30 Oct 2023 11:15 UTC
143 points
8 comments · 3 min read · EA link
(www.whitehouse.gov)

AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities

Center for AI Safety · 29 Aug 2023 15:03 UTC
12 points
0 comments · 8 min read · EA link
(newsletter.safe.ai)

The Bletchley Declaration on AI Safety

Hauke Hillebrandt · 1 Nov 2023 11:44 UTC
60 points
3 comments · 4 min read · EA link
(www.gov.uk)

Unions for AI safety?

dEAsign · 24 Sep 2023 0:13 UTC
7 points
12 comments · 2 min read · EA link

[Congressional Hearing] Oversight of A.I.: Legislating on Artificial Intelligence

Tristan Williams · 1 Nov 2023 18:15 UTC
5 points
1 comment · 7 min read · EA link
(www.judiciary.senate.gov)

Amazon to invest up to $4 billion in Anthropic

Davis_Kingsley · 25 Sep 2023 14:55 UTC
38 points
35 comments · 1 min read · EA link
(twitter.com)

Announcing #AISummitTalks featuring Professor Stuart Russell and many others

Otto · 24 Oct 2023 10:16 UTC
9 points
1 comment · 1 min read · EA link

Go Mobilize? Lessons from GM Protests for Pausing AI

charlieh943 · 24 Oct 2023 15:01 UTC
50 points
11 comments · 31 min read · EA link

[Linkpost] NY Times Feature on Anthropic

Garrison · 12 Jul 2023 19:30 UTC
34 points
3 comments · 5 min read · EA link
(www.nytimes.com)

Sam Altman fired from OpenAI

Larks · 17 Nov 2023 21:07 UTC
133 points
91 comments · 1 min read · EA link
(openai.com)

Thoughts on yesterday’s UN Security Council meeting on AI

Greg_Colbourn · 19 Jul 2023 16:46 UTC
31 points
2 comments · 1 min read · EA link

AI Impacts Quarterly Newsletter, Apr-Jun 2023

Harlan · 18 Jul 2023 18:01 UTC
4 points
0 comments · 3 min read · EA link
(blog.aiimpacts.org)

AISN #16: White House Secures Voluntary Commitments from Leading AI Labs and Lessons from Oppenheimer

Center for AI Safety · 25 Jul 2023 16:45 UTC
7 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

[Crosspost] An AI Pause Is Humanity’s Best Bet For Preventing Extinction (TIME)

Otto · 24 Jul 2023 10:18 UTC
36 points
3 comments · 7 min read · EA link
(time.com)

Linkpost: 7 A.I. Companies Agree to Safeguards After Pressure From the White House

MHR · 21 Jul 2023 13:23 UTC
61 points
4 comments · 1 min read · EA link
(www.nytimes.com)

[link post] AI Should Be Terrified of Humans

BrianK · 24 Jul 2023 11:13 UTC
28 points
0 comments · 1 min read · EA link
(time.com)

[Linkpost] Eric Schwitzgebel: AI systems must not confuse users about their sentience or moral status

Zachary Brown · 18 Aug 2023 17:21 UTC
6 points
0 comments · 2 min read · EA link
(www.sciencedirect.com)

AISN #17: Automatically Circumventing LLM Guardrails, the Frontier Model Forum, and Senate Hearing on AI Oversight

Center for AI Safety · 1 Aug 2023 15:24 UTC
15 points
0 comments · 8 min read · EA link

Eliciting responses to Marc Andreessen’s “Why AI Will Save the World”

Coleman@21stTalks · 17 Jul 2023 19:58 UTC
2 points
2 comments · 1 min read · EA link
(a16z.com)

Frontier Model Forum

Zach Stein-Perlman · 26 Jul 2023 14:30 UTC
40 points
7 comments · 1 min read · EA link
(blog.google)

Asterisk Magazine Issue 03: AI

alejandro · 24 Jul 2023 15:53 UTC
34 points
3 comments · 1 min read · EA link
(asteriskmag.com)

AISN #27: Defensive Accelerationism, A Retrospective On The OpenAI Board Saga, And A New AI Bill From Senators Thune And Klobuchar

Center for AI Safety · 7 Dec 2023 15:57 UTC
8 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

The costs of caution

Kelsey Piper · 1 May 2023 20:04 UTC
117 points
17 comments · 4 min read · EA link

AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks

Center for AI Safety · 2 May 2023 16:51 UTC
35 points
2 comments · 5 min read · EA link
(newsletter.safe.ai)

AI Safety Newsletter #1 [CAIS Linkpost]

Akash · 10 Apr 2023 20:18 UTC
38 points
0 comments · 1 min read · EA link

AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media

Oliver Z · 18 Apr 2023 18:36 UTC
56 points
1 comment · 4 min read · EA link
(newsletter.safe.ai)

My choice of AI misalignment introduction for a general audience

Bill · 3 May 2023 0:15 UTC
7 points
2 comments · 1 min read · EA link
(youtu.be)

AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results.

Otto · 4 May 2023 14:04 UTC
48 points
1 comment · 9 min read · EA link

[Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I.

Rockwell · 4 May 2023 14:04 UTC
50 points
1 comment · 2 min read · EA link

An Update On The Campaign For AI Safety Dot Org

Yanni Kyriacos · 5 May 2023 0:19 UTC
26 points
4 comments · 1 min read · EA link

AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models

Center for AI Safety · 9 May 2023 15:26 UTC
60 points
0 comments · 4 min read · EA link
(newsletter.safe.ai)

AI-Risk in the State of the European Union Address

Sam Bogerd · 13 Sep 2023 13:27 UTC
25 points
0 comments · 3 min read · EA link
(state-of-the-union.ec.europa.eu)

The International PauseAI Protest: Activism under uncertainty

Joseph Miller · 12 Oct 2023 17:36 UTC
123 points
3 comments · 4 min read · EA link

AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control

Center for AI Safety · 16 May 2023 15:14 UTC
32 points
1 comment · 6 min read · EA link
(newsletter.safe.ai)