
Public communication on AI safety

Last edit: 27 Nov 2023 22:13 UTC by Sarah Cheng

The public communication on AI safety tag covers depictions of AI safety in the media, meta-discussions of how best to represent the topic when engaging with journalists or publishing for a broader audience, and posts on conveying AI safety to different audiences.

Related entries

AI Safety | Building the field of AI safety | AI governance | Slowing down AI

Analogy Bank for AI Safety

utilistrutil · 29 Jan 2024 2:35 UTC
14 points
5 comments · 1 min read · EA link

“Near Midnight in Suicide City”

Greg_Colbourn · 6 Dec 2024 19:54 UTC
5 points
0 comments · 1 min read · EA link
(www.youtube.com)

AI Safety Action Plan—A report commissioned by the US State Department

Agustín Covarrubias 🔸 · 11 Mar 2024 22:13 UTC
25 points
1 comment · 1 min read · EA link
(www.gladstone.ai)

Why some people disagree with the CAIS statement on AI

David_Moss · 15 Aug 2023 13:39 UTC
144 points
15 comments · 16 min read · EA link

Talking publicly about AI risk

Jan_Kulveit · 24 Apr 2023 9:19 UTC
152 points
13 comments · 1 min read · EA link

If trying to communicate about AI risks, make it vivid

Michael Noetel 🔸 · 27 May 2024 0:59 UTC
19 points
2 comments · 2 min read · EA link

New blog: Planned Obsolescence

Ajeya · 27 Mar 2023 19:46 UTC
198 points
9 comments · 1 min read · EA link
(www.planned-obsolescence.org)

Careless talk on US-China AI competition? (and criticism of CAIS coverage)

Oliver Sourbut · 20 Sep 2023 12:46 UTC
46 points
19 comments · 1 min read · EA link
(www.oliversourbut.net)

xAI raises $6B

andzuck · 5 Jun 2024 15:26 UTC
18 points
1 comment · 1 min read · EA link
(x.ai)

Claude 3.5 Sonnet

Zach Stein-Perlman · 20 Jun 2024 18:00 UTC
31 points
0 comments · 1 min read · EA link
(www.anthropic.com)

FT: We must slow down the race to God-like AI

Angelina Li · 24 Apr 2023 11:57 UTC
33 points
2 comments · 2 min read · EA link
(www.ft.com)

Let’s think about slowing down AI

Katja_Grace · 23 Dec 2022 19:56 UTC
334 points
9 comments · 1 min read · EA link

[Linkpost] OpenAI leaders call for regulation of “superintelligence” to reduce existential risk.

Lowe Lundin · 25 May 2023 14:14 UTC
5 points
0 comments · 1 min read · EA link

The Overton Window widens: Examples of AI risk in the media

Akash · 23 Mar 2023 17:10 UTC
112 points
11 comments · 1 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

EliezerYudkowsky · 9 Apr 2023 15:53 UTC
50 points
3 comments · 1 min read · EA link

Survey of 2,778 AI authors: six parts in pictures

Katja_Grace · 6 Jan 2024 4:43 UTC
176 points
10 comments · 1 min read · EA link

Is fear productive when communicating AI x-risk? [Study results]

Johanna Roniger · 22 Jan 2024 5:38 UTC
78 points
10 comments · 5 min read · EA link

Keep Making AI Safety News

Gil · 31 Mar 2023 20:11 UTC
67 points
4 comments · 1 min read · EA link

Spreading messages to help with the most important century

Holden Karnofsky · 25 Jan 2023 20:35 UTC
128 points
21 comments · 18 min read · EA link
(www.cold-takes.com)

AI Alignment in The New Yorker

Eleni_A · 17 May 2023 21:19 UTC
23 points
0 comments · 1 min read · EA link
(www.newyorker.com)

I designed an AI safety course (for a philosophy department)

Eleni_A · 23 Sep 2023 21:56 UTC
27 points
3 comments · 2 min read · EA link

Articles about recent OpenAI departures

bruce · 17 May 2024 17:38 UTC
126 points
12 comments · 1 min read · EA link
(www.vox.com)

Worrisome misunderstanding of the core issues with AI transition

Roman Leventov · 18 Jan 2024 10:05 UTC
4 points
3 comments · 1 min read · EA link

New voluntary commitments (AI Seoul Summit)

Zach Stein-Perlman · 21 May 2024 11:00 UTC
12 points
1 comment · 1 min read · EA link
(www.gov.uk)

[Linkpost] Statement from Scarlett Johansson on OpenAI’s use of the “Sky” voice, that was shockingly similar to her own voice.

Linch · 20 May 2024 23:50 UTC
46 points
8 comments · 1 min read · EA link
(variety.com)

My Proven AI Safety Explanation (as a computing student)

Mica White · 6 Feb 2024 3:58 UTC
8 points
4 comments · 6 min read · EA link

World’s first major law for artificial intelligence gets final EU green light

Dane Valerie · 24 May 2024 14:57 UTC
3 points
1 comment · 2 min read · EA link
(www.cnbc.com)

Jan Leike: “I’m excited to join @AnthropicAI to continue the superalignment mission!”

defun 🔸 · 28 May 2024 18:08 UTC
35 points
11 comments · 1 min read · EA link
(x.com)

My cover story in Jacobin on AI capitalism and the x-risk debates

Garrison · 12 Feb 2024 23:34 UTC
154 points
10 comments · 6 min read · EA link
(jacobin.com)

Max Tegmark’s new Time article on how we’re in a Don’t Look Up scenario [Linkpost]

Jonas Hallgren · 25 Apr 2023 15:47 UTC
41 points
0 comments · 1 min read · EA link

US public perception of CAIS statement and the risk of extinction

Jamie E · 22 Jun 2023 16:39 UTC
126 points
4 comments · 9 min read · EA link

AI Risk is like Terminator; Stop Saying it’s Not

skluug · 8 Mar 2022 19:17 UTC
189 points
43 comments · 10 min read · EA link
(skluug.substack.com)

US public opinion of AI policy and risk

Jamie E · 12 May 2023 13:22 UTC
111 points
7 comments · 15 min read · EA link

Keep Chasing AI Safety Press Coverage

Gil · 4 Apr 2023 20:40 UTC
106 points
16 comments · 5 min read · EA link

FLI open letter: Pause giant AI experiments

Zach Stein-Perlman · 29 Mar 2023 4:04 UTC
220 points
38 comments · 1 min read · EA link

How bad a future do ML researchers expect?

Katja_Grace · 13 Mar 2023 5:47 UTC
165 points
20 comments · 1 min read · EA link

News: Spanish AI image outcry + US AI workforce “regulation”

Benevolent_Rain · 26 Sep 2023 7:43 UTC
9 points
0 comments · 1 min read · EA link

Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint

Otto · 19 Apr 2023 10:50 UTC
75 points
1 comment · 4 min read · EA link

A transcript of the TED talk by Eliezer Yudkowsky

MikhailSamin · 12 Jul 2023 12:12 UTC
39 points
0 comments · 1 min read · EA link

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?

Karl von Wendt · 25 Jun 2023 16:59 UTC
80 points
24 comments · 1 min read · EA link

Short review of our TensorTrust-based AI safety university outreach event

Milan Weibel🔹 · 22 Sep 2024 14:54 UTC
15 points
0 comments · 2 min read · EA link

Excerpts from “Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS”

Chris Leong · 21 Jul 2023 23:15 UTC
19 points
0 comments · 1 min read · EA link
(www.democrats.senate.gov)

An EA used deceptive messaging to advance her project; we need mechanisms to avoid deontologically dubious plans

MikhailSamin · 13 Feb 2024 23:11 UTC
24 points
39 comments · 5 min read · EA link

Against most, but not all, AI risk analogies

Matthew_Barnett · 14 Jan 2024 19:13 UTC
43 points
9 comments · 1 min read · EA link

[US] NTIA: AI Accountability Policy Request for Comment

Kyle J. Lucchese · 13 Apr 2023 16:12 UTC
47 points
4 comments · 1 min read · EA link
(ntia.gov)

AI Safety Newsletter #3: AI policy proposals and a new challenger approaches

Oliver Z · 25 Apr 2023 16:15 UTC
35 points
1 comment · 4 min read · EA link
(newsletter.safe.ai)

Branding AI Safety Groups: A Field Guide

Agustín Covarrubias 🔸 · 13 May 2024 17:17 UTC
44 points
6 comments · 1 min read · EA link

The Best Argument is not a Simple English Yud Essay

Jonathan Bostock · 19 Sep 2024 15:29 UTC
74 points
3 comments · 5 min read · EA link
(www.lesswrong.com)

[Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

DM · 1 May 2023 19:54 UTC
43 points
3 comments · 3 min read · EA link
(www.nytimes.com)

[linkpost] “What Are Reasonable AI Fears?” by Robin Hanson, 2023-04-23

Arjun Panickssery · 14 Apr 2023 23:26 UTC
41 points
3 comments · 4 min read · EA link
(quillette.com)

Tarbell Fellowship 2024 - Applications Open (AI Journalism)

Cillian_ · 28 Sep 2023 10:38 UTC
58 points
1 comment · 3 min read · EA link

[Linkpost] 538 Politics Podcast on AI risk & politics

jackva · 11 Apr 2023 17:03 UTC
64 points
5 comments · 1 min read · EA link
(fivethirtyeight.com)

Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

Garrison · 10 Feb 2024 19:52 UTC
280 points
20 comments · 3 min read · EA link
(garrisonlovely.substack.com)

AI policy ideas: Reading list

Zach Stein-Perlman · 17 Apr 2023 19:00 UTC
60 points
3 comments · 1 min read · EA link

The Cruel Trade-Off Between AI Misuse and AI X-risk Concerns

simeon_c · 22 Apr 2023 13:49 UTC
21 points
17 comments · 1 min read · EA link

INTERVIEW: StakeOut.AI w/ Dr. Peter Park

Jacob-Haimes · 5 Mar 2024 18:04 UTC
21 points
7 comments · 1 min read · EA link
(into-ai-safety.github.io)

Announcing New Beginner-friendly Book on AI Safety and Risk

Darren McKee · 25 Nov 2023 15:57 UTC
112 points
9 comments · 1 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

jacquesthibs · 29 Mar 2023 23:30 UTC
211 points
75 comments · 3 min read · EA link
(time.com)

AISN #25: White House Executive Order on AI, UK AI Safety Summit, and Progress on Voluntary Evaluations of AI Risks

Center for AI Safety · 31 Oct 2023 19:24 UTC
21 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

Tristan Williams · 30 Oct 2023 11:15 UTC
143 points
8 comments · 3 min read · EA link
(www.whitehouse.gov)

AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities

Center for AI Safety · 29 Aug 2023 15:03 UTC
12 points
0 comments · 8 min read · EA link
(newsletter.safe.ai)

The Bletchley Declaration on AI Safety

Hauke Hillebrandt · 1 Nov 2023 11:44 UTC
60 points
3 comments · 4 min read · EA link
(www.gov.uk)

Announcing Superintelligence Imagined: A creative contest on the risks of superintelligence

TaylorJns · 12 Jun 2024 15:20 UTC
17 points
0 comments · 1 min read · EA link

Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety

ben.smith · 9 Feb 2024 6:40 UTC
15 points
1 comment · 1 min read · EA link
(www.nist.gov)

Disrupting malicious uses of AI by state-affiliated threat actors

Agustín Covarrubias 🔸 · 14 Feb 2024 21:28 UTC
22 points
1 comment · 1 min read · EA link
(openai.com)

Introducing StakeOut.AI

Harry Luk · 17 Feb 2024 0:21 UTC
52 points
6 comments · 9 min read · EA link

My article in The Nation — California’s AI Safety Bill Is a Mask-Off Moment for the Industry

Garrison · 15 Aug 2024 19:25 UTC
134 points
0 comments · 1 min read · EA link
(www.thenation.com)

Proposing the Conditional AI Safety Treaty (linkpost TIME)

Otto · 15 Nov 2024 13:56 UTC
12 points
6 comments · 3 min read · EA link
(time.com)

Demis Hassabis — Google DeepMind: The Podcast

Zach Stein-Perlman · 16 Aug 2024 0:00 UTC
22 points
2 comments · 1 min read · EA link
(www.youtube.com)

Frontier AI systems have surpassed the self-replicating red line

Greg_Colbourn · 10 Dec 2024 16:33 UTC
30 points
14 comments · 1 min read · EA link
(github.com)

Claude Doesn’t Want to Die

Garrison · 5 Mar 2024 6:00 UTC
22 points
14 comments · 10 min read · EA link
(garrisonlovely.substack.com)

Important news for AI Alignment

cmeinel · 6 Mar 2024 15:12 UTC
−6 points
1 comment · 1 min read · EA link

AISN #32: Measuring and Reducing Hazardous Knowledge in LLMs Plus, Forecasting the Future with LLMs, and Regulatory Markets

Center for AI Safety · 7 Mar 2024 16:37 UTC
15 points
2 comments · 8 min read · EA link
(newsletter.safe.ai)

OpenAI o1

Zach Stein-Perlman · 12 Sep 2024 18:54 UTC
38 points
0 comments · 1 min read · EA link

OpenAI: Preparedness framework

Zach Stein-Perlman · 18 Dec 2023 18:30 UTC
24 points
0 comments · 1 min read · EA link
(openai.com)

OpenAI announces new members to board of directors

Will Howard🔹 · 9 Mar 2024 11:27 UTC
47 points
12 comments · 2 min read · EA link
(openai.com)

Among the A.I. Doomsayers—The New Yorker

Agustín Covarrubias 🔸 · 11 Mar 2024 21:12 UTC
66 points
0 comments · 1 min read · EA link
(www.newyorker.com)

Cybersecurity and AI: The Evolving Security Landscape

Center for AI Safety · 14 Mar 2024 20:14 UTC
9 points
0 comments · 12 min read · EA link
(www.safe.ai)

INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park

Jacob-Haimes · 18 Mar 2024 21:26 UTC
8 points
0 comments · 1 min read · EA link
(into-ai-safety.github.io)

Some thoughts from a University AI Debate

Charlie Harrison · 20 Mar 2024 17:03 UTC
25 points
2 comments · 1 min read · EA link

Podcast: Interview series featuring Dr. Peter Park

Jacob-Haimes · 26 Mar 2024 0:35 UTC
1 point
0 comments · 2 min read · EA link
(into-ai-safety.github.io)

AISN #28: Center for AI Safety 2023 Year in Review

Center for AI Safety · 23 Dec 2023 21:31 UTC
17 points
1 comment · 5 min read · EA link
(newsletter.safe.ai)

AI safety advocates should consider providing gentle pushback following the events at OpenAI

I_machinegun_Kelly · 22 Dec 2023 21:05 UTC
86 points
5 comments · 3 min read · EA link
(www.lesswrong.com)

NYT is suing OpenAI & Microsoft for alleged copyright infringement; some quick thoughts

MikhailSamin · 28 Dec 2023 18:37 UTC
29 points
0 comments · 1 min read · EA link

AISN #29: Progress on the EU AI Act Plus, the NY Times sues OpenAI for Copyright Infringement, and Congressional Questions about Research Standards in AI Safety

Center for AI Safety · 4 Jan 2024 16:03 UTC
5 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

#176 – The final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models (Nathan Labenz on the 80,000 Hours Podcast)

80000_Hours · 4 Jan 2024 16:00 UTC
15 points
0 comments · 22 min read · EA link

U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team [and Paul Christiano update]

Phib · 16 Apr 2024 17:10 UTC
110 points
8 comments · 1 min read · EA link
(www.commerce.gov)

AI Safety Newsletter #42: Newsom Vetoes SB 1047 Plus, OpenAI’s o1, and AI Governance Summary

Center for AI Safety · 1 Oct 2024 20:33 UTC
10 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

£1 million prize for the most cutting-edge AI solution for public good [link post]

rileyharris · 17 Jan 2024 14:36 UTC
8 points
0 comments · 2 min read · EA link
(manchesterprize.org)

I read every major AI lab’s safety plan so you don’t have to

sarahhw · 16 Dec 2024 14:12 UTC
42 points
2 comments · 11 min read · EA link
(longerramblings.substack.com)

AISN #35: Lobbying on AI Regulation Plus, New Models from OpenAI and Google, and Legal Regimes for Training on Copyrighted Data

Center for AI Safety · 16 May 2024 14:26 UTC
14 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

Mitigating extreme AI risks amid rapid progress [Linkpost]

Akash · 21 May 2024 20:04 UTC
36 points
1 comment · 1 min read · EA link

Publication of the International Scientific Report on the Safety of Advanced AI (Interim Report)

James Herbert · 21 May 2024 21:58 UTC
11 points
2 comments · 2 min read · EA link
(www.gov.uk)

Helen Toner (ex-OpenAI board member): “We learned about ChatGPT on Twitter.”

defun 🔸 · 29 May 2024 7:40 UTC
123 points
13 comments · 1 min read · EA link
(x.com)

The U.S. and China Need an AI Incidents Hotline

christian.r · 3 Jun 2024 18:46 UTC
25 points
0 comments · 1 min read · EA link
(www.lawfaremedia.org)

Anthropic rewrote its RSP

Zach Stein-Perlman · 15 Oct 2024 14:30 UTC
32 points
1 comment · 1 min read · EA link

Is principled mass-outreach possible, for AGI X-risk?

Nicholas / Heather Kross · 21 Jan 2024 17:45 UTC
12 points
2 comments · 1 min read · EA link

AISN #30: Investments in Compute and Military AI Plus, Japan and Singapore’s National AI Safety Institutes

Center for AI Safety · 24 Jan 2024 19:38 UTC
7 points
1 comment · 6 min read · EA link
(newsletter.safe.ai)

A short conversation I had with Google Gemini on the dangers of unregulated LLM API use, while mildly drunk in an airport.

EvanMcCormick · 17 Dec 2024 12:25 UTC
1 point
0 comments · 8 min read · EA link

AI Safety: Why We Need to Keep Our Smart Machines in Check

adityaraj@eanita · 17 Dec 2024 12:29 UTC
1 point
0 comments · 2 min read · EA link
(medium.com)

It is time to start war gaming for AGI

yanni kyriacos · 17 Oct 2024 5:14 UTC
14 points
4 comments · 1 min read · EA link

OpenAI defected, but we can take honest actions

Remmelt · 21 Oct 2024 8:41 UTC
19 points
1 comment · 2 min read · EA link

Miles Brundage resigned from OpenAI, and his AGI readiness team was disbanded

Garrison · 23 Oct 2024 23:42 UTC
57 points
4 comments · 7 min read · EA link
(garrisonlovely.substack.com)

Finishing The SB-1047 Documentary In 6 Weeks

Michaël Trazzi · 28 Oct 2024 20:26 UTC
67 points
0 comments · 4 min read · EA link

The Compendium, A full argument about extinction risk from AGI

adamShimi · 31 Oct 2024 12:02 UTC
9 points
1 comment · 2 min read · EA link
(www.thecompendium.ai)

Exploring AI Safety through “Escape Experiment”: A Short Film on Superintelligence Risks

Gaetan_Selle · 10 Nov 2024 4:42 UTC
2 points
0 comments · 2 min read · EA link

China Hawks are Manufacturing an AI Arms Race

Garrison · 20 Nov 2024 18:17 UTC
95 points
3 comments · 5 min read · EA link
(garrisonlovely.substack.com)

OpenAI’s CBRN tests seem unclear

Luca Righetti 🔸 · 21 Nov 2024 17:26 UTC
82 points
3 comments · 7 min read · EA link

[Question] Seeking Tangible Examples of AI Catastrophes

clifford.banes · 25 Nov 2024 7:55 UTC
9 points
2 comments · 1 min read · EA link

OpenAI’s o1 tried to avoid being shut down, and lied about it, in evals

Greg_Colbourn · 6 Dec 2024 15:25 UTC
23 points
9 comments · 1 min read · EA link
(www.transformernews.ai)

Anthropic Announces new S.O.T.A. Claude 3

Joseph Miller · 4 Mar 2024 19:02 UTC
10 points
5 comments · 1 min read · EA link
(twitter.com)

Terminology suggestion: standardize terms for probability ranges

Egg Syntax · 30 Aug 2024 16:05 UTC
2 points
0 comments · 1 min read · EA link

AI Safety Newsletter #41: The Next Generation of Compute Scale Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics

Center for AI Safety · 11 Sep 2024 19:11 UTC
12 points
0 comments · 5 min read · EA link
(newsletter.safe.ai)

Anthropic is being sued for copying books to train Claude

Remmelt · 31 Aug 2024 2:57 UTC
3 points
0 comments · 1 min read · EA link
(fingfx.thomsonreuters.com)

Unions for AI safety?

dEAsign · 24 Sep 2023 0:13 UTC
7 points
12 comments · 2 min read · EA link

[Congressional Hearing] Oversight of A.I.: Legislating on Artificial Intelligence

Tristan Williams · 1 Nov 2023 18:15 UTC
5 points
1 comment · 7 min read · EA link
(www.judiciary.senate.gov)

Amazon to invest up to $4 billion in Anthropic

Davis_Kingsley · 25 Sep 2023 14:55 UTC
38 points
34 comments · 1 min read · EA link
(twitter.com)

Announcing #AISummitTalks featuring Professor Stuart Russell and many others

Otto · 24 Oct 2023 10:16 UTC
9 points
1 comment · 1 min read · EA link

Go Mobilize? Lessons from GM Protests for Pausing AI

Charlie Harrison · 24 Oct 2023 15:01 UTC
48 points
11 comments · 31 min read · EA link

The Dissolution of AI Safety

Roko · 12 Dec 2024 10:46 UTC
−7 points
0 comments · 1 min read · EA link
(www.transhumanaxiology.com)

[Linkpost] NY Times Feature on Anthropic

Garrison · 12 Jul 2023 19:30 UTC
34 points
3 comments · 5 min read · EA link
(www.nytimes.com)

Sam Altman fired from OpenAI

Larks · 17 Nov 2023 21:07 UTC
133 points
91 comments · 1 min read · EA link
(openai.com)

Thoughts on yesterday’s UN Security Council meeting on AI

Greg_Colbourn · 19 Jul 2023 16:46 UTC
31 points
2 comments · 1 min read · EA link

AI Impacts Quarterly Newsletter, Apr-Jun 2023

Harlan · 18 Jul 2023 18:01 UTC
4 points
0 comments · 3 min read · EA link
(blog.aiimpacts.org)

AISN #16: White House Secures Voluntary Commitments from Leading AI Labs and Lessons from Oppenheimer

Center for AI Safety · 25 Jul 2023 16:45 UTC
7 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

[Crosspost] An AI Pause Is Humanity’s Best Bet For Preventing Extinction (TIME)

Otto · 24 Jul 2023 10:18 UTC
36 points
3 comments · 7 min read · EA link
(time.com)

Linkpost: 7 A.I. Companies Agree to Safeguards After Pressure From the White House

MHR🔸 · 21 Jul 2023 13:23 UTC
61 points
4 comments · 1 min read · EA link
(www.nytimes.com)

[link post] AI Should Be Terrified of Humans

BrianK · 24 Jul 2023 11:13 UTC
28 points
0 comments · 1 min read · EA link
(time.com)

[Linkpost] Eric Schwitzgebel: AI systems must not confuse users about their sentience or moral status

🔸Zachary Brown · 18 Aug 2023 17:21 UTC
6 points
0 comments · 2 min read · EA link
(www.sciencedirect.com)

AISN #17: Automatically Circumventing LLM Guardrails, the Frontier Model Forum, and Senate Hearing on AI Oversight

Center for AI Safety · 1 Aug 2023 15:24 UTC
15 points
0 comments · 8 min read · EA link

Eliciting responses to Marc Andreessen’s “Why AI Will Save the World”

Coleman · 17 Jul 2023 19:58 UTC
2 points
2 comments · 1 min read · EA link
(a16z.com)

Frontier Model Forum

Zach Stein-Perlman · 26 Jul 2023 14:30 UTC
40 points
7 comments · 1 min read · EA link
(blog.google)

Asterisk Magazine Issue 03: AI

Alejandro Ortega · 24 Jul 2023 15:53 UTC
34 points
3 comments · 1 min read · EA link
(asteriskmag.com)

AISN #27: Defensive Accelerationism, A Retrospective On The OpenAI Board Saga, And A New AI Bill From Senators Thune And Klobuchar

Center for AI Safety · 7 Dec 2023 15:57 UTC
10 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

Gavin Newsom vetoes SB 1047

Larks · 30 Sep 2024 0:06 UTC
39 points
14 comments · 1 min read · EA link
(www.wsj.com)

The costs of caution

Kelsey Piper · 1 May 2023 20:04 UTC
112 points
17 comments · 4 min read · EA link

AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks

Center for AI Safety · 2 May 2023 16:51 UTC
35 points
2 comments · 5 min read · EA link
(newsletter.safe.ai)

AI Safety Newsletter #1 [CAIS Linkpost]

Akash · 10 Apr 2023 20:18 UTC
38 points
0 comments · 1 min read · EA link

AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media

Oliver Z · 18 Apr 2023 18:36 UTC
56 points
1 comment · 4 min read · EA link
(newsletter.safe.ai)

My choice of AI misalignment introduction for a general audience

Bill · 3 May 2023 0:15 UTC
7 points
2 comments · 1 min read · EA link
(youtu.be)

AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results.

Otto · 4 May 2023 14:04 UTC
49 points
1 comment · 9 min read · EA link

[Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I.

Rockwell · 4 May 2023 14:04 UTC
50 points
1 comment · 2 min read · EA link

An Update On The Campaign For AI Safety Dot Org

yanni kyriacos · 5 May 2023 0:19 UTC
26 points
4 comments · 1 min read · EA link

AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models

Center for AI Safety · 9 May 2023 15:26 UTC
60 points
0 comments · 4 min read · EA link
(newsletter.safe.ai)

AI-Risk in the State of the European Union Address

Sam Bogerd · 13 Sep 2023 13:27 UTC
25 points
0 comments · 3 min read · EA link
(state-of-the-union.ec.europa.eu)

The International PauseAI Protest: Activism under uncertainty

Joseph Miller · 12 Oct 2023 17:36 UTC
129 points
3 comments · 4 min read · EA link

AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control

Center for AI Safety · 16 May 2023 15:14 UTC
32 points
1 comment · 6 min read · EA link
(newsletter.safe.ai)

Efficacy of AI Activism: Have We Ever Said No?

Charlie Harrison · 27 Oct 2023 16:52 UTC
78 points
25 comments · 20 min read · EA link

Sam Altman / Open AI Discussion Thread

John Salter · 20 Nov 2023 9:21 UTC
40 points
36 comments · 1 min read · EA link

Ilya: The AI scientist shaping the world

David Varga · 20 Nov 2023 12:43 UTC
6 points
1 comment · 4 min read · EA link

Former Israeli Prime Minister Speaks About AI X-Risk

Yonatan Cale · 20 May 2023 12:09 UTC
73 points
6 comments · 1 min read · EA link

Possible OpenAI’s Q* breakthrough and DeepMind’s AlphaGo-type systems plus LLMs

Burnydelic · 23 Nov 2023 7:02 UTC
13 points
4 comments · 2 min read · EA link

[Linkpost] “Governance of superintelligence” by OpenAI

Daniel_Eth · 22 May 2023 20:15 UTC
51 points
6 comments · 2 min read · EA link
(openai.com)

OpenAI board received letter warning of powerful AI

JordanStone · 23 Nov 2023 0:16 UTC
26 points
2 comments · 1 min read · EA link
(www.reuters.com)

[Question] Would an Anthropic/OpenAI merger be good for AI safety?

M · 22 Nov 2023 20:21 UTC
6 points
1 comment · 1 min read · EA link

Rishi Sunak mentions “existential threats” in talk with OpenAI, DeepMind, Anthropic CEOs

Arjun Panickssery · 24 May 2023 21:06 UTC
44 points
2 comments · 1 min read · EA link

Tim Cook was asked about extinction risks from AI

Saul Munn · 6 Jun 2023 18:46 UTC
8 points
1 comment · 1 min read · EA link

Could AI accelerate economic growth?

Tom_Davidson · 7 Jun 2023 19:07 UTC
28 points
0 comments · 6 min read · EA link

On DeepMind and Trying to Fairly Hear Out Both AI Doomers and Doubters (Rohin Shah on The 80,000 Hours Podcast)

80000_Hours · 12 Jun 2023 12:53 UTC
28 points
1 comment · 15 min read · EA link

UK government to host first global summit on AI Safety

DavidNash · 8 Jun 2023 13:24 UTC
78 points
1 comment · 5 min read · EA link
(www.gov.uk)

Linkpost: Dwarkesh Patel interviewing Carl Shulman

Stefan_Schubert · 14 Jun 2023 15:30 UTC
110 points
5 comments · 1 min read · EA link
(podcastaddict.com)

Google DeepMind releases Gemini

Yarrow · 6 Dec 2023 17:39 UTC
21 points
7 comments · 1 min read · EA link
(deepmind.google)

Communication by existential risk organizations: State of the field and suggestions for improvement

Existential Risk Communication Project · 13 Aug 2024 7:06 UTC
10 points
3 comments · 13 min read · EA link

The UK AI Safety Summit tomorrow

SebastianSchmidt · 31 Oct 2023 19:09 UTC
17 points
2 comments · 2 min read · EA link