EA is three radical ideas I want to protect

Peter Wildeford · 27 Mar 2023 15:31 UTC
628 points
16 comments · 4 min read · EA link

Why I love effective altruism

Michelle_Hutchinson · 1 Mar 2023 9:41 UTC
399 points
12 comments · 5 min read · EA link

Share the burden

2ndRichter · 11 Mar 2023 0:58 UTC
349 points
50 comments · 9 min read · EA link

Nobody’s on the ball on AGI alignment

leopold · 29 Mar 2023 14:26 UTC
325 points
65 comments · 9 min read · EA link
(www.forourposterity.com)

Some Comments on the Recent FTX TIME Article

Ben_West · 20 Mar 2023 17:36 UTC
296 points
38 comments · 5 min read · EA link

Reminding myself just how awful pain can get (plus, an experiment on myself)

Ren Ryba · 15 Mar 2023 22:44 UTC
290 points
57 comments · 16 min read · EA link

Nathan A. Sears (1987-2023)

HaydnBelfield · 29 Mar 2023 16:07 UTC
284 points
7 comments · 4 min read · EA link

Run Posts By Orgs

Jeff Kaufman · 29 Mar 2023 2:40 UTC
281 points
13 comments · 1 min read · EA link

Offer an option to Muslim donors; grow effective giving

GiveDirectly · 16 Mar 2023 7:26 UTC
268 points
34 comments · 4 min read · EA link

After launch. How are CE charities progressing?

Ula Zarosa · 6 Mar 2023 12:34 UTC
266 points
13 comments · 6 min read · EA link

How much should governments pay to prevent catastrophes? Longtermism’s limited role

EJT · 19 Mar 2023 16:50 UTC
258 points
35 comments · 35 min read · EA link
(philpapers.org)

Reflecting on the Last Year — Lessons for EA (opening keynote at EAG)

Toby_Ord · 24 Mar 2023 15:35 UTC
254 points
14 comments · 16 min read · EA link

Shutting Down the Lightcone Offices

Habryka · 15 Mar 2023 1:46 UTC
242 points
68 comments · 17 min read · EA link
(www.lesswrong.com)

Time Article Discussion—“Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed”

Nathan Young · 15 Mar 2023 12:40 UTC
241 points
174 comments · 4 min read · EA link
(time.com)

Assessment of Happier Lives Institute’s Cost-Effectiveness Analysis of StrongMinds

GiveWell · 22 Mar 2023 17:04 UTC
239 points
57 comments · 23 min read · EA link
(www.givewell.org)

Advice on communicating in and around the biosecurity policy community

Elika · 2 Mar 2023 21:32 UTC
227 points
27 comments · 6 min read · EA link

On the First Anniversary of my Best Friend’s Death

Rockwell · 6 Mar 2023 4:59 UTC
220 points
9 comments · 4 min read · EA link
(www.rockwellschwartz.com)

FLI open letter: Pause giant AI experiments

Zach Stein-Perlman · 29 Mar 2023 4:04 UTC
220 points
38 comments · 1 min read · EA link

How my community successfully reduced sexual misconduct

titotal · 11 Mar 2023 13:50 UTC
212 points
32 comments · 5 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

jacquesthibs · 29 Mar 2023 23:30 UTC
211 points
75 comments · 3 min read · EA link
(time.com)

Scoring forecasts from the 2016 “Expert Survey on Progress in AI”

PatrickL · 1 Mar 2023 14:39 UTC
204 points
21 comments · 9 min read · EA link

Against EA-Community-Received-Wisdom on Practical Sociological Questions

Michael_Cohen · 9 Mar 2023 2:12 UTC
202 points
35 comments · 16 min read · EA link

New blog: Planned Obsolescence

Ajeya · 27 Mar 2023 19:46 UTC
198 points
9 comments · 1 min read · EA link
(www.planned-obsolescence.org)

Write a Book?

Jeff Kaufman · 16 Mar 2023 0:11 UTC
188 points
24 comments · 1 min read · EA link

Counterproductive Altruism: The Other Heavy Tail

Vasco Grilo · 1 Mar 2023 9:58 UTC
186 points
8 comments · 7 min read · EA link
(onlinelibrary.wiley.com)

FTX Community Response Survey Results

Willem Sleegers · 15 Mar 2023 14:49 UTC
181 points
33 comments · 7 min read · EA link

Suggestion: A workable romantic non-escalation policy for EA community builders

Severin · 8 Mar 2023 2:17 UTC
179 points
54 comments · 3 min read · EA link

EA Infosec: skill up in or make a transition to infosec via this book club

Jason Clinton · 5 Mar 2023 21:02 UTC
170 points
15 comments · 2 min read · EA link

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · 21 Mar 2023 1:23 UTC
167 points
20 comments · 39 min read · EA link

How bad a future do ML researchers expect?

Katja_Grace · 13 Mar 2023 5:47 UTC
165 points
20 comments · 1 min read · EA link

More Centralisation?

DavidNash · 6 Mar 2023 16:01 UTC
164 points
30 comments · 2 min read · EA link

It’s not all that simple

Brnr001 · 13 Mar 2023 4:33 UTC
164 points
20 comments · 9 min read · EA link

Global catastrophic risks law approved in the United States

JorgeTorresC · 7 Mar 2023 14:28 UTC
157 points
7 comments · 1 min read · EA link
(riesgoscatastroficosglobales.com)

Survey on intermediate goals in AI governance

MichaelA · 17 Mar 2023 12:44 UTC
155 points
4 comments · 1 min read · EA link

Holden Karnofsky’s recent comments on FTX

Lizka · 24 Mar 2023 11:44 UTC
149 points
2 comments · 5 min read · EA link

Racial and gender demographics at EA Global in 2022

Amy Labenz · 10 Mar 2023 14:29 UTC
147 points
16 comments · 4 min read · EA link

Call for Cruxes by Rhyme, a Longtermist History Consultancy

Lara_TH · 1 Mar 2023 10:20 UTC
147 points
6 comments · 3 min read · EA link

3 Basic Steps to Reduce Personal Liability as an Org Leader

Deena Englander · 6 Mar 2023 15:04 UTC
143 points
12 comments · 1 min read · EA link

Potential employees have a unique lever to influence the behaviors of AI labs

oxalis · 18 Mar 2023 20:58 UTC
139 points
1 comment · 5 min read · EA link

Announcing the Open Philanthropy AI Worldviews Contest

Jason Schukraft · 10 Mar 2023 2:33 UTC
137 points
33 comments · 3 min read · EA link
(www.openphilanthropy.org)

Some updates to my thinking in light of the FTX collapse by Owen Cotton-Barratt [Link Post]

Nathan Young · 29 Mar 2023 15:23 UTC
133 points
16 comments · 1 min read · EA link
(docs.google.com)

What Has EAGxLatAm 2023 Taught Us: Retrospective & Thoughts on Measuring the Impact of EA Conferences

Hugo Ikta · 3 Mar 2023 16:02 UTC
130 points
8 comments · 8 min read · EA link

The illusion of consensus about EA celebrities

Ben Millwood · 17 Mar 2023 21:16 UTC
130 points
12 comments · 2 min read · EA link

How oral rehydration therapy was developed

Kelsey Piper · 10 Mar 2023 2:16 UTC
128 points
1 comment · 1 min read · EA link
(asteriskmag.com)

Announcing the European Network for AI Safety (ENAIS)

Esben Kran · 22 Mar 2023 17:57 UTC
124 points
3 comments · 3 min read · EA link

Introducing Artists of Impact

Fernando_MG · 25 Mar 2023 3:21 UTC
124 points
11 comments · 2 min read · EA link

Some problems in operations at EA orgs: inputs from a dozen ops staff

Vaidehi Agarwalla · 16 Mar 2023 20:32 UTC
118 points
23 comments · 6 min read · EA link

Abuse in LessWrong and rationalist communities in Bloomberg News

whistleblower67 · 7 Mar 2023 20:36 UTC
115 points
137 comments · 7 min read · EA link
(www.bloomberg.com)

Comments on OpenAI’s “Planning for AGI and beyond”

So8res · 3 Mar 2023 23:01 UTC
115 points
7 comments · 1 min read · EA link

The Overton Window widens: Examples of AI risk in the media

Akash · 23 Mar 2023 17:10 UTC
112 points
11 comments · 1 min read · EA link