The most good system visual and stabilization steps

brb243 · 14 Mar 2022 23:54 UTC
3 points
0 comments · 1 min read · EA link

Five recommendations for better political discourse

Tobias_Baumann · 14 Mar 2022 22:35 UTC
31 points
0 comments · 7 min read · EA link
(centerforreducingsuffering.org)

Apply Now | EA for Christians Annual Conference | 23 April 2022

JDBauman · 14 Mar 2022 19:47 UTC
26 points
2 comments · 1 min read · EA link

Twitter-length responses to 24 AI alignment arguments

RobBensinger · 14 Mar 2022 19:34 UTC
67 points
17 comments · 8 min read · EA link

Potentially great ways forecasting can improve the longterm future

Linch · 14 Mar 2022 19:21 UTC
43 points
6 comments · 6 min read · EA link

Early-warning Forecasting Center: What it is, and why it’d be cool

Linch · 14 Mar 2022 19:20 UTC
62 points
8 comments · 11 min read · EA link

[Question] Are there any EA-based virtual assistant agencies?

Benjamin Start · 14 Mar 2022 18:09 UTC
9 points
2 comments · 1 min read · EA link

LW4EA: Paper-Reading for Gears

Jeremy · 14 Mar 2022 17:26 UTC
18 points
6 comments · 1 min read · EA link
(www.lesswrong.com)

There should be an AI safety project board

mariushobbhahn · 14 Mar 2022 16:08 UTC
24 points
3 comments · 1 min read · EA link

Expression of interest for a writer at 80,000 Hours

80000_Hours · 14 Mar 2022 14:45 UTC
13 points
0 comments · 3 min read · EA link

Collection of definitions of “good judgement”

MichaelA🔸 · 14 Mar 2022 14:14 UTC
31 points
1 comment · 12 min read · EA link

EA Projects I’d Like to See

finm · 13 Mar 2022 18:12 UTC
153 points
42 comments · 27 min read · EA link
(www.finmoorhouse.com)

Policy Work in the European Parliament—Impressions from my Internship

EP intern · 13 Mar 2022 13:21 UTC
48 points
14 comments · 5 min read · EA link

New GPT3 Impressive Capabilities—InstructGPT3 [1/2]

simeon_c · 13 Mar 2022 10:45 UTC
49 points
4 comments · 7 min read · EA link

Anecdotes Can Be Strong Evidence and Bayes Theorem Proves It

FCCC · 13 Mar 2022 4:37 UTC
15 points
5 comments · 4 min read · EA link

Let Russians go abroad

Viktoria Malyasova · 12 Mar 2022 17:22 UTC
115 points
17 comments · 3 min read · EA link

A new media outlet focused in part on philanthropy

teddyschleifer · 12 Mar 2022 17:21 UTC
47 points
14 comments · 1 min read · EA link

[Question] Can anyone seize the opportunity to advise philanthropists on the war?

SebastianSchmidt · 12 Mar 2022 13:29 UTC
3 points
0 comments · 1 min read · EA link

Community Builder Writing Contest: $20,000 in prizes for reflections

Akash · 12 Mar 2022 1:53 UTC
39 points
24 comments · 5 min read · EA link

Followup on Terminator

skluug · 12 Mar 2022 1:11 UTC
32 points
0 comments · 9 min read · EA link
(skluug.substack.com)

Responsible Transparency Consumption

Jeff Kaufman 🔸 · 11 Mar 2022 21:34 UTC
44 points
2 comments · 2 min read · EA link

Podcast: Samo Burja on the war in Ukraine, avoiding nuclear war and the longer term implications

Gus Docker · 11 Mar 2022 18:50 UTC
4 points
6 comments · 14 min read · EA link
(www.utilitarianpodcast.com)

Companies with the most EAs and those with the biggest potential for new Workplace Groups

High Impact Professionals · 11 Mar 2022 15:24 UTC
121 points
7 comments · 3 min read · EA link

Mission-correlated investing: Examples of mission hedging and ‘leveraging’

jh · 11 Mar 2022 9:33 UTC
25 points
1 comment · 7 min read · EA link

Mission correlation, more than just hedging

jh · 11 Mar 2022 9:32 UTC
29 points
5 comments · 3 min read · EA link

The EA Behavioral Science Newsletter #4

PeterSlattery · 11 Mar 2022 8:16 UTC
17 points
1 comment · 10 min read · EA link

[Crosspost] How do Effective Altruist recommendations change in times of war? [From Marginal Revolution]

Jpmos · 11 Mar 2022 5:09 UTC
17 points
1 comment · 1 min read · EA link

🇺🇦Ukraine Situation Related Life Principles Discussion—OpenPrinciples Event

Ti Guo · 11 Mar 2022 3:30 UTC
3 points
0 comments · 1 min read · EA link

The Bunker Project: are there considerations for social cohesiveness?

Lakin · 11 Mar 2022 2:53 UTC
13 points
1 comment · 1 min read · EA link

EA Librarian Update

calebp · 11 Mar 2022 1:22 UTC
31 points
4 comments · 5 min read · EA link

What does orthogonality mean? (EA Librarian)

calebp · 11 Mar 2022 1:18 UTC
9 points
0 comments · 2 min read · EA link

What are the strongest arguments against working on existential risk? (EA Librarian)

calebp · 11 Mar 2022 1:14 UTC
8 points
3 comments · 4 min read · EA link

What are the different types of longtermisms? (EA Librarian)

calebp · 11 Mar 2022 1:09 UTC
17 points
0 comments · 3 min read · EA link

EAGxBoston: Updates and Info from the Organizing Team

Kaleem · 11 Mar 2022 0:13 UTC
65 points
9 comments · 7 min read · EA link

[Question] Labeling cash transfers to solve charcoal-related problems?

brb243 · 11 Mar 2022 0:12 UTC
6 points
9 comments · 1 min read · EA link

[Question] Ukraine: can we talk to the Russian soldiers?

DPiepgrass · 11 Mar 2022 0:12 UTC
31 points
3 comments · 2 min read · EA link

Update from Open Philanthropy’s Longtermist EA Movement-Building team

ClaireZabel · 10 Mar 2022 19:37 UTC
200 points
19 comments · 12 min read · EA link

Samotsvety Nuclear Risk Forecasts — March 2022

NunoSempere · 10 Mar 2022 18:52 UTC
155 points
54 comments · 6 min read · EA link

“cocoons”: an idea to critique.

Thomas · 10 Mar 2022 15:28 UTC
−2 points
5 comments · 1 min read · EA link

Participate in the Next Generation for Biosecurity Competition

jtm · 10 Mar 2022 14:49 UTC
61 points
3 comments · 3 min read · EA link

Lessons and results from workplace giving talks

Jack Lewars · 10 Mar 2022 10:26 UTC
44 points
1 comment · 7 min read · EA link

AI Value Alignment Speaker Series Presented By EA Berkeley

Mahendra Prasad · 10 Mar 2022 1:33 UTC
2 points
0 comments · 1 min read · EA link

A Comparison of Donor-Advised Fund Providers

MichaelDickens · 9 Mar 2022 18:53 UTC
103 points
29 comments · 12 min read · EA link

Concerns with the Wellbeing of Future Generations Bill

Larks · 9 Mar 2022 18:12 UTC
126 points
37 comments · 21 min read · EA link

“Intro to brain-like-AGI safety” series—halfway point!

Steven Byrnes · 9 Mar 2022 15:21 UTC
8 points
0 comments · 2 min read · EA link

[Question] When did EA miss a great opportunity to do good?

JamesÖz · 9 Mar 2022 10:50 UTC
40 points
23 comments · 2 min read · EA link

Ask AI companies about what they are doing for AI safety?

mic · 8 Mar 2022 21:54 UTC
44 points
1 comment · 2 min read · EA link

On presenting the case for AI risk

Aryeh Englander · 8 Mar 2022 21:37 UTC
114 points
12 comments · 4 min read · EA link

AI Risk is like Terminator; Stop Saying it’s Not

skluug · 8 Mar 2022 19:17 UTC
189 points
43 comments · 10 min read · EA link
(skluug.substack.com)

I want Future Perfect, but for science publications

James Lin · 8 Mar 2022 17:09 UTC
67 points
8 comments · 5 min read · EA link