Countermeasures & substitution effects in biosecurity

ASB · 16 Dec 2021 21:40 UTC
87 points
6 comments · 3 min read · EA link

Six Takeaways from EA Global and EA Retreats

Akash · 16 Dec 2021 21:14 UTC
55 points
4 comments · 11 min read · EA link

Reviews of “Is power-seeking AI an existential risk?”

Joe_Carlsmith · 16 Dec 2021 20:50 UTC
71 points
4 comments · 1 min read · EA link

Do sour grapes apply to morality?

Nikola · 16 Dec 2021 18:00 UTC
21 points
3 comments · 2 min read · EA link

High School Seniors React to 80k Advice

johnburidan · 16 Dec 2021 17:46 UTC
178 points
9 comments · 3 min read · EA link

Annual Reviews Aren’t Just for Organizations

kyle_fish · 16 Dec 2021 13:05 UTC
19 points
4 comments · 2 min read · EA link

Biosecurity needs engineers and materials scientists

Will Bradshaw · 16 Dec 2021 11:37 UTC
161 points
11 comments · 3 min read · EA link

Opportunity to start a high-impact nonprofit—applications for the 2022-23 Charity Entrepreneurship Incubation Programs are now open!

KarolinaSarek🔸 · 16 Dec 2021 11:33 UTC
94 points
2 comments · 5 min read · EA link

[Question] Where are you donating in 2021, and why?

Aaron Gertler 🔸 · 16 Dec 2021 9:18 UTC
24 points
21 comments · 1 min read · EA link

My Overview of the AI Alignment Landscape: A Bird’s Eye View

Neel Nanda · 15 Dec 2021 23:46 UTC
45 points
15 comments · 16 min read · EA link
(www.alignmentforum.org)

AI Safety: Applying to Graduate Studies

frances_lorenz · 15 Dec 2021 22:56 UTC
23 points
0 comments · 12 min read · EA link

A model for engagement growth in universities

Nikola · 15 Dec 2021 19:11 UTC
34 points
3 comments · 6 min read · EA link

Linkpost for “Organizations vs. Getting Stuff Done” and discussion of Zvi’s post about SFF and the S-process (or; Doing Actual Thing)

quinn · 15 Dec 2021 14:16 UTC
10 points
6 comments · 5 min read · EA link
(humaniterations.net)

Zvi’s Thoughts on the Survival and Flourishing Fund (SFF)

Zvi · 15 Dec 2021 2:44 UTC
81 points
8 comments · 65 min read · EA link

Apply for Stanford Existential Risks Initiative (SERI) Postdoc

Vael Gates · 14 Dec 2021 21:50 UTC
28 points
2 comments · 1 min read · EA link

ARC is hiring alignment theory researchers

Paul_Christiano · 14 Dec 2021 20:17 UTC
89 points
4 comments · 1 min read · EA link

Against Negative Utilitarianism

Omnizoid · 14 Dec 2021 20:17 UTC
1 point
59 comments · 4 min read · EA link

We summarized the top info hazard articles and made a prioritized reading list

Corey_Wood · 14 Dec 2021 19:46 UTC
41 points
2 comments · 22 min read · EA link

Arguing for utilitarianism

Omnizoid · 14 Dec 2021 19:31 UTC
3 points
2 comments · 64 min read · EA link

Ngo’s view on alignment difficulty

richard_ngo · 14 Dec 2021 19:03 UTC
19 points
6 comments · 17 min read · EA link

A huge opportunity for impact: movement building at top universities

Alex HT · 14 Dec 2021 14:37 UTC
178 points
50 comments · 12 min read · EA link

[Question] What advice would you give to the world’s most famous philanthropist: Father Christmas?

Barry Grimes · 14 Dec 2021 10:58 UTC
32 points
2 comments · 1 min read · EA link

80,000 Hours wants to talk to more people than ever

Habiba Banu · 14 Dec 2021 10:21 UTC
134 points
8 comments · 2 min read · EA link

[Feedback Wanted] DAF Donation Approach

Will Hastings · 14 Dec 2021 5:45 UTC
12 points
7 comments · 2 min read · EA link

[Question] Celebrating 2021: What are your favourite wins & good news for EA, the world and yourself?

Luke Freeman · 14 Dec 2021 3:56 UTC
19 points
9 comments · 1 min read · EA link

Free health coaching for anyone working on AI safety

Sgestal · 14 Dec 2021 0:28 UTC
29 points
0 comments · 1 min read · EA link

No matter your job, here’s 3 evidence-based ways anyone can have a real impact − 80,000 Hours

Jesse Rothman · 14 Dec 2021 0:00 UTC
1 point
0 comments · 1 min read · EA link
(80000hours.org)

AMA: Seth Baum, Global Catastrophic Risk Institute

SethBaum · 13 Dec 2021 19:13 UTC
38 points
23 comments · 2 min read · EA link

External Evaluation of the EA Wiki

NunoSempere · 13 Dec 2021 17:09 UTC
78 points
18 comments · 19 min read · EA link

Response to Recent Criticisms of Longtermism

ab · 13 Dec 2021 13:36 UTC
249 points
31 comments · 28 min read · EA link

Stackelberg Games and Cooperative Commitment: My Thoughts and Reflections on a 2-Month Research Project

Ben Bucknall · 13 Dec 2021 10:49 UTC
18 points
1 comment · 9 min read · EA link

Nines of safety: Terence Tao’s proposed unit of measurement of risk

anson · 12 Dec 2021 18:01 UTC
41 points
17 comments · 4 min read · EA link

I need help on choosing a research question

Hashem · 12 Dec 2021 17:02 UTC
2 points
5 comments · 1 min read · EA link

’Tis The Season of Change

Jaime Sevilla · 12 Dec 2021 14:02 UTC
37 points
2 comments · 5 min read · EA link

Reflections on EA Global London

PabloAMC 🔸 · 12 Dec 2021 0:56 UTC
37 points
3 comments · 3 min read · EA link

EA Dinner Covid Logistics

Jeff Kaufman 🔸 · 11 Dec 2021 21:50 UTC
11 points
4 comments · 2 min read · EA link
(www.jefftk.com)

The Maker of MIND

Tomas B. · 11 Dec 2021 17:33 UTC
8 points
4 comments · 11 min read · EA link

An Emergency Fund for Effective Altruists

bob · 11 Dec 2021 4:49 UTC
86 points
26 comments · 1 min read · EA link

I am taking a break from the EA community. Here are a few words about why.

eaforumthrowaway20211210 · 11 Dec 2021 4:49 UTC
14 points
6 comments · 1 min read · EA link

What role should evolutionary analogies play in understanding AI takeoff speeds?

anson · 11 Dec 2021 1:16 UTC
12 points
0 comments · 42 min read · EA link

[Question] Any initiative to introduce small and cheap CO₂ sensors?

Martin (Huge) Vlach · 10 Dec 2021 9:50 UTC
2 points
1 comment · 1 min read · EA link

Enabling more feedback

JJ Hepburn · 10 Dec 2021 6:52 UTC
41 points
3 comments · 3 min read · EA link

How I Think Brains Work

HoratioVonBecker · 10 Dec 2021 5:58 UTC
−4 points
0 comments · 1 min read · EA link

How I’d set up a medical practice

HoratioVonBecker · 10 Dec 2021 4:05 UTC
−7 points
2 comments · 1 min read · EA link

You Should Not Like Being Smarter Than The Average Human Being

HoratioVonBecker · 10 Dec 2021 2:41 UTC
−9 points
3 comments · 1 min read · EA link

Monitoring Wild Animal Welfare via Vocalizations

Hannah McKay🔸 · 10 Dec 2021 0:37 UTC
79 points
1 comment · 24 min read · EA link

Potentially high-impact job: Colorado Department of Agriculture, Bureau of Animal Protection Manager

alene · 10 Dec 2021 0:00 UTC
16 points
6 comments · 1 min read · EA link
(www.governmentjobs.com)

Aiming for the minimum of self-care is dangerous

Tessa A 🔸 · 9 Dec 2021 21:27 UTC
230 points
32 comments · 6 min read · EA link

Help CEA plan future events and conferences

Lizka · 9 Dec 2021 19:54 UTC
23 points
16 comments · 1 min read · EA link

Conversation on technology forecasting and gradualism

RobBensinger · 9 Dec 2021 19:00 UTC
15 points
3 comments · 31 min read · EA link