Not all x-risk is the same: implications of non-human-descendants

Nikola · 18 Dec 2021 21:22 UTC
36 points
4 comments · 5 min read · EA link

C.S. Lewis on Value Lock-In

calebo · 18 Dec 2021 20:00 UTC
16 points
0 comments · 3 min read · EA link

Understanding Open Philanthropy’s evolution on migration policy

vipulnaik · 18 Dec 2021 19:45 UTC
22 points
3 comments · 12 min read · EA link

[Extended Deadline: Jan 23rd] Announcing the PIBBSS Summer Research Fellowship

nora · 18 Dec 2021 16:54 UTC
36 points
1 comment · 1 min read · EA link

Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship

adamShimi · 18 Dec 2021 15:25 UTC
37 points
5 comments · 10 min read · EA link

Can the most effective charities be political?

Dacyn · 18 Dec 2021 10:23 UTC
7 points
4 comments · 2 min read · EA link

Fish Welfare Initiative: The 2.5 Year Retrospective

haven · 18 Dec 2021 9:07 UTC
94 points
2 comments · 12 min read · EA link

A Case for Improving Global Equity as Radical Longtermism

LiaH · 18 Dec 2021 6:32 UTC
1 point
5 comments · 1 min read · EA link

A Tree of Light

Jarred Filmer · 18 Dec 2021 5:27 UTC
8 points
0 comments · 3 min read · EA link

Coaches for exploring careers?

Joe Connolly · 18 Dec 2021 5:27 UTC
11 points
7 comments · 1 min read · EA link

[Question] Meta-EA people: How many of your own days would you trade to have rough impact estimates for all of the projects you are considering pursuing?

stag · 18 Dec 2021 0:37 UTC
4 points
2 comments · 1 min read · EA link

[Question] What are the most underfunded EA organizations?

Question Mark · 17 Dec 2021 23:28 UTC
3 points
5 comments · 3 min read · EA link

EA outreach to high school competitors

Nikola · 17 Dec 2021 17:18 UTC
33 points
11 comments · 3 min read · EA link

Prioritization when size matters: Model

jh · 17 Dec 2021 16:16 UTC
23 points
0 comments · 8 min read · EA link

[Question] Which EA orgs, programs, companies, etc. started as side projects?

kyle_fish · 17 Dec 2021 12:13 UTC
19 points
2 comments · 1 min read · EA link

[Question] Why do you find the Repugnant Conclusion repugnant?

Will Bradshaw · 17 Dec 2021 10:00 UTC
58 points
60 comments · 1 min read · EA link

[Question] With how many EA professionals have you noticed some degree of dishonesty about how impactful it would be to work for them?

stag · 17 Dec 2021 7:23 UTC
17 points
6 comments · 1 min read · EA link

Countermeasures & substitution effects in biosecurity

ASB · 16 Dec 2021 21:40 UTC
87 points
6 comments · 3 min read · EA link

Six Takeaways from EA Global and EA Retreats

Akash · 16 Dec 2021 21:14 UTC
55 points
4 comments · 11 min read · EA link

Reviews of “Is power-seeking AI an existential risk?”

Joe_Carlsmith · 16 Dec 2021 20:50 UTC
71 points
4 comments · 1 min read · EA link

Do sour grapes apply to morality?

Nikola · 16 Dec 2021 18:00 UTC
21 points
3 comments · 2 min read · EA link

High School Seniors React to 80k Advice

johnburidan · 16 Dec 2021 17:46 UTC
178 points
9 comments · 3 min read · EA link

Annual Reviews Aren’t Just for Organizations

kyle_fish · 16 Dec 2021 13:05 UTC
19 points
4 comments · 2 min read · EA link

Biosecurity needs engineers and materials scientists

Will Bradshaw · 16 Dec 2021 11:37 UTC
161 points
11 comments · 3 min read · EA link

Opportunity to start a high-impact nonprofit—applications for the 2022-23 Charity Entrepreneurship Incubation Programs are now open!

KarolinaSarek🔸 · 16 Dec 2021 11:33 UTC
94 points
2 comments · 5 min read · EA link

[Question] Where are you donating in 2021, and why?

Aaron Gertler 🔸 · 16 Dec 2021 9:18 UTC
24 points
21 comments · 1 min read · EA link

My Overview of the AI Alignment Landscape: A Bird’s Eye View

Neel Nanda · 15 Dec 2021 23:46 UTC
45 points
15 comments · 16 min read · EA link
(www.alignmentforum.org)

AI Safety: Applying to Graduate Studies

frances_lorenz · 15 Dec 2021 22:56 UTC
23 points
0 comments · 12 min read · EA link

A model for engagement growth in universities

Nikola · 15 Dec 2021 19:11 UTC
34 points
3 comments · 6 min read · EA link

Linkpost for “Organizations vs. Getting Stuff Done” and discussion of Zvi’s post about SFF and the S-process (or; Doing Actual Thing)

quinn · 15 Dec 2021 14:16 UTC
10 points
6 comments · 5 min read · EA link
(humaniterations.net)

Zvi’s Thoughts on the Survival and Flourishing Fund (SFF)

Zvi · 15 Dec 2021 2:44 UTC
81 points
8 comments · 65 min read · EA link

Apply for Stanford Existential Risks Initiative (SERI) Postdoc

Vael Gates · 14 Dec 2021 21:50 UTC
28 points
2 comments · 1 min read · EA link

ARC is hiring alignment theory researchers

Paul_Christiano · 14 Dec 2021 20:17 UTC
89 points
4 comments · 1 min read · EA link

Against Negative Utilitarianism

Omnizoid · 14 Dec 2021 20:17 UTC
1 point
59 comments · 4 min read · EA link

We summarized the top info hazard articles and made a prioritized reading list

Corey_Wood · 14 Dec 2021 19:46 UTC
41 points
2 comments · 22 min read · EA link

Arguing for utilitarianism

Omnizoid · 14 Dec 2021 19:31 UTC
3 points
2 comments · 64 min read · EA link

Ngo’s view on alignment difficulty

richard_ngo · 14 Dec 2021 19:03 UTC
19 points
6 comments · 17 min read · EA link

A huge opportunity for impact: movement building at top universities

Alex HT · 14 Dec 2021 14:37 UTC
178 points
50 comments · 12 min read · EA link

[Question] What advice would you give to the world’s most famous philanthropist: Father Christmas?

Barry Grimes · 14 Dec 2021 10:58 UTC
32 points
2 comments · 1 min read · EA link

80,000 Hours wants to talk to more people than ever

Habiba Banu · 14 Dec 2021 10:21 UTC
134 points
8 comments · 2 min read · EA link

[Feedback Wanted] DAF Donation Approach

Will Hastings · 14 Dec 2021 5:45 UTC
12 points
7 comments · 2 min read · EA link

[Question] Celebrating 2021: What are your favourite wins & good news for EA, the world and yourself?

Luke Freeman · 14 Dec 2021 3:56 UTC
19 points
9 comments · 1 min read · EA link

Free health coaching for anyone working on AI safety

Sgestal · 14 Dec 2021 0:28 UTC
29 points
0 comments · 1 min read · EA link

No matter your job, here’s 3 evidence-based ways anyone can have a real impact − 80,000 Hours

Jesse Rothman · 14 Dec 2021 0:00 UTC
1 point
0 comments · 1 min read · EA link
(80000hours.org)

AMA: Seth Baum, Global Catastrophic Risk Institute

SethBaum · 13 Dec 2021 19:13 UTC
38 points
23 comments · 2 min read · EA link

External Evaluation of the EA Wiki

NunoSempere · 13 Dec 2021 17:09 UTC
78 points
18 comments · 19 min read · EA link

Response to Recent Criticisms of Longtermism

ab · 13 Dec 2021 13:36 UTC
249 points
31 comments · 28 min read · EA link

Stackelberg Games and Cooperative Commitment: My Thoughts and Reflections on a 2-Month Research Project

Ben Bucknall · 13 Dec 2021 10:49 UTC
18 points
1 comment · 9 min read · EA link

Nines of safety: Terence Tao’s proposed unit of measurement of risk

anson · 12 Dec 2021 18:01 UTC
41 points
17 comments · 4 min read · EA link

I need help on choosing a research question

Hashem · 12 Dec 2021 17:02 UTC
2 points
5 comments · 1 min read · EA link