New version of Mental Health Navigator website
Emily · 8 Jan 2023 21:37 UTC · 22 points · 8 comments · 1 min read

Potential Future People
TeddyW · 8 Jan 2023 17:20 UTC · 11 points · 6 comments · 1 min read

Moral Weights according to EA Orgs
Simon_M · 8 Jan 2023 16:46 UTC · 102 points · 15 comments · 1 min read

Halifax Monthly Meetup: Moloch in the HRM
Conor Barnes · 8 Jan 2023 14:51 UTC · 4 points · 0 comments · 1 min read

Dangers of deference
TsviBT · 8 Jan 2023 14:41 UTC · 46 points · 7 comments · 2 min read

Should UBI be a top priority for longtermism?
Michael Simm · 8 Jan 2023 12:45 UTC · 2 points · 33 comments · 4 min read

Adding important nuances to “preserve option value” arguments
MichaelA · 8 Jan 2023 9:30 UTC · 36 points · 1 comment · 5 min read

EA Germany’s Strategy for 2023
Sarah Tegeler · 8 Jan 2023 8:30 UTC · 126 points · 13 comments · 15 min read

Is this community over-emphasizing AI alignment?
Lixiang · 8 Jan 2023 6:23 UTC · 1 point · 5 comments · 1 min read

A Different Take on What’s Effective Altruism
Marty Nemko · 8 Jan 2023 2:27 UTC · 0 points · 1 comment · 1 min read
(medium.com)

Learning as much Deep Learning math as I could in 24 hours
Phosphorous · 8 Jan 2023 2:19 UTC · 58 points · 5 comments · 7 min read

David Krueger on AI Alignment in Academia and Coordination
Michaël Trazzi · 7 Jan 2023 21:14 UTC · 32 points · 1 comment · 3 min read
(theinsideview.ai)

[Question] How to create curriculum for self-study towards AI alignment work?
OIUJHKDFS · 7 Jan 2023 19:53 UTC · 10 points · 5 comments · 1 min read

Street Epistemology (EA Shenanigans) - please RSVP
Milli | Martin · 7 Jan 2023 16:39 UTC · 5 points · 0 comments · 1 min read

EA university groups are missing out on most of their potential
Johan de Kock · 7 Jan 2023 12:44 UTC · 50 points · 15 comments · 29 min read

Anchoring focalism and the Identifiable victim effect: Bias in Evaluating AGI X-Risks
Remmelt · 7 Jan 2023 9:59 UTC · −2 points · 1 comment · 1 min read

[Discussion] How Broad is the Human Cognitive Spectrum?
𝕮𝖎𝖓𝖊𝖗𝖆 · 7 Jan 2023 0:59 UTC · 16 points · 1 comment · 1 min read

[Linkpost] Jan Leike on three kinds of alignment taxes
Akash · 6 Jan 2023 23:57 UTC · 29 points · 0 comments · 1 min read

Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism
Ozzie Gooen · 6 Jan 2023 22:59 UTC · 47 points · 3 comments · 14 min read
(quri.substack.com)

[Question] What rationale puts a limit to the cost of an EA’s (or anybody’s) life?
Juergen · 6 Jan 2023 18:59 UTC · 6 points · 1 comment · 1 min read

EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship
EU Policy Careers · 6 Jan 2023 18:28 UTC · 102 points · 5 comments · 19 min read

Effective Altruism Reading List Information Design Poster (2/2)
annaleptikon · 6 Jan 2023 14:49 UTC · 50 points · 0 comments · 5 min read

Consumer Power Initiative - Active Projects and Open Roles
Brad West · 6 Jan 2023 14:40 UTC · 17 points · 0 comments · 3 min read

[Question] Is there an “EA alumni” group?
Jonathan Yan · 6 Jan 2023 10:06 UTC · 19 points · 3 comments · 1 min read

Foundation Entrepreneurship: How the first training program went
Aidan Alexander · 6 Jan 2023 9:17 UTC · 157 points · 6 comments · 6 min read

Machine Learning for Scientific Discovery: AI Safety Camp
Eleni_A · 6 Jan 2023 3:06 UTC · 9 points · 0 comments · 1 min read

Metaculus Beginner Tournament for New Forecasters
Anastasia · 6 Jan 2023 2:35 UTC · 33 points · 5 comments · 1 min read

Transformative AI issues (not just misalignment): an overview
Holden Karnofsky · 6 Jan 2023 2:19 UTC · 31 points · 0 comments · 22 min read
(www.cold-takes.com)

Metaculus Year in Review: 2022
christian · 6 Jan 2023 1:23 UTC · 25 points · 2 comments · 4 min read
(metaculus.medium.com)

AI Safety Camp, Virtual Edition 2023
Linda Linsefors · 6 Jan 2023 0:55 UTC · 24 points · 0 comments · 1 min read

Handling Moral Uncertainty with Average vs. Total Utilitarianism: One Method That Apparently *Doesn’t* Work (But Seemed Like it Should)
Harrison Durland · 5 Jan 2023 22:18 UTC · 10 points · 0 comments · 8 min read

EA Market Testing: Summary of your feedback
david_reinstein · 5 Jan 2023 21:09 UTC · 18 points · 2 comments · 8 min read

Enter Scott Alexander’s Prediction Competition
ChanaMessinger · 5 Jan 2023 20:52 UTC · 18 points · 1 comment · 1 min read

Prioritization Research Careers: Probably Good
Probably Good · 5 Jan 2023 15:05 UTC · 51 points · 1 comment · 1 min read
(www.probablygood.org)

On being compromised
Gavin · 5 Jan 2023 12:56 UTC · 187 points · 46 comments · 1 min read

Skill up in ML for AI safety with the Intro to ML Safety course (Spring 2023)
james · 5 Jan 2023 11:02 UTC · 36 points · 3 comments · 2 min read

Misleading phrase in a GiveWell Youtube ad
Thomas Kwa · 5 Jan 2023 10:28 UTC · 85 points · 13 comments · 1 min read

Illusion of truth effect and Ambiguity effect: Bias in Evaluating AGI X-Risks
Remmelt · 5 Jan 2023 4:05 UTC · 1 point · 1 comment · 1 min read

When you plan according to your AI timelines, should you put more weight on the median future, or the median future | eventual AI alignment success? ⚖️
Jeffrey Ladish · 5 Jan 2023 1:55 UTC · 16 points · 2 comments · 2 min read

Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment
johnjnay · 4 Jan 2023 22:22 UTC · 10 points · 6 comments · 8 min read

I am working on a project to view sustainability and welfare in a new evolutionary light
Sherry · 4 Jan 2023 22:11 UTC · 7 points · 3 comments · 2 min read

ChatGPT understands, but largely does not generate Spanglish (and other code-mixed) text
Milan Weibel · 4 Jan 2023 22:10 UTC · 6 points · 0 comments · 4 min read
(www.lesswrong.com)

The value of a statistical life
JacksonHarrison · 4 Jan 2023 10:58 UTC · 6 points · 2 comments · 7 min read

Bill Burr on Boiling Lobsters (also manliness and AW)
Lixiang · 4 Jan 2023 7:55 UTC · 33 points · 15 comments · 1 min read

Announcing Insights for Impact
Christian Pearson · 4 Jan 2023 7:00 UTC · 80 points · 6 comments · 1 min read

[Question] Do people have a form or resources for capturing indirect interpersonal impacts?
PeterSlattery · 4 Jan 2023 4:47 UTC · 47 points · 6 comments · 1 min read

Normalcy bias and Base rate neglect: Bias in Evaluating AGI X-Risks
Remmelt · 4 Jan 2023 3:16 UTC · 5 points · 0 comments · 1 min read

“AI” is an indexical
ThomasW · 3 Jan 2023 22:00 UTC · 23 points · 2 comments · 1 min read

An approach for getting better at practicing any skill
jacquesthibs · 3 Jan 2023 17:47 UTC · 9 points · 0 comments · 1 min read

Holden Karnofsky Interview about Most Important Century & Transformative AI
Dwarkesh Patel · 3 Jan 2023 17:31 UTC · 29 points · 2 comments · 1 min read