David Krueger on AI Alignment in Academia and Coordination

Michaël Trazzi · 7 Jan 2023 21:14 UTC
32 points
1 comment · 3 min read · EA link
(theinsideview.ai)

[Question] How to create curriculum for self-study towards AI alignment work?

OIUJHKDFS · 7 Jan 2023 19:53 UTC
10 points
5 comments · 1 min read · EA link

Street Epistemology (EA Shenanigans) - please RSVP

Milli🔸 · 7 Jan 2023 16:39 UTC
5 points
0 comments · 1 min read · EA link

EA university groups are missing out on most of their potential

Johan de Kock · 7 Jan 2023 12:44 UTC
52 points
15 comments · 30 min read · EA link

Anchoring focalism and the Identifiable victim effect: Bias in Evaluating AGI X-Risks

Remmelt · 7 Jan 2023 9:59 UTC
−2 points
1 comment · 1 min read · EA link

[Discussion] How Broad is the Human Cognitive Spectrum?

𝕮𝖎𝖓𝖊𝖗𝖆 · 7 Jan 2023 0:59 UTC
16 points
1 comment · 1 min read · EA link

[Linkpost] Jan Leike on three kinds of alignment taxes

Akash · 6 Jan 2023 23:57 UTC
29 points
0 comments · 1 min read · EA link

Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism

Ozzie Gooen · 6 Jan 2023 22:59 UTC
47 points
3 comments · 14 min read · EA link
(quri.substack.com)

[Question] What rationale puts a limit to the cost of an EA’s (or anybody’s) life?

Juergen · 6 Jan 2023 18:59 UTC
6 points
1 comment · 1 min read · EA link

EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship

EU Policy Careers · 6 Jan 2023 18:28 UTC
108 points
5 comments · 19 min read · EA link

Effective Altruism Reading List Information Design Poster (2/2)

annaleptikon · 6 Jan 2023 14:49 UTC
52 points
1 comment · 5 min read · EA link

Consumer Power Initiative - Active Projects and Open Roles

Brad West🔸 · 6 Jan 2023 14:40 UTC
17 points
0 comments · 3 min read · EA link

[Question] Is there an “EA alumni” group?

Jonathan Yan · 6 Jan 2023 10:06 UTC
19 points
3 comments · 1 min read · EA link

Foundation Entrepreneurship—How the first training program went

Aidan Alexander · 6 Jan 2023 9:17 UTC
158 points
6 comments · 6 min read · EA link

Machine Learning for Scientific Discovery—AI Safety Camp

Eleni_A · 6 Jan 2023 3:06 UTC
9 points
0 comments · 1 min read · EA link

Metaculus Beginner Tournament for New Forecasters

Anastasia · 6 Jan 2023 2:35 UTC
33 points
5 comments · 1 min read · EA link

Transformative AI issues (not just misalignment): an overview

Holden Karnofsky · 6 Jan 2023 2:19 UTC
36 points
0 comments · 22 min read · EA link
(www.cold-takes.com)

Metaculus Year in Review: 2022

christian · 6 Jan 2023 1:23 UTC
25 points
2 comments · 4 min read · EA link
(metaculus.medium.com)

AI Safety Camp, Virtual Edition 2023

Linda Linsefors · 6 Jan 2023 0:55 UTC
24 points
0 comments · 1 min read · EA link

Handling Moral Uncertainty with Average vs. Total Utilitarianism: One Method That Apparently *Doesn’t* Work (But Seemed Like it Should)

Marcel D · 5 Jan 2023 22:18 UTC
10 points
0 comments · 8 min read · EA link

EA Market Testing: Summary of your feedback

david_reinstein · 5 Jan 2023 21:09 UTC
18 points
2 comments · 8 min read · EA link

Enter Scott Alexander’s Prediction Competition

ChanaMessinger · 5 Jan 2023 20:52 UTC
18 points
1 comment · 1 min read · EA link

Prioritization Research Careers—Probably Good

Probably Good · 5 Jan 2023 15:05 UTC
51 points
1 comment · 1 min read · EA link
(www.probablygood.org)

On being compromised

Gavin · 5 Jan 2023 12:56 UTC
187 points
46 comments · 1 min read · EA link

Skill up in ML for AI safety with the Intro to ML Safety course (Spring 2023)

james · 5 Jan 2023 11:02 UTC
36 points
3 comments · 2 min read · EA link

Misleading phrase in a GiveWell Youtube ad

Thomas Kwa · 5 Jan 2023 10:28 UTC
85 points
13 comments · 1 min read · EA link

Illusion of truth effect and Ambiguity effect: Bias in Evaluating AGI X-Risks

Remmelt · 5 Jan 2023 4:05 UTC
1 point
1 comment · 1 min read · EA link

When you plan according to your AI timelines, should you put more weight on the median future, or the median future | eventual AI alignment success? ⚖️

Jeffrey Ladish · 5 Jan 2023 1:55 UTC
16 points
2 comments · 2 min read · EA link

Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment

johnjnay · 4 Jan 2023 22:22 UTC
10 points
6 comments · 8 min read · EA link

I am working on a project to view sustainability and welfare in a new evolutionary light

Sherry · 4 Jan 2023 22:11 UTC
7 points
3 comments · 2 min read · EA link

ChatGPT understands, but largely does not generate Spanglish (and other code-mixed) text

Milan Weibel🔹 · 4 Jan 2023 22:10 UTC
6 points
0 comments · 4 min read · EA link
(www.lesswrong.com)

The value of a statistical life

JacksonHarrison · 4 Jan 2023 10:58 UTC
6 points
2 comments · 7 min read · EA link

Bill Burr on Boiling Lobsters (also manliness and AW)

Lixiang · 4 Jan 2023 7:55 UTC
33 points
15 comments · 1 min read · EA link

Announcing Insights for Impact

Christian Pearson · 4 Jan 2023 7:00 UTC
80 points
6 comments · 1 min read · EA link

[Question] Do people have a form or resources for capturing indirect interpersonal impacts?

PeterSlattery · 4 Jan 2023 4:47 UTC
47 points
6 comments · 1 min read · EA link

Normalcy bias and Base rate neglect: Bias in Evaluating AGI X-Risks

Remmelt · 4 Jan 2023 3:16 UTC
5 points
0 comments · 1 min read · EA link

“AI” is an indexical

TW123 · 3 Jan 2023 22:00 UTC
23 points
2 comments · 1 min read · EA link

An approach for getting better at practicing any skill

jacquesthibs · 3 Jan 2023 17:47 UTC
9 points
0 comments · 1 min read · EA link

Holden Karnofsky Interview about Most Important Century & Transformative AI

Dwarkesh Patel · 3 Jan 2023 17:31 UTC
29 points
2 comments · 1 min read · EA link

EA Global: London 2023

Eli_Nathan · 3 Jan 2023 15:32 UTC
13 points
1 comment · 1 min read · EA link

EA Global: Bay Area 2023

Eli_Nathan · 3 Jan 2023 15:26 UTC
8 points
0 comments · 1 min read · EA link

Safety Sells: For-profit investing into civilizational resilience (food security, biosecurity)

FGH · 3 Jan 2023 12:24 UTC
30 points
4 comments · 6 min read · EA link

[Question] How have shorter AI timelines been affecting you, and how have you been responding to them?

Liav.Koren · 3 Jan 2023 4:20 UTC
35 points
15 comments · 1 min read · EA link

LW4EA: Elastic Productivity Tools

Jeremy · 3 Jan 2023 3:18 UTC
−1 points
5 comments · 1 min read · EA link
(www.lesswrong.com)

Status quo bias; System justification

Remmelt · 3 Jan 2023 2:50 UTC
4 points
1 comment · 1 min read · EA link

AI Safety Doesn’t Have to be Weird

Mica White · 2 Jan 2023 21:56 UTC
11 points
1 comment · 2 min read · EA link

If EA Community-Building Could Be Net-Negative, What Follows?

joshcmorrison · 2 Jan 2023 19:21 UTC
153 points
78 comments · 5 min read · EA link

Prediction Markets for Science

vaniver · 2 Jan 2023 17:55 UTC
14 points
4 comments · 1 min read · EA link

Planning and documentation: should we do more (or less)?

david_reinstein · 2 Jan 2023 17:35 UTC
4 points
0 comments · 2 min read · EA link

Community Building from scratch: The first year of EA Hungary

gergo · 2 Jan 2023 16:35 UTC
96 points
8 comments · 7 min read · EA link