StrongMinds should not be a top-rated charity (yet)

Simon_M · 27 Dec 2022 23:53 UTC
334 points
129 comments · 6 min read · EA link
(simonm.substack.com)

GiveDirectly $1 Million Match Campaign

adam galas · 27 Dec 2022 21:29 UTC
33 points
4 comments · 1 min read · EA link

Why GiveWell should use complete uncertainty quantification

Tanae · 27 Dec 2022 20:11 UTC
32 points
1 comment · 1 min read · EA link
(suboptimal.substack.com)

How to Catch a ChatGPT Cheat: 7 Practical Tips

Marshall · 27 Dec 2022 16:09 UTC
8 points
2 comments · 4 min read · EA link

The AIA and its Brussels Effect

Kathryn O'Rourke · 27 Dec 2022 16:01 UTC
16 points
0 comments · 5 min read · EA link

Why The Focus on Expected Utility Maximisers?

𝕮𝖎𝖓𝖊𝖗𝖆 · 27 Dec 2022 15:51 UTC
11 points
1 comment · 1 min read · EA link

Presumptive Listening: sticking to familiar concepts and missing the outer reasoning paths

Remmelt · 27 Dec 2022 15:40 UTC
3 points
0 comments · 1 min read · EA link

Mere exposure effect: Bias in Evaluating AGI X-Risks

Remmelt · 27 Dec 2022 14:05 UTC
4 points
1 comment · 1 min read · EA link

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?

MikhailSamin · 27 Dec 2022 11:07 UTC
39 points
10 comments · 1 min read · EA link

Institutions Cannot Restrain Dark-Triad AI Exploitation

Remmelt · 27 Dec 2022 10:34 UTC
8 points
0 comments · 1 min read · EA link

Introduction: Bias in Evaluating AGI X-Risks

Remmelt · 27 Dec 2022 10:27 UTC
4 points
0 comments · 1 min read · EA link

Effective Animal Charity recommendations?

Frederik · 27 Dec 2022 9:11 UTC
28 points
10 comments · 1 min read · EA link

The Non-Identity Problem explained colloquially

Alex Vellins · 27 Dec 2022 7:22 UTC
5 points
0 comments · 2 min read · EA link

What would you do if you were the only intelligent being in the universe?

Jonathan Yan · 27 Dec 2022 3:54 UTC
7 points
2 comments · 1 min read · EA link
(www.lesswrong.com)

How ‘Human-Human’ dynamics give way to ‘Human-AI’ and then ‘AI-AI’ dynamics

Remmelt · 27 Dec 2022 3:16 UTC
4 points
0 comments · 1 min read · EA link

Nine Points of Collective Insanity

Remmelt · 27 Dec 2022 3:14 UTC
1 point
0 comments · 1 min read · EA link

Consider Financial Independence First

River · 27 Dec 2022 2:04 UTC
51 points
9 comments · 5 min read · EA link

Longtermism and Animal Farming Trajectories

MichaelDello · 27 Dec 2022 0:58 UTC
51 points
8 comments · 17 min read · EA link
(www.sentienceinstitute.org)

Against Agents as an Approach to Aligned Transformative AI

𝕮𝖎𝖓𝖊𝖗𝖆 · 27 Dec 2022 0:47 UTC
4 points
0 comments · 1 min read · EA link

What are the best charities to donate to in 2022?

Luke Freeman · 26 Dec 2022 23:38 UTC
33 points
3 comments · 7 min read · EA link
(www.givingwhatwecan.org)

Slightly against aligning with neo-luddites

Matthew_Barnett · 26 Dec 2022 23:27 UTC
71 points
17 comments · 4 min read · EA link

Update on GWWC donation platform

Luke Freeman · 26 Dec 2022 22:57 UTC
100 points
1 comment · 1 min read · EA link

Air-gapping evaluation and support

Ryan Kidd · 26 Dec 2022 22:52 UTC
22 points
12 comments · 1 min read · EA link

[Optional] Further reading on “The Effectiveness Mindset”

EA Italy · 26 Dec 2022 22:11 UTC
1 point
0 comments · 1 min read · EA link

500 million, but not one more

EA Italy · 26 Dec 2022 21:58 UTC
1 point
0 comments · 1 min read · EA link
(altruismoefficace.it)

[Question] Would it make sense to use human rights law against catastrophic risks—like some seem to be doing regarding climate change?

Ramiro · 26 Dec 2022 21:14 UTC
12 points
6 comments · 1 min read · EA link

Announcing a subforum for forecasting & estimation

Sharang Phadke · 26 Dec 2022 20:51 UTC
72 points
2 comments · 1 min read · EA link

Register your predictions for 2023

Lizka · 26 Dec 2022 20:49 UTC
42 points
13 comments · 2 min read · EA link

How to bring EA into the classroom?

TimSpreeuwers · 26 Dec 2022 19:13 UTC
8 points
2 comments · 4 min read · EA link

Safety of Self-Assembled Neuromorphic Hardware

Can Rager · 26 Dec 2022 19:10 UTC
8 points
1 comment · 10 min read · EA link

An overview of some promising work by junior alignment researchers

Akash · 26 Dec 2022 17:23 UTC
10 points
0 comments · 1 min read · EA link

Air Safety to Combat Global Catastrophic Biorisks [OLD VERSION]

Jam Kraprayoon · 26 Dec 2022 16:58 UTC
78 points
0 comments · 36 min read · EA link
(docs.google.com)

Concrete Steps to Get Started in Transformer Mechanistic Interpretability

Neel Nanda · 26 Dec 2022 13:00 UTC
18 points
0 comments · 12 min read · EA link

The Limit of Language Models

𝕮𝖎𝖓𝖊𝖗𝖆 · 26 Dec 2022 11:17 UTC
10 points
0 comments · 1 min read · EA link

How long till Brussels?: A light investigation into the Brussels Gap

Yadav · 26 Dec 2022 7:49 UTC
50 points
2 comments · 5 min read · EA link

Vida Plena Predictive Cost-Effectiveness Analysis

Samuel Dupret · 26 Dec 2022 5:43 UTC
48 points
2 comments · 12 min read · EA link

Announcing Vida Plena: the first Latin American organization incubated by Charity Entrepreneurship

Joy Bittner · 26 Dec 2022 5:41 UTC
166 points
6 comments · 11 min read · EA link

Savoring my moral circle

Angelina Li · 26 Dec 2022 3:02 UTC
87 points
5 comments · 1 min read · EA link

We are in triage every second of every day

EA Italy · 26 Dec 2022 0:44 UTC
2 points
0 comments · 4 min read · EA link

Why I am happy to reject the possibility of infinite worlds

Vasco Grilo🔸 · 25 Dec 2022 19:51 UTC
13 points
42 comments · 3 min read · EA link

YCombinator fraud rates

Ben_West🔸 · 25 Dec 2022 18:01 UTC
91 points
14 comments · 4 min read · EA link

[Question] Will EU/ESMA financial regulation on ESG Fund Names include animal welfare? Should someone ask them to?

Ramiro · 25 Dec 2022 12:52 UTC
39 points
0 comments · 2 min read · EA link

May The Factory Farms Burn

Omnizoid · 25 Dec 2022 8:48 UTC
209 points
27 comments · 14 min read · EA link

[Question] What are examples of EA orgs pivoting after receiving funding?

Drew Spartz · 25 Dec 2022 5:52 UTC
16 points
7 comments · 1 min read · EA link

New EA cause area: chewier food for children

Andre Popovitch · 25 Dec 2022 5:16 UTC
40 points
19 comments · 3 min read · EA link
(chadnauseam.com)

Groundwater crisis: a threat of civilization collapse

RickJS · 24 Dec 2022 21:21 UTC
0 points
0 comments · 3 min read · EA link
(drive.google.com)

What you prioritise is mostly moral intuition

JamesÖz · 24 Dec 2022 12:06 UTC
73 points
8 comments · 12 min read · EA link

List #3: Why not to assume on prior that AGI-alignment workarounds are available

Remmelt · 24 Dec 2022 9:54 UTC
6 points
0 comments · 1 min read · EA link

List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well… coordinating as humans with AGI coordinating to be aligned with humans

Remmelt · 24 Dec 2022 9:53 UTC
3 points
0 comments · 1 min read · EA link

List #1: Why stopping the development of AGI is hard but doable

Remmelt · 24 Dec 2022 9:52 UTC
24 points
2 comments · 1 min read · EA link