[Question] What advice would you give someone who wants to avoid doxing themselves here?

Throwaway012723 · Feb 2, 2023, 4:07 AM
40 points
21 comments · 1 min read · EA link

Retrospective on the AI Safety Field Building Hub

Vael Gates · Feb 2, 2023, 2:06 AM
64 points
2 comments · 9 min read · EA link

“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers

Vael Gates · Feb 2, 2023, 1:00 AM
46 points
1 comment · 1 min read · EA link

Predicting researcher interest in AI alignment

Vael Gates · Feb 2, 2023, 12:58 AM
30 points
0 comments · 21 min read · EA link
(docs.google.com)

Focus on the places where you feel shocked everyone’s dropping the ball

So8res · Feb 2, 2023, 12:27 AM
92 points
6 comments · 4 min read · EA link

Forecasting Our World in Data: The Next 100 Years

AlexLeader · Feb 1, 2023, 10:13 PM
97 points
8 comments · 66 min read · EA link
(www.metaculus.com)

AI Safety Arguments: An Interactive Guide

Lukas Trötzmüller🔸 · Feb 1, 2023, 7:21 PM
32 points
5 comments · 3 min read · EA link

Aligning Self-Interest With Survival And Thriving

Marc Wong · Feb 1, 2023, 4:15 PM
3 points
0 comments · 2 min read · EA link

Trends in the dollar training cost of machine learning systems

Ben Cottier · Feb 1, 2023, 2:48 PM
63 points
3 comments · 2 min read · EA link
(epochai.org)

Polis: Why and How to Use it

brook · Feb 1, 2023, 2:03 PM
18 points
3 comments · 2 min read · EA link

Wits & Wagers: An Engaging Game for Effective Altruists

JohnW · Feb 1, 2023, 9:30 AM
31 points
5 comments · 4 min read · EA link

Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism

krohmal5 · Feb 1, 2023, 6:40 AM
179 points
4 comments · 1 min read · EA link
(theelders.org)

[Question] Who owns “Effective Altruism”?

River · Feb 1, 2023, 1:37 AM
37 points
4 comments · 1 min read · EA link

Eli Lifland on Navigating the AI Alignment Landscape

Ozzie Gooen · Feb 1, 2023, 12:07 AM
48 points
9 comments · 31 min read · EA link
(quri.substack.com)

On value in humans, other animals, and AI

Michele Campolo · Jan 31, 2023, 11:48 PM
8 points
6 comments · 5 min read · EA link

Alexander and Yudkowsky on AGI goals

Scott Alexander · Jan 31, 2023, 11:36 PM
29 points
1 comment · 26 min read · EA link

Google Maps nuke-mode

AndreFerretti · Jan 31, 2023, 9:37 PM
11 points
6 comments · 1 min read · EA link

What Are The Biggest Threats To Humanity? (A Happier World video)

Jeroen Willems🔸 · Jan 31, 2023, 7:50 PM
17 points
1 comment · 15 min read · EA link

What I thought about child marriage as a cause area, and how I’ve changed my mind

Catherine F · Jan 31, 2023, 7:50 PM
218 points
44 comments · 7 min read · EA link

[Linkpost] Human-narrated audio version of “Is Power-Seeking AI an Existential Risk?”

Joe_Carlsmith · Jan 31, 2023, 7:19 PM
9 points
0 comments · 1 min read · EA link

Post-Mortem: McGill EA x Law Presents: Existential Advocacy with Prof. John Bliss

McGill EA x Law · Jan 31, 2023, 6:57 PM
11 points
0 comments · 4 min read · EA link

Post-Mortem: Effective Altruism x Law Presents: Impact Litigation for Animal Welfare

McGill EA x Law · Jan 31, 2023, 6:52 PM
14 points
2 comments · 5 min read · EA link

Talk to me about your summer/career plans

Akash · Jan 31, 2023, 6:29 PM
31 points
0 comments · 2 min read · EA link

More Is Probably More—Forecasting Accuracy and Number of Forecasters on Metaculus

nikos · Jan 31, 2023, 5:20 PM
36 points
11 comments · 10 min read · EA link

Project for Awesome 2023: Make a short video for an EA charity!

EA_ProjectForAwesome · Jan 31, 2023, 2:57 PM
54 points
0 comments · 2 min read · EA link

Online EA bookclub, anyone?

Manuel Del Río Rodríguez 🔹 · Jan 31, 2023, 2:26 PM
10 points
4 comments · 1 min read · EA link

Forecasting tools and Prediction Markets: Why and How

brook · Jan 31, 2023, 12:55 PM
19 points
0 comments · 4 min read · EA link

Forecasting: How and Why

brook · Jan 31, 2023, 12:54 PM
4 points
0 comments · 1 min read · EA link

Law & Longtermism Dinner—EAG Bay Area 2023

Alfredo Parra 🔸 · Jan 31, 2023, 10:47 AM
10 points
1 comment · 1 min read · EA link

Longtermism and animals: Resources + join our Discord community!

Ren Ryba · Jan 31, 2023, 10:45 AM
102 points
0 comments · 4 min read · EA link

[Question] How to hedge investment portfolio against AI risk?

Timothy_Liptrot · Jan 31, 2023, 8:04 AM
9 points
0 comments · 1 min read · EA link

Questions about AI that bother me

Eleni_A · Jan 31, 2023, 6:50 AM
33 points
6 comments · 2 min read · EA link

How to use AI speech transcription and analysis to accelerate social science research

Alexander Saeri · Jan 31, 2023, 4:01 AM
39 points
6 comments · 11 min read · EA link

EA & LW Forum Weekly Summary (23rd – 29th Jan ’23)

Zoe Williams · Jan 31, 2023, 12:36 AM
16 points
0 comments · 13 min read · EA link

FIRE & EA: Seeking feedback on “Fi-lanthropy” Calculator

Rebecca Herbst · Jan 30, 2023, 8:20 PM
121 points
21 comments · 4 min read · EA link

We’re no longer “pausing most new longtermist funding commitments”

Holden Karnofsky · Jan 30, 2023, 7:29 PM
201 points
39 comments · 6 min read · EA link

Karma overrates some topics; resulting issues and potential solutions

Lizka · Jan 30, 2023, 6:32 PM
295 points
51 comments · 3 min read · EA link

An in-progress experiment to test how Laplace’s rule of succession performs in practice.

NunoSempere · Jan 30, 2023, 5:41 PM
57 points
11 comments · 3 min read · EA link

[Question] Investing in climate mitigation in Africa

drbrake 🔸 · Jan 30, 2023, 4:21 PM
3 points
2 comments · 1 min read · EA link

What I mean by “alignment is in large part about making cognition aimable at all”

So8res · Jan 30, 2023, 3:22 PM
57 points
3 comments · 2 min read · EA link

Regulatory inquiry into Effective Ventures Foundation UK

Howie_Lempel · Jan 30, 2023, 2:33 PM
183 points
15 comments · 3 min read · EA link

EA logic creates mental suffering. Here is how the misery trap might be fixed.

Davidh96 · Jan 30, 2023, 2:26 PM
−6 points
1 comment · 5 min read · EA link

Announcing Interim CEOs of EVF

Owen Cotton-Barratt · Jan 30, 2023, 2:21 PM
163 points
37 comments · 6 min read · EA link

Squiggle: Why and How to Use it

brook · Jan 30, 2023, 2:17 PM
12 points
0 comments · 1 min read · EA link

Squiggle: Why and how to use it

brook · Jan 30, 2023, 2:14 PM
44 points
4 comments · 3 min read · EA link

[Question] What does a good response to criticism look like? [Poll post]

Nathan Young · Jan 30, 2023, 1:41 PM
5 points
11 comments · 1 min read · EA link

Time-stamping: An urgent, neglected AI safety measure

Axel Svensson · Jan 30, 2023, 11:21 AM
57 points
27 comments · 3 min read · EA link

Compendium of problems with RLHF

Raphaël S · Jan 30, 2023, 8:48 AM
18 points
0 comments · 10 min read · EA link

[Question] What do you think the effective altruism movement sucks at?

Evan_Gaensbauer · Jan 30, 2023, 6:38 AM
9 points
2 comments · 1 min read · EA link

Proposed improvements to EAG(x) admissions process

ES · Jan 30, 2023, 6:10 AM
62 points
29 comments · 3 min read · EA link