Fallibilism, Bias, and the Rule of Law

Elliot Temple · 17 Oct 2022 23:59 UTC
14 points
6 comments · 13 min read · EA link
(criticalfallibilism.com)

EA & LW Forums Weekly Summary (10–16 Oct 22')

Zoe Williams · 17 Oct 2022 22:51 UTC
24 points
2 comments · 16 min read · EA link

Predictors of success in hiring CEA's Full-Stack Engineer

Akara · 17 Oct 2022 22:09 UTC
33 points
19 comments · 3 min read · EA link

Be careful with (outsourcing) hiring

Richard Möhn · 17 Oct 2022 20:30 UTC
40 points
38 comments · 13 min read · EA link

Consequentialism and Cluelessness

Richard Y Chappell🔸 · 17 Oct 2022 18:57 UTC
32 points
5 comments · 9 min read · EA link
(rychappell.substack.com)

Alien Counterfactuals

Charlie_Guthmann · 17 Oct 2022 17:33 UTC
19 points
3 comments · 1 min read · EA link

AI Safety Ideas: A collaborative AI safety research platform

Apart Research · 17 Oct 2022 17:01 UTC
67 points
13 comments · 4 min read · EA link

Formalizing Extinction Risk Reduction vs. Longtermism

Charlie_Guthmann · 17 Oct 2022 15:37 UTC
12 points
2 comments · 1 min read · EA link

Introducing Cause Innovation Bootcamp

Akhil · 17 Oct 2022 13:42 UTC
158 points
19 comments · 5 min read · EA link

Hi

Kelly Walker · 17 Oct 2022 8:35 UTC
−12 points
0 comments · 1 min read · EA link

Population with high IQ predicts real GDP better than population

Vasco Grilo🔸 · 17 Oct 2022 7:22 UTC
9 points
12 comments · 2 min read · EA link

Space

Jarred Filmer · 17 Oct 2022 6:34 UTC
7 points
0 comments · 1 min read · EA link

A modest case for hope

xavier rg · 17 Oct 2022 6:03 UTC
28 points
0 comments · 1 min read · EA link

Popular Personal Financial Advice versus the Professors (James Choi, NBER)

Eevee🔹 · 16 Oct 2022 22:21 UTC
41 points
6 comments · 1 min read · EA link

[Question] Why not to solve alignment by making superintelligent humans?

Pato · 16 Oct 2022 21:26 UTC
9 points
12 comments · 1 min read · EA link

Assistant-professor-ranked AI ethics philosopher job opportunity at Canterbury University, New Zealand

ben.smith · 16 Oct 2022 17:56 UTC
27 points
0 comments · 1 min read · EA link
(www.linkedin.com)

My donation budget and fallback donation allocation

vipulnaik · 16 Oct 2022 16:04 UTC
14 points
0 comments · 18 min read · EA link

Sign of quality of life in GiveWell's analyses

brb243 · 16 Oct 2022 14:54 UTC
57 points
19 comments · 3 min read · EA link

Halifax, NS – Monthly Rationalist, EA, and ACX Meetup Kick-Off

Conor Barnes · 16 Oct 2022 13:19 UTC
2 points
0 comments · 1 min read · EA link

GWWC Pledge Celebration (Europe/Asia)

Jmd · 16 Oct 2022 11:54 UTC
2 points
0 comments · 1 min read · EA link

GWWC Pledge Celebration (Americas/Oceania)

Jmd · 16 Oct 2022 11:50 UTC
2 points
0 comments · 1 min read · EA link

GWWC End of Year Celebration (Europe/Asia)

Jmd · 16 Oct 2022 11:48 UTC
2 points
0 comments · 1 min read · EA link

GWWC End of Year Celebration (Americas/Oceania)

Jmd · 16 Oct 2022 11:46 UTC
2 points
0 comments · 1 min read · EA link

GWWC Meetup (Europe/Asia)

Jmd · 16 Oct 2022 11:41 UTC
7 points
0 comments · 1 min read · EA link

GWWC Meetup (Americas/Oceania)

Jmd · 16 Oct 2022 11:37 UTC
7 points
0 comments · 1 min read · EA link

[Question] Effective Refugee Support + Response?

Nick G · 16 Oct 2022 5:49 UTC
3 points
2 comments · 1 min read · EA link

Is interest in alignment worth mentioning for grad school applications?

Franziska Fischer · 16 Oct 2022 4:50 UTC
5 points
4 comments · 1 min read · EA link

A vision of the future (fictional short-story)

EffAlt · 15 Oct 2022 12:38 UTC
12 points
0 comments · 2 min read · EA link

The most effective question to ask yourself.

EffAlt · 15 Oct 2022 12:28 UTC
7 points
3 comments · 1 min read · EA link

Berlin EA Shenanigans (unaffiliated) - please RSVP

Milli🔸 · 15 Oct 2022 11:14 UTC
6 points
0 comments · 1 min read · EA link

James Norris from Upgradable on "What is Beyond Living a Principled Life" – OpenPrinciples Speaker Session

Ti Guo · 15 Oct 2022 3:22 UTC
2 points
0 comments · 1 min read · EA link

Hackathon on Mon, 12/5 to follow EAGxBerkeley

NicoleJaneway · 15 Oct 2022 0:06 UTC
38 points
52 comments · 1 min read · EA link

[Question] Testing Impact: Longtermist TV Show

Anthony Fleming · 14 Oct 2022 23:30 UTC
4 points
1 comment · 1 min read · EA link

A common failure for foxes

RobBensinger · 14 Oct 2022 22:51 UTC
22 points
2 comments · 1 min read · EA link

Answering some questions about water quality programs

GiveWell · 14 Oct 2022 20:36 UTC
26 points
0 comments · 9 min read · EA link
(blog.givewell.org)

Counterarguments to the basic AI risk case

Katja_Grace · 14 Oct 2022 20:30 UTC
284 points
23 comments · 34 min read · EA link

[Job]: AI Standards Development Research Assistant

Tony Barrett · 14 Oct 2022 20:18 UTC
13 points
0 comments · 2 min read · EA link

The US expands restrictions on AI exports to China. What are the x-risk effects?

Stephen Clare · 14 Oct 2022 18:17 UTC
155 points
20 comments · 4 min read · EA link

Metaculus Launches the 'Forecasting Our World In Data' Project to Probe the Long-Term Future

christian · 14 Oct 2022 17:00 UTC
65 points
6 comments · 1 min read · EA link
(www.metaculus.com)

The property rights approach to moral uncertainty

Harry R. Lloyd · 14 Oct 2022 16:49 UTC
31 points
14 comments · 2 min read · EA link
(www.happierlivesinstitute.org)

What Peter Singer Got Wrong (And Where Give Well Could Improve)

LiaH · 14 Oct 2022 16:15 UTC
4 points
3 comments · 6 min read · EA link

[Question] Is there a UK charitable investment vehicle that I could invest into and then later use to invest in a startup I make in the future?

Olly P · 14 Oct 2022 14:53 UTC
2 points
2 comments · 1 min read · EA link

[Question] If you could 2x the number of future humans by reducing the QALYs per person by half, would you choose to do it? Why or why not?

Parmest Roy · 14 Oct 2022 14:06 UTC
2 points
0 comments · 1 min read · EA link

Measuring Good Better

MichaelPlant · 14 Oct 2022 13:36 UTC
235 points
19 comments · 15 min read · EA link

EA Organization Updates: October 2022

Lizka · 14 Oct 2022 13:36 UTC
23 points
2 comments · 11 min read · EA link

The Significance, Persistence, Contingency Framework (William MacAskill, Teruji Thomas and Aron Vallinder)

Global Priorities Institute · 14 Oct 2022 9:24 UTC
43 points
0 comments · 1 min read · EA link
(globalprioritiesinstitute.org)

The Vitalik Buterin Fellowship in AI Existential Safety is open for applications!

Cynthia Chen · 14 Oct 2022 3:23 UTC
38 points
0 comments · 2 min read · EA link

[Question] Will Evidence-Based Management Practices Increase Your Impact?

Lorenzo Gallí · 14 Oct 2022 3:22 UTC
25 points
14 comments · 1 min read · EA link

Contra shard theory, in the context of the diamond maximizer problem

So8res · 13 Oct 2022 23:51 UTC
27 points
0 comments · 1 min read · EA link

Changes to EA Giving Tuesday for 2022

Giving What We Can · 13 Oct 2022 23:37 UTC
79 points
4 comments · 1 min read · EA link