EA Forum Archive — 13–17 Oct 2022
Fallibilism, Bias, and the Rule of Law · Elliot Temple · 17 Oct 2022 23:59 UTC · 14 points · 6 comments · 13 min read · (criticalfallibilism.com)
EA & LW Forums Weekly Summary (10 − 16 Oct 22′) · Zoe Williams · 17 Oct 2022 22:51 UTC · 24 points · 2 comments · 16 min read
Predictors of success in hiring CEA’s Full-Stack Engineer · Akara · 17 Oct 2022 22:09 UTC · 33 points · 19 comments · 3 min read
Be careful with (outsourcing) hiring · Richard Möhn · 17 Oct 2022 20:30 UTC · 40 points · 38 comments · 13 min read
Consequentialism and Cluelessness · Richard Y Chappell🔸 · 17 Oct 2022 18:57 UTC · 32 points · 5 comments · 9 min read · (rychappell.substack.com)
Alien Counterfactuals · Charlie_Guthmann · 17 Oct 2022 17:33 UTC · 19 points · 3 comments · 1 min read
AI Safety Ideas: A collaborative AI safety research platform · Apart Research · 17 Oct 2022 17:01 UTC · 67 points · 13 comments · 4 min read
Formalizing Extinction Risk Reduction vs. Longtermism · Charlie_Guthmann · 17 Oct 2022 15:37 UTC · 12 points · 2 comments · 1 min read
Introducing Cause Innovation Bootcamp · Akhil · 17 Oct 2022 13:42 UTC · 158 points · 19 comments · 5 min read
Hi · Kelly Walker · 17 Oct 2022 8:35 UTC · −12 points · 0 comments · 1 min read
Population with high IQ predicts real GDP better than population · Vasco Grilo🔸 · 17 Oct 2022 7:22 UTC · 9 points · 12 comments · 2 min read
Space · Jarred Filmer · 17 Oct 2022 6:34 UTC · 7 points · 0 comments · 1 min read
A modest case for hope · xavier rg · 17 Oct 2022 6:03 UTC · 28 points · 0 comments · 1 min read
Popular Personal Financial Advice versus the Professors (James Choi, NBER) · Eevee🔹 · 16 Oct 2022 22:21 UTC · 41 points · 6 comments · 1 min read
[Question] Why not to solve alignment by making superintelligent humans? · Pato · 16 Oct 2022 21:26 UTC · 9 points · 12 comments · 1 min read
Assistant-professor-ranked AI ethics philosopher job opportunity at Canterbury University, New Zealand · ben.smith · 16 Oct 2022 17:56 UTC · 27 points · 0 comments · 1 min read · (www.linkedin.com)
My donation budget and fallback donation allocation · vipulnaik · 16 Oct 2022 16:04 UTC · 14 points · 0 comments · 18 min read
Sign of quality of life in GiveWell’s analyses · brb243 · 16 Oct 2022 14:54 UTC · 57 points · 19 comments · 3 min read
Halifax, NS – Monthly Rationalist, EA, and ACX Meetup Kick-Off · Conor Barnes · 16 Oct 2022 13:19 UTC · 2 points · 0 comments · 1 min read
GWWC Pledge Celebration (Europe/Asia) · Jmd · 16 Oct 2022 11:54 UTC · 2 points · 0 comments · 1 min read
GWWC Pledge Celebration (Americas/Oceania) · Jmd · 16 Oct 2022 11:50 UTC · 2 points · 0 comments · 1 min read
GWWC End of Year Celebration (Europe/Asia) · Jmd · 16 Oct 2022 11:48 UTC · 2 points · 0 comments · 1 min read
GWWC End of Year Celebration (Americas/Oceania) · Jmd · 16 Oct 2022 11:46 UTC · 2 points · 0 comments · 1 min read
GWWC Meetup (Europe/Asia) · Jmd · 16 Oct 2022 11:41 UTC · 7 points · 0 comments · 1 min read
GWWC Meetup (Americas/Oceania) · Jmd · 16 Oct 2022 11:37 UTC · 7 points · 0 comments · 1 min read
[Question] Effective Refugee Support + Response? · Nick G · 16 Oct 2022 5:49 UTC · 3 points · 2 comments · 1 min read
Is interest in alignment worth mentioning for grad school applications? · Franziska Fischer · 16 Oct 2022 4:50 UTC · 5 points · 4 comments · 1 min read
A vision of the future (fictional short-story) · EffAlt · 15 Oct 2022 12:38 UTC · 12 points · 0 comments · 2 min read
The most effective question to ask yourself. · EffAlt · 15 Oct 2022 12:28 UTC · 7 points · 3 comments · 1 min read
Berlin EA Shenanigans (unaffiliated) - please RSVP · Milli🔸 · 15 Oct 2022 11:14 UTC · 6 points · 0 comments · 1 min read
James Norris from Upgradable on “What is Beyond Living a Principled Life”—OpenPrinciples Speaker Session · Ti Guo · 15 Oct 2022 3:22 UTC · 2 points · 0 comments · 1 min read
Hackathon on Mon, 12/5 to follow EAGxBerkeley · NicoleJaneway · 15 Oct 2022 0:06 UTC · 38 points · 52 comments · 1 min read
[Question] Testing Impact: Longtermist TV Show · Anthony Fleming · 14 Oct 2022 23:30 UTC · 4 points · 1 comment · 1 min read
A common failure for foxes · RobBensinger · 14 Oct 2022 22:51 UTC · 22 points · 2 comments · 1 min read
Answering some questions about water quality programs · GiveWell · 14 Oct 2022 20:36 UTC · 26 points · 0 comments · 9 min read · (blog.givewell.org)
Counterarguments to the basic AI risk case · Katja_Grace · 14 Oct 2022 20:30 UTC · 284 points · 23 comments · 34 min read
[Job]: AI Standards Development Research Assistant · Tony Barrett · 14 Oct 2022 20:18 UTC · 13 points · 0 comments · 2 min read
The US expands restrictions on AI exports to China. What are the x-risk effects? · Stephen Clare · 14 Oct 2022 18:17 UTC · 155 points · 20 comments · 4 min read
Metaculus Launches the ‘Forecasting Our World In Data’ Project to Probe the Long-Term Future · christian · 14 Oct 2022 17:00 UTC · 65 points · 6 comments · 1 min read · (www.metaculus.com)
The property rights approach to moral uncertainty · Harry R. Lloyd · 14 Oct 2022 16:49 UTC · 31 points · 14 comments · 2 min read · (www.happierlivesinstitute.org)
What Peter Singer Got Wrong (And Where Give Well Could Improve) · LiaH · 14 Oct 2022 16:15 UTC · 4 points · 3 comments · 6 min read
[Question] Is there a UK charitable investment vehicle that I could invest into and then later use to invest in a startup I make in the future? · Olly P · 14 Oct 2022 14:53 UTC · 2 points · 2 comments · 1 min read
[Question] If you could 2x the number of future humans by reducing the QALYs per person by half, would you choose to do it? Why or why not? · Parmest Roy · 14 Oct 2022 14:06 UTC · 2 points · 0 comments · 1 min read
Measuring Good Better · MichaelPlant · 14 Oct 2022 13:36 UTC · 235 points · 19 comments · 15 min read
EA Organization Updates: October 2022 · Lizka · 14 Oct 2022 13:36 UTC · 23 points · 2 comments · 11 min read
The Significance, Persistence, Contingency Framework (William MacAskill, Teruji Thomas and Aron Vallinder) · Global Priorities Institute · 14 Oct 2022 9:24 UTC · 43 points · 0 comments · 1 min read · (globalprioritiesinstitute.org)
The Vitalik Buterin Fellowship in AI Existential Safety is open for applications! · Cynthia Chen · 14 Oct 2022 3:23 UTC · 38 points · 0 comments · 2 min read
[Question] Will Evidence-Based Management Practices Increase Your Impact? · Lorenzo Gallí · 14 Oct 2022 3:22 UTC · 25 points · 14 comments · 1 min read
Contra shard theory, in the context of the diamond maximizer problem · So8res · 13 Oct 2022 23:51 UTC · 27 points · 0 comments · 1 min read
Changes to EA Giving Tuesday for 2022 · Giving What We Can · 13 Oct 2022 23:37 UTC · 79 points · 4 comments · 1 min read