Archive: January 2023 (Page 1)
- New version of Mental Health Navigator website · Emily · 8 Jan 2023 21:37 UTC · 22 points · 8 comments · 1 min read
- Potential Future People · TeddyW · 8 Jan 2023 17:20 UTC · 11 points · 6 comments · 1 min read
- Moral Weights according to EA Orgs · Simon_M · 8 Jan 2023 16:46 UTC · 102 points · 15 comments · 1 min read
- Halifax Monthly Meetup: Moloch in the HRM · Conor Barnes · 8 Jan 2023 14:51 UTC · 4 points · 0 comments · 1 min read
- Dangers of deference · TsviBT · 8 Jan 2023 14:41 UTC · 46 points · 7 comments · 2 min read
- Should UBI be a top priority for longtermism? · Michael Simm · 8 Jan 2023 12:45 UTC · 2 points · 33 comments · 4 min read
- Adding important nuances to “preserve option value” arguments · MichaelA · 8 Jan 2023 9:30 UTC · 36 points · 1 comment · 5 min read
- EA Germany’s Strategy for 2023 · Sarah Tegeler · 8 Jan 2023 8:30 UTC · 126 points · 13 comments · 15 min read
- Is this community over-emphasizing AI alignment? · Lixiang · 8 Jan 2023 6:23 UTC · 1 point · 5 comments · 1 min read
- A Different Take on What’s Effective Altruism · Marty Nemko · 8 Jan 2023 2:27 UTC · 0 points · 1 comment · 1 min read · (medium.com)
- Learning as much Deep Learning math as I could in 24 hours · Phosphorous · 8 Jan 2023 2:19 UTC · 58 points · 5 comments · 7 min read
- David Krueger on AI Alignment in Academia and Coordination · Michaël Trazzi · 7 Jan 2023 21:14 UTC · 32 points · 1 comment · 3 min read · (theinsideview.ai)
- [Question] How to create curriculum for self-study towards AI alignment work? · OIUJHKDFS · 7 Jan 2023 19:53 UTC · 10 points · 5 comments · 1 min read
- Street Epistemology (EA Shenanigans) - please RSVP · Milli | Martin · 7 Jan 2023 16:39 UTC · 5 points · 0 comments · 1 min read
- EA university groups are missing out on most of their potential · Johan de Kock · 7 Jan 2023 12:44 UTC · 50 points · 15 comments · 29 min read
- Anchoring focalism and the Identifiable victim effect: Bias in Evaluating AGI X-Risks · Remmelt · 7 Jan 2023 9:59 UTC · −2 points · 1 comment · 1 min read
- [Discussion] How Broad is the Human Cognitive Spectrum? · 𝕮𝖎𝖓𝖊𝖗𝖆 · 7 Jan 2023 0:59 UTC · 16 points · 1 comment · 1 min read
- [Linkpost] Jan Leike on three kinds of alignment taxes · Akash · 6 Jan 2023 23:57 UTC · 29 points · 0 comments · 1 min read
- Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism · Ozzie Gooen · 6 Jan 2023 22:59 UTC · 47 points · 3 comments · 14 min read · (quri.substack.com)
- [Question] What rationale puts a limit to the cost of an EA’s (or anybody’s) life? · Juergen · 6 Jan 2023 18:59 UTC · 6 points · 1 comment · 1 min read
- EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship · EU Policy Careers · 6 Jan 2023 18:28 UTC · 102 points · 5 comments · 19 min read
- Effective Altruism Reading List Information Design Poster (2/2) · annaleptikon · 6 Jan 2023 14:49 UTC · 50 points · 0 comments · 5 min read
- Consumer Power Initiative: Active Projects and Open Roles · Brad West · 6 Jan 2023 14:40 UTC · 17 points · 0 comments · 3 min read
- [Question] Is there an “EA alumni” group? · Jonathan Yan · 6 Jan 2023 10:06 UTC · 19 points · 3 comments · 1 min read
- Foundation Entrepreneurship: How the first training program went · Aidan Alexander · 6 Jan 2023 9:17 UTC · 157 points · 6 comments · 6 min read
- Machine Learning for Scientific Discovery: AI Safety Camp · Eleni_A · 6 Jan 2023 3:06 UTC · 9 points · 0 comments · 1 min read
- Metaculus Beginner Tournament for New Forecasters · Anastasia · 6 Jan 2023 2:35 UTC · 33 points · 5 comments · 1 min read
- Transformative AI issues (not just misalignment): an overview · Holden Karnofsky · 6 Jan 2023 2:19 UTC · 31 points · 0 comments · 22 min read · (www.cold-takes.com)
- Metaculus Year in Review: 2022 · christian · 6 Jan 2023 1:23 UTC · 25 points · 2 comments · 4 min read · (metaculus.medium.com)
- AI Safety Camp, Virtual Edition 2023 · Linda Linsefors · 6 Jan 2023 0:55 UTC · 24 points · 0 comments · 1 min read
- Handling Moral Uncertainty with Average vs. Total Utilitarianism: One Method That Apparently *Doesn’t* Work (But Seemed Like it Should) · Harrison Durland · 5 Jan 2023 22:18 UTC · 10 points · 0 comments · 8 min read
- EA Market Testing: Summary of your feedback · david_reinstein · 5 Jan 2023 21:09 UTC · 18 points · 2 comments · 8 min read
- Enter Scott Alexander’s Prediction Competition · ChanaMessinger · 5 Jan 2023 20:52 UTC · 18 points · 1 comment · 1 min read
- Prioritization Research Careers: Probably Good · Probably Good · 5 Jan 2023 15:05 UTC · 51 points · 1 comment · 1 min read · (www.probablygood.org)
- On being compromised · Gavin · 5 Jan 2023 12:56 UTC · 187 points · 46 comments · 1 min read
- Skill up in ML for AI safety with the Intro to ML Safety course (Spring 2023) · james · 5 Jan 2023 11:02 UTC · 36 points · 3 comments · 2 min read
- Misleading phrase in a GiveWell Youtube ad · Thomas Kwa · 5 Jan 2023 10:28 UTC · 85 points · 13 comments · 1 min read
- Illusion of truth effect and Ambiguity effect: Bias in Evaluating AGI X-Risks · Remmelt · 5 Jan 2023 4:05 UTC · 1 point · 1 comment · 1 min read
- When you plan according to your AI timelines, should you put more weight on the median future, or the median future | eventual AI alignment success? ⚖️ · Jeffrey Ladish · 5 Jan 2023 1:55 UTC · 16 points · 2 comments · 2 min read
- Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment · johnjnay · 4 Jan 2023 22:22 UTC · 10 points · 6 comments · 8 min read
- I am working on a project to view sustainability and welfare in a new evolutionary light · Sherry · 4 Jan 2023 22:11 UTC · 7 points · 3 comments · 2 min read
- ChatGPT understands, but largely does not generate Spanglish (and other code-mixed) text · Milan Weibel · 4 Jan 2023 22:10 UTC · 6 points · 0 comments · 4 min read · (www.lesswrong.com)
- The value of a statistical life · JacksonHarrison · 4 Jan 2023 10:58 UTC · 6 points · 2 comments · 7 min read
- Bill Burr on Boiling Lobsters (also manliness and AW) · Lixiang · 4 Jan 2023 7:55 UTC · 33 points · 15 comments · 1 min read
- Announcing Insights for Impact · Christian Pearson · 4 Jan 2023 7:00 UTC · 80 points · 6 comments · 1 min read
- [Question] Do people have a form or resources for capturing indirect interpersonal impacts? · PeterSlattery · 4 Jan 2023 4:47 UTC · 47 points · 6 comments · 1 min read
- Normalcy bias and Base rate neglect: Bias in Evaluating AGI X-Risks · Remmelt · 4 Jan 2023 3:16 UTC · 5 points · 0 comments · 1 min read
- “AI” is an indexical · ThomasW · 3 Jan 2023 22:00 UTC · 23 points · 2 comments · 1 min read
- An approach for getting better at practicing any skill · jacquesthibs · 3 Jan 2023 17:47 UTC · 9 points · 0 comments · 1 min read
- Holden Karnofsky Interview about Most Important Century & Transformative AI · Dwarkesh Patel · 3 Jan 2023 17:31 UTC · 29 points · 2 comments · 1 min read