Eliezer Yudkowsky

Eliezer Shlomo Yudkowsky (born 11 September 1979) is an American artificial intelligence researcher and writer. He co-founded the Machine Intelligence Research Institute (MIRI) and founded the community blog LessWrong.

Further reading

Christiano, Paul (2022) Where I agree and disagree with Eliezer, AI Alignment Forum, June 19.

Harris, Sam (2018) AI: racing toward the brink: A conversation with Eliezer Yudkowsky, Making Sense, February 6.

LessWrong (2012) Eliezer Yudkowsky, LessWrong Wiki, October 29.

Rice, Issa (2018) List of discussions between Eliezer Yudkowsky and Paul Christiano, Cause Prioritization Wiki, March 21.

External links

Eliezer Yudkowsky. Effective Altruism Forum account.

Related entries

AI alignment | Center for Applied Rationality | LessWrong | Machine Intelligence Research Institute | rationality community

Purchase fuzzies and utilons separately
EliezerYudkowsky · 27 Dec 2019 2:21 UTC · 131 points · 4 comments · 5 min read · EA link (www.lesswrong.com)

Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the “long-run” perspective on effective altruism
Nick_Beckstead · 18 Aug 2014 4:30 UTC · 11 points · 7 comments · 6 min read · EA link

Shulman and Yudkowsky on AI progress
CarlShulman · 4 Dec 2021 11:37 UTC · 46 points · 0 comments · 20 min read · EA link

Yudkowsky and Christiano on AI Takeoff Speeds [LINKPOST]
aog · 5 Apr 2022 0:57 UTC · 15 points · 0 comments · 11 min read · EA link

Discussion with Eliezer Yudkowsky on AGI interventions
RobBensinger · 11 Nov 2021 3:21 UTC · 60 points · 33 comments · 34 min read · EA link

Yudkowsky and Christiano discuss “Takeoff Speeds”
EliezerYudkowsky · 22 Nov 2021 19:42 UTC · 42 points · 0 comments · 60 min read · EA link

AI timelines by bio anchors: the debate in one place
Will Aldred · 30 Jul 2022 23:04 UTC · 93 points · 6 comments · 2 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky
jacquesthibs · 29 Mar 2023 23:30 UTC · 212 points · 75 comments · 3 min read · EA link (time.com)

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”
Quintin Pope · 21 Mar 2023 1:23 UTC · 166 points · 21 comments · 39 min read · EA link

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?
MikhailSamin · 27 Dec 2022 11:07 UTC · 39 points · 10 comments · 1 min read · EA link

An even deeper atheism

Joe_Carlsmith · 11 Jan 2024 17:28 UTC · 26 points · 2 comments · 15 min read · EA link

Four mindset disagreements behind existential risk disagreements in ML
RobBensinger · 11 Apr 2023 4:53 UTC · 61 points · 2 comments · 9 min read · EA link

Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong
Bentham's Bulldog · 27 Aug 2023 1:07 UTC · 50 points · 205 comments · 36 min read · EA link

On Deference and Yudkowsky’s AI Risk Estimates
bmg · 19 Jun 2022 14:35 UTC · 288 points · 194 comments · 17 min read · EA link

Imitation Learning is Probably Existentially Safe
Vasco Grilo🔸 · 30 Apr 2024 17:06 UTC · 19 points · 7 comments · 3 min read · EA link (www.openphilanthropy.org)

Podcast/video/transcript: Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
PeterSlattery · 9 Apr 2023 10:37 UTC · 32 points · 2 comments · 137 min read · EA link (www.youtube.com)

Deconfusing ‘AI’ and ‘evolution’
Remmelt · 22 Jul 2025 6:56 UTC · 6 points · 1 comment · 28 min read · EA link

Alexander and Yudkowsky on AGI goals
Scott Alexander · 31 Jan 2023 23:36 UTC · 29 points · 1 comment · 26 min read · EA link

“Dangers of AI and the End of Human Civilization” Yudkowsky on Lex Fridman
𝕮𝖎𝖓𝖊𝖗𝖆 · 30 Mar 2023 15:44 UTC · 28 points · 0 comments · 1 min read · EA link (www.youtube.com)

[linkpost] Christiano on agreement/disagreement with Yudkowsky’s “List of Lethalities”
Owen Cotton-Barratt · 19 Jun 2022 22:47 UTC · 130 points · 1 comment · 1 min read · EA link (www.lesswrong.com)

Acquire warm feelings and utilons separately (Portuguese translation of “Purchase fuzzies and utilons separately”)
AE Brasil / EA Brazil · 20 Jul 2023 18:48 UTC · 4 points · 0 comments · 5 min read · EA link

Why Yudkowsky is wrong about “covalently bonded equivalents of biology”
titotal · 6 Dec 2023 14:09 UTC · 29 points · 20 comments · 16 min read · EA link (open.substack.com)

Summary of Eliezer Yudkowsky’s “Cognitive Biases Potentially Affecting Judgment of Global Risks”
Damin Curtis🔹 · 7 Nov 2023 18:19 UTC · 5 points · 2 comments · 6 min read · EA link

Yudkowsky on AGI risk on the Bankless podcast
RobBensinger · 13 Mar 2023 0:42 UTC · 54 points · 2 comments · 75 min read · EA link

Consider Preordering If Anyone Builds It, Everyone Dies
peterbarnett · 12 Aug 2025 22:03 UTC · 48 points · 4 comments · 2 min read · EA link

The Parable of the Dagger - The Animation
Writer · 29 Jul 2023 14:03 UTC · 10 points · 0 comments · 1 min read · EA link (youtu.be)

NYT article about the Zizians including quotes from Eliezer, Anna, Ozy, Jessica, Zvi
Matrice Jacobine🔸🏳️‍⚧️ · 8 Jul 2025 1:42 UTC · 2 points · 0 comments · 1 min read · EA link (www.nytimes.com)

Point-by-point reply to Yudkowsky on UFOs
Magnus Vinding · 19 Dec 2024 21:24 UTC · 4 points · 0 comments · 9 min read · EA link

Nuclear brinksmanship is not a good AI x-risk strategy
titotal · 30 Mar 2023 22:07 UTC · 19 points · 8 comments · 5 min read · EA link