
Eliezer Yudkowsky


Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American artificial intelligence researcher and a co-founder of the Machine Intelligence Research Institute (MIRI).

Further reading

Christiano, Paul (2022) Where I agree and disagree with Eliezer, AI Alignment Forum, June 19.

Harris, Sam (2018) AI: racing toward the brink: A conversation with Eliezer Yudkowsky, Making Sense, February 6.

LessWrong (2012) Eliezer Yudkowsky, LessWrong Wiki, October 29.

Rice, Issa (2018) List of discussions between Eliezer Yudkowsky and Paul Christiano, Cause Prioritization Wiki, March 21.

External links

Eliezer Yudkowsky. Effective Altruism Forum account.

Related entries

AI alignment | Center for Applied Rationality | LessWrong | Machine Intelligence Research Institute | rationality community

Purchase fuzzies and utilons separately

EliezerYudkowsky · Dec 27, 2019, 2:21 AM
125 points
4 comments · 5 min read · EA link
(www.lesswrong.com)

AI timelines by bio anchors: the debate in one place

Will Aldred · Jul 30, 2022, 11:04 PM
93 points
6 comments · 2 min read · EA link

Yudkowsky and Christiano discuss “Takeoff Speeds”

EliezerYudkowsky · Nov 22, 2021, 7:42 PM
42 points
0 comments · 60 min read · EA link

Yudkowsky and Christiano on AI Takeoff Speeds [LINKPOST]

aog · Apr 5, 2022, 12:57 AM
15 points
0 comments · 11 min read · EA link

Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the “long-run” perspective on effective altruism

Nick_Beckstead · Aug 18, 2014, 4:30 AM
11 points
7 comments · 6 min read · EA link

Discussion with Eliezer Yudkowsky on AGI interventions

RobBensinger · Nov 11, 2021, 3:21 AM
60 points
33 comments · 34 min read · EA link

Shulman and Yudkowsky on AI progress

CarlShulman · Dec 4, 2021, 11:37 AM
46 points
0 comments · 20 min read · EA link

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · Mar 21, 2023, 1:23 AM
166 points
21 comments · 39 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

jacquesthibs · Mar 29, 2023, 11:30 PM
212 points
75 comments · 3 min read · EA link
(time.com)

An even deeper atheism

Joe_Carlsmith · Jan 11, 2024, 5:28 PM
26 points
2 comments · 1 min read · EA link

Imitation Learning is Probably Existentially Safe

Vasco Grilo🔸 · Apr 30, 2024, 5:06 PM
19 points
7 comments · 3 min read · EA link
(www.openphilanthropy.org)

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?

MikhailSamin · Dec 27, 2022, 11:07 AM
39 points
10 comments · 1 min read · EA link

On Deference and Yudkowsky’s AI Risk Estimates

bgarfinkel · Jun 19, 2022, 2:35 PM
285 points
194 comments · 17 min read · EA link

Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong

Bentham's Bulldog · Aug 27, 2023, 1:07 AM
41 points
205 comments · 36 min read · EA link

Four mindset disagreements behind existential risk disagreements in ML

RobBensinger · Apr 11, 2023, 4:53 AM
61 points
2 comments · 9 min read · EA link

Podcast/video/transcript: Eliezer Yudkowsky—Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

PeterSlattery · Apr 9, 2023, 10:37 AM
32 points
2 comments · 137 min read · EA link
(www.youtube.com)

Summary of Eliezer Yudkowsky’s “Cognitive Biases Potentially Affecting Judgment of Global Risks”

Damin Curtis🔹 · Nov 7, 2023, 6:19 PM
5 points
2 comments · 6 min read · EA link

Point-by-point reply to Yudkowsky on UFOs

Magnus Vinding · Dec 19, 2024, 9:24 PM
4 points
0 comments · 9 min read · EA link

[linkpost] Christiano on agreement/disagreement with Yudkowsky’s “List of Lethalities”

Owen Cotton-Barratt · Jun 19, 2022, 10:47 PM
130 points
1 comment · 1 min read · EA link
(www.lesswrong.com)

Alexander and Yudkowsky on AGI goals

Scott Alexander · Jan 31, 2023, 11:36 PM
29 points
1 comment · 1 min read · EA link

Yudkowsky on AGI risk on the Bankless podcast

RobBensinger · Mar 13, 2023, 12:42 AM
54 points
2 comments · 75 min read · EA link

Why Yudkowsky is wrong about “covalently bonded equivalents of biology”

titotal · Dec 6, 2023, 2:09 PM
29 points
20 comments · 16 min read · EA link
(open.substack.com)

Purchase fuzzies and utilons separately (Portuguese translation)

AE Brasil / EA Brazil · Jul 20, 2023, 6:48 PM
4 points
0 comments · 5 min read · EA link

The Parable of the Dagger—The Animation

Writer · Jul 29, 2023, 2:03 PM
10 points
0 comments · 1 min read · EA link
(youtu.be)

“Dangers of AI and the End of Human Civilization” Yudkowsky on Lex Fridman

𝕮𝖎𝖓𝖊𝖗𝖆 · Mar 30, 2023, 3:44 PM
28 points
0 comments · 1 min read · EA link

Nuclear brinksmanship is not a good AI x-risk strategy

titotal · Mar 30, 2023, 10:07 PM
19 points
8 comments · 5 min read · EA link