RobBensinger

Karma: 8,345

MIRI’s 2024 End-of-Year Update

RobBensinger · Dec 3, 2024, 4:33 AM
32 points
7 comments · 1 min read · EA link

Response to Aschenbrenner’s “Situational Awareness”

RobBensinger · Jun 6, 2024, 10:57 PM
111 points
15 comments · 1 min read · EA link

Why hasn’t EA done an SBF investigation and postmortem?

RobBensinger · Apr 1, 2024, 2:28 AM
166 points
45 comments · 3 min read · EA link

AI Views Snapshots

RobBensinger · Dec 13, 2023, 12:45 AM
25 points
0 comments · 1 min read · EA link

Comments on Manheim’s “What’s in a Pause?”

RobBensinger · Sep 18, 2023, 12:16 PM
74 points
11 comments · 6 min read · EA link

AGI ruin mostly rests on strong claims about alignment and deployment, not about society

RobBensinger · Apr 24, 2023, 1:07 PM
16 points
4 comments · 1 min read · EA link

The basic reasons I expect AGI ruin

RobBensinger · Apr 18, 2023, 3:37 AM
58 points
13 comments · 1 min read · EA link

Four mindset disagreements behind existential risk disagreements in ML

RobBensinger · Apr 11, 2023, 4:53 AM
61 points
2 comments · 9 min read · EA link

Yudkowsky on AGI risk on the Bankless podcast

RobBensinger · Mar 13, 2023, 12:42 AM
54 points
2 comments · 75 min read · EA link

Elements of Rationalist Discourse

RobBensinger · Feb 14, 2023, 3:39 AM
68 points
12 comments · 9 min read · EA link

Be wary of enacting norms you think are unethical

RobBensinger · Jan 18, 2023, 6:07 AM
147 points
6 comments · 1 min read · EA link

Thoughts on AGI organizations and capabilities work

RobBensinger · Dec 7, 2022, 7:46 PM
77 points
7 comments · 5 min read · EA link

A challenge for AGI organizations, and a challenge for readers

RobBensinger · Dec 1, 2022, 11:11 PM
172 points
13 comments · 1 min read · EA link

EA should blurt

RobBensinger · Nov 22, 2022, 9:57 PM
155 points
26 comments · 5 min read · EA link

A common failure for foxes

RobBensinger · Oct 14, 2022, 10:51 PM
22 points
2 comments · 1 min read · EA link

Have faux-evil EA energy

RobBensinger · Aug 23, 2022, 10:55 PM
92 points
11 comments · 1 min read · EA link

The inordinately slow spread of good AGI conversations in ML

RobBensinger · Jun 29, 2022, 4:02 AM
59 points
2 comments · 8 min read · EA link

Twitter-length responses to 24 AI alignment arguments

RobBensinger · Mar 14, 2022, 7:34 PM
67 points
17 comments · 8 min read · EA link

AI views and disagreements AMA: Christiano, Ngo, Shah, Soares, Yudkowsky

RobBensinger · Mar 1, 2022, 1:13 AM
30 points
4 comments · 1 min read · EA link
(www.lesswrong.com)

Animal welfare EA and personal dietary options

RobBensinger · Jan 5, 2022, 6:53 PM
17 points
10 comments · 3 min read · EA link