
RobBensinger

Karma: 8,071

Why hasn't EA done an SBF investigation and postmortem?

RobBensinger · 1 Apr 2024 2:28 UTC
166 points
43 comments · 3 min read · EA link

AI Views Snapshots

RobBensinger · 13 Dec 2023 0:45 UTC
25 points
0 comments · 1 min read · EA link

Comments on Manheim's "What's in a Pause?"

RobBensinger · 18 Sep 2023 12:16 UTC
71 points
11 comments · 6 min read · EA link

AGI ruin mostly rests on strong claims about alignment and deployment, not about society

RobBensinger · 24 Apr 2023 13:07 UTC
16 points
4 comments · 1 min read · EA link

The basic reasons I expect AGI ruin

RobBensinger · 18 Apr 2023 3:37 UTC
58 points
13 comments · 1 min read · EA link

Four mindset disagreements behind existential risk disagreements in ML

RobBensinger · 11 Apr 2023 4:53 UTC
61 points
2 comments · 9 min read · EA link

Yudkowsky on AGI risk on the Bankless podcast

RobBensinger · 13 Mar 2023 0:42 UTC
54 points
2 comments · 75 min read · EA link

Elements of Rationalist Discourse

RobBensinger · 14 Feb 2023 3:39 UTC
66 points
12 comments · 9 min read · EA link

Be wary of enacting norms you think are unethical

RobBensinger · 18 Jan 2023 6:07 UTC
143 points
6 comments · 1 min read · EA link

Thoughts on AGI organizations and capabilities work

RobBensinger · 7 Dec 2022 19:46 UTC
77 points
7 comments · 5 min read · EA link

A challenge for AGI organizations, and a challenge for readers

RobBensinger · 1 Dec 2022 23:11 UTC
168 points
13 comments · 1 min read · EA link

EA should blurt

RobBensinger · 22 Nov 2022 21:57 UTC
155 points
26 comments · 5 min read · EA link

A common failure for foxes

RobBensinger · 14 Oct 2022 22:51 UTC
22 points
2 comments · 1 min read · EA link

Have faux-evil EA energy

RobBensinger · 23 Aug 2022 22:55 UTC
92 points
11 comments · 1 min read · EA link

The inordinately slow spread of good AGI conversations in ML

RobBensinger · 29 Jun 2022 4:02 UTC
59 points
2 comments · 6 min read · EA link

Twitter-length responses to 24 AI alignment arguments

RobBensinger · 14 Mar 2022 19:34 UTC
67 points
17 comments · 8 min read · EA link

AI views and disagreements AMA: Christiano, Ngo, Shah, Soares, Yudkowsky

RobBensinger · 1 Mar 2022 1:13 UTC
30 points
4 comments · 1 min read · EA link
(www.lesswrong.com)

Animal welfare EA and personal dietary options

RobBensinger · 5 Jan 2022 18:53 UTC
17 points
10 comments · 3 min read · EA link

Conversation on technology forecasting and gradualism

RobBensinger · 9 Dec 2021 19:00 UTC
15 points
3 comments · 31 min read · EA link

Discussion with Eliezer Yudkowsky on AGI interventions

RobBensinger · 11 Nov 2021 3:21 UTC
60 points
33 comments · 35 min read · EA link