
RobBensinger

Karma: 8,127

MIRI is seeking an Office Manager / Force Multiplier

RobBensinger · 5 Jul 2015 19:02 UTC
8 points
1 comment · 7 min read · EA link

Ask MIRI Anything (AMA)

RobBensinger · 11 Oct 2016 19:54 UTC
18 points
77 comments · 1 min read · EA link

Anonymous EA comments

RobBensinger · 7 Feb 2017 21:42 UTC
42 points
91 comments · 2 min read · EA link

AI Summer Fellows Program: Applications open

RobBensinger · 23 Mar 2018 21:20 UTC
5 points
2 comments · 1 min read · EA link

New edition of “Rationality: From AI to Zombies”

RobBensinger · 15 Dec 2018 23:39 UTC
22 points
2 comments · 2 min read · EA link

Politics is far too meta

RobBensinger · 17 Mar 2021 23:57 UTC
58 points
9 comments · 12 min read · EA link

Julia Galef and Matt Yglesias on bioethics and “ethics expertise”

RobBensinger · 30 Mar 2021 3:06 UTC
23 points
3 comments · 4 min read · EA link

Predict responses to the “existential risk from AI” survey

RobBensinger · 28 May 2021 1:38 UTC
36 points
8 comments · 2 min read · EA link

“Existential risk from AI” survey results

RobBensinger · 1 Jun 2021 20:19 UTC
80 points
35 comments · 11 min read · EA link

Outline of Galef’s “Scout Mindset”

RobBensinger · 10 Aug 2021 0:18 UTC
95 points
9 comments · 13 min read · EA link

Quick general thoughts on suffering and consciousness

RobBensinger · 30 Oct 2021 18:09 UTC
9 points
3 comments · 22 min read · EA link

2020 PhilPapers Survey Results

RobBensinger · 2 Nov 2021 5:06 UTC
40 points
0 comments · 12 min read · EA link

Discussion with Eliezer Yudkowsky on AGI interventions

RobBensinger · 11 Nov 2021 3:21 UTC
60 points
33 comments · 35 min read · EA link

Conversation on technology forecasting and gradualism

RobBensinger · 9 Dec 2021 19:00 UTC
15 points
3 comments · 31 min read · EA link

Animal welfare EA and personal dietary options

RobBensinger · 5 Jan 2022 18:53 UTC
17 points
10 comments · 3 min read · EA link

AI views and disagreements AMA: Christiano, Ngo, Shah, Soares, Yudkowsky

RobBensinger · 1 Mar 2022 1:13 UTC
30 points
4 comments · 1 min read · EA link
(www.lesswrong.com)

Twitter-length responses to 24 AI alignment arguments

RobBensinger · 14 Mar 2022 19:34 UTC
67 points
17 comments · 8 min read · EA link

The inordinately slow spread of good AGI conversations in ML

RobBensinger · 29 Jun 2022 4:02 UTC
59 points
2 comments · 6 min read · EA link

Have faux-evil EA energy

RobBensinger · 23 Aug 2022 22:55 UTC
92 points
11 comments · 1 min read · EA link

A common failure for foxes

RobBensinger · 14 Oct 2022 22:51 UTC
22 points
2 comments · 1 min read · EA link