RobBensinger (Karma: 8,127)

Posts
MIRI is seeking an Office Manager / Force Multiplier · 5 Jul 2015 19:02 UTC · 8 points · 1 comment · 7 min read · EA · link
Ask MIRI Anything (AMA) · 11 Oct 2016 19:54 UTC · 18 points · 77 comments · 1 min read · EA · link
Anonymous EA comments · 7 Feb 2017 21:42 UTC · 42 points · 91 comments · 2 min read · EA · link
AI Summer Fellows Program: Applications open · 23 Mar 2018 21:20 UTC · 5 points · 2 comments · 1 min read · EA · link
New edition of “Rationality: From AI to Zombies” · 15 Dec 2018 23:39 UTC · 22 points · 2 comments · 2 min read · EA · link
Politics is far too meta · 17 Mar 2021 23:57 UTC · 58 points · 9 comments · 12 min read · EA · link
Julia Galef and Matt Yglesias on bioethics and “ethics expertise” · 30 Mar 2021 3:06 UTC · 23 points · 3 comments · 4 min read · EA · link
Predict responses to the “existential risk from AI” survey · 28 May 2021 1:38 UTC · 36 points · 8 comments · 2 min read · EA · link
“Existential risk from AI” survey results · 1 Jun 2021 20:19 UTC · 80 points · 35 comments · 11 min read · EA · link
Outline of Galef’s “Scout Mindset” · 10 Aug 2021 0:18 UTC · 95 points · 9 comments · 13 min read · EA · link
Quick general thoughts on suffering and consciousness · 30 Oct 2021 18:09 UTC · 9 points · 3 comments · 22 min read · EA · link
2020 PhilPapers Survey Results · 2 Nov 2021 5:06 UTC · 40 points · 0 comments · 12 min read · EA · link
Discussion with Eliezer Yudkowsky on AGI interventions · 11 Nov 2021 3:21 UTC · 60 points · 33 comments · 35 min read · EA · link
Conversation on technology forecasting and gradualism · 9 Dec 2021 19:00 UTC · 15 points · 3 comments · 31 min read · EA · link
Animal welfare EA and personal dietary options · 5 Jan 2022 18:53 UTC · 17 points · 10 comments · 3 min read · EA · link
AI views and disagreements AMA: Christiano, Ngo, Shah, Soares, Yudkowsky · 1 Mar 2022 1:13 UTC · 30 points · 4 comments · 1 min read · EA · link (www.lesswrong.com)
Twitter-length responses to 24 AI alignment arguments · 14 Mar 2022 19:34 UTC · 67 points · 17 comments · 8 min read · EA · link
The inordinately slow spread of good AGI conversations in ML · 29 Jun 2022 4:02 UTC · 59 points · 2 comments · 6 min read · EA · link
Have faux-evil EA energy · 23 Aug 2022 22:55 UTC · 92 points · 11 comments · 1 min read · EA · link
A common failure for foxes · 14 Oct 2022 22:51 UTC · 22 points · 2 comments · 1 min read · EA · link