RobBensinger
Karma: 8,292
Posts
Response to Aschenbrenner’s “Situational Awareness” · RobBensinger · 6 Jun 2024 22:57 UTC · 111 points · 15 comments · 1 min read · EA
Why hasn’t EA done an SBF investigation and postmortem? · RobBensinger · 1 Apr 2024 2:28 UTC · 166 points · 45 comments · 3 min read · EA
AI Views Snapshots · RobBensinger · 13 Dec 2023 0:45 UTC · 25 points · 0 comments · 1 min read · EA
Comments on Manheim’s “What’s in a Pause?” · RobBensinger · 18 Sep 2023 12:16 UTC · 71 points · 11 comments · 6 min read · EA
AGI ruin mostly rests on strong claims about alignment and deployment, not about society · RobBensinger · 24 Apr 2023 13:07 UTC · 16 points · 4 comments · 1 min read · EA
The basic reasons I expect AGI ruin · RobBensinger · 18 Apr 2023 3:37 UTC · 58 points · 13 comments · 1 min read · EA
Four mindset disagreements behind existential risk disagreements in ML · RobBensinger · 11 Apr 2023 4:53 UTC · 61 points · 2 comments · 9 min read · EA
Yudkowsky on AGI risk on the Bankless podcast · RobBensinger · 13 Mar 2023 0:42 UTC · 54 points · 2 comments · 75 min read · EA
Elements of Rationalist Discourse · RobBensinger · 14 Feb 2023 3:39 UTC · 68 points · 12 comments · 9 min read · EA
Be wary of enacting norms you think are unethical · RobBensinger · 18 Jan 2023 6:07 UTC · 147 points · 6 comments · 1 min read · EA
Thoughts on AGI organizations and capabilities work · RobBensinger · 7 Dec 2022 19:46 UTC · 77 points · 7 comments · 5 min read · EA
A challenge for AGI organizations, and a challenge for readers · RobBensinger · 1 Dec 2022 23:11 UTC · 172 points · 13 comments · 1 min read · EA
EA should blurt · RobBensinger · 22 Nov 2022 21:57 UTC · 155 points · 26 comments · 5 min read · EA
A common failure for foxes · RobBensinger · 14 Oct 2022 22:51 UTC · 22 points · 2 comments · 1 min read · EA
Have faux-evil EA energy · RobBensinger · 23 Aug 2022 22:55 UTC · 92 points · 11 comments · 1 min read · EA
The inordinately slow spread of good AGI conversations in ML · RobBensinger · 29 Jun 2022 4:02 UTC · 59 points · 2 comments · 8 min read · EA
Twitter-length responses to 24 AI alignment arguments · RobBensinger · 14 Mar 2022 19:34 UTC · 67 points · 17 comments · 8 min read · EA
AI views and disagreements AMA: Christiano, Ngo, Shah, Soares, Yudkowsky · RobBensinger · 1 Mar 2022 1:13 UTC · 30 points · 4 comments · 1 min read · EA · link (www.lesswrong.com)
Animal welfare EA and personal dietary options · RobBensinger · 5 Jan 2022 18:53 UTC · 17 points · 10 comments · 3 min read · EA
Conversation on technology forecasting and gradualism · RobBensinger · 9 Dec 2021 19:00 UTC · 15 points · 3 comments · 31 min read · EA