So8res · Karma: 3,537
Quick takes on “AI is easy to control”
So8res · 2 Dec 2023 22:33 UTC · −12 points · 4 comments · 1 min read
Apocalypse insurance, and the hardline libertarian take on AI risk
So8res · 28 Nov 2023 2:09 UTC · 21 points · 0 comments · 1 min read
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
So8res · 24 Nov 2023 17:37 UTC · 38 points · 1 comment · 1 min read
Thoughts on the AI Safety Summit company policy requests and responses
So8res · 31 Oct 2023 23:54 UTC · 42 points · 3 comments · 1 min read
AI as a science, and three obstacles to alignment strategies
So8res · 25 Oct 2023 21:02 UTC · 41 points · 1 comment · 1 min read
But why would the AI kill us?
So8res · 17 Apr 2023 19:38 UTC · 45 points · 3 comments · 1 min read
Misgeneralization as a misnomer
So8res · 6 Apr 2023 20:43 UTC · 48 points · 0 comments · 1 min read
If interpretability research goes well, it may get dangerous
So8res · 3 Apr 2023 21:48 UTC · 33 points · 0 comments · 1 min read
Hooray for stepping out of the limelight
So8res · 1 Apr 2023 2:45 UTC · 103 points · 0 comments · 1 min read
A rough and incomplete review of some of John Wentworth’s research
So8res · 28 Mar 2023 18:52 UTC · 27 points · 0 comments · 1 min read
A stylized dialogue on John Wentworth’s claims about markets and optimization
So8res · 25 Mar 2023 22:32 UTC · 18 points · 0 comments · 1 min read
Truth and Advantage: Response to a draft of “AI safety seems hard to measure”
So8res · 22 Mar 2023 3:36 UTC · 11 points · 0 comments · 1 min read
Deep Deceptiveness
So8res · 21 Mar 2023 2:51 UTC · 40 points · 1 comment · 1 min read
Comments on OpenAI’s “Planning for AGI and beyond”
So8res · 3 Mar 2023 23:01 UTC · 115 points · 7 comments · 1 min read
Enemies vs Malefactors
So8res · 28 Feb 2023 23:38 UTC · 85 points · 5 comments · 5 min read
AI alignment researchers don’t (seem to) stack
So8res · 21 Feb 2023 0:48 UTC · 47 points · 3 comments · 1 min read
A personal reflection on SBF
So8res · 7 Feb 2023 17:56 UTC · 321 points · 23 comments · 19 min read
Focus on the places where you feel shocked everyone’s dropping the ball
So8res · 2 Feb 2023 0:27 UTC · 92 points · 6 comments · 1 min read
Alignment is mostly about making cognition aimable at all
So8res · 30 Jan 2023 15:22 UTC · 57 points · 3 comments · 1 min read
Thoughts on AGI organizations and capabilities work
RobBensinger · 7 Dec 2022 19:46 UTC · 77 points · 7 comments · 5 min read