So8res · Karma: 3,505
A personal reflection on SBF · 7 Feb 2023 17:56 UTC · 321 points · 23 comments · 19 min read · EA
On Caring · 7 Oct 2014 5:12 UTC · 288 points · 14 comments · 10 min read · EA
On how various plans miss the hard bits of the alignment challenge · 12 Jul 2022 5:35 UTC · 125 points · 13 comments · 27 min read · EA
Comments on OpenAI’s “Planning for AGI and beyond” · 3 Mar 2023 23:01 UTC · 115 points · 7 comments · 1 min read · EA
Hooray for stepping out of the limelight · 1 Apr 2023 2:45 UTC · 103 points · 0 comments · 1 min read · EA
Don’t leave your fingerprints on the future · 8 Oct 2022 0:35 UTC · 93 points · 4 comments · 1 min read · EA
Focus on the places where you feel shocked everyone’s dropping the ball · 2 Feb 2023 0:27 UTC · 92 points · 6 comments · 1 min read · EA
Warning Shots Probably Wouldn’t Change The Picture Much · 6 Oct 2022 5:15 UTC · 91 points · 20 comments · 2 min read · EA
Enemies vs Malefactors · 28 Feb 2023 23:38 UTC · 84 points · 5 comments · 5 min read · EA
A note about differential technological development · 24 Jul 2022 23:41 UTC · 58 points · 8 comments · 5 min read · EA
Alignment is mostly about making cognition aimable at all · 30 Jan 2023 15:22 UTC · 57 points · 3 comments · 1 min read · EA
AGI ruin scenarios are likely (and disjunctive) · 27 Jul 2022 3:24 UTC · 53 points · 5 comments · 6 min read · EA
What does it mean for an AGI to be ‘safe’? · 7 Oct 2022 4:43 UTC · 53 points · 21 comments · 1 min read · EA
Altruistic Motivations · 4 Jan 2019 20:38 UTC · 52 points · 7 comments · 3 min read · EA · (mindingourway.com)
A central AI alignment problem: capabilities generalization, and the sharp left turn · 15 Jun 2022 14:19 UTC · 51 points · 2 comments · 7 min read · EA
Half-assing it with everything you’ve got · 13 Mar 2015 6:30 UTC · 49 points · 2 comments · 8 min read · EA · (mindingourway.com)
AI alignment researchers don’t (seem to) stack · 21 Feb 2023 0:48 UTC · 47 points · 3 comments · 1 min read · EA
But why would the AI kill us? · 17 Apr 2023 19:38 UTC · 45 points · 3 comments · 1 min read · EA
Misgeneralization as a misnomer · 6 Apr 2023 20:43 UTC · 45 points · 0 comments · 1 min read · EA
Thoughts on the AI Safety Summit company policy requests and responses · 31 Oct 2023 23:54 UTC · 42 points · 3 comments · 1 min read · EA