So8res
Karma: 3,569
Posts (Page 2)
Focus on the places where you feel shocked everyone’s dropping the ball
So8res · Feb 2, 2023, 12:27 AM · 92 points · 6 comments · EA link
Alignment is mostly about making cognition aimable at all
So8res · Jan 30, 2023, 3:22 PM · 57 points · 3 comments · EA link
Thoughts on AGI organizations and capabilities work
RobBensinger · Dec 7, 2022, 7:46 PM · 77 points · 7 comments · 5 min read · EA link
Distinguishing test from training
So8res · Nov 29, 2022, 9:41 PM · 27 points · 0 comments · EA link
How could we know that an AGI system will have good consequences?
So8res · Nov 7, 2022, 10:42 PM · 25 points · 0 comments · EA link
Superintelligent AI is necessary for an amazing future, but far from sufficient
So8res · Oct 31, 2022, 9:16 PM · 35 points · 5 comments · EA link
Notes on “Can you control the past”
So8res · Oct 20, 2022, 3:41 AM · 15 points · 0 comments · EA link
Decision theory does not imply that we get to have nice things
So8res · Oct 18, 2022, 3:04 AM · 36 points · 0 comments · EA link
Contra shard theory, in the context of the diamond maximizer problem
So8res · Oct 13, 2022, 11:51 PM · 27 points · 0 comments · EA link
Niceness is unnatural
So8res · Oct 13, 2022, 1:30 AM · 20 points · 1 comment · EA link
Don’t leave your fingerprints on the future
So8res · Oct 8, 2022, 12:35 AM · 94 points · 4 comments · EA link
What does it mean for an AGI to be ‘safe’?
So8res · Oct 7, 2022, 4:43 AM · 53 points · 21 comments · EA link
Warning Shots Probably Wouldn’t Change The Picture Much
So8res · Oct 6, 2022, 5:15 AM · 93 points · 20 comments · 2 min read · EA link
Humans aren’t fitness maximizers
So8res · Oct 4, 2022, 1:32 AM · 30 points · 2 comments · 5 min read · EA link
Where I currently disagree with Ryan Greenblatt’s version of the ELK approach
So8res · Sep 29, 2022, 9:19 PM · 21 points · 0 comments · 5 min read · EA link
AGI ruin scenarios are likely (and disjunctive)
So8res · Jul 27, 2022, 3:24 AM · 53 points · 5 comments · 6 min read · EA link
Brainstorm of things that could force an AI team to burn their lead
So8res · Jul 25, 2022, 12:00 AM · 26 points · 1 comment · 13 min read · EA link
A note about differential technological development
So8res · Jul 24, 2022, 11:41 PM · 58 points · 8 comments · 6 min read · EA link
On how various plans miss the hard bits of the alignment challenge
So8res · Jul 12, 2022, 5:35 AM · 126 points · 13 comments · 29 min read · EA link
A central AI alignment problem: capabilities generalization, and the sharp left turn
So8res · Jun 15, 2022, 2:19 PM · 53 points · 2 comments · 10 min read · EA link