Chris—this is all quite reasonable.
However, one could dispute ‘Premise 2: AGI has a reasonable chance of arriving in the next 30 or 40 years.’
Yes, absent any organized resistance, the AI industry will develop AGI (if AGI is possible), probably fairly quickly.
But, if enough people accept Premise 5 (likely catastrophe) and Premise 6 (we can make a difference), then we can prevent AGI from arriving.
In other words, the best way to make ‘AI go well’ may be to prevent AGI (or ASI) from happening at all.
Good point. I added in “by default”.
Also, I'd be keen to hear whether you think I should have restructured this argument in any other way?