I’m willing to discuss this over Zoom, or face to face once I return to Israel in November.
I think my main points are:
We don’t seem to be anywhere near AGI. The amount of compute might very soon be enough, but we also need major theoretical breakthroughs.
Most extinction scenarios that I’ve read about or thought about require some amount of bad luck, at least if AGI is born out of the ML paradigm.
AGI is poorly defined, so it’s hard to reason about what it would do once it comes into existence, or whether you could even describe that as a binary event.
It seems unlikely that a malignant AI would succeed in deceiving us until it became capable of preventing us from shutting it off.
I’m not entirely convinced of any of them; I haven’t thought about this carefully.
Edit: there’s a doom scenario that I’m more worried about, and it doesn’t require AGI; that’s global domination by a tyrannical government.