I don’t know if this can be answered in full generality.
I suppose it comes down to things like:
• Financial runway/back-up plans in case your prediction is wrong
• Importance of what you’re doing now
• Potential for impact in AI safety
I agree. Still, I think it could be a useful exercise to make the whole thing (ASI) less abstract.
I find it hard to reconcile that (1) I think we’re going to have AGI soon and (2) I haven’t made more significant life changes.
I don’t buy the argument that much shouldn’t change (at least, in my life).
Happy to talk that through if you’d like, though I’m kind of biased, so probably better to speak to someone who doesn’t have a horse in the race.