+1 to sharing lists of questions.
What signs do I need to look for to tell whether a model’s cognition has started to emerge?
I don’t know what ‘cognition emerging’ means. I suspect the concept is vague/confused.
What is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or eschatology?
Why would you want to explain the difference?
I’ve been asked this question! Or, to be specific, I’ve been asked something along these lines: human cultures have always speculated about the end of the world, so how is forecasting x-risk any different?