Thanks! That’s helpful.
Seems to me that at least 80,000 Hours still "goes to bat for longtermism" (e.g. it's very central in their resources on cause prioritisation).
Not sure why you think that no “‘EA leader’ however defined is going to bat for longtermism any more in the public sphere”.
Longtermism (or at least x-risk / GCRs as proxies for long-term impact) seems pretty crucial to various prioritisation decisions within AI and bio?
And longtermism seems unequivocally crucial to s-risk work and its justification, although that's a far smaller component of EA than x-risk work.
(No need to reply to these, just registering some things that seem surprising to me.)