Thanks! That’s helpful.
Seems to me that at least 80,000 Hours still goes to "bat for longtermism" (e.g. it's very central in their resources on cause prioritisation).
Not sure why you think that no “‘EA leader’ however defined is going to bat for longtermism any more in the public sphere”.
Longtermism (or at least, x-risk / GCRs as proxies for long-term impact) seems pretty crucial to various prioritisation decisions within AI and bio?
And longtermism seems pretty clearly crucial to s-risk work and its justification, although that's a far smaller component of EA than x-risk work.
(No need to reply to these, just registering some things that seem surprising to me.)