“Longtermism is dead”: I feel quite confused about what the idea is here.
Is it that
(1) people no longer find the key claims underlying longtermism compelling?
(2) it seems irrelevant to influencing decisions?
(3) it seems less likely to be the best messaging strategy for motivating people to take specific actions?
(4) something else?
I’m also guessing this is just a general summary of the vibes and attitudes of people you’ve spoken to, but if there’s any evidence you could point to that demonstrates the overall point or any of the subpoints, I’d be pretty interested in seeing it.
(Responding to you, but Peter made a similar point.)
Thanks!
On the platonic/philosophical side I’m not sure; I think many EAs weren’t really bought into it to begin with, and the shift to longtermism was in various ways the effect of deference and/or cohort effects. In my case I find the epistemic/cluelessness challenge to longtermism/far-future effects pretty dispositive, but I’m just one person.
On the vibes side, I think the evidence is pretty damning:
The launch of WWOTF came at almost the worst possible time, and the idea now seems indelibly linked with SBF’s risky/naïve ethics and immoral actions.
Do a Google News or Twitter search for ‘longtermism’ in its EA context and the coverage is broadly to universally negative. The Google Trends data also points toward the term fading away.
No big EA org or “EA leader” however defined is going to bat for longtermism any more in the public sphere. The only people talking about it are the critics. When you get that kind of dynamic, it’s difficult to see how an idea can survive.
Even on the Forum, very little discussion seems to be based on ‘longtermism’ these days. People either seem to have left the Forum/EA, or longtermist concerns have been subsumed into AI/bio risk. Longtermism just seems superfluous to these discussions.
That’s just my personal read on things, though. But yeah, it seems very much like the SBF / community drama / OpenAI board triple whammy of Nov 2022–Nov 2023 sounded the death knell for longtermism, at least as the public-facing justification for EA.
Thanks! That’s helpful.
Seems to me that at least 80,000 Hours still “bats for longtermism” (e.g. it’s very central in their resources on cause prioritisation).
Not sure why you think that no “‘EA leader’ however defined is going to bat for longtermism any more in the public sphere”.
Longtermism (or at least x-risk / GCRs as proxies for long-term impact) seems pretty crucial to various prioritisation decisions within AI and bio?
And longtermism unequivocally seems pretty crucial to s-risk work and its justification, although that’s a far smaller component of EA than x-risk work.
(No need to reply to these, just registering some things that seem surprising to me.)