I’ve upvoted this comment, but weakly disagree that there’s such a shift happening (EVF orgs still seem to be selecting pretty heavily for longtermist projects, the Global Health and Development Fund has been discontinued while the LTFF is still around, etc.), and quite strongly disagree that it would be bad if it is:
From a longtermist (~totalist classical utilitarian) perspective, there’s a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case, but not the latter.
That ‘if’ clause is doing a huge amount of work here. In practice I think the EA community is far too sanguine about our prospects of becoming interstellar after a civilisational collapse (which, from a longtermist perspective, is what matters, not ‘recovery’ per se). I’ve written a sequence on this here, and have a calculator, described in post 3 here, which lets you explore the simple model’s implications given your own beliefs, with an implementation of the more complex model available on the repo. As Titotal wrote in another reply, it’s easy to believe ‘lesser’ catastrophes are many times more likely, so they could very well be where the main expected loss of value lies.
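To make that intuition concrete, here’s a toy expected-value sketch in Python. To be clear, this is not the model from my sequence, and every probability is made up purely for illustration:

```python
# Toy comparison (illustrative numbers only, not the model from the sequence):
# where does the expected loss of long-term value come from if 'lesser'
# catastrophes are far more likely than outright extinction, but recovery to
# an interstellar trajectory after collapse is uncertain?

p_extinction = 0.001    # hypothetical chance of a catastrophe killing everyone
p_collapse = 0.05       # hypothetical chance of a collapse that leaves survivors
p_interstellar_given_collapse = 0.4  # hypothetical chance of still becoming interstellar after collapse

# Normalise the value of an uninterrupted interstellar future to 1.
loss_from_extinction = p_extinction * 1.0
loss_from_collapse = p_collapse * (1.0 - p_interstellar_given_collapse)

print(f"Expected loss from extinction-level catastrophes: {loss_from_extinction:.4f}")
print(f"Expected loss from collapse-level catastrophes:   {loss_from_collapse:.4f}")
```

With those made-up numbers, collapse-level catastrophes account for about thirty times as much expected loss as extinction does, which is the sense in which the ‘if humanity recovers’ clause is doing so much work.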
From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn’t differentiate between “humanity prevents GCRs and realises 1% of its potential” and “humanity prevents GCRs and realises 99% of its potential”
I think I agree with this, but draw a different conclusion. Longtermist work has focused heavily on existential risk, and in practice on the risk of extinction, IMO seriously dropping the ball on trajectory changes with little more justification than that the latter are hard to think about. As a consequence it has ignored what seems to me a very real loss of expected value from lesser catastrophes, and a to-me-plausible increase in it from interventions designed to make people’s lives better (which generally get lumped in as ‘shorttermist’). If people are now starting to take other catastrophic risks more seriously, that might be remedied. (This is also relevant to your 3rd and 4th points.)
From a “current generations” perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today
This seems to treat ‘focus only on current generations’ and ‘focus on Pascalian arguments for astronomical value in the distant future’ as the only two reasonable views. David Thorstad has written a lot, I think very reasonably, about reasons why the expected value of longtermist scenarios might actually be quite low; one can accept that and still have considerable concern for the next few generations.
From a general virtue ethics / integrity perspective, making this change on PR / marketing reasons alone—without an underlying change in longtermist motivation—feels somewhat deceptive.
Counterpoint: I think the discourse before the purported shift to GCRs was substantially more dishonest. Nanda and Alexander’s posts argued that we should talk about x-risk rather than longtermism on the grounds that it might kill you and everyone you know, which is very misleading if you only seriously consider catastrophes that kill 100% of people, and ignore (or conceivably even promote) those that leave >0.01% behind (which, judging by Luisa Rodriguez’s work, is around the survival threshold below which EAs would typically consider something an existential catastrophe).
I basically read Zabel’s post as doing the same: not as desiring a shift to GCR focus, but as wanting to present the work that way, saying ‘I’d guess that if most of us woke up without our memories here in 2022 [now 2023], and the arguments about potentially imminent existential risks were called to our attention, it’s unlikely that we’d re-derive EA and philosophical longtermism as the main and best onramp to getting other people to work on that problem’ (emphasis mine).
Nanda, Alexander and Zabel’s posts all left a very bad taste in my mouth for exactly that reason.
There’s something fairly disorienting about the community switching so quickly from [quite aggressive] “yay longtermism!” (e.g. much hype around launch of WWOTF) to essentially disowning the word longtermism, with very little mention / admission that this happened or why
This is as much an argument that we made a mistake in ever focusing on longtermism as that we shouldn’t now shift away from it. Oliver Habryka (can’t find the link offhand) and Kelsey Piper are two EAs who’ve publicly expressed discomfort with the level of artificial support WWOTF received, and while I’m much less notable, I’m happy to add myself to the list of people uncomfortable with the business, especially since at the time its author was a trustee of the charity that was doing so much to promote his career.