I have not researched longtermism deeply. However, what I have found out so far leaves me puzzled and skeptical. As I currently see it, you can divide what longtermism cares about into two categories:
1) Existential risk.
2) Common sense long-term priorities, such as:
- economic growth
- environmentalism
- scientific and technological progress
- social and moral progress
Existential risk isn’t a new idea (it predates longtermism), and economic growth, environmentalism, and societal progress aren’t new ideas either. Suppose I already care a lot about low-probability existential catastrophes and I already buy into common sense ideas about sustainability, growth, and progress. Does longtermism have anything new to tell me?
Longtermism suggests a different focus within existential risks, because it treats “99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation” very differently from “100% of humanity is destroyed, civilisation ends”, even though from the perspective of people alive today these outcomes are very similar.
I think that, relative to neartermist intuitions about catastrophic risk, the particular focus on extinction raises the priority of AI and engineered biorisks relative to e.g. climate change and natural pandemics. Basically, total extinction is quite a high bar, most easily reached by something deliberately trying to reach it; natural disasters don’t tend to counter-adapt when some people survive.
Longtermism also supports research into civilisational resilience measures, like bunkers, or research into how or whether civilisation could survive and rebuild after a catastrophe.
Longtermism also lowers the probability bar that an extinction risk has to clear before being worth taking seriously. I think this used to be a bigger part of why people worked on x-risk when typical risk estimates were lower; over time, as risk estimates increased, longtermism became less necessary to justify working on them.
Maybe? This depends on what you think about the probability that intelligent life re-evolves on earth (it seems likely to me) and how good you feel about the next intelligent species on earth vs humans.
IMO, most x-risk from AI probably comes not from literal human extinction but from AI systems acquiring most of the control over long-run resources while some/most/all humans survive, but fair enough.
Yeah, it seems possible to be a longtermist without thinking that human extinction entails the loss of all hope, but extinction still seems more important to the longtermist than to the neartermist.
Valid. I guess longtermists and neartermists will also feel quite differently about this fate.
This is an interesting point, and I guess it’s important to make, but it doesn’t exactly answer the question I asked in the OP.
In 2013, Nick Bostrom gave a TEDx talk about existential risk where he argued that it matters so much because of the 10^umpteen future lives at stake. In the talk, Bostrom referenced even older work by Derek Parfit. (From a quick Google, the Parfit stuff on existential risk was from his book Reasons and Persons, published in 1984.)
I feel like people in the EA community only started talking about “longtermism” in the last few years, whereas they had been talking about existential risk many years prior to that.
Suppose I already bought into Bostrom’s argument about existential risk and future people in 2013. Does longtermism have anything new to tell me?
I guess I think of caring about future people as the core of longtermism, so if you’re already signed up to that, I would already call you a longtermist? I think most people aren’t signed up for that, though.
I agree that if you’re already bought in to moral consideration for 10^umpteen future people, that’s longtermism.
One takeaway, I think, is that these things that already seem good under common sense become much more important on the longtermist view. For example, I think a longtermist would want extinction risk to be much lower than a commonsense view would demand.
Does this apply to things other than existential risk?
Yes. I think the commonsense priorities on your list are even more beneficial on the longtermist view. Factors like “would this have happened anyway, just a bit later?” may still apply and reduce the impact of any given intervention. Then again, notions like “the sooner we start expanding, the more of the universe we can reach” could be an argument that sooner is better for economic growth too.