Longtermism suggests a different focus within existential risks, because it feels very differently about "99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation" and "100% of humanity is destroyed, civilisation ends", even though from the perspective of people alive today these outcomes are very similar.
I think that, relative to neartermist intuitions about catastrophic risk, the particular focus on extinction increases the threat from AI and engineered biorisks relative to e.g. climate change and natural pandemics. Basically, total extinction is quite a high bar, and it is most easily reached by things deliberately attempting to reach it, as opposed to natural disasters, which don't tend to counter-adapt when some survive.
Longtermism also supports research into civilisational resilience measures, like bunkers, or research into how or whether civilisation could survive and rebuild after a catastrophe.
Longtermism also lowers the probability bar that an extinction risk has to reach before being worth taking seriously. I think this used to be a bigger part of the reason why people worked on x-risk when typical risk estimates were lower; over time, as risk estimates increased, longtermism became less necessary to justify working on them.
because it feels very differently about "99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation" and "100% of humanity is destroyed, civilisation ends"
Maybe? This depends on what you think about the probability that intelligent life re-evolves on earth (it seems likely to me) and how good you feel about the next intelligent species on earth vs humans.
the particular focus on extinction increases the threat from AI and engineered biorisks
IMO, most x-risk from AI probably doesn't come from literal human extinction but instead from AI systems acquiring most of the control over long-run resources while some/most/all humans survive, but fair enough.
Maybe? This depends on what you think about the probability that intelligent life re-evolves on earth (it seems likely to me) and how good you feel about the next intelligent species on earth vs humans.
Yeah, it seems possible to be a longtermist but not think that human extinction entails the loss of all hope; still, extinction seems more important to the longtermist than to the neartermist.
IMO, most x-risk from AI probably doesn't come from literal human extinction but instead from AI systems acquiring most of the control over long-run resources while some/most/all humans survive, but fair enough.
Valid. I guess longtermists and neartermists will also feel quite differently about this fate.
This is an interesting point, and I guess it's important to make, but it doesn't exactly answer the question I asked in the OP.
In 2013, Nick Bostrom gave a TEDx talk about existential risk where he argued that it matters so much because of the 10^umpteen future lives at stake. In the talk, Bostrom referenced even older work by Derek Parfit. (From a quick Google, the Parfit stuff on existential risk was from his book Reasons and Persons, published in 1984.)
I feel like people in the EA community only started talking about "longtermism" in the last few years, whereas they had been talking about existential risk many years prior to that.
Suppose I already bought into Bostrom's argument about existential risk and future people in 2013. Does longtermism have anything new to tell me?
I guess I think of caring about future people as the core of longtermism, so if you're already signed up to that, I would already call you a longtermist? I think most people aren't signed up for that, though.
I agree that if you're already bought into moral consideration for 10^umpteen future people, that's longtermism.