Longtermism suggests a different focus within existential risks, because it feels very differently about "99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation" and "100% of humanity is destroyed, civilisation ends", even though from the perspective of people alive today these outcomes are very similar.
I think that, relative to neartermist intuitions about catastrophic risk, the particular focus on extinction increases the threat from AI and engineered biorisks relative to e.g. climate change and natural pandemics. Basically, total extinction is quite a high bar, and it is most easily reached by things deliberately attempting to reach it; natural disasters don't tend to counter-adapt when some people survive.
Longtermism also supports research into civilisational resilience measures, like bunkers, or research into how or whether civilisation could survive and rebuild after a catastrophe.
Longtermism also lowers the probability bar that an extinction risk has to reach before being worth taking seriously. I think this used to be a bigger part of the reason why people worked on x-risk when typical risk estimates were lower; over time, as risk estimates increased, longtermism became less necessary to justify working on them.
because it feels very differently about "99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation" and "100% of humanity is destroyed, civilisation ends"
Maybe? This depends on what you think about the probability that intelligent life re-evolves on earth (it seems likely to me) and how good you feel about the next intelligent species on earth vs humans.
the particular focus on extinction increases the threat from AI and engineered biorisks
IMO, most x-risk from AI probably doesn't come from literal human extinction but instead from AI systems acquiring most of the control over long-run resources while some/most/all humans survive, but fair enough.
Maybe? This depends on what you think about the probability that intelligent life re-evolves on earth (it seems likely to me) and how good you feel about the next intelligent species on earth vs humans.
Yeah, it seems possible to be longtermist but not think that human extinction entails loss of all hope, but extinction still seems more important to the longtermist than the neartermist.
IMO, most x-risk from AI probably doesn't come from literal human extinction but instead from AI systems acquiring most of the control over long-run resources while some/most/all humans survive, but fair enough.
Valid. I guess longtermists and neartermists will also feel quite differently about this fate.
Conditional on human extinction, do you expect intelligent life to re-evolve with levels of autonomy similar to what humanity has now (which seems quite important for assessing how bad human extinction would be on longtermist grounds)? I don't think it's likely.
Maybe the underlying crux (if your intuition differs) is what proportion of human extinction scenarios (not including non-extinction x-risk) involve intelligent/agentic AIs, and/or other conditions which would significantly limit the potential of new intelligent life even if it did re-emerge. My current low-resilience impression is probably 90+%.
And the above considerations and credences make the question of how good the next intelligent species would be vs. humans fairly inconsequential.
This is an interesting point, and I guess it's important to make, but it doesn't exactly answer the question I asked in the OP.
In 2013, Nick Bostrom gave a TEDx talk about existential risk where he argued that it's so important to care about because of the 10^umpteen future lives at stake. In the talk, Bostrom referenced even older work by Derek Parfit. (From a quick Google, the Parfit stuff on existential risk was from his book Reasons and Persons, published in 1984.)
I feel like people in the EA community only started talking about "longtermism" in the last few years, whereas they had been talking about existential risk many years prior to that.
Suppose I already bought into Bostrom's argument about existential risk and future people in 2013. Does longtermism have anything new to tell me?
I guess I think of caring about future people as the core of longtermism, so if you're already signed up to that, I would already call you a longtermist? I think most people aren't signed up for that, though.
Sorry for replying to this ancient post now. (I was looking at my old EA Forum posts after not being active on the forum for about a year.)
Here's why this answer feels unsatisfying to me. An incredibly mainstream view is to care about everyone alive today and everyone who will be born in the next 100 years. I have to imagine over 90% of people in the world would agree to that view or a view very close to that if you asked them.
That's already a reason to care about existential risks and a reason people do care about what they perceive as existential risks or global catastrophic risks. It's the reason most people who care about climate change care about climate change.
I don't really know what the best way to express the most mainstream view(s) would be. I don't think most people have tried to form a rigorous view on the ethics of far future people. (I have a hard enough time translating my own intuitions into a rigorous view, even with exposure to academic philosophy and to these sorts of ideas.) But maybe we could conjecture that most people mentally apply a "discount rate" to future lives, so that they care less and less about future lives as the centuries stretch into the future, and at some point it reaches zero.
Future lives in the distant future (i.e. people born significantly later than 100 years from now) only make an actionable difference to existential risk work when the estimated risk is so low that it takes accounting for 10^16 or 10^52 or however many hypothetical future lives to change the expected value math. That feels like an important insight to me, but its applicability feels limited.
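The expected-value point here can be sketched with a toy calculation. All numbers below are hypothetical and purely illustrative (not estimates anyone in this thread has made); the model just shows how a time horizon and a per-century "discount rate" on future lives change the arithmetic.

```python
# Toy model: expected lives saved by a small reduction in extinction risk,
# under different assumptions about how far moral concern extends into the
# future and whether future lives are discounted. All numbers are made up.

def expected_lives_saved(lives_per_century, centuries, discount_per_century, risk_reduction):
    """Sum moral weight over future centuries, then scale by the
    reduction in extinction probability."""
    total = 0.0
    weight = 1.0
    for _ in range(centuries):
        total += weight * lives_per_century
        weight *= 1.0 - discount_per_century
    return risk_reduction * total

# "Neartermist" horizon: only the next century counts.
print(expected_lives_saved(10**10, 1, 0.0, 0.001))        # 1e7

# Longtermist horizon, no discounting: the far future dominates the total.
print(expected_lives_saved(10**10, 10_000, 0.0, 0.001))   # 1e11

# Even a 1%-per-century discount effectively caps the horizon at ~100 centuries.
print(expected_lives_saved(10**10, 10_000, 0.01, 0.001))  # ~1e9
```

The point of the sketch: with no discounting, the expected value grows roughly linearly in the horizon, so astronomical future populations swamp the near-term term; with any nonzero discount rate, the sum converges and the far future stops mattering much, which matches the conjecture above about mainstream intuitions.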
So: people who donât take a longtermist view of existential risk already have a good reason to care about existential risk.
Also: people who take a longtermist view of ethics don't seem to have a good reason to think differently about any subject other than existential risk. At least, that's the impression I get from trying to engage open-mindedly and charitably with this new idea of "longtermism".
Ultimately, I'm still kind of annoyed (or at least perplexed) by "longtermism" being promoted as if it's a new idea with broad applicability, when:
A longtermist view of existential risk has been promoted in discussions of existential risk for a very long time. Like, decades.
If longtermism is actionable for anything, it's for existential risk and very little (if anything) else.
Most people are already bought in to caring about existential risk for relatively "neartermist" reasons.
When I heard the hype about longtermism, I was expecting there to be more meat on the bone.
An incredibly mainstream view is to care about everyone alive today and everyone who will be born in the next 100 years. I have to imagine over 90% of people in the world would agree to that view or a view very close to that if you asked them.
I think we have an empirical disagreement here. If I felt strongly motivated to try to persuade you about this, I would go try to find studies about it; I suspect we may not even have 90% agreement on "everyone alive today is worthy of moral concern", and I would strongly guess we don't have that level of agreement on caring about people who will be born 50 years from now. (Although I would also guess that many people just don't think about this kind of question very much and aren't guaranteed to have very clear or consistent answers.)
Even if people agreed with the premises, we could try to justify longtermism as arguing that the consequences of this belief are underexplored, though I hear you that you don't see a lot of neglected consequences.
At this point, though, I'm not actually that invested in trying to champion longtermism specifically, so I'm not the right person to defend it to you here. Let's fix x-risk and check in about it after that :)
I haven't looked at any surveys, but it seems universal to care about future generations. This doesn't mean people will necessarily act in a way that protects future generations' interests (they may still pollute or deforest, for example), but the idea is not controversial and is widely accepted.
Similarly, I think it's basically universal to believe that all humans, in principle, have some value and have certain rights that should not be violated, but then, in practice, factors like racism, xenophobia, hatred based on religious fundamentalism, anti-LGBT hatred, etc. lead many people to dehumanize certain humans. There is typically an attempt to morally justify this, though, for example through appeals to "self-defense" (or similar concepts).
If you apply strict standards to the belief that everyone alive today is worthy of moral concern, then some self-identified effective altruists would fail the test, since they hold dehumanizing views about Black people, LGBT people, women, etc.
That's getting into a different point than the one I was trying to make in the chunk of text you quoted, which is just that Will MacAskill didn't fall out of a coconut tree and come up with the idea that future generations matter yesterday. His university, Oxford, is over 900 years old. I believe in his longtermism book he cites the Iroquois principle of making decisions while considering how they will affect the next seven generations. Historically, many (most?) families on Earth have had close relationships between grandparents and grandchildren. Passing down tradition and transmitting culture (e.g., stories, rituals, moral principles) over long timescales is considered important in many cultures and religions.
There is a risk of a sort of plagiarism with this kind of discourse, where people take ideas that have existed for centuries or millennia across many parts of the world and then package them as if they are novel, without adequately acknowledging the history of the ideas. That's like the effective altruist's or the ethical theorist's version of "not invented here".
I agree that if you're already bought in to moral consideration for 10^umpteen future people, that's longtermism.