This is an interesting point, and I guess it's important to make, but it doesn't exactly answer the question I asked in the OP.
In 2013, Nick Bostrom gave a TEDx talk about existential risk in which he argued that it matters so much because of the 10^umpteen future lives at stake. In the talk, Bostrom referenced even older work by Derek Parfit. (From a quick Google, Parfit's discussion of existential risk is in his book Reasons and Persons, published in 1984.)
I feel like people in the EA community only started talking about "longtermism" in the last few years, whereas they had been talking about existential risk many years prior to that.
Suppose I already bought into Bostrom's argument about existential risk and future people in 2013. Does longtermism have anything new to tell me?
I guess I think of caring about future people as the core of longtermism, so if you're already signed up to that, I would already call you a longtermist? I think most people aren't signed up for that, though.
I agree that if you're already bought into moral consideration for 10^umpteen future people, that's longtermism.
Sorry for replying to this ancient post now. (I was looking at my old EA Forum posts after not being active on the forum for about a year.)
Here's why this answer feels unsatisfying to me. An incredibly mainstream view is to care about everyone alive today and everyone who will be born in the next 100 years. I have to imagine over 90% of people in the world would agree with that view, or one very close to it, if you asked them.
That's already a reason to care about existential risks, and it's the reason people do care about what they perceive as existential risks or global catastrophic risks. It's the reason most people who care about climate change care about climate change.
I don't really know what the best way to express the most mainstream view(s) would be. I don't think most people have tried to form a rigorous view on the ethics of far-future people. (I have a hard enough time translating my own intuitions into a rigorous view, even with exposure to academic philosophy and to these sorts of ideas.) But maybe we could conjecture that most people mentally apply a "discount rate" to future lives, so that they care less and less about future lives as the centuries stretch on, and at some point their concern reaches zero.
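To make that conjecture slightly more concrete, here is one toy way it could be written down. (The exponential form and the 3% rate are purely my own illustrative assumptions; I'm not claiming anyone actually computes this.)

\[
w(t) = (1 + r)^{-t}, \qquad r = 0.03: \quad w(100) \approx 0.05, \qquad w(1000) \approx 1.4 \times 10^{-13} \approx 0,
\]

where \(w(t)\) is the weight placed on a life born \(t\) years from now. On numbers like these, lives a century out still count for something and lives a millennium out count for essentially nothing, which matches the intuition I'm gesturing at.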
Future lives in the distant future (i.e. people born significantly later than 100 years from now) only make an actionable difference to existential risk when the estimated risk is so low that the case for acting depends on counting the 10^16 or 10^52 or however many hypothetical future lives in the expected value math. That feels like an important insight to me, but its applicability feels limited.
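As a toy illustration of what I mean (every number here, the 10^-9 risk reduction, the roughly 8 billion people alive today, the 10^16 future lives, is made up purely for the sake of example): suppose some intervention reduces extinction risk by 10^-9. Then

\[
\underbrace{(8 \times 10^{9}) \times 10^{-9}}_{\text{counting only current lives}} \approx 8
\qquad \text{vs.} \qquad
\underbrace{10^{16} \times 10^{-9}}_{\text{counting hypothetical future lives}} = 10^{7}.
\]

Eight expected lives saved probably wouldn't justify a large program; ten million might. It's only in cases like this, where the estimated risk (or risk reduction) is tiny, that the distant-future term changes the answer.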
So: people who don't take a longtermist view of existential risk already have a good reason to care about existential risk.
Also: people who take a longtermist view of ethics don't seem to have a good reason to think differently about any subject other than existential risk. At least, that's the impression I get from trying to engage open-mindedly and charitably with this new idea of "longtermism".
Ultimately, I'm still kind of annoyed (or at least perplexed) by "longtermism" being promoted as if it's a new idea with broad applicability, when:
A longtermist view of existential risk has been promoted in discussions of existential risk for a very long time. Like, decades.
If longtermism is actionable for anything, it's for existential risk and very little (if anything) else.
Most people are already bought into caring about existential risk for relatively "neartermist" reasons.
When I heard the hype about longtermism, I was expecting there to be more meat on the bone.
I think we have an empirical disagreement here about how mainstream that view is. If I felt strongly motivated to try to persuade you about this, I would go try to find studies about it; I suspect we may not even have 90% agreement on "everyone alive today is worthy of moral concern", and I would strongly guess we don't have that level of agreement on caring about people who will be born 50 years from now. (Although I would also guess that many people just don't think about this kind of question very much and aren't guaranteed to have very clear or consistent answers.)
Even if people agreed with the premises, we could try to defend longtermism by arguing that the consequences of this belief are underexplored, though I hear you that you don't see a lot of neglected consequences.
At this point, though, I'm not actually that invested in trying to champion longtermism specifically, so I'm not the right person to defend it to you here. Let's fix x-risk and check in about it after that :)
I haven't looked at any surveys, but it seems universal to care about future generations. That doesn't mean people will necessarily act in a way that protects future generations' interests (it doesn't mean they won't pollute or deforest, for example), but the idea itself is not controversial and is widely accepted.
Similarly, I think it's basically universal to believe that all humans, in principle, have some value and have certain rights that should not be violated. In practice, though, factors like racism, xenophobia, hatred based on religious fundamentalism, and anti-LGBT hatred lead many people to dehumanize certain humans. There is typically an attempt to morally justify this, for example through appeals to "self-defense" or similar concepts.
If you apply strict standards to the belief that everyone alive today is worthy of moral concern, then some self-identified effective altruists would fail the test, since they hold dehumanizing views about Black people, LGBT people, women, etc.
That's getting into a different point than the one I was trying to make in the chunk of text you quoted, which is just that Will MacAskill didn't fall out of a coconut tree and come up with the idea that future generations matter yesterday. His university, Oxford, is over 900 years old. I believe that in his book on longtermism he cites the Iroquois principle of making decisions while considering how they will affect the next seven generations. Historically, many (most?) families on Earth have had close relationships between grandparents and grandchildren. Passing down tradition and transmitting culture (e.g., stories, rituals, moral principles) over long timescales is considered important in many cultures and religions.
There is a risk of a sort of plagiarism in this kind of discourse, where people take ideas that have existed for centuries or millennia across many parts of the world and package them as if they are novel, without adequately acknowledging the history of the ideas. That's like the effective altruist's or the ethical theorist's version of "not invented here".