This is an interesting point, and I guess it’s important to make, but it doesn’t exactly answer the question I asked in the OP.
In 2013, Nick Bostrom gave a TEDx talk about existential risk where he argued that it matters so much because of the 10^umpteen future lives at stake. In the talk, Bostrom referenced even older work by Derek Parfit. (From a quick Google, Parfit's material on existential risk comes from his book Reasons and Persons, published in 1984.)
I feel like people in the EA community only started talking about “longtermism” in the last few years, whereas they had been talking about existential risk many years prior to that.
Suppose I already bought into Bostrom’s argument about existential risk and future people in 2013. Does longtermism have anything new to tell me?
I guess I think of caring about future people as the core of longtermism, so if you’re already signed up to that, I would already call you a longtermist? I think most people aren’t signed up for that, though.
I agree that if you’re already bought in to moral consideration for 10^umpteen future people, that’s longtermism.
Sorry for replying to this ancient post now. (I was looking at my old EA Forum posts after not being active on the forum for about a year.)
Here’s why this answer feels unsatisfying to me. An incredibly mainstream view is to care about everyone alive today and everyone who will be born in the next 100 years. I have to imagine over 90% of people in the world would agree with that view, or one very close to it, if you asked them.
That’s already a reason to care about existential risks and a reason people do care about what they perceive as existential risks or global catastrophic risks. It’s the reason most people who care about climate change care about climate change.
I don’t really know what the best way to express the most mainstream view(s) would be. I don’t think most people have tried to form a rigorous view on the ethics of far-future people. (I have a hard enough time translating my own intuitions into a rigorous view, even with exposure to academic philosophy and to these sorts of ideas.) But maybe we could conjecture that most people mentally apply a “discount rate” to future lives, so that they care less and less about future lives as the centuries stretch on, until at some point the weight reaches zero.
Future lives in the distant future (i.e. people born significantly later than 100 years from now) only make an actionable difference to existential risk when the estimated risk is so low that it takes accounting for 10^16 or 10^52 or however many hypothetical future lives to change the expected value math. That feels like an important insight to me, but its applicability feels limited.
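To make that expected-value point concrete, here’s a minimal sketch (in Python, with purely illustrative numbers I made up for this comment, not figures from Bostrom or anyone else) of the two framings: a discounted “mainstream” weighting of future lives versus an undiscounted longtermist sum over a huge number of hypothetical future lives.

```python
# Rough sketch contrasting a discounted "mainstream" view of future lives
# with an undiscounted longtermist expected-value calculation.
# All numbers are illustrative assumptions, not estimates from the post.

lives_per_century = 10e9          # assume ~10 billion people per century
discount_per_century = 0.5        # mainstream view: each century counts half as much
horizon_centuries = 50

# Mainstream-style view: the weight on future lives shrinks each century,
# so almost all of it sits in the next century or two.
discounted_value = sum(
    lives_per_century * discount_per_century**c for c in range(horizon_centuries)
)

# Longtermist view: no discounting, and a huge number of potential future lives.
potential_future_lives = 1e16     # the "10^16 or 10^52" figure from the post

# Suppose some intervention reduces extinction risk by one in a million.
risk_reduction = 1e-6

mainstream_ev = risk_reduction * discounted_value
longtermist_ev = risk_reduction * potential_future_lives

print(f"Discounted lives at stake:  {discounted_value:.2e}")
print(f"Mainstream expected value:  {mainstream_ev:.2e} lives saved")
print(f"Longtermist expected value: {longtermist_ev:.2e} lives saved")
# On these made-up numbers the discounted view yields roughly 2e4 lives in
# expectation, while the undiscounted longtermist sum yields roughly 1e10,
# i.e. the math only changes once the huge hypothetical numbers are counted.
```

The point of the sketch is just that the longtermist correction only matters when the probabilities are tiny enough that the ordinary (discounted) stakes stop carrying the argument on their own.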
So: people who don’t take a longtermist view of existential risk already have a good reason to care about existential risk.
Also: people who take a longtermist view of ethics don’t seem to have a good reason to think differently about any subject other than existential risk. At least, that’s the impression I get from trying to engage open-mindedly and charitably with this new idea of “longtermism”.
Ultimately, I’m still kind of annoyed (or at least perplexed) by “longtermism” being promoted as if it’s a new idea with broad applicability, when:
A longtermist view of existential risk has been promoted in discussions of existential risk for a very long time. Like, decades.
If longtermism is actionable for anything, it’s for existential risk and very little (if anything) else.
Most people are already bought into caring about existential risk for relatively “neartermist” reasons.
When I heard the hype about longtermism, I was expecting there to be more meat on the bone.