Sorry for replying to this ancient post now. (I was looking at my old EA Forum posts after not being active on the forum for about a year.)
Here’s why this answer feels unsatisfying to me. An incredibly mainstream view is to care about everyone alive today and everyone who will be born in the next 100 years. I have to imagine over 90% of people in the world would agree to that view or a view very close to that if you asked them.
That’s already a reason to care about existential risks and a reason people do care about what they perceive as existential risks or global catastrophic risks. It’s the reason most people who care about climate change care about climate change.
I don’t really know what the best way to express the most mainstream view(s) would be. I don’t think most people have tried to form a rigorous view on the ethics of far future people. (I have a hard enough time translating my own intuitions into a rigorous view, even with exposure to academic philosophy and to these sorts of ideas.) But maybe we could conjecture that most people mentally apply a “discount rate” to future lives, so that they care less and less about future lives as the centuries stretch into the future, and at some point it reaches zero.
Lives in the distant future (i.e. people born significantly more than 100 years from now) only make an actionable difference to existential risk when the estimated risk is so low that the expected value math only comes out favourable once you account for 10^16 or 10^52 or however many hypothetical future lives. That feels like an important insight to me, but its applicability feels limited.
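To make that expected value point concrete, here's a toy calculation (the numbers are mine and purely illustrative, not from any particular longtermist source):

```python
# Toy illustration: expected lives saved by fully eliminating an
# existential risk, under two different population counts.

def expected_lives_saved(risk_probability, lives_at_stake):
    """Expected value (in lives) of removing the risk entirely."""
    return risk_probability * lives_at_stake

NEAR_TERM_LIVES = 8e9      # roughly everyone alive today
FAR_FUTURE_LIVES = 1e16    # an astronomical future-lives figure, for illustration

# At a substantial risk level, the near-term case alone is compelling:
print(expected_lives_saved(0.01, NEAR_TERM_LIVES))    # 80 million lives in expectation

# At a tiny risk level, the near-term case evaporates, and only the
# astronomical future population keeps the expected value large:
print(expected_lives_saved(1e-9, NEAR_TERM_LIVES))    # 8 lives in expectation
print(expected_lives_saved(1e-9, FAR_FUTURE_LIVES))   # 10 million lives in expectation
```

So the longtermist multiplier only changes the practical verdict in the regime where the estimated risk is tiny; at any substantial risk level, present and near-future lives already dominate the decision.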
So: people who don’t take a longtermist view of existential risk already have a good reason to care about existential risk.
Also: people who take a longtermist view of ethics don’t seem to have a good reason to think differently about any subject other than existential risk. At least, that’s the impression I get from trying to engage open-mindedly and charitably with this new idea of “longtermism”.
Ultimately, I’m still kind of annoyed (or at least perplexed) by “longtermism” being promoted as if it’s a new idea with broad applicability, when:
1. A longtermist view of existential risk has been promoted in discussions of existential risk for a very long time. Like, decades.
2. If longtermism is actionable for anything, it’s for existential risk and very little (if anything) else.
3. Most people are already bought into caring about existential risk for relatively “neartermist” reasons.
When I heard the hype about longtermism, I was expecting there to be more meat on the bone.