Why on earth would you set 2017 as a cutoff? Language changes, there is nothing wrong with a word being coined for a concept, and then applied to uses of the concept that predate the word. That is usually how it goes. So I think your exclusion of existential risk is just wrong. The various interventions for existential risks, of which there are many, are the answer to your question.
If you’re saying that longtermism is not a novel idea, then I think we might agree.
Everything is relative to expectations. I tried to make that clear in the post, but let me try again. I think if something is pitched as a new idea, then it should be a new idea. If it's not a new idea, that should be made more clear. The kind of talk and activity I've observed around "longtermism" is incongruent with the notion that it's an idea that's at least decades and quite possibly many centuries old, about which much, if not most, if not all, the low-hanging fruit has already been plucked — if not in practice, then at least in research.
For instance, if you held that notion, you would probably not think the amount of resources — time, attention, money, etc. — that was reallocated around “longtermism” roughly in the 2017-2025 period would be justified, nor would the rhetoric around “longtermism” be justified.
You can find places where Will MacAskill says that longtermism is not a new idea, and references things like Nick Bostrom's previous work, the Long Now Foundation, and the Seventh Generation philosophy. That's all well and good. But What We Owe The Future and MacAskill's discussions of it, like on the 80,000 Hours Podcast, don't come across to me as a recapitulation of a decades-old or centuries-old idea. I also don't think the effective altruism community's energy around "longtermism" would have been what it's been if they genuinely saw longtermism as non-novel.
For example, MacAskill defines longtermism as “the idea that positively influencing the long-term future is a key moral priority of our time.” Why our time? Why not also the time of the founders of Oxford University 929 years ago or whenever it was? Sure, there’s the time of perils argument, but, objections to the time of perils argument aside, why would a time of perils-esque argument also apply to all the non-existential risk-related things like economic growth, making moral progress, and so on?
I'm not especially familiar with the history—I came to EA after the term "longtermism" was coined so that's just always been the vocabulary for me. But you seem to be equating an idea being chronologically old with it already being well studied and explored and the low-hanging fruit having been picked. You seem to think that old → not neglected. And that does not follow. I don't know how old the idea of longtermism is. I don't particularly care. It is certainly older than the word. But it does seem to be pretty much completely neglected outside EA, as well as important and, at least with regard to x-risks, tractable. That makes it an important EA cause area.
Wow, this makes me feel old, haha! (Feeling old feels much better than I thought it would. It’s good to be alive.)
There was a lot of scholarship on existential risks and global catastrophic risks going back to the 2000s. There was Nick Bostrom and the Future of Humanity Institute at Oxford, the Global Catastrophic Risks Conference (e.g. I love this talk from the 2008 conference), the Global Catastrophic Risks anthology published in 2008, and so on. So, existential risk/global catastrophic risk was an idea about which there had already been a lot of study even going back about a decade before the coining of “longtermism”. Imagine my disappointment when I hear about this hot new idea called longtermism — I love hot new ideas! — and it just turns out to be rewarmed existential risk.
I agree that it might be perfectly fine to re-brand old, good ideas, and give them a fresh coat of paint. Sure, go for it. But I’m just asking for a little truth in advertising here.
Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future" and "Far future-focused EAs" are on the map of Bay Area memespace (2013). This social and ideological cluster existed long before this exact name was coined to refer to it.
The only intervention discussed in relation to the far future at that first link is existential risk mitigation, which indeed has been a topic discussed within the EA community for a long time. My point is that if such discussions were happening as early as 2013 and, indeed, even earlier than that, and even before effective altruism existed, then that part of longtermism is not a new idea. (And none of the longtermist interventions that have been proposed other than those relating to existential risk are simultaneously novel, realistic, important, and genuinely motivated by longtermism.) Whether people care if longtermism is a new idea or not is, I guess, another matter.
MacAskill:

Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like ‘people interested in x-risk reduction’. There are a few reasons why this terminology isn’t ideal [...]
For these reasons, and with Toby Ord’s in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term ‘longtermism’, with the following definition:
People also talked about “astronomical waste” (per the Nick Bostrom paper) -- the idea that we should race to colonize the galaxy as quickly as possible because we’re losing literally a couple galaxies every second we delay. (But everyone seemed to agree that this wasn’t practical; racing to colonize the galaxy soonest would have all kinds of bad consequences that would cause the whole thing to backfire, etc.)
People since long before EA existed have been concerned about environmentalist causes like preventing species extinctions, based on a kind of emotional proto-longtermist feeling that “extinction is forever” and it isn’t right that humanity, for its short-term benefit, should cause irreversible losses to the natural world. (Similar “extinction is forever” thinking applies to the way that genocide—essentially seeking the extinction of a cultural / religious / racial / etc. group—is considered a uniquely terrible horror, worse than just killing an equal number of randomly-selected people.)
A lot of “improving institutional decision-making” style interventions make more and more sense as timelines get longer (since the improved institutions and better decisions have more time to snowball into better outcomes).
That’s a good and interesting point about environmentalism. I took an environmental philosophy class sometime in the early-to-mid-2010s and very long-term thinking was definitely part of the conversation. As in, thinking many centuries, millennia, or even millions of years in the future. One paper (published in 2010) we read imagined humans in the fourth millennium (i.e. from the year 3000 to 4000) living in “civilization reserves”, the inverse of wilderness reserves.
My problem with interventions like improving institutional decision-making is that we are already maximally motivated to do this based on neartermist concerns. Everyone wants governments and other powerful institutions to do a better job making decisions, to do as good a job as possible.
Let’s say you are alarmed about the Trump administration’s illiberalism or creeping authoritarianism in the United States. Does thinking about the future in 1,000 or 10,000 years actually motivate you to care about this more, to do more about it, to try harder? I don’t see how it would. Even if it did make you care a little bit more about it inside yourself, I don’t see how it would make a practical difference to what you do about it.
And taking such a long-term perspective might bring to mind all the nations and empires that have risen and fallen over the ages, and make you wonder if what happens this decade or the next might fade away just as easily. So, the effect on how much you care might be neutral, or it might make you care a little less. I don’t know — it depends on subjective gut intuition and each individual’s personal perspective.
Also, something like improving governments or institutions is a relay race where the baton is passed between generations, each of which makes its own contribution and has its own impact. Deflecting a big asteroid heading toward Earth is a way for a single organization like NASA to have a direct impact on the far future. But there are very few interventions of that kind. The clearest cases are existential risks or global catastrophic risks originating from natural sources, such as asteroids and pandemics. Every step you take to widen the circle of interventions you consider introduces more irreducible uncertainty and fundamental unpredictability.
I think asteroids and anti-asteroid interventions like NASA’s NEO Surveyor should be a global priority for governments and space agencies (and anyone else who can help). The total cost of solving like 95% of the problem (or whatever it is) is in the ballpark of the cost of building a bridge. I think people look at the asteroid example and think ‘ah, there must be a hundred more examples of things just like that’. But in reality it’s a very short list, something like: asteroids, pandemics, nuclear weapons, bioterror, climate change, and large volcanoes. And each of these varies a lot in terms of how neglected they are.
So, I think longtermism is an instance of taking a good idea — protect the world from asteroids for the price of building a bridge, and maybe a half a dozen other things like that, such as launching a satellite to observe volcanoes — and running with it way too far. I don’t think there is enough meat on this bone to constitute a worldview or a life philosophy that can be generally embraced (although hats off to the few who make keeping the world safe from asteroids or big volcanoes their life’s work). That, overall, is the mistake of effective altruism over the last decade: take one good idea or a few — like donating a lot of money to cost-effective global health charities — and try to turn it into an all-encompassing worldview or life philosophy. People are hungry for meaning in their lives; I get it, I am too. But there are healthier and unhealthier ways to pursue that, ways that are more constructive and ways that are more destructive.