Yes. One of the "Four Focus Areas of Effective Altruism" (2013) was "The Long-Term Future", and "Far future-focused EAs" are on the map of Bay Area memespace (2013). This social and ideological cluster existed long before this exact name was coined to refer to it.
The only intervention discussed in relation to the far future at that first link is existential risk mitigation, which indeed has been a topic discussed within the EA community for a long time. My point is that if such discussions were happening as early as 2013 and, indeed, even earlier than that, and even before effective altruism existed, then that part of longtermism is not a new idea. (And none of the longtermist interventions that have been proposed other than those relating to existential risk are novel, realistic, important, and genuinely motivated by longtermism.) Whether people care if longtermism is a new idea or not is, I guess, another matter.
MacAskill:

Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like "people interested in x-risk reduction". There are a few reasons why this terminology isn't ideal [...]
For these reasons, and with Toby Ord's in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term "longtermism", with the following definition:
People also talked about "astronomical waste" (per the Nick Bostrom paper): the idea that we should race to colonize the galaxy as quickly as possible because we're losing literally a couple of galaxies every second we delay. (But everyone seemed to agree that this wasn't practical: racing to colonize the galaxy soonest would have all kinds of bad consequences that would cause the whole thing to backfire, etc.)
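For a sense of the scale behind the "astronomical waste" framing: Bostrom's paper estimates on the order of 10^38 potential human lives lost for every century that colonization of the Virgo Supercluster is delayed (treat the exact figure as illustrative, not precise). A quick sanity-check conversion to a per-second rate:

```python
# Back-of-envelope conversion of Bostrom's "astronomical waste" figure.
# Assumption: ~1e38 potential lives lost per century of delayed
# colonization of the Virgo Supercluster (the paper's rough estimate).
LIVES_LOST_PER_CENTURY = 1e38
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600  # ~3.16e9 seconds

lives_lost_per_second = LIVES_LOST_PER_CENTURY / SECONDS_PER_CENTURY
print(f"{lives_lost_per_second:.1e}")  # prints 3.2e+28
```

On these assumptions, each second of delay costs roughly 3 x 10^28 potential lives, which is why the argument felt so forceful to people even as they agreed the race-to-colonize conclusion was impractical.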
People since long before EA existed have been concerned about environmentalist causes like preventing species extinctions, based on a kind of emotional proto-longtermist feeling that "extinction is forever" and that it isn't right for humanity, for its short-term benefit, to cause irreversible losses to the natural world. (Similar "extinction is forever" thinking applies to the way that genocide, essentially seeking the extinction of a cultural / religious / racial / etc. group, is considered a uniquely terrible horror, worse than just killing an equal number of randomly selected people.)
A lot of "improving institutional decisionmaking"-style interventions make more and more sense as timelines get longer (since the improved institutions and better decisions have more time to snowball into better outcomes).
That's a good and interesting point about environmentalism. I took an environmental philosophy class sometime in the early-to-mid-2010s, and very long-term thinking was definitely part of the conversation. As in, thinking many centuries, millennia, or even millions of years into the future. One paper we read (published in 2010) imagined humans in the fourth millennium (i.e., from the year 3000 to 4000) living in "civilization reserves", the inverse of wilderness reserves.
My problem with interventions like improving institutional decision-making is that we are already maximally motivated to do this based on neartermist concerns. Everyone wants governments and other powerful institutions to do a better job of making decisions, to do as good a job as possible.
Let's say you are alarmed about the Trump administration's illiberalism or creeping authoritarianism in the United States. Does thinking about the future in 1,000 or 10,000 years actually motivate you to care about this more, to do more about it, to try harder? I don't see how it would. Even if it did make you care a little bit more about it inside yourself, I don't see how it would make a practical difference to what you do about it.
And taking such a long-term perspective might bring to mind all the nations and empires that have risen and fallen over the ages, and make you wonder if what happens this decade or the next might fade away just as easily. So the effect on how much you care might be neutral, or it might make you care a little less. I don't know; it depends on subjective gut intuition and each individual's personal perspective.
Also, something like improving governments or institutions is a relay race where the baton is passed between generations, each of which makes its own contribution and has its own impact. Deflecting a big asteroid heading toward Earth is a way for a single organization like NASA to have a direct impact on the far future. But there are very few interventions of that kind. The clearest cases are existential risks or global catastrophic risks originating from natural sources, such as asteroids and pandemics. Every step you take to widen the circle of interventions you consider introduces more irreducible uncertainty and fundamental unpredictability.
I think asteroids and anti-asteroid interventions like NASA's NEO Surveyor should be a global priority for governments and space agencies (and anyone else who can help). The total cost of solving like 95% of the problem (or whatever it is) is in the ballpark of the cost of building a bridge. I think people look at the asteroid example and think "ah, there must be a hundred more examples of things just like that". But in reality it's a very short list, something like: asteroids, pandemics, nuclear weapons, bioterror, climate change, and large volcanoes. And each of these varies a lot in terms of how neglected it is.
So, I think longtermism is an instance of taking a good idea (protect the world from asteroids for the price of building a bridge, plus maybe half a dozen other things like that, such as launching a satellite to observe volcanoes) and running with it way too far. I don't think there is enough meat on this bone to constitute a worldview or a life philosophy that can be generally embraced (although hats off to the few who work on keeping the world safe from asteroids or big volcanoes). That, overall, is the mistake of effective altruism over the last decade: take one good idea or a few, like donating a lot of money to cost-effective global health charities, and try to turn it into an all-encompassing worldview or life philosophy. People are hungry for meaning in their lives. I get it; I am too. But there are healthier and unhealthier ways to pursue that, ways that are more constructive and more destructive.