The only intervention discussed in relation to the far future at that first link is existential risk mitigation, which has indeed been a topic of discussion within the EA community for a long time. My point is that if such discussions were happening as early as 2013 and, indeed, even earlier than that, and even before effective altruism existed, then that part of longtermism is not a new idea. (And none of the longtermist interventions that have been proposed, other than those relating to existential risk, are simultaneously novel, realistic, important, and genuinely motivated by longtermism.) Whether people care if longtermism is a new idea or not is, I guess, another matter.
MacAskill:

Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like "people interested in x-risk reduction". There are a few reasons why this terminology isn't ideal [...]

For these reasons, and with Toby Ord's in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term "longtermism", with the following definition:
People also talked about "astronomical waste" (per the Nick Bostrom paper) -- the idea that we should race to colonize the galaxy as quickly as possible because we're losing literally a couple of galaxies every second we delay. (But everyone seemed to agree that this wasn't practical; racing to colonize the galaxy soonest would have all kinds of bad consequences that would cause the whole thing to backfire, etc.)
People since long before EA existed have been concerned about environmentalist causes like preventing species extinctions, based on a kind of emotional proto-longtermist feeling that "extinction is forever" and it isn't right that humanity, for its short-term benefit, should cause irreversible losses to the natural world. (Similar "extinction is forever" thinking applies to the way that genocide, essentially seeking the extinction of a cultural / religious / racial / etc. group, is considered a uniquely terrible horror, worse than just killing an equal number of randomly selected people.)
A lot of "improving institutional decisionmaking" style interventions make more and more sense as timelines get longer (since the improved institutions and better decisions have more time to snowball into better outcomes).
That's a good and interesting point about environmentalism. I took an environmental philosophy class sometime in the early-to-mid-2010s and very long-term thinking was definitely part of the conversation. As in, thinking many centuries, millennia, or even millions of years into the future. One paper we read (published in 2010) imagined humans in the fourth millennium (i.e., from the year 3000 to 4000) living in "civilization reserves", the inverse of wilderness reserves.
My problem with interventions like improving institutional decision-making is that we are already maximally motivated to do this based on neartermist concerns. Everyone wants governments and other powerful institutions to do a better job of making decisions, to do as good a job as possible.
Let's say you are alarmed about the Trump administration's illiberalism or creeping authoritarianism in the United States. Does thinking about the world 1,000 or 10,000 years in the future actually motivate you to care about this more, to do more about it, to try harder? I don't see how it would. Even if it did make you care a little bit more about it inside yourself, I don't see how it would make a practical difference to what you do about it.
And taking such a long-term perspective might bring to mind all the nations and empires that have risen and fallen over the ages, and make you wonder if what happens this decade or the next might fade away just as easily. So, the effect on how much you care might be neutral, or it might make you care a little less. I don't know; it depends on gut intuition and each individual's personal perspective.
Also, something like improving governments or institutions is a relay race where the baton is passed between generations, each of which makes its own contribution and has its own impact. Deflecting a big asteroid heading toward Earth is a way for a single organization like NASA to have a direct impact on the far future. But there are very few interventions of that kind. The clearest cases are existential risks or global catastrophic risks originating from natural sources, such as asteroids and pandemics. Every step you take to widen the circle of interventions you consider introduces more irreducible uncertainty and fundamental unpredictability.
I think asteroids and anti-asteroid interventions like NASA's NEO Surveyor should be a global priority for governments and space agencies (and anyone else who can help). The total cost of solving like 95% of the problem (or whatever it is) is in the ballpark of the cost of building a bridge. I think people look at the asteroid example and think "ah, there must be a hundred more examples of things just like that". But in reality it's a very short list, something like: asteroids, pandemics, nuclear weapons, bioterror, climate change, and large volcanoes. And these vary a lot in terms of how neglected they are.
So, I think longtermism is an instance of taking a good idea (protect the world from asteroids for the price of building a bridge, plus maybe a half dozen other things like that, such as launching a satellite to observe volcanoes) and running with it way too far. I don't think there is enough meat on this bone to constitute a worldview or a life philosophy that can be generally embraced (although hats off to the few who work on keeping the world safe from asteroids or big volcanoes). That, overall, has been the mistake of effective altruism over the last decade: take one good idea or a few, like donating a lot of money to cost-effective global health charities, and try to turn it into an all-encompassing worldview or life philosophy. People are hungry for meaning in their lives, I get it, I am too, but there are healthier and unhealthier ways to pursue that, ways that are more constructive and ways that are more destructive.