That's a good and interesting point about environmentalism. I took an environmental philosophy class sometime in the early-to-mid-2010s, and very long-term thinking was definitely part of the conversation. As in, thinking many centuries, millennia, or even millions of years into the future. One paper we read (published in 2010) imagined humans in the fourth millennium (i.e. from the year 3000 to 4000) living in "civilization reserves", the inverse of wilderness reserves.
My problem with interventions like improving institutional decision-making is that we are already maximally motivated to do this based on neartermist concerns. Everyone wants governments and other powerful institutions to do a better job making decisions, to do as good a job as possible.
Let's say you are alarmed about the Trump administration's illiberalism or creeping authoritarianism in the United States. Does thinking about the future 1,000 or 10,000 years from now actually motivate you to care about this more, to do more about it, to try harder? I don't see how it would. Even if it did make you care a little bit more inside yourself, I don't see how it would make a practical difference to what you actually do about it.
And taking such a long-term perspective might bring to mind all the nations and empires that have risen and fallen over the ages, and make you wonder whether what happens this decade or the next might fade away just as easily. So the effect on how much you care might be neutral, or it might even make you care a little less. I don't know; it depends on subjective gut intuition and each individual's personal perspective.
Also, something like improving governments or institutions is a relay race where the baton is passed between generations, each of which makes its own contribution and has its own impact. Deflecting a big asteroid heading toward Earth is a way for a single organization like NASA to have a direct impact on the far future. But there are very few interventions of that kind. The clearest cases are existential risks or global catastrophic risks originating from natural sources, such as asteroids and pandemics. Every step you take to widen the circle of interventions you consider introduces more irreducible uncertainty and fundamental unpredictability.
I think asteroids and anti-asteroid interventions like NASA's NEO Surveyor should be a global priority for governments and space agencies (and anyone else who can help). The total cost of solving, say, 95% of the problem (or whatever the figure is) is in the ballpark of the cost of building a bridge. I think people look at the asteroid example and think, "ah, there must be a hundred more examples of things just like that". But in reality it's a very short list, something like: asteroids, pandemics, nuclear weapons, bioterror, climate change, and large volcanoes. And each of these varies a lot in how neglected it is.
So I think longtermism is an instance of taking a good idea (protect the world from asteroids for the price of building a bridge, plus maybe a half dozen other things like it, such as launching a satellite to monitor volcanoes) and running with it way too far. I don't think there is enough meat on this bone to constitute a worldview or life philosophy that can be generally embraced (although hats off to the few who make keeping the world safe from asteroids or big volcanoes their work). This, overall, is the mistake effective altruism has made over the last decade: take one good idea or a few, like donating a lot of money to cost-effective global health charities, and try to turn it into an all-encompassing worldview or life philosophy. People are hungry for meaning in their lives; I get it, I am too. But there are healthier and unhealthier ways to pursue that, ways that are more constructive and more destructive.