Moral pluralism and longtermism


Crossposted from my personal blog, Sunyshore.


A few months ago, my friend Sophia wrote an excellent response to my blog post on utilitarianism, in which they advocated for moral pluralism. Moral pluralism is the meta-ethical theory that several moral theories are equally valid and fundamental, even though they sometimes conflict with each other. Sophia argues that moral pluralism is a stronger foundation for a liberal society than either utilitarianism or Rawlsianism, because it forces society to tolerate many different ways that humans can evaluate the world.

I am sympathetic to pluralism for theoretical and practical reasons. First, I believe in moral uncertainty: no one actually knows what the correct moral theory is; it could be one of the theories people have invented over the years, or one we've never thought of. Moral uncertainty is distinct from moral pluralism: pluralism asserts that several ethical theories are correct, while moral uncertainty holds only that any one of them could be. Still, the two are similar enough that I often adopt pluralism in practice.

Because of moral uncertainty, I’m open to the idea that utilitarianism, or consequentialism in general, could be wrong. On the one hand, I think utilitarianism is very useful for figuring out what the “best” society would look like and how to prioritize among problems to work on, because it provides a meterstick against which all outcomes can be compared. On the other hand, utilitarianism is hard. Like all consequentialist theories, it requires us to compare actions or policies based on their effects on the world, and it can be impractical to predict all of the consequences of our actions, especially over the very long term.

Second, I think it’s often pragmatic to cooperate with people who don’t share your moral views, and to find actions or policies that a broad range of moral theories support—or, as I like to call them, moral win-wins. One of these is “open borders,” or radically relaxing immigration laws so that anyone can live, work, and study in any country with only minimal restrictions (such as passing a background check). From a utilitarian perspective, open borders makes sense because it would dramatically increase the productivity and well-being of migrants, promote broad economic growth, and reduce global poverty. From a deontological, libertarian perspective, it makes sense because restrictions on migration violate people’s freedom of movement by forcing them to stay in one country. And from an egalitarian perspective, an open-borders world is good because it would alleviate one of the starkest inequalities among people: inequality between countries.

Longtermism

Another moral win-win is the importance of shaping the long-term future. The idea that future generations and their interests matter should be familiar because it underlies the notion of sustainability. But it has deep implications: whatever you value, there could be much more of it in the future than in the present, so making sure the future goes well is extremely important. This idea is called longtermism. As Ben Todd explains in an article on the 80,000 Hours website:

What are the things you most value in human civilization today? People being happy? People fulfilling their potential? Knowledge? Art?

In almost all of these cases, there’s potentially a lot more of it to come in the future:

The Earth could remain habitable for 600-800 million years, so there could be about 21 million future generations, and they could lead great lives, whatever you think “great” consists of. Even if you don’t think future generations matter as much as the present generation, since there could be so many of them, they could still be our key concern.

Civilization could also eventually reach other planets—there are 100 billion planets in the Milky Way alone. So, even if there’s only a small chance of this happening, there could also be dramatically more people per generation than there are today. By reaching other planets, civilization could also last even longer than if we stay on the Earth.

If you think it’s good for people to live happier and more flourishing lives, there’s a possibility that technology and social progress will let people have much better and longer lives in the future (including those in the present generation). So, putting these first three points together, there could be many more generations, with far more people, living much better lives. The three dimensions multiply together to give the potential scale of the future.

If what you value is justice and virtue, then the future could be far more just and virtuous than the world today.

If what you value is artistic and intellectual achievement, a far wealthier and bigger civilization could have far greater achievements than our own.

And so on.
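
To make the “three dimensions multiply together” point concrete, here is a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption on my part (generation length, future population, quality of life), not a figure from Todd’s article:

```python
# Rough sketch of the "three dimensions multiply" estimate.
# All numbers are illustrative assumptions, not predictions.

YEARS_PER_GENERATION = 30  # assumed length of one human generation

# Earth could remain habitable for another 600-800 million years.
for years_habitable in (600e6, 800e6):
    generations = years_habitable / YEARS_PER_GENERATION
    print(f"{years_habitable:.0e} years -> {generations:.1e} generations")
# Prints roughly 2.0e7 to 2.7e7 -- tens of millions, in line with the
# "about 21 million future generations" figure quoted above.

# The potential scale of the future is roughly the product of the three
# dimensions: generations x people per generation x quality of life.
generations = 21e6            # number of future generations
people_per_generation = 8e9   # assume population stays near today's
quality_multiplier = 1.0      # well-being relative to today (could be higher)

scale = generations * people_per_generation * quality_multiplier
print(f"Potential scale: {scale:.1e} quality-weighted lives")  # ~1.7e17
```

Even with these deliberately conservative assumptions, the product is on the order of 10^17 lives, which is why longtermists treat the future as the overwhelming bulk of what is at stake.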

Personally, I put a lot of stock in this belief. NASA’s Artemis program is aiming to land the first woman and the first person of color on the Moon by 2024, and the program’s long-term goals include setting up a lunar economy and eventually landing humans on Mars. Earth will be able to support life for hundreds of millions of years more, during which humanity will have ample time to figure out interstellar travel. So I think it’s very likely that we and our descendants will eventually settle other star systems in the galaxy, as long as we don’t go extinct first. [1]

Even in the next century, we have many technological advances to look forward to. For example, RNA vaccines, such as the Moderna and Pfizer–BioNTech COVID-19 vaccines, can be adapted to respond quickly to new epidemics, because they can be developed more quickly and cheaply than traditional vaccines and manufactured using standardized infrastructure. And low-carbon energy production and storage technologies have gotten much cheaper over time, which is a necessary part of the global transition to a clean energy future.

So, from a longtermist perspective, one of our top priorities should be to ensure that humanity has a future. Existential risks, or x-risks, are risks that threaten to destroy the value that humanity and its descendants could achieve in the future; they include human extinction, permanent dystopia, and the permanent collapse of civilization. For example, pathogens engineered to be even more dangerous than the virus that causes COVID-19 could drive humanity extinct or cause modern society to collapse. Climate change increases our vulnerability to other x-risks, such as food shortages and armed conflict, and extreme climate change could directly threaten human existence. And if we don’t know what is ultimately valuable, it is especially important to avoid getting the world permanently locked into a bad state.

Another category of things we should prioritize is ways to increase future generations’ capacity to shape their own future. One way to do this is by protecting and improving liberal democratic institutions, so our descendants can more effectively decide what kinds of futures are desirable to them; better global institutions like the United Nations can also improve our collective capacity to respond to x-risks. Another is to do research on what’s valuable, so our descendants have more knowledge to go on when shaping their future.

This doesn’t mean that everyone should work on managing existential risks, improving governance, or figuring out what’s valuable. We still need people working on the immediate challenges facing society, including systemic racism, global poverty, and the recovery from COVID-19. But it means that society as a whole should devote more resources to these efforts to shape the long-term future.

Objections to longtermism

One objection to this view is that it’s exceedingly hard to influence the future—indeed, we can’t predict all the consequences that our actions in the present will have over the next 1,000 years, let alone the entire future. But I think we can do a lot of good over the long run by improving the future indirectly: by reducing x-risks so that we don’t lose the potential for a good future, and by empowering future generations to improve the future themselves.

Even so, it can be hard to predict whether our actions to reduce x-risks will actually do so. Take geoengineering. Spraying the atmosphere with particles that reflect sunlight might reduce the impact of climate change. However, it might also cause droughts, cool the planet so much that our capacity to grow food is drastically reduced, or have other unintended consequences. Still, even though we don’t know whether geoengineering would raise or lower the risk of environmental disasters, we can do further research to reduce our uncertainty about its effects.

Another objection is that by focusing on actions to improve the long-term future, we neglect our responsibilities to help people in the present. But this assumes a false dichotomy between “longtermist” and “neartermist” actions. Many actions we can take to improve the long-term future also improve the present, and vice versa. For example, speeding up vaccine development helps the present generation by reducing the duration and death toll of pandemics, and it helps future generations by reducing existential risk from pandemics.

Conclusion

As I’ve shown, many moral theories lead to the conclusion that making sure the long-term future goes well is important. Whether you subscribe to a single moral theory or a composite theory like moral pluralism, preventing existential catastrophes—situations in which we lose most of the value we can ever create—is paramount, and so is creating the knowledge and institutions that our descendants will use to shape humanity’s development. Although we have only the faintest idea what the future will be like, these actions will most likely lead to a better future for our descendants.


Thanks to MĂ©abh Murphy for feedback on this post, and to Sophia Hottel for starting this discussion.

[1]: Of course, Homo sapiens could go extinct in the biological sense by evolving into a new species, but human civilization would continue. By “humanity,” I mean humans and their descendants.
