The need for convergence on an ethical theory

For this post, I’m going to use the scenario outlined in the science fiction book Seveneves by Neal Stephenson. It’s a far-fetched scenario (and I leave out a lot of detail), but it sets up my point nicely, so bear with me. Full credit for the intro, of course, to Stephenson.

This is cross-posted from my blog.


Introduction

The story is set in the near future. Technology is slightly more advanced than it is today, and the International Space Station (ISS) is somewhat larger and more sophisticated. Long story short, the Moon blows up, and scientists determine that humanity has two years before the surface of the Earth becomes uninhabitable for 5,000 years due to bombardment by lunar debris.

Immediately, humanity works together to increase the size and sustainability of the ISS so that humanity and its heritage (e.g. history, culture, and animals and plants stored in genetic form) can survive for 5,000 years and eventually repopulate the Earth. That this is a good thing to do is not once questioned. Humanity simply accepts it as its duty to ensure that the diversity of life that exists today will continue at some point in the future. This is done in the full knowledge that the inhabitants of the ISS and their descendants will not have an easy life by any stretch of the imagination. But it is, apparently, their ‘duty’ to persevere.

The problem

It is taken as a given that stopping humanity from going extinct is a good thing, and I tend to agree, though not as strongly as some (I hold uncertainty about the expected value of the future, assuming humanity/life in general survives). However, if we consider different ethical theories, we find that many give different answers to the question of what we ought to do in this case. Below I outline some of these possible differences. I say ‘might’ instead of ‘will’ because I’ve oversimplified things, and if you tweak the specifics you might come up with a different answer. Take this as illustrative only.

Classical hedonistic utilitarian

If you think the future is more likely to contain wellbeing than suffering (put another way, if you think the expected value of the future is positive), you might want to support the ISS.
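To make this concrete, here is a rough formalization (my own illustrative notation, not anything from the book or the literature). Let $p$ be the probability that the surviving civilization mostly flourishes, $W > 0$ the total value of that outcome, and $S < 0$ the total value of the outcome where it mostly suffers. Valuing extinction at zero, a classical hedonistic utilitarian supports the ISS roughly when

$$\mathbb{E}[V_{\text{survival}}] = p\,W + (1 - p)\,S > 0.$$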

Negative utilitarian

If you think that all life on Earth, and therefore all suffering, will cease to exist if the ISS plan fails, you might want to actively disrupt the project to increase the probability of that outcome. At the very least, you probably won’t want to support it.
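In the same illustrative notation as above: a strict negative utilitarian counts only suffering, so every outcome in which life continues has value $V \le 0$, while extinction scores exactly $0$. Extinction therefore weakly dominates survival,

$$V_{\text{extinction}} = 0 \ge \mathbb{E}[V_{\text{survival}}],$$

with strict inequality whenever any future suffering is expected. Hence the pull towards disrupting, or at least not supporting, the project.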

Deontologist

I’m not really sure what a deontologist would make of this, but I suspect they would at least be motivated to a different extent than a classical utilitarian would be.

Person-affecting view

Depending on how you see the specifics of the scenario, the ‘ISS survives’ case is roughly as good as the ‘ISS fails’ case: the future people who would exist if the project succeeds don’t yet exist, so on most person-affecting views their non-existence counts against no one.


Each of these ethical frameworks gives a significantly different answer to the question ‘what ought we do in this one specific case?’ They also give very different answers to many current and future ethical dilemmas that are much more likely to arise. This is worrying.

And yet, to my knowledge, there does not seem to be a concerted push towards convergence on a single ethical theory (and I’m not just talking about compromise). Perhaps if you’re not a moral realist, this isn’t so important to you. But I would argue that getting society at large to converge on a single ethical theory is very important, and not just for thinking about the great questions, like what to do about existential risk and the far future. Persistent disagreement also plausibly produces a lot of zero-sum games and a lot of wasted effort. Even Effective Altruists disagree on certain aspects of ethics, or hold entirely different ethical codes. At some point, this is going to result in a major misalignment of objectives, if it hasn’t already.

I’d like to propose that simply seeking convergence on ethics is a highly neglected and important cause. To date, most work in this direction seems to involve advocates for each ethical theory promoting their own view, resulting in yet another zero-sum game. Perhaps we need to agree on a different way of doing this.

If ethics were a game of soccer, we’d all be kicking the ball in different directions. Sometimes we happen to kick in the same direction, sometimes in opposite directions. What could be more important than agreeing on which direction to kick the ball, and kicking it towards the best possible world?