The problem with person-affecting views

‘We are in favour of making people happy, but neutral about making happy people’

This quotation from Jan Narveson seems so intuitive. We can’t make the world better just by bringing more people into it...can we? It’s also an important claim: if true, then perhaps human extinction isn’t so concerning after all...

I used to hold this ‘person-affecting’ view and thought that anyone who didn’t was, well...a bit mad. However, all it took was a very simple argument to completely change my mind. In this short post I want to share this argument with those who may not have come across it before and demonstrate that a person-affecting view, whilst perhaps not dead in the water, faces a serious issue.

Note: This is a short post and I do not claim to cover everything of relevance. I would recommend reading Greaves (2017) for a more in-depth exploration of population axiology.

The (false) intuition of neutrality

There is no single ‘person-affecting view’; rather, there is a variety of formulations that all capture the intuition that an act can only be bad if it is bad for someone, and can only be good if it is good for someone. Therefore, according to standard person-affecting views, there is no moral obligation to create people, nor any moral good in creating them, because in the case of nonexistence “there is never a person who could have benefited from being created”.

As noted in Greaves (2017), the idea can be captured in the following principle:

Neutrality Principle: Adding an extra person to the world, if it is done in such a way as to leave the well-being levels of others unaffected, does not make a state of affairs either better or worse.

Seems reasonable, right? Well, let’s dig a bit deeper. If adding the extra person makes the state of affairs neither better nor worse, what does it do? Let’s suppose it leaves the new state of affairs exactly as good as the original one.

In this case we can say that states A and B below are equally good. A contains four people; B contains the same four people at the same wellbeing levels, plus an additional fifth person with (let’s say) a positive welfare level.

We can also say that states A and C are equally good. C again contains the same four people as A, plus the same fifth person as in B, but this time at a higher (though still positive) welfare level.
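To make the comparison concrete, here is one purely illustrative assignment of welfare levels (the specific numbers are my own, chosen only for illustration; the argument does not depend on them):

$$A = (10,\,10,\,10,\,10) \qquad B = (10,\,10,\,10,\,10,\,1) \qquad C = (10,\,10,\,10,\,10,\,10)$$

The first four entries are the welfare levels of the four original people; the fifth entry in B and C is the welfare of the extra person, who is much better off in C than in B.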

So A is as good as B, and A is as good as C. Surely, then, B should be as good as C (invoking a reasonable property called transitivity). But now let’s look at B and C next to each other.

Any reasonable theory of population ethics must surely accept that C is better than B. C and B contain exactly the same people, and the fifth person is significantly better off in C (with everyone else equally well off in both). Yet the person-affecting view, combined with transitivity, implies that B and C are equally good. That is clearly wrong.
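Putting the three comparisons together makes the clash explicit. Writing $X \sim Y$ for “X and Y are equally good” and $X \succ Y$ for “X is better than Y” (my own shorthand, not notation from the sources cited here):

$$(A \sim B) \ \text{and} \ (A \sim C) \implies B \sim C \quad \text{(by transitivity)}, \qquad \text{but} \quad C \succ B.$$

The neutrality principle, transitivity, and the judgement that C is better than B cannot all hold at once; something has to give.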

You might be able to save the person-affecting view by rejecting the requirement of transitivity. For example, you could just say that yes… A is as good as B, A is as good as C, and C is better than B! Well...this just seemed too barmy to me. I’d sooner divorce my person-affecting view than transitivity.

Where do we go from here?

If the above troubles you, you essentially have two options:

  1. You try to save person-affecting views in some way

  2. You adopt a population axiology that doesn’t invoke neutrality (or at least one which says bringing a person into existence can only be neutral if that person has a specific “zero” welfare level)

To my knowledge no one has really achieved option 1 yet, at least not in a particularly compelling way. That’s not to say it can’t be done, and I look forward to seeing whether anyone can make progress.

Option 2 seemed to me like the best route. As noted by Greaves (2017), however, a series of impossibility theorems has demonstrated that any population axiology we can think up will violate one or more of a number of initially very compelling intuitive constraints. One’s choice of population axiology then appears to be a choice of which of these intuitions one is most willing to give up.

For what it’s worth, after some deliberation I have begrudgingly accepted that the majority of prominent thinkers in EA may have it right: total utilitarianism seems to be the ‘least objectionable’ population axiology. Total utilitarianism simply says that A is better than B if and only if total wellbeing in A is higher than total wellbeing in B. So bringing someone with positive welfare into existence is a good thing, bringing someone with negative welfare into existence is a bad thing, and bringing someone into existence at “zero” welfare is neutral. Pretty simple, right?
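One way to write this down (again just a sketch in my own notation, with $w_i(X)$ standing for the welfare of person $i$ in state $X$) is that total utilitarianism ranks states by the sum of welfare:

$$X \succ Y \iff \sum_i w_i(X) > \sum_i w_i(Y)$$

Applied to the illustrative numbers from earlier, the totals are 40 for A, 41 for B, and 50 for C, so total utilitarianism says $C \succ B \succ A$, avoiding the inconsistency above.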

Under total utilitarianism, human extinction becomes a dreadful prospect, as it would mean perhaps trillions of lives that would otherwise have existed never coming into being. Of course, we have to assume those lives would have positive welfare for avoiding extinction to be desirable.

It may be a simple axiology, but total utilitarianism runs into some arguably ‘repugnant’ conclusions of its own. To be frank, I’m inclined to leave that can of worms unopened for now...

References

Broome, J., 2004. Weighing Lives. Oxford: Oxford University Press. (The diagrams are taken from here, slightly edited.)

Greaves, H., 2017. Population axiology. Philosophy Compass, 12(11), p.e12442.

Narveson, J., 1973. Moral problems of population. The Monist, 57(1), pp.62–86.