The problem with person-affecting views

‘We are in favour of making people happy, but neutral about making happy people’

This quotation from Jan Narveson seems so intuitive. We can’t make the world better by just bringing more people into it... can we? It’s also an important claim: if true, then perhaps human extinction isn’t so concerning after all...

I used to hold this ‘person-affecting’ view and thought that anyone who didn’t was, well... a bit mad. However, all it took was a very simple argument to completely change my mind. In this short post I want to share this argument with those who may not have come across it before, and to show that a person-affecting view, whilst perhaps not dead in the water, faces a serious problem.

Note: This is a short post and I do not claim to cover everything of relevance. I would recommend reading Greaves (2017) for a more in-depth exploration of population axiology.

The (false) intuition of neutrality

There is no single ‘person-affecting view’; rather, there is a variety of formulations that all capture the intuition that an act can only be bad if it is bad for someone. Similarly, something can be good only if it is good for someone. Therefore, according to standard person-affecting views, there is no moral obligation to create people, nor any moral good in creating people, because in the case of nonexistence “there is never a person who could have benefited from being created”.

As noted in Greaves (2017), the idea can be captured in the following:

Neutrality Principle: Adding an extra person to the world, if it is done in such a way as to leave the well-being levels of others unaffected, does not make a state of affairs either better or worse.
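Stated a little more formally (a rough sketch in notation of my own choosing, not Greaves’s): let $X^{+w}$ be the state obtained from $X$ by adding one person at welfare level $w$ while leaving everyone else’s well-being unchanged, and write $\succ$ for ‘is better than’. The principle then claims:

$$X^{+w} \not\succ X \quad \text{and} \quad X \not\succ X^{+w}$$

for every state $X$ and every welfare level $w$ (or at least every positive $w$, depending on the formulation).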

Seems reasonable, right? Well, let’s dig a bit deeper. If adding the extra person makes the state of affairs neither better nor worse, what does it do? Let’s take it that it leaves the new state of affairs exactly as good as the original one (one could instead say the two states are incomparable, but ‘equally good’ is the most natural reading of neutrality).

In this case we can say that states A and B below are equally good. A has four people, and B has the same four people at the same well-being levels, plus an additional fifth person with (let’s say) a positive welfare level.

We can also say that states A and C are equally good. C again has the same people as A, plus the same additional fifth person, this time at a higher (but still positive) welfare level than in B.
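As a concrete, purely illustrative sketch, with welfare levels of my own choosing (all that matters is that they are positive, and that the fifth person is better off in C than in B):

$$
\begin{aligned}
A &= (10,\ 10,\ 10,\ 10)\\
B &= (10,\ 10,\ 10,\ 10,\ 1)\\
C &= (10,\ 10,\ 10,\ 10,\ 10)
\end{aligned}
$$

By the Neutrality Principle, the move from A to B and the move from A to C each just add a fifth person at a positive welfare level while leaving the original four untouched, so each leaves things exactly as good as A.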

So A is as good as B, and A is as good as C. Therefore surely it should be the case that B is as good as C (invoking a very reasonable property called transitivity). But now let’s look at B and C next to each other.

Any reasonable theory of population ethics must surely accept that C is better than B. C and B contain all of the same people, but one of them is significantly better off in C (with all the others equally well off in both states). Invoking a person-affecting view implies that B and C are equally good, but this is clearly wrong.

You might be able to save the person-affecting view by rejecting the requirement of transitivity. For example, you could just say that yes… A is as good as B, A is as good as C, and C is better than B! Well... this just seemed too barmy to me. I’d sooner divorce my person-affecting view than transitivity.

Where do we go from here?

If the above troubles you, you essentially have two options:

  1. You try to save person-affecting views in some way

  2. You adopt a population axiology that doesn’t invoke neutrality (or at least one which says bringing a person into existence can only be neutral if that person has a specific “zero” welfare level)

To my knowledge no one has really achieved option 1 yet, at least not in a particularly compelling way. That’s not to say it can’t be done, and I look forward to seeing whether anyone can make progress.

Option 2 seemed to me like the best route. As noted by Greaves (2017), however, a series of impossibility theorems has demonstrated that any population axiology we can think up will violate one or more of a number of initially very compelling intuitive constraints. One’s choice of population axiology then appears to be a choice of which intuition one is least unwilling to give up.

For what it’s worth, after some deliberation I have begrudgingly accepted that the majority of prominent thinkers in EA may have it right: total utilitarianism seems to be the ‘least objectionable’ population axiology. Total utilitarianism simply says that A is better than B if and only if total well-being in A is higher than total well-being in B. So bringing someone with positive welfare into existence is a good thing, and bringing someone with negative welfare into existence is a bad thing. Bringing someone into existence with “zero” welfare is neutral. Pretty simple, right?
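In symbols (a minimal sketch using notation of my own choosing, writing $w_i(X)$ for person $i$’s welfare in state $X$):

$$A \text{ is better than } B \iff \sum_{i \,\in\, A} w_i(A) > \sum_{j \,\in\, B} w_j(B)$$

so adding a person at a positive welfare level raises the total (good), adding one at a negative level lowers it (bad), and adding one at zero leaves it unchanged (neutral).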

Under total utilitarianism, human extinction becomes a dreadful prospect, as it would mean that perhaps trillions of lives that would otherwise have existed never come into existence. Of course, we have to assume these lives would have positive welfare for avoiding extinction to be desirable on these grounds.

It may be a simple axiology, but total utilitarianism runs into some arguably ‘repugnant’ conclusions of its own. To be frank, I’m inclined to leave that can of worms unopened for now...

References

Broome, J., 2004. Weighing Lives. Oxford University Press. (I got the diagrams from here, although I slightly edited them.)

Greaves, H., 2017. Population axiology. Philosophy Compass, 12(11), e12442.

Narveson, J., 1973. Moral problems of population. The Monist, 57(1), 62–86.