Person-affecting intuitions can often be money pumped

This is a short reference post for an argument I wish were better known. Note that it is primarily about the person-affecting intuitions that ordinary people have, rather than a serious engagement with the population ethics literature, which contains many person-affecting views that are not subject to the argument in this post.

EDIT: Turns out there was a previous post making the same argument.

A common intuition people have is that our goal is “Making People Happy, not Making Happy People”. That is:

  1. Making people happy: if some person Alice will definitely exist, then it is good to improve her welfare

  2. Not making happy people: it is neutral to go from “Alice won’t exist” to “Alice will exist”[1]. Intuitively, if Alice doesn’t exist, she can’t care that she doesn’t live a happy life, and so no harm was done.

This position is vulnerable to a money pump[2]: that is, there is a set of trades it would willingly make that, taken together, achieves nothing and loses money with certainty. Consider the following worlds:

  • World 1: Alice won’t exist in the future.

  • World 2: Alice will exist in the future, and will be slightly happy.

  • World 3: Alice will exist in the future, and will be very happy.

(The worlds are identical in every other respect. It’s a thought experiment.)

Then this view would be happy to make the following trades:

  1. Receive $0.01[3] to move from World 1 to World 2 (“Not making happy people”)

  2. Pay $1.00 to move from World 2 to World 3 (“Making people happy”)

  3. Receive $0.01 to move from World 3 to World 1 (“Not making happy people”)

The net result is to lose $0.98 to move from World 1 to World 1.
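
For concreteness, here is a minimal sketch of the pump in Python. The welfare numbers and the `accepts_trade` rule are my own illustrative encoding of the view above (with Alice’s welfare expressed in dollar equivalents so it can be compared directly with the payments); none of it is anything the argument depends on.

```python
# Illustrative sketch only: the welfare numbers and the decision rule are
# assumptions chosen to encode "making people happy, not making happy people",
# with welfare measured in dollar equivalents so it can be compared to payments.

WORLDS = {1: None, 2: 5.0, 3: 10.0}  # Alice's welfare in each world; None = Alice won't exist

def accepts_trade(current, new, payment):
    """Does this person-affecting view take the trade?

    payment > 0 means we receive money, payment < 0 means we pay.
    Welfare changes count only when Alice exists in both worlds;
    creating or un-creating her is treated as neutral.
    """
    w_cur, w_new = WORLDS[current], WORLDS[new]
    if w_cur is None or w_new is None:
        welfare_change = 0.0            # "not making happy people": neutral
    else:
        welfare_change = w_new - w_cur  # "making people happy": counts in full
    return welfare_change + payment > 0

trades = [(1, 2, +0.01),  # trade 1: receive $0.01 to create slightly-happy Alice
          (2, 3, -1.00),  # trade 2: pay $1.00 to make Alice very happy instead
          (3, 1, +0.01)]  # trade 3: receive $0.01 to return to "Alice won't exist"

world, money = 1, 0.0
for current, new, payment in trades:
    assert world == current and accepts_trade(current, new, payment)
    world, money = new, money + payment

print(world, round(money, 2))  # World 1 again, but $0.98 poorer
```

Each trade looks strictly positive to the rule in isolation, yet executing all three guarantees ending up in the starting world $0.98 poorer.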

FAQ

Q. Why should I care if my preferences lead to money pumping?

This is a longstanding debate that I’m not going to get into here. I’d recommend Holden’s series on this general topic, starting with Future-proof ethics.

Q. In the real world we’d never have such clean options to choose from. Does this matter at all in the real world?

See previous answer.

Q. What if we instead have <slight variant on a person-affecting view>?

Often these variants are also vulnerable to the same issue. For example, if you have a “moderate view” where making happy people is not worthless but is discounted by a factor of (say) 10, the same example works with slightly different numbers:

Let’s say that “Alice is very happy” has an undiscounted worth of 2 utilons and “Alice is slightly happy” an undiscounted worth of (say) 0.5 utilons. Then you would be happy to (1) move from World 1 to World 2 for free (a discounted gain of 0.05 utilons), (2) pay 1 utilon to move from World 2 to World 3 (an undiscounted gain of 1.5 utilons, since Alice exists either way), and (3) receive 0.5 utilons to move from World 3 to World 1 (a discounted loss of only 0.2 utilons). The net result is to lose 0.5 utilons while moving from World 1 back to World 1.
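
For those who prefer to see it mechanically, here is the same sketch as above adapted to this moderate view; the decision rule and the way welfare trades off against utilon payments are my own illustrative encoding, and the numbers are the ones just assumed.

```python
# Same sketch, adapted to the moderate view: welfare changes for an Alice who
# exists in both worlds count in full; creating or un-creating her counts at a
# tenth of the undiscounted worth. Numbers are the ones assumed above.

DISCOUNT = 10
WORLDS = {1: None, 2: 0.5, 3: 2.0}  # undiscounted worth of Alice's life, in utilons

def value_of_move(current, new):
    w_cur, w_new = WORLDS[current], WORLDS[new]
    if w_cur is None:
        return w_new / DISCOUNT   # creating Alice: discounted gain
    if w_new is None:
        return -w_cur / DISCOUNT  # un-creating Alice: discounted loss
    return w_new - w_cur          # Alice exists either way: counts in full

trades = [(1, 2, 0.0),   # move for free        (move itself is worth +0.05)
          (2, 3, -1.0),  # pay 1 utilon         (move itself is worth +1.5)
          (3, 1, +0.5)]  # receive 0.5 utilons  (move itself is worth -0.2)

total = 0.0
for current, new, payment in trades:
    assert value_of_move(current, new) + payment > 0  # each trade looks worth it
    total += payment

print(total)  # -0.5: back in World 1, half a utilon poorer
```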

The philosophical literature does consider person-affecting views to which this money pump does not apply. I’ve found these views unappealing for other reasons, but I have not considered all of them and am not an expert on the topic.

If you’re interested in this topic, Arrhenius proves an impossibility result that applies to every possible theory of population ethics (not just person-affecting views), so whatever view you adopt must bite at least one bullet.

EDIT: Adding more FAQs based on comments:

Q. Why doesn’t this view anticipate that trade 2 will be available, and so reject trade 1?

You can either have a local decision rule that doesn’t take into account future actions (and so excludes this sort of reasoning), or you can have a global decision rule that selects an entire policy at once. I’m talking about the local kind.

You could have a global decision rule that compares worlds and ignores happy people who don’t exist in all worlds. In that case you avoid this money pump, but have other problems—see Chapter 4 of On the Overwhelming Importance of Shaping the Far Future.
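
As a very rough sketch of how such a global rule escapes the pump (continuing the illustrative numbers from the first sketch above; the encoding is my own assumption, not anything from the book):

```python
# Sketch of the global rule described above: when comparing a set of candidate
# outcomes, count only the welfare of people who exist in *all* of them, plus
# money. The outcome list and welfare numbers are illustrative assumptions.

def global_value(outcome, candidates):
    alice_welfare, money = outcome
    alice_exists_everywhere = all(w is not None for w, _ in candidates)
    counted = alice_welfare if alice_exists_everywhere else 0.0
    return counted + money

# Candidate end states (Alice's welfare or None, money) for the trade sequence:
candidates = [
    (None, 0.00),   # refuse everything: stay in World 1
    (5.0, +0.01),   # accept trade 1 only: World 2, up $0.01
    (10.0, -0.99),  # accept trades 1 and 2: World 3, down $0.99
    (None, -0.98),  # accept all three trades: World 1 again, down $0.98
]

best = max(candidates, key=lambda c: global_value(c, candidates))
print(best)  # (5.0, 0.01): the full pump (ending down $0.98) is never selected
```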

You could also take the local decision rule and try to turn it into a global decision rule by giving it information about what decisions it would make in the future. I’m not sure how you’d make this work but I don’t expect great results.

Q. This is a very consequentialist take on person-affecting views. Wouldn’t a non-consequentialist version (e.g. this comment) make more sense?

Personally, I think of non-consequentialist theories as good heuristics that approximate the hard-to-compute consequentialist answer, and so I often find them beside the point when evaluating theories in idealized thought experiments. If you are instead sympathetic to non-consequentialist theories as the true answer, then the argument in this post probably shouldn’t sway you much. If you are in a real-world situation where you have person-affecting intuitions, those intuitions are there for a reason, and you probably shouldn’t completely ignore them until you know what that reason is.

Q. Doesn’t total utilitarianism also have problems?

Yes! While I am more sympathetic to total utilitarianism than to person-affecting views, this is just a short reference post about one particular argument. I am not here defending claims like “this argument demolishes person-affecting views” or “total utilitarianism is the correct theory”.

Further resources

  1. ^

    For this post I’ll assume that Alice’s life is net positive, since “asymmetric” views say that if Alice would have a net negative life, then it would be actively bad (rather than neutral) to move Alice from “won’t exist” to “will exist”.

  2. ^

    A previous version of this post incorrectly called this a Dutch book.

  3. ^

    By giving it $0.01, I’m making it so that it strictly prefers to take the trade (rather than being indifferent, as it would be if no money were involved).