Increasingly vague interpersonal welfare comparisons

Some have argued that all interpersonal welfare comparisons should be possible, or have taken it as a strong mark against a theory if they are not all possible on it. Others have argued against their possibility, e.g. Hausman (1995) for preference views. Here, I will illustrate an intermediate position: interpersonal welfare comparisons are vague, with tighter bounds on reasonable comparisons between beings whose welfare states are realized more similarly, and looser or no bounds between beings whose welfare states are realized more differently.

The obvious case is two completely or at least functionally identical brains (at the right level of abstraction for our functionalist theory). As long as we grant intrapersonal comparisons, we should get interpersonal comparisons between identical brains: we map the first brain’s state(s) to the equivalent state(s) in the second, and compare them in the second brain. Of course, this is not a very interesting case, and it seems only directly useful for artificial duplicates of minds.

Still, we can go further. Consider an experience e_A in brain A and an experience e_B in brain B. If A and B differ only by the fact that some of B’s unpleasantness-contributing neurons are less sensitive or removed, and A and B receive the same input signals that cause pain, then it seems likely to me that A’s painful experience is at least as unpleasant as B’s and possibly more. We may be able to say roughly how much more unpleasant it is by comparing e_B in B directly to less intense states in A, sandwiching e_B in unpleasantness between two states in A.

Maybe going from A to B changes the unpleasantness by between −0.01 and 0, i.e. u_B = u_A + x, where −0.01 ≤ x ≤ 0. There may be no fact of the matter about the exact value of x.
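The sandwiching can be made concrete with a small sketch. This is a hypothetical illustration in Python: the value of u_A and its units are made up, and only the −0.01 to 0 range is taken from the example above.

```python
# Hypothetical sandwiching of B's unpleasantness between bounds derived from A.
# u_A and the units are made-up; the range of x comes from the example above.

u_A = 5.0                    # unpleasantness of A's experience (made-up value)
bounds_on_x = (-0.01, 0.0)   # going from A to B shifts unpleasantness by x in this range

# B's unpleasantness u_B = u_A + x is then only known to lie in an interval:
u_B_lo = u_A + bounds_on_x[0]
u_B_hi = u_A + bounds_on_x[1]
print((u_B_lo, u_B_hi))  # (4.99, 5.0); no fact of the matter about the exact value
```

The point of the sketch is that the comparison yields an interval, not a number: any u_B within the bounds is consistent with everything we can say about the two brains.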

For small enough local differences between brains, we could make fairly precise comparisons.

I use unpleasantness to make the illustration more concrete, but it’s plausible that other potential types of welfare could be used instead, like preferences. A slight difference in how some preferences are realized should typically result in a slight difference in the preferences themselves and in how we value them, but the extent of the difference in value could be vague and boundable only by fairly tight inequalities. We can use the same example, too: a slight difference in how unpleasant a pain is, through the same kinds of differences in neurons as above, typically results in a slight difference in preferences about that pain and in preference-based value.

In general, for arbitrary brains A and B and respective experiences e_A and e_B, we can ask whether there’s a sequence of changes from A and e_A to B and e_B, possibly passing through different hypothetical intermediate brains and states, that lets us compare e_A and e_B by combining bounds and inequalities from each step along the sequence. Some changes could have opposite-sign effects on the realized welfare, but with only bounds rather than precise values, the bounds widen between brains farther apart in the sequence.

For example, a change with a range of +1 to +4 in additional unpleasantness and a change with a range of −3 to −1 could give a net change between −2 (= +1 − 3) and +3 (= +4 − 1). Adding one more change of between +1 and +4 and another of between −3 and −1 gives between −4 and +6. Adding another change of between +2 and +3 gives between −2 and +9. The gap between the bounds widens with each additional change.[1]
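The running totals above are just interval addition: the net change is only known to lie between the sum of the lower bounds and the sum of the upper bounds. A minimal Python sketch, using the figures from the example:

```python
# Sum per-change bounds on unpleasantness into bounds on the net change.
# Each change's effect is only known to lie in [lo, hi]; assuming the
# changes are independent, the net effect lies in [sum of los, sum of his].

def sum_intervals(changes):
    """Combine per-step (lo, hi) bounds into bounds on the net change."""
    lo = sum(l for l, _ in changes)
    hi = sum(h for _, h in changes)
    return lo, hi

changes = [(1, 4), (-3, -1)]
print(sum_intervals(changes))  # (-2, 3)

changes += [(1, 4), (-3, -1)]
print(sum_intervals(changes))  # (-4, 6)

changes += [(2, 3)]
print(sum_intervals(changes))  # (-2, 9)
```

Each step widens the interval by the width of the new change’s bounds, which is why comparisons across longer sequences of changes become progressively less informative.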

The more changes, or the larger the changes, needed to get from one brain to the other, the looser the bounds on the comparisons can become, the further they may extend in both the negative and the positive direction overall,[2] and the less reasonable it seems to make such comparisons at all.

  1. ^

    In principle, the gap between the bounds could sometimes shrink with additional changes. In the simplest case, if you make a change of between +1 and +3 in unpleasantness, and then reverse it, which means adding a change of between −3 and −1, the two together amount to no net physical change and so should give 0 net change in unpleasantness, not something between −2 and +2.

  2. ^

    However, they could aggregate to definitely be positive, or aggregate to definitely be negative.

Crossposted to LessWrong