The crux I think lies in, “is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing.” I guess the point established here is that it is, in fact, sensitive to these parameters.
In particular if one takes this ‘total utility’ approach of adding up everyone’s individual utility we have to ask what each individual’s utility is a function of.
It seems easy to argue that the utility of existing individuals will be affected by expanding or contracting the total pool of individuals. There will be opposing forces of division of scarce resources vs network effects etc., unless such dependencies are ruled out by stipulation.
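As a toy illustration (my own sketch, not from the thread, modelling only the resource-division force): if each person's utility is the log of their equal share of a fixed resource pool, total utility peaks at a finite population and then falls, so the total-utility ranking need not favour ever-larger populations under this functional form.

```python
import math

RESOURCES = 1_000.0  # assumed fixed pool, divided equally among n people

def per_person_utility(n: int) -> float:
    # Log of the per-capita resource share: diminishing returns, and
    # negative utility once a person's share drops below 1 unit.
    return math.log(RESOURCES / n)

def total_utility(n: int) -> float:
    # Total Utilitarianism: just add up everyone's utility.
    return n * per_person_utility(n)

# The total is maximised at a finite population (near RESOURCES / e),
# and adding more people makes it strictly worse thereafter.
best_n = max(range(1, 5_000), key=total_utility)
print(best_n)  # 368, close to 1000 / e
```

Whether functions like this one are ruled in or out is exactly what the stipulation question above turns on.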
A way the argument above could be taken down would be to write down an example utility function, plug it into the total utility calculation, and show that the RC does hold, then point out that the function comes from a broad class which covers most situations of practical interest.
If the best defence is indeed just pointing out that it’s true for a narrow range of assumptions, my reaction will be like, “OK, but that means I don’t have to pay much attention whenever it crops up in arguments because it probably doesn’t apply.”
> The crux I think lies in, “is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing.” I guess the point established here is that it is, in fact, sensitive to these parameters.

> In particular if one takes this ‘total utility’ approach of adding up everyone’s individual utility we have to ask what each individual’s utility is a function of.
Yes, that is a question that needs to be answered, but population ethics is not an attempt to answer it. This subdiscipline treats distributions of wellbeing across individuals in different hypothetical worlds as a given input, and seeks to find a function that outputs a plausible ranking of those worlds. The Repugnant Conclusion arises because some of those functions produce rankings that seem intuitively very implausible.
The claim is not that the worlds so ranked are likely to arise in practice. Rather, the claim is that, intuitively, a theory should never generate those rankings. This is how philosophers generally assess moral theories: they construct thought experiments intended to elicit an intuitive response, and they contrast this response with the implications of the theory. For example, in one of the trolley thought experiments, it seems intuitively wrong (to most humans, at least) to push a big man to stop the trolley, but this is what utilitarianism apparently says we should do (kill one to save five). It is of no consequence, in this context, that we will almost certainly never face such a dilemma in practice.
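The framing described here can be sketched directly (a minimal toy of my own, with the numbers assumed for illustration): each world is given as a distribution of wellbeing levels, and the total view ranks worlds by summing them, which is exactly where the Repugnant Conclusion appears.

```python
def total_wellbeing(world):
    # Total Utilitarianism's ranking function: sum the given wellbeing levels.
    return sum(world)

# World A: a modest population, all with very high wellbeing.
world_a = [100.0] * 1_000
# World Z: a vast population whose lives are barely worth living.
world_z = [0.25] * 1_000_000

# The total view ranks Z strictly above A: the Repugnant Conclusion.
print(total_wellbeing(world_a), total_wellbeing(world_z))  # 100000.0 250000.0
```

Note that the wellbeing levels are taken as the input, with no assumption about how they arise from resources or population size.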
As someone with a mathematical background, I see a claim about a general implication (the RC) arising from Total Utilitarianism. I ask ‘what is Total Utilitarianism?’ I understand ‘add up all the utilities’. I ask ‘what would the utility functions have to look like for the claim to hold?’ The answer is, ‘quite special’.
I don’t think any of us should be comfortable with not checking the claim works at a gears level. The claim here being, approximately, that the RC is implied under Total Utilitarianism regardless of the choice of utility function. Which is false, as demonstrated above.
> This subdiscipline treats distributions of wellbeing across individuals in different hypothetical worlds as a given input, and seeks to find a function that outputs a plausible ranking of those worlds.
If you’d be interested in formalising what this means, I could try to show that either the formalisation is uninteresting or that some form of my counterexamples to the RC still holds.
Thanks for the considered reply :)