This criticism is most similar to the ‘Variable value principles’ discussed in the SEP article. The difference here is that we are not trying to find a ‘modification’ of total utilitarianism. Instead we argue that the Conclusion doesn’t follow from the premises in the general case, even if we are total utilitarians.
Superficially, the difference seems merely verbal: what they call a modification of total utilitarianism, you call a version of total utilitarianism. Is there anything substantive at stake?
Well, on the basis of the description in the SEP article:
> The idea behind this view is that the value of adding worthwhile lives to a population varies with the number of already existing lives in such a way that it has more value when the number of these lives is small than when it is large.
It’s not the same thing, since above we’re saying that each individual’s utility is a function of the whole setup, so when you add new people you change the existing population’s utilities. The SEP description instead sounds like it changes only what happens at the margin.
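To make that contrast schematic (the functional forms here are only illustrative assumptions, not something taken from the post or the SEP entry): on the variable-value picture the individual welfare levels $w_i$ stay fixed and only the contributive value of an extra life shrinks as the population grows, e.g.

$$V_{\text{variable}}(w_1,\dots,w_n)\;=\;\sum_{i=1}^{n} g(i)\,w_i,\qquad g>0\ \text{decreasing},$$

whereas on the view argued for above each person’s utility itself depends on the whole setup, say on the population size $n$ and the total resources $R$, so every summand can move when people are added:

$$V_{\text{total}}(n,R)\;=\;\sum_{i=1}^{n} u_i(n,R).$$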
The main argument above is more or less technical, rather than ‘verbal’. And reliance on verbal argument is pretty much the root of the original issue.
Even if someone else has said something similar some other time, there’s still value in a rederivation from a different starting position. I’m not so much concerned with credit for coming up with an idea as with encountering instances of this issue less frequently.
Thanks for the clarification. My intention was not to dismiss your proposal, but to understand it better.
After reading your comment and re-reading your post, I understand you to be claiming that the Repugnant Conclusion follows only if the mapping of resources to wellbeing takes a particular form, which can’t be taken for granted. I agree that this is substantively different from the proposals in the section of the SEP article, so the difference is not verbal, contrary to what it seemed to me initially.
However, I don’t think this works as a reply to the Repugnant Conclusion, which is a thought experiment intended to test our moral intuitions about how the wellbeing of different people should be aggregated to determine the value of worlds, and is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing. That is, the Repugnant Conclusion stipulates that individual wellbeing is very high in the low population world and slightly above neutrality in the high population world, and combinations of resources and utility functions incompatible with those wellbeing levels are ruled out by stipulation.
Apologies if this is not a correct interpretation of your proposal.
Thanks for the considered reply :)

The crux, I think, lies in “is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing.” I guess the point established here is that it is, in fact, sensitive to these parameters.
In particular, if one takes this ‘total utility’ approach of adding up everyone’s individual utility, we have to ask what each individual’s utility is a function of.
It seems easy to argue that the utility of existing individuals will be affected by expanding or contracting the total pool of individuals: there are opposing forces of division of scarce resources versus network effects, and so on.
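As a toy illustration of just the resource-division side of that trade-off (the functional form below is only an assumption for the sake of example, not the claim of the post):

```python
# Toy model (illustrative assumption only): a fixed resource pool R is split
# evenly among n people, and each person's utility is a concave function of
# their share minus a fixed subsistence cost c.
from math import sqrt

R, c = 1_000_000.0, 1.0            # hypothetical resource pool and cost

def individual_utility(n: int) -> float:
    return sqrt(R / n) - c

def total_utility(n: int) -> float:
    return n * individual_utility(n)        # = sqrt(R * n) - c * n

for n in [10**k for k in range(1, 9)]:
    print(f"n = {n:>11,d}   u_i = {individual_utility(n):9.3f}   total = {total_utility(n):16,.1f}")

# Total utility peaks at n* = R / (4 c^2) = 250,000 people, where each person
# is still above neutrality (u_i = c > 0).  Piling on further lives that are
# barely worth living strictly lowers the total, so for this utility function
# the Repugnant Conclusion does not go through.
```

For this choice the total-utility maximiser is a finite population at decent individual welfare, not an enormous one at welfare barely above zero.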
> […] combinations of resources and utility functions incompatible with those wellbeing levels are ruled out by stipulation.
A way the argument above could be taken down would be to write down an example utility function, plug it into the total utility calculation, and show that the RC does hold, then point out that the function comes from a broad class which covers most situations of practical interest.
If the best defence is indeed just pointing out that it’s true for a narrow range of assumptions, my reaction will be like, “OK, but that means I don’t have to pay much attention whenever it crops up in arguments because it probably doesn’t apply.”
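For concreteness, the kind of counterexample described above might look like this, under the stipulation (assumed here purely for illustration) that each person’s welfare is a constant that doesn’t depend on how many people exist:

```python
# Illustrative stipulation: welfare levels are fixed per person and do not
# depend on population size, so total utility is simply n * w.

def total_utility(n: int, welfare_per_person: float) -> float:
    return n * welfare_per_person

world_a = total_utility(10_000_000_000, 100.0)     # 10 billion excellent lives
world_z = total_utility(2_000_000_000_000, 1.0)    # 2 trillion lives barely worth living

print(world_z > world_a)   # True: Z beats A, i.e. the RC holds for this utility function
```

Whether that stipulation-style setup really comes from “a broad class which covers most situations of practical interest” is, of course, exactly what’s in question above.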
> The crux, I think, lies in “is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing.” I guess the point established here is that it is, in fact, sensitive to these parameters.
>
> In particular, if one takes this ‘total utility’ approach of adding up everyone’s individual utility, we have to ask what each individual’s utility is a function of.
Yes, that is a question that needs to be answered, but population ethics is not an attempt to answer it. This subdiscipline treats distributions of wellbeing across individuals in different hypothetical worlds as a given input, and seeks to find a function that outputs a plausible ranking of those worlds. The Repugnant Conclusion arises because some of those functions produce rankings that seem intuitively very implausible.
The claim is not that the worlds so ranked are likely to arise in practice. Rather, the claim is that, intuitively, a theory should never generate those rankings. This is how philosophers generally assess moral theories: they construct thought experiments intended to elicit an intuitive response, and they contrast this response with the implications of the theory. For example, in one of the trolley thought experiments, it seems intuitively wrong (to most humans, at least) to push a big man to stop the trolley, but this is what utilitarianism apparently says we should do (kill one to save five). It is of no consequence, in this context, that we will almost certainly never face such a dilemma in practice.
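One standard way to make that setup concrete (just a schematic, nothing hinges on the exact notation): an axiology assigns a value to every finite welfare distribution,

$$V:\bigcup_{n\ge 1}\mathbb{R}^{n}\to\mathbb{R},\qquad V_{\text{total}}(w_1,\dots,w_n)=\sum_{i=1}^{n} w_i,$$

and the Repugnant Conclusion is the observation that for any $N$ people at a very high level $w$, this function ranks some much larger population at a barely positive level $\varepsilon$ higher, since $M\varepsilon > Nw$ whenever $M > Nw/\varepsilon$.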
As someone with a mathematical background, I see a claim about a general implication (the RC) arising from Total Utilitarianism. I ask ‘what is Total Utilitarianism?’ I understand ‘add up all the utilities’. I ask ‘what would the utility functions have to look like for the claim to hold?’ The answer is, ‘quite special’.
I don’t think any of us should be comfortable with not checking that the claim works at a gears level. The claim here is, approximately, that the RC is implied under Total Utilitarianism regardless of the choice of utility function, which is false, as demonstrated above.
> This subdiscipline treats distributions of wellbeing across individuals in different hypothetical worlds as a given input, and seeks to find a function that outputs a plausible ranking of those worlds.
If you’d be interested in formalising what this means, I could try to show that either the formalisation is uninteresting or that some form of my counterexamples to the RC still holds.