Ah, well maybe we should just defer to Broome and Greaves and not engage in the object-level discussions at all!
Hah, perhaps I deserved this. I was just trying to indicate that there are people who both ‘understand the theory’ and hold that the <A, B1, B2> argument is important, which was a response to your “I find people do tend to very easily dismiss the view, but usually without really understanding how it works!” comment. I concede, though, that you weren’t saying that of everyone.
All views in population ethics have bonkers results, something population ethicists widely agree on.
Yes, I understand that it’s a matter of accepting the least bonkers result. Personally, I find the idea that it might be neutral to bring miserable lives into this world to be up there with some of the more bonkers results.
You may just write me off as a monster, but I quite like symmetries and I’m minded to accept a symmetrical person-affecting view.
I don’t write you off as a monster! We all have different intuitions about what is repugnant. It is useful to have (I think) reached a better understanding of both of our views.
My view goes something like:
I am not willing to concede that it might be neutral to bring terrible lives into this world, which means I reject necessitarianism and therefore feel the force of the <A, B1, B2> argument (as I also hold transitivity to be an important axiom). I’m not sure if I’m convinced by your argument that necessitarianism gets you out of the quandary (maybe it does, I would have to think about it more), but ultimately it doesn’t matter to me as I reject necessitarianism anyway.
I note that MichaelStJules says that you can hold onto transitivity at the expense of IIA, but I don’t think this does a whole lot for me. I am also concerned by the non-identity problem. Ultimately I’m not really convinced by arguably the least objectionable person-affecting view out there (you can see my top-level comment on this post), and this all leads me to having more credence in total utilitarianism than person-affecting views (which certainly wasn’t always the case).
The ‘bonkers result’ for total utilitarianism is the repugnant conclusion, which I don’t find repugnant, as I think “lives barely worth living” are actually pretty decent—they are worth living after all! But then there’s the “very repugnant conclusion”, which still somewhat bothers me. (EDIT: I am also interested in the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views, although I haven’t actually read through the paper yet to understand it completely.)
So overall I’m still somewhat morally uncertain about population axiology, but I probably have the highest credence in total utilitarianism. In any case, it is interesting to note that it has been argued that even a minimal credence in total utilitarianism can justify acting as a total utilitarian, if one resolves moral uncertainty by maximising expected moral value.
So all in all I’m content to act as a total utilitarian, at least for now.
It was actually fairly useful to write that out.
I am also interested in the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views, although I haven’t actually read through the paper yet to understand it completely
I’d just check the definition of the Extended very repugnant conclusion (XVRC) on p. 19. Roughly: tiny changes in welfare (e.g. pin pricks, dust specks) across an appropriate base population can make up for the addition of any number of arbitrarily bad lives and the foregoing of any number of arbitrarily good lives. The required base population depends on the magnitude of the welfare change and on the bad and good lives in question.
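If it helps, here’s one way to write that out semi-formally (my own paraphrase, reading the ε-changes as tiny improvements; not the paper’s exact statement). Write X + ε for the base population X with each member’s welfare raised by ε:

$$\forall\, \varepsilon > 0,\ \forall\, m, b, n, g > 0,\ \exists X:\quad (X + \varepsilon)\,\cup\,\{m \text{ lives at } -b\}\ \succ\ X\,\cup\,\{n \text{ lives at } g\}$$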
The claim of the paper is that basically all theories so far have led to the XVRC.
It’s possible to come up with theories that don’t. Take Meacham’s approach, but instead of using the sum of harms, use the maximum individual harm (with the counterpart relations defined to minimize the maximum harm in the world).
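To illustrate the sum-vs-max contrast, here’s a toy sketch (the worlds, numbers, and function names are all mine, and it ignores the actual machinery of counterpart relations, saturation, etc.):

```python
# Worlds map person-ids to welfare levels; a person absent from a world is
# treated here as incurring zero harm (one simplification among many).

def harms(world, worlds):
    """Per-person harm in `world`: shortfall from that person's best welfare
    across the candidate worlds."""
    return [max(max(w.get(p, welfare) for w in worlds) - welfare, 0.0)
            for p, welfare in world.items()]

def best_world(worlds, aggregate):
    """Pick the world whose aggregated harm is smallest."""
    return min(worlds, key=lambda w: aggregate(harms(w, worlds)))

w1 = {"a": 10, "b": 10, "c": -5}  # one person badly off
w2 = {"a": 4, "b": 4, "c": 4}     # the shortfall spread thinly

# Sum of harms prefers w1 (total shortfall 9 vs 12); max individual harm
# prefers w2 (worst shortfall 6 vs 9).
print(best_world([w1, w2], sum))
print(best_world([w1, w2], lambda hs: max(hs, default=0.0)))
```

As I understand the suggestion, aggregating by max rather than sum is what blocks the XVRC-style construction: piling up ever more small harms can never outweigh one very large harm.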
Or do something like this for pairwise comparisons only, and then extend using some kind of voting method, like beatpath, as discussed in Thomas’s paper on the asymmetry.
This is similar to the view the animal rights ethicist Tom Regan described here:
Given that these conditions are fulfilled, the choice concerning who should be saved must be decided by what I term the harm principle. Space prevents me from explaining that principle fully here (see The Case, chapters 3 and 8, for my considered views). Suffice it to say that no one has a right to have his lesser harm count for more than the greater harm of another. Thus, if death would be a lesser harm for the dog than it would be for any of the human survivors—(and this is an assumption Singer does not dispute)—then the dog’s right not to be harmed would not be violated if he were cast overboard. In these perilous circumstances, assuming that no one’s right to be treated with respect has been part of their creation, the dog’s individual right not to be harmed must be weighed equitably against the same right of each of the individual human survivors.
To weigh these rights in this fashion is not to violate anyone’s right to be treated with respect; just the opposite is true, which is why numbers make no difference in such a case. Given, that is, that what we must do is weigh the harm faced by any one individual against the harm faced by each other individual, on an individual, not a group or collective basis, it then makes no difference how many individuals will each suffer a lesser, or who will each suffer a greater, harm. It would not be wrong to cast a million dogs overboard to save the four human survivors, assuming the lifeboat case were otherwise the same. But neither would it be wrong to cast a million humans overboard to save a canine survivor, if the harm death would be for the humans was, in each case, less than the harm death would be for the dog.
These approaches all sacrifice the independence of irrelevant alternatives or transitivity.
Another way to “avoid” it is to recognize gaps in welfare, so that the smallest allowed change in welfare (in one direction from a given level) is intuitively large. For example, maybe there’s a lexical threshold for sufficiently intense suffering, and a gap in welfare just before it. Suffering may be bearable to different degrees, but some kinds may just be completely unbearable, and the threshold could be where it becomes completely unbearable; see some discussion of thresholds here. Then pushing people past the threshold is extremely bad, no matter where they start, whether that’s right next to the threshold or from non-existence.
Or, maybe there’s no gap, but just barely pushing people past that threshold is extremely bad anyway, and roughly as bad as bringing people into existence already past that threshold. I think a gap in welfare is functionally the same, but explains this better.
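One toy way to draw the gap (my own sketch, with s the suffering intensity, T the unbearability threshold, and G > 0 the size of the gap, taken to be huge or lexically dominant):

$$w(s) = \begin{cases} -s, & s < T \\ -s - G, & s \ge T \end{cases}$$

Any change that crosses T costs at least G, whether it starts just below T or from non-existence, which is the sense in which the two cases come out roughly equally bad.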
I am also interested in the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views
Not negative utilitarian axiology. The proof relies on the assumption that the utility variable u can be positive.
What if “utility” is meant to refer to the objective aspects of the beings’ experience etc. that axiologies would judge as good or bad—rather than to moral goodness or badness themselves? Then I think there are two problems:
1) Supposing it’s a fair move to aggregate all these aspects into one scalar, the theorem assumes the function f must be strictly increasing. Under this interpretation the NU function would be f(u) = min(u, 0), which is not strictly increasing (it is flat for all u > 0), so the theorem’s assumption fails for NU.
2) I deny that such aggregation is even a reasonable move. Restricting to hedonic welfare for simplicity, it would be more appropriate for f to be a function of two variables, happiness and suffering. Collapsing these into a single scalar input, I think, obscures some massive moral differences between different formulations of the Repugnant Conclusion, for example. Interestingly, though, if we formulate the VRC as in that paper by treating all positive values of u as “only happiness, no suffering” and all negative values as “only suffering, no happiness” (thereby making my objection on this point irrelevant), the theorem still goes through for all those axiologies. But not for NU.
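In symbols, the contrast I have in mind, with h, s ≥ 0 the happiness and suffering in a life:

$$f_{\mathrm{TU}}(h, s) = h - s, \qquad f_{\mathrm{NU}}(h, s) = -s$$

Collapsing (h, s) into the single scalar u = h − s loses exactly the distinction NU cares about, and the scalar stand-in f(u) = min(u, 0) fails the strictly-increasing assumption, as in point 1.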
Edit: The paper seems to acknowledge point #2, though not the implications for NU:
One way to see that an ε increase could be very repugnant is to recall Portmore’s (1999) suggestion that ε lives in the restricted RC could be “roller coaster” lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such an ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. … Moreover, if ε-changes are of the “roller coaster” form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering.
Plenty of theories avoid the RC and VRC, but this paper extends the VRC on p. 19. Basically, arbitrarily small changes in welfare across a base population can make up for the addition of an arbitrary number of arbitrarily bad lives and the foregoing of an arbitrary number of arbitrarily good lives, where the required base population depends on the previous factors.
For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)
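To put toy numbers on that (my own arithmetic, not the paper’s): under total NU, relieving each of N hell-dwellers by ε while adding m new lives at welfare −b is an improvement whenever

$$\varepsilon N > m b, \quad \text{i.e.} \quad N > \frac{mb}{\varepsilon},$$

so for any m, b, and ε, a big enough hell population N makes the trade come out positive.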
Also, related to your edit, epsilon changes could flip a huge number of good or neutral lives in a base population to marginally bad lives.
For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)
This may be counterintuitive to an extent, but to me it doesn’t reach “very repugnant” territory. Misery is still reduced here; an epsilon change of the “reducing extreme suffering” sort, even if barely so, doesn’t seem morally frivolous like the creation of an epsilon-happy life or, worse, the creation of an epsilon roller-coaster life. But I’ll have to think about this more. It’s a good point, thanks for bringing it to my attention.
For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell.
What would it mean to repeat this step (up to an infinite number of times)?
Intuitively, it sounds to me like the suffering gets divided more equally between those who already exist and those who do not, which ultimately leads to an infinite population where everyone has a subjectively perfect experience.
In the finite case, it leads to an extremely large population of almost perfectly untroubled lives.
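Here’s a quick toy simulation of that, under total NU and with entirely made-up numbers:

```python
# Repeating the step: shave eps off everyone's suffering, and in exchange add
# m newcomers at suffering level b. Each step is an NU-improvement as long as
# the suffering relieved (eps * population) exceeds the suffering added (m * b).
eps, m, b = 0.01, 2, 1.0
pop = [10.0] * 1000            # suffering levels of the initial hell population

for _ in range(1000):
    assert eps * len(pop) > m * b           # NU approves this step
    pop = [max(s - eps, 0.0) for s in pop] + [b] * m

print(len(pop))                             # 3000: the population has tripled
print(sum(1 for s in pop if s < 1e-9))      # most lives are now essentially untroubled
print(max(pop), sum(pop))                   # worst-off level and total suffering both collapse
```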
If extrapolated in this way, it seems quite plausible that the population we eventually get by repeating this step is much better than the initial population.
I wrote some more about this here in reply to Jack.
Glad we made some progress!
FWIW, there’s a sense in which total utilitarianism is my second-favourite view: I like its symmetry and I think it has the right approach to aggregation. Insofar as I am a totalist, I don’t find the repugnant conclusion repugnant. I just have issues with comparativism and impersonal value.
It’s not obvious to me that totalism does ‘swamp’ if one appeals to moral uncertainty, but that’s another promissory note.
Anyway, a useful discussion.
Definitely a useful discussion and I look forward to seeing you write more on all of this!