I have several issues with the internal consistency of this argument:
If individuals are allowed to select their own critical levels to respect their autonomy and preferences in any meaningful sense, that seems to imply respecting those people who value their existence and so would set a low critical level; then you get an approximately total view with regard to those sorts of creatures, and so a future populated with such beings can still be astronomically great
The treatment of zero levels seems inconsistent: if it is contradictory to set a critical level below the level one would prefer to exist, it seems likewise nonsensical to set it above that level
You suggest that people set their critical levels based on their personal preferences about their own lives, but then you make claims about their choices based on your intuitions about global properties like the Repugnant Conclusion, with no link between the two
The article makes much of avoiding the sadistic repugnant conclusion, but the view you seem to endorse at the end would support creating arbitrary numbers of lives consisting of nothing but intense suffering to prevent the existence of happy people with no suffering who set their critical level above their actual welfare level
On the first point, you suggest that individuals get to set their own critical levels based on their preferences about their own lives. E.g.
The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction.
So if my desires and attitudes are such that I set a critical level well below the maximum, then my life can add substantial global value. E.g. if A has utility +5 and sets critical value 0, B has utility +5 and chooses critical value 10, and C has utility −5 and critical value 10, then 3 lives like A will offset one life like C, and you can get most of the implications of the total view, and in particular an overwhelmingly high value of the future if the future is mostly populated with beings who favor existing and set low critical levels for themselves (which one could expect from people choosing features of their descendants or selection).
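A minimal sketch of that arithmetic (illustrative numbers only, with relative utility taken as utility minus the self-chosen critical level):

```python
# Variable critical level utilitarianism (CLU): each person contributes
# their utility minus their own self-chosen critical level.
def relative_utility(utility, critical_level):
    return utility - critical_level

# The three illustrative lives from the example above.
A = relative_utility(5, 0)     # +5: happy, values existence, low critical level
B = relative_utility(5, 10)    # -5: equally happy, but chose a high critical level
C = relative_utility(-5, 10)   # -15: miserable, with a high critical level

print(A, B, C)       # 5 -5 -15
print(3 * A + C)     # 0: three lives like A exactly offset one life like C
```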
On the second point, returning to this quote:
The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction.
I would note that utility in the sense of preferences over choices, or a utility function, need not correspond to pleasure or pain. The article is unclear on the concept of utility it is using, but the above quote seems to require a preference base, i.e. zero utility is defined as the point at which the person would prefer to be alive rather than not. But then if 0 is the level at which one would prefer to exist, isn’t it equally contradictory to have a higher critical level and reject lives that one would prefer? Perhaps you are imagining someone who thinks ‘given that I am alive I would rather live than die, but I dislike having come into existence in the first place, which death would not change.’ But in this framework that attitude would simply enter as negative utility in the assessment of the overall life (and people without that attitude can be unbothered).
Regarding the third point, if each of us chooses our own critical level autonomously, I do not get to decree a level for others. But the article makes several arguments that seem to conflate individual and global choice by talking about everyone choosing a certain level, e.g.:
If people want to move safely away from the sadistic repugnant conclusion and other problems of rigid critical level utilitarianism, they should choose a critical level infinitesimally close to (but still below) their highest preferred levels.
But if I set a very high critical level for myself, that doesn’t lower the critical levels of others, and so the repugnant conclusion can proceed just fine with the mildly good lives of those who choose low critical levels for themselves. Having the individuals choose for themselves based on prior prejudices about global population ethics also defeats the role of the individual choice as a way to derive the global conclusion. I don’t need to be a total utilitarian in general to approve of my existence in cases in which I would prefer to exist.
Lastly, a standard objection to critical level views is that they treat lives below the critical level (but better than nothing by the person’s own lights and containing happiness but not pain) as negative, and so will endorse creating lives of intense suffering by people who wish they had never existed to prevent the creation of multiple mildly good lives. With the variable critical level account all those cases would still go through using people who choose high critical levels (with the quasi-negative view, it would favor creating suicidal lives of torment to offset the creation of blissful beings a bit below the maximum). I don’t see that addressed in the article.
The existence of critical level theories all but confirms the common claim that those who deny the Repugnant Conclusion underrate low quality lives. An inevitable symptom of this is the confused attempt to set a critical level that is positive.
When we think about the Z population, we try to conceive of a life that is only slightly positive using intuitive affect/aversion heuristics. In such a life, something as trivial as one additional bad day could make it net negative. Spread across the whole Z population, this makes the difference between Z being extremely good and extremely bad. But this difference, although massive in terms of total welfare, is small from the point of view of heuristic intuitions that focus on the quality of life of a single individual in the Z population.
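A toy calculation makes the point concrete (the numbers are mine, purely illustrative):

```python
# One marginal bad day flips each Z life from slightly positive to slightly
# negative: a difference that is tiny per person but enormous in aggregate.
z_population = 10**15            # a hypothetical Z population size
slightly_positive_life = 0.1
same_life_one_bad_day = -0.1

print(slightly_positive_life * z_population)   # +1e14: Z is extremely good
print(same_life_one_bad_day * z_population)    # -1e14: Z is extremely bad
```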
For this reason, the A vs Z comparison is extremely unreliable, and we should expect our intuitions to go completely haywire when asked to make judgements about it. In such cases, it is best to return to obvious arguments and axioms, such as that good lives are good and that more of a good thing is better. Numerous persuasive propositions and axioms imply the Repugnant Conclusion from many different directions.
Critical level theories are a symptom of a flawed and failed approach to ethics that relies on intuitions we have reason to believe will be unreliable and are contradicted by numerous highly plausible lines of argument.
I don’t see why the A-Z comparison is unreliable, based on your example. Why would the intuitions behind the repugnant conclusion be less reliable than the intuitions behind our choice of axioms? And we’re not merely talking about the repugnant conclusion, but about the sadistic repugnant conclusion, which is intuitively more repugnant. So suppose we have to choose between two situations. In the first situation, there is only one next future human generation after us (let’s say a few billion people), all with very long and extremely happy lives. In the second situation, there are quadrillions of future human generations, each with billions of people, but they live only for one minute, in which they can experience the joy of taking a bite from an apple. Except for the first of those future generations, who will suffer extremely for many years. So in order to have many future generations, the first of those future generations will have to live lives of extreme misery, and all the other future lives are nothing more than tasting an apple. Can the joy of quadrillions of people tasting an apple trump the extreme misery of billions of people for many years?
On the critical level theory, the lives of the people who come into the world experiencing the joy of an apple for 1 minute have negative value. This seems clearly wrong, which illustrates my point. You would have to say that the world was made worse by the existence of a being who lived for one minute, enjoyed their apple, then died (and there were no instrumental costs to their life). This is extremely peculiar, from a welfarist perspective. Welfarists should be positive about additional welfare! Also, do you think it is bad for me to enjoy a nice juicy Pink Lady now? If not, then why is it bad for someone to come into existence and only do that?
Methodologically, rather than noting that the sadistic repugnant conclusion is counterintuitive and then trying to conjure up theories that avoid it, I think it would make more sense to ask why the sadistic repugnant conclusion would be false. The Z lives are positive, so it is better for them to live than not. The value aggregates in a non-diminishing way: the first life adds as much value as the quadrillionth. This means that the Z population can have arbitrarily large value depending on its size, which means that it can outweigh lots of other things. In my view, it is completely wrongheaded to start by observing that a conclusion is counterintuitive and to ignore the arguments for it when building alternatives. This is an approach that has led to meagre progress in population ethics over the last 30 years. Can you name a theory developed in this fashion that now commands widespread assent in the field? The approach leads people to develop theories such as CLU, which commit you to holding that a life of positive welfare is negative, which is difficult to understand from a welfarist perspective.
Perhaps there is more of importance than mere welfare. Concerning the sadistic repugnant conclusion I can say two things. First, I am not willing to put myself and all my friends in extreme misery merely for the extra existence of quadrillions of people who have nothing but the small positive experience of tasting an apple. Second, if I were one of those extra people living for a minute and tasting an apple, knowing that my existence involved the extreme suffering of billions of people who could otherwise have been very happy, I would rather not exist. That means that even if my welfare from briefly tasting the apple (a nice juicy Pink Lady) is positive, I still have a preference for the other situation where I don’t exist, so my preference (relative utility) in the situation where I exist is negative. So in the second situation, where the extra people exist, whether I’m one of the suffering people or one of the extra, apple-eating people, in both cases I have a negative preference for that situation. Or stated differently: in the first situation, where only the billions of happy people exist, no one can complain (the non-existing people are not able to complain about their non-existence and their forgone happiness of tasting an apple). In the second situation, where those billions of people are in extreme misery, they could complain. The axiom that we should minimize the sum of complaints is as reasonable as the axiom that we should maximize the sum of welfare.
I have a paper about complaints-based theories that may be of interest: https://www.journals.uchicago.edu/doi/abs/10.1086/684707

One argument I advance there is that these theories appear not to be applicable to moral patients who lack rational agency. Suppose that mice have net positive lives. What would it mean to say of them that they have a preference for not putting millions in extreme misery for the sake of their small net positive welfare? If you say that we should nevertheless not put millions in extreme misery for the sake of quadrillions of mice, then it looks like you are appealing to something other than a complaints-based theory to justify your anti-aggregative conclusion. So the complaints-based theory isn’t doing any work in the argument.
Thanks for the paper!
Concerning the moral patients and mice: they indeed lack the capability to determine their reference values (critical levels) and to express their utility functions (perhaps we can derive these from their revealed preferences). So those mice do not have a preference for a critical level or for a population ethical theory. They don’t have a preference for total utilitarianism or negative utilitarianism or whatever. That could mean that we can choose a critical level for them, and hence the population ethical implications, and those mice cannot complain about our choices if they are indifferent. If we strongly want total utilitarianism, and hence a zero critical level, fine: then we can say that those mice also have a zero critical level. But if we want to avoid the sadistic repugnant conclusion in the example with the mice, fine: then we can set the critical levels of those mice higher, such that we choose the situation where those quadrillions of mice don’t exist. Even the mice who do exist cannot complain about our choice for the non-existence of those extra quadrillions of mice, because they are indifferent to our choice.
“If individuals are allowed to select their own critical levels to respect their autonomy and preferences in any meaningful sense, that seems to imply respecting those people who value their existence and so would set a low critical level; then you get an approximately total view with regard to those sorts of creatures, and so a future populated with such beings can still be astronomically great.” Indeed: if everyone in the future (except me) were a total utilitarian, willing to bite the bullet and accept the sadistic repugnant conclusion, setting a very low critical level for themselves, I would accept their choices, and we would end up with a variable critical level utilitarianism that is very, very close to total utilitarianism (it is not exactly total utilitarianism, because I would be the only one with a higher critical level). So the question is: how many people in the future are willing to accept the sadistic repugnant conclusion?
“The treatment of zero levels seems inconsistent: if it is contradictory to set a critical level below the level one would prefer to exist, it seems likewise nonsensical to set it above that level.” Utility measures a preference for a certain situation, and this is independent of other possible situations. However, the critical level, and hence the relative utility, also takes into account other possible situations. For example: I have a happy life with a positive utility. But if one could choose another situation where I did not exist and everyone else was maximally happy and satisfied, I would prefer (if that would still be an option) that second situation, even if I don’t exist in that situation. That means my relative utility could be negative, if that second situation were eligible. So in a sense, in a particular choice set (i.e. when the second situation is available), I prefer my non-existence. Preferring my non-existence, even if my utility is positive, means I choose a critical level that is higher than my utility.
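In symbols (a notation sketch of my own, not taken from the article): write u_i(s) for person i's utility in situation s, and c_i(S) for the critical level that i chooses given the choice set S. The relative utility is then

$$u'_i(s) = u_i(s) - c_i(S),$$

so a positive utility u_i(s) > 0 is compatible with a negative relative utility whenever the available alternatives lead i to choose c_i(S) > u_i(s).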
“You suggest that people set their critical levels based on their personal preferences about their own lives, but then you make claims about their choices based on your intuitions about global properties like the Repugnant Conclusion, with no link between the two.” I do not make claims about their choices based on my intuitions. All I can say is that if people really want to avoid the sadistic repugnant conclusion, they can do so by setting a high critical level. But to be altruistic, I have to accept the choices of everyone else. So if you all choose a critical level of zero, I will accept that, even if that means accepting the sadistic repugnant conclusion, which is very counterintuitive to me.
“The article makes much of avoiding the sadistic repugnant conclusion, but the view you seem to endorse at the end would support creating arbitrary numbers of lives consisting of nothing but intense suffering to prevent the existence of happy people with no suffering who set their critical level above their actual welfare level.” This objection to fixed critical level utilitarianism can be easily avoided with variable critical level utilitarianism. Suppose there is someone with a positive utility (a very happy person) who sets his critical level so high that a situation should be chosen where he does not exist and where extra people with negative utilities exist. Why would he set such a high critical level? He cannot want that. This is even more counterintuitive than the sadistic repugnant conclusion. With fixed critical level utilitarianism, such a counterintuitive conclusion can occur because everyone would have to accept the high critical level. But variable critical level utilitarianism can easily avoid it by taking lower critical levels.
Such situations exist for any critical level above zero, since any critical level above zero means treating people with positive welfare as a bad thing, to be avoided even at the expense of some amount of negative welfare.
If you think the idea of people with negative utility being created to prevent your happy existence is even more counterintuitive than people having negative welfare to produce your happy existence, it would seem your view would demand that you set a critical value of 0 for yourself.
For example: I have a happy life with a positive utility. But if one could choose another situation where I did not exist and everyone else was maximally happy and satisfied, I would prefer (if that would still be an option) that second situation, even if I don’t exist in that situation.
A situation where you don’t exist but uncounted trillions of others are made maximally happy is going to be better in utilitarian terms (normal, critical-level, variable, whatever), regardless of your critical level (or theirs, for that matter). A change in your personal critical level only changes the actions recommended by your variable CLU when it changes the ranking of actions in terms of total relative utility, which requires the actions to already be within a distance of each other on the scale of a single life.
In other words, that’s a result of the summing up of (relative) welfare, not a reason to misstate your valuation of your own existence.
“If you think the idea of people with negative utility being created to prevent your happy existence is even more counterintuitive than people having negative welfare to produce your happy existence, it would seem your view would demand that you set a critical value of 0 for yourself.” No, my view demands that we should not set the critical level too high. A strictly positive critical level that is low enough not to result in the choice of that counterintuitive situation is still possible.
“A situation where you don’t exist but uncounted trillions of others are made maximally happy is going to be better in utilitarian terms (normal, critical-level, variable, whatever), regardless of your critical level (or theirs, for that matter).” That can be true, but still I prefer my non-existence in that case, so something must be negative. I call that thing relative utility. My relative utility is not about overall betterness, but about my own preference. A can be better than B in utilitarian terms, but still I could prefer B over A.
A strictly positive critical level that is low enough not to result in the choice of that counterintuitive situation is still possible.
As a matter of mathematics this appears impossible. For any critical level c that you pick where c > 0, there is some level of positive welfare w with c > w > 0, whose relative utility u = w - c is negative.
There will then be some population of people with negative utility and negative relative utility, each with relative utility between u and 0, whose existence variable CLU would prefer to your existence with c and w. You can use gambles (with arbitrarily divisible probabilities) or aggregation across similar people to bring each relative utility arbitrarily close to zero. So either c <= 0, or CLU will recommend the creation of people with negative utility and negative relative utility to prevent your existence at some positive welfare levels.
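A numeric sketch of this construction (my numbers, with relative utility again taken as welfare minus critical level):

```python
# Fix any critical level c > 0, then pick a welfare w with 0 < w < c.
c, w = 10.0, 1.0
u = w - c                      # -9.0: a positively-happy life counted as bad

# Variable CLU then prefers, say, nine people each with utility -0.5 and
# critical level 0 (relative utility -0.5 each) to your existence:
alternative = 9 * (-0.5)       # -4.5 total relative utility
print(alternative > u)         # True: negative-welfare lives beat your happy life
```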
But the critical level c is variable, and can depend on the choice set. So suppose the choice set consists of two situations. In the first, I exist and have a positive welfare (or utility) w > 0. In the second, I don’t exist and there is another person with a negative utility u < 0; his relative utility u’ will also be negative. For any positive welfare w I can pick a critical level c > 0 with c < w - u’, such that my relative utility w - c > u’, which means it would be better if I exist. So you have it turned around: instead of saying “for any critical level c there is a welfare w...”, we should say “for any welfare w there is a critical level c...”
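A numeric sketch of the reversed quantifier order (again my numbers): fixing the choice set first, a strictly positive critical level can still favour existence:

```python
# Situation 1: I exist with welfare w. Situation 2: I don't exist, and a
# person with negative relative utility u_prime exists instead.
w, u_prime = 1.0, -4.5

# Any critical level c with 0 < c < w - u_prime keeps my relative utility
# above u_prime, so variable CLU prefers situation 1.
c = 5.0                        # 0 < 5.0 < 1.0 - (-4.5) = 5.5
print(w - c > u_prime)         # True: -4.0 > -4.5, my existence is preferred
```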