“In particular, you’ll have a very hard time convincing anyone who takes morality to be mind-independent to accept this view. I would find the view much more plausible if the critical level were determined for each person by some other means.” But choosing a mind-independent critical level seems difficult. By what other means could we determine a critical level? And why should that critical level be the same for everyone and the same in all possible situations? If we can’t find an objective rule to select a universal and constant critical level, picking a critical level introduces arbitrariness. That arbitrariness can be avoided by letting everyone choose their own critical level. If I choose 5 as my critical level and you choose 10 as yours, these choices are in a sense also arbitrary (e.g. why 5 and not 4?), but at least they respect our autonomy.
Furthermore, I argued elsewhere that there is no predetermined universal critical level: https://stijnbruers.wordpress.com/2018/07/03/on-the-interpersonal-comparability-of-well-being/

“If you don’t allow any, then I am free to choose a low negative critical level and live a very painful life, and this could be morally good. But that’s more absurd than the sadistic repugnant conclusion, so you need some constraints.” I don’t think you are free to choose a negative critical level, because that would mean you would be OK with having a negative utility, and by definition that is something you cannot want. If your brain doesn’t like pain, you are not free to choose that from now on you will like pain. And if your brain doesn’t want to be altered so that it likes pain, you are not free to choose to alter your brain. Neither are you free to invert your utility function, for example.

“You seem to want to allow people the autonomy to choose their own critical level but also require that everyone chooses a level that is infinitesimally less than their welfare level in order to avoid the sadistic repugnant conclusion.” That requirement is merely a logical requirement. If people want to avoid the sadistic repugnant conclusion, they will have to choose a high critical level (e.g. the maximum preferred level, to be safe). But there may be some total utilitarians who are willing to bite the bullet and accept the sadistic repugnant conclusion. I wonder how many total utilitarians there are.

“But also, I don’t see how you can use the need to avoid the sadistic repugnant conclusion as a constraint for choosing critical levels without being really ad hoc.” What is ad hoc about it? If people want to avoid this sadistic conclusion, that doesn’t seem to be ad hoc to me. And if in order to avoid that conclusion they choose a maximum preferred critical level, that doesn’t seem ad hoc either.

“you might claim that all positive welfare is only of infinitesimal moral value but that (at least some) suffering is of non-infinitesimal moral disvalue.” As you mention, that also generates some counter-intuitive implications. Variable critical level utilitarianism (including quasi-negative utilitarianism) can avoid the counter-intuitive implications that result from such lexicalities with infinitesimals. For example, suppose we can bring two people into existence. The first will have a negative utility of −10, and suppose that person chooses 5 as his critical level, so his relative utility will be −15. The second person will have a utility of +30. In order to allow his existence, that person can select a critical level infinitesimally below 15 (his maximally preferred critical level). Bringing those two people into existence then becomes infinitesimally good. And the second person will have a relative utility of 15 plus an infinitesimal, which is itself not infinitesimal (hence no lexicality issues here).
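The bookkeeping in this example can be sketched in a few lines of Python. This is my own illustration, not part of the theory: a small float `EPS` stands in for the infinitesimal, and the function name is hypothetical.

```python
# Sketch of the two-person example: relative utility is utility minus
# the self-chosen critical level. EPS is a small float standing in for
# the infinitesimal (an illustrative assumption, not an exact model).
EPS = 1e-9

def relative_utility(utility, critical_level):
    return utility - critical_level

r1 = relative_utility(-10, 5)        # first person: -15
r2 = relative_utility(30, 15 - EPS)  # second person: 15 + EPS
total = r1 + r2                      # infinitesimally positive
```

With a true infinitesimal the total would be exactly the infinitesimal; with floats it is merely a tiny positive number, which is enough to show the sign of the outcome.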

“If the expected value of working on x-risk according to CU is many times greater than the expected value of working on WAS according to QNU (which is plausible), then all else being equal, you need your credence in QNU to be many times greater than your credence in CU. We could easily be looking at a factor of 1000 here, which would require something like a credence < 0.1 in CU, but that’s surely way too low, despite the sadistic repugnant conclusion.” I agree with this line of reasoning, and the ‘maximise expected choice-worthiness’ idea is reasonable. Personally, I consider this sadistic repugnant conclusion to be so extremely counter-intuitive that I give total utilitarianism a very, very, very low credence. But if, say, a majority of people are willing to bite the bullet and really are total utilitarians, my credence in this theory can strongly increase. In the end I am a variable critical level utilitarian, so people can decide for themselves their critical levels and hence their preferred population ethical theory. If more than, say, 0.1% of people are total utilitarians (i.e. choose 0 as their critical level), reducing X-risks becomes dominant.

“I imagine we’d be better off working on large scale s-risks directly.” I agree with the concerns about s-risk and the level of priority of s-risk reduction, but I consider continued wild animal suffering for millions of years to be the most concrete example of an s-risk that we have so far.

Thanks for the reply!

I agree that it’s difficult to see how to pick a non-zero critical level non-arbitrarily—that’s one of the reasons I think it should be zero. I also agree that, given critical level utilitarianism, it’s plausible that the critical level can vary across people (and across the same person at different times). But I do think that whatever the critical level for a person in some situation is, it should be independent of other people’s well-being and critical levels. Imagine two scenarios consisting of the same group of people: in each, you have the exact same life/experiences and level of well-being, say, 5; you’re causally isolated from everyone else; the other people have different levels of well-being and different critical levels in each scenario such that in the first scenario, the aggregate of their moral value (sum of well-being minus critical level for each person) is 1, and in the second this quantity is 7. If I’ve understood you correctly, in the first case, you should set your critical level to 6 - a, and in the second you should set it to 12 - a, where a is infinitesimal, so that the total moral value in each case is a, so that you avoid the sadistic repugnant conclusion. Why have a different level in each case? You aren’t affected by anyone else—if you were, you would be in a different situation/live a different life so could maybe justify a different critical level. But I don’t see how you can justify that here.
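The arithmetic behind the two scenarios can be made concrete with a small sketch (my own illustrative code, not part of either view; `EPS` stands in for the infinitesimal a):

```python
# Your well-being is 5; 'others_aggregate' is the aggregate moral value
# of everyone else (1 in scenario 1, 7 in scenario 2). EPS is a small
# float standing in for the infinitesimal 'a'.
EPS = 1e-9

def my_critical_level(my_wellbeing, others_aggregate):
    # choose C so that others_aggregate + (my_wellbeing - C) = EPS
    return my_wellbeing + others_aggregate - EPS

c1 = my_critical_level(5, 1)  # scenario 1: just below 6  (i.e. 6 - a)
c2 = my_critical_level(5, 7)  # scenario 2: just below 12 (i.e. 12 - a)
```

The point of the sketch: your required critical level depends on the others' aggregate, even though your own life is unchanged across the scenarios.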

This relates to my point on it seeming ad hoc. You’re selecting your critical level to be the number such that when you aggregate moral value, you get an infinitesimal so that you avoid the sadistic repugnant conclusion, without other justification for setting the critical level at that level. That strikes me as ad hoc.

I think you introduce another element of arbitrariness too. Why set your critical level to 12 - a, when the others could set theirs to something else such that you need only set yours to 10 - a? There are multiple different critical levels you could set yours to, if others change theirs too, that give you the result you want. Why pick one solution over any other?

Finally, I don’t think you really avoid the problems facing lexical value theories, at least not without entailing the sadistic repugnant conclusion. This is a bit technical. I’ve edited it to make it as clear as I can, but I think I need to stop now; I hope it makes sense. The main idea is to highlight a trade-off you have to make between avoiding the repugnant conclusion and avoiding the counter-intuitive implications of lexical value theories.

Let’s go with your example: 1 person at well-being −10, critical level 5; 1 person at well-being 30, so they set their critical level to 15 - a, so that the overall moral value is a. Now suppose:

(A) We can improve the first person’s well-being to 0 and leave the second person at 30, or
(B) We can improve the second person’s well-being to 300,000 and leave the first person at −10.

Assume the first person keeps their critical level at 5 in each case. If I’ve understood you correctly, in the first case, the second person should set their critical level to 25 - b, so that the total moral value is an infinitesimal, b; and in the second case, they should set it to 299,985 - c, so that again, the total moral value is an infinitesimal, c. If b > c or b = c, we get the problems facing lexical theories. So let’s say we choose b and c such that c > b. But if we also consider:

(C) We can improve the second person’s well-being to 31 and leave the first person at −10.

We choose critical level 16 - d. I assume you want b > d, because I assume you want to say that (C) is worse than (A). So if x(n) is the infinitesimal used when we can increase the second person’s well-being to n, we have x(300,000) > b > x(31). At some point, we’ll have m such that x(m+1) > b > x(m) (assuming some continuity, which I think is very plausible), but for simplicity, let’s say there’s an m such that x(m) = b. For concreteness, let’s say m = 50, so that we’re indifferent between increasing the second person’s well-being to 50 and increasing the first person’s to 0.
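For anyone checking the arithmetic of cases (A), (B) and (C), here is a minimal sketch (my own illustration; `EPS` is a small float standing in for the infinitesimals b, c and d):

```python
# Total moral value is the sum of (utility - critical_level) over all
# people. EPS stands in for the infinitesimals b, c, d.
EPS = 1e-9

def total_value(pairs):
    # pairs: (utility, critical_level) for each person
    return sum(u - c for u, c in pairs)

tA = total_value([(0, 5), (30, 25 - EPS)])              # case (A): = b
tB = total_value([(-10, 5), (300_000, 299_985 - EPS)])  # case (B): = c
tC = total_value([(-10, 5), (31, 16 - EPS)])            # case (C): = d
# each total comes out as the chosen infinitesimal
```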

Now for a positive integer q, consider:

(Bq) We have q people at positive well-being level k, and the first person at well-being level −10.

Repeating the above procedure (for fixed q, letting k vary), there’s a well-being level k(q) such that we’re indifferent between (A) and (Bq). We can do this for each q. Then let’s say k(2) = 20, k(4) = 10, k(10) = 4, k(20) = 2, k(40) = 1 and so on… (This just gives the same ordering as totalism in these cases; I just chose factors of 40 in that sequence to make the arithmetic nice.) This means we’re indifferent between (A) and 40 people at well-being 1 with one person at −10, so we’d rather have 41 people at 1 and one person at −10 than (A). Increasing the number of people beyond 41 allows us to get the same result with well-being levels even lower than 1 -- so this is just the sadistic repugnant conclusion.

You can make it less bad by discounting positive well-being, but then you’ll inherit the problems facing lexical theories. Say you discount so that as q (the number of people) tends to infinity, the well-being level at which you’re indifferent with (A) tends to some positive number—say 10. Then 300,000 people at level 10 and one person at level −10 is worse than (A). But that means you face the same problem as lexical theories because you’ve traded vast amounts of positive well-being for a relatively small reduction in negative well-being. The lower you let this limit be, the closer you get to the sadistic repugnant conclusion, and the higher you let it be, the more your theory looks like lexical negative utilitarianism. You might try to get round this by appealing to something like vagueness/indeterminacy or incommensurability, but these approaches also have counter-intuitive results.
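The indifference sequence above can be sketched as follows (illustrative only; the constant 40 is just the number chosen in the example to make the arithmetic nice):

```python
# The chosen indifference levels satisfy q * k(q) = 40: the total
# positive well-being at the point of indifference with (A) is constant.
def k(q):
    return 40 / q

levels = {q: k(q) for q in (2, 4, 10, 20, 40)}  # 20, 10, 4, 2, 1

# 41 people at well-being 1 give total 41 > 40, so (B41) is preferred
# to (A); raising q further pushes k(q) below 1, which is the sadistic
# repugnant conclusion.
```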

Your theory is an interesting way to avoid the repugnant conclusions, and in some sense, it strikes a nice balance between totalism and lexical negative utilitarianism, but it also inherits the weaknesses of at least one of them. And I must admit, I find the complete subjectiveness of the critical levels bizarre and very hard to stomach. Why not just drop the messy and counter-intuitive subjectively set variable critical level utilitarianism and prefer quasi-negative utilitarianism based on lexical value? As we’ve both noted, that view is problematic, but I don’t think it’s more problematic than what you’re proposing, and I don’t think its problems are absolutely devastating.

I guess your argument fails because it still contains too much rigidity. For example: the choice of critical level can depend on the choice set, i.e. the set of all situations that we can choose. I have added a section in my original blog post, which I copy here.
<<[…] > 0. However, suppose another situation S2 is available for us (i.e. we can choose situation S2), in which that person i will not exist, but everyone else is maximally happy, with maximum positive utilities. Although person i in situation S1 will have a positive utility, that person can still prefer the situation where he or she does not exist and everyone else is maximally happy. It is as if that person is a bit altruistic and prefers his or her non-existence in order to improve the well-being of others. That means his or her critical level C(i,S1) can be higher than the utility U(i,S1), such that his or her relative utility becomes negative in situation S1. In that case, it is better to choose situation S2 and not let the extra person be born.
If instead of situation S2, another situation S2’ becomes available, where the extra person does not exist and everyone else has the same utility levels as in situation S1, then the extra person in situation S1 could prefer situation S1 above S2’, which means that his or her new critical level C(i,S1)’ remains lower than the utility U(i,S1).
In other words: the choice of the critical level can depend on the possible situations that are eligible or available to the people who must make the choice about who will exist. If situations S1 and S2 are available, the chosen critical level will be C(i,S1), but if situations S1 and S2’ are available, the critical level can change into another value C(i,S1)’. Each person is free to decide whether or not his or her own critical level depends on the choice set.>>
So suppose we can choose between two situations. In situation A, one person has utility 0 and another person has utility 30. In situation Bq, the first person has utility −10 and instead of a second person there are now a huge number of q persons, with very low but still positive utilities (i.e. low levels of k). If the extra people think that preferring Bq is sadistic/repugnant, they can choose higher critical levels such that in this choice set between A and Bq, situation A should be chosen. If instead of situation A we can choose situations B or C, the critical levels may change again. In the end, what this means is something like: let’s present to all (potential) people the choice set of all possible (electable) situations that we can choose. Now we let them choose their preferred situation, and let them then determine their own critical levels to obtain that preferred situation given that choice set.

I’m not entirely sure what you mean by ‘rigidity’, but if it’s something like ‘having strong requirements on critical levels’, then I don’t think my argument is very rigid at all. I’m allowing for agents to choose a wide range of critical levels. The point, though, is that given the well-being of all agents and the critical levels of all agents except one, there is a unique critical level that the last agent has to choose, if they want to avoid the sadistic repugnant conclusion (or something very similar). At any point in my argument, feel free to let agents choose a different critical level to the one I have suggested, but note that doing so leaves you open to the sadistic repugnant conclusion. That is, I have suggested the critical levels that agents would choose, given the same choice set and given that they have preferences to avoid the sadistic repugnant conclusion.

Sure, if k is very low, you can claim that A is better than Bq, even if q is really really big. But, keeping q fixed, there’s a k (e.g. 10^10^10) such that Bq is better than A (feel free to deny this, but then your theory is lexical). Then at some point (assuming something like continuity), there’s a k such that A and Bq are equally good. Call this k’. If k’ is very low, then you get the sadistic repugnant conclusion. If k’ is very high, you face the same problems as lexical theories. If k’ is neither too high nor too low, you strike a compromise that makes the conclusions of each less bad, but you face both of them, so it’s not clear this is preferable. I should note that I thought of and wrote up my argument fairly quickly and quite late last night, so it could be wrong and is worth checking carefully, but I don’t see how what you’ve said so far refutes it.

My earlier points relate to the strangeness of the choice set dependence of relative utility. We agree that well-being should be choice set independent. But by letting the critical level be choice set dependent, you make relative utility choice set dependent. I guess you’re OK with that, but I find that undesirable.

I honestly don’t see yet how setting a high critical level to avoid the sadistic repugnant conclusion would automatically result in the counter-intuitive lexicality problems of a quasi-negative utilitarianism. Why would striking a compromise be less preferable than going all the way to a sadistic conclusion? (For me, your example and calculations are still unclear: what is the choice set? What is the distribution of utilities in each possible situation?)

By rigidity I indeed mean having strong requirements on critical levels. Allowing critical levels to depend on the choice set is an example that introduces much more flexibility. But again, I’ll leave it up to everyone to decide for themselves how rigidly they prefer to choose their own critical levels. If you find the choice set dependence of critical levels and relative utilities undesirable, you are allowed to pick your critical level independently of the choice set. That’s fine, but we should accept the freedom of others not to do so.

Thanks for your comments
