I’m not entirely sure what you mean by ‘rigidity’, but if it’s something like ‘having strong requirements on critical levels’, then I don’t think my argument is very rigid at all. I’m allowing agents to choose from a wide range of critical levels. The point, though, is that given the well-being of all agents and the critical levels of all agents except one, there is a unique critical level that the last agent has to choose if they want to avoid the sadistic repugnant conclusion (or something very similar). At any point in my argument, feel free to let agents choose a different critical level from the one I have suggested, but note that doing so leaves you open to the sadistic repugnant conclusion. That is, I have suggested the critical levels that agents would choose, given the same choice set and given that they prefer to avoid the sadistic repugnant conclusion.
Sure, if k is very low, you can claim that A is better than Bq, even if q is really, really big. But, keeping q fixed, there’s a k (e.g. 10^10^10) such that Bq is better than A (feel free to deny this, but then your theory is lexical). Then at some point (assuming something like continuity), there’s a k such that A and Bq are equally good. Call this k’. If k’ is very low, you get the sadistic repugnant conclusion. If k’ is very high, you face the same problems as lexical theories. If k’ is neither too high nor too low, you strike a compromise that makes each conclusion less bad, but you face both of them, so it’s not clear this is preferable. I should note that I thought of and wrote up my argument fairly quickly, quite late last night, so it could be wrong and is worth checking carefully, but I don’t see how anything you’ve said so far refutes it.
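To make the arithmetic behind this concrete, here is a minimal sketch. The population structure and every number in it are illustrative assumptions of mine (the thread doesn’t fix them): take A to be n people at high welfare w_A, Bq to be q people each at welfare k, everyone sharing a fixed critical level c, and value a population by summing (welfare − critical level):

```python
def value(n_people: int, welfare: float, critical_level: float) -> float:
    """Total critical-level value: sum of (welfare - critical level)."""
    return n_people * (welfare - critical_level)

# Illustrative assumptions, not taken from the thread:
n, w_A = 1_000, 100.0  # A: a thousand people at high welfare
q = 10**9              # Bq: a billion people each at welfare k
c = 1.0                # shared, fixed critical level

# Setting value(n, w_A, c) == value(q, k', c) and solving for k'
# gives the indifference point from the comment above:
k_prime = c + n * (w_A - c) / q
print(k_prime)  # 1.000099: barely above the critical level

# Below k', A beats Bq; above k', Bq beats A. So, given continuity,
# the crossover k' exists and is unique -- and with numbers like these
# it sits very low, which is the sadistic-repugnant-conclusion horn.
eps = 1e-6
assert value(n, w_A, c) > value(q, k_prime - eps, c)
assert value(n, w_A, c) < value(q, k_prime + eps, c)
```

(Note that in this toy model k’ rises with c, which is the sense in which picking a higher critical level trades the first horn for the second.)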
My earlier points relate to the strangeness of the choice set dependence of relative utility. We agree that well-being should be choice set independent. But by letting the critical level be choice set dependent, you make relative utility choice set dependent. I guess you’re OK with that, but I find that undesirable.
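To spell out that worry in notation of my own (the thread itself stays informal): if relative utility is well-being minus critical level, r = w − c, then letting c vary with the choice set S makes r vary with S even while w stays fixed. A toy sketch, with made-up numbers:

```python
# Toy illustration; the numbers and choice sets are invented for exposition.
w = 5.0  # one person's well-being: choice-set independent, as both sides agree

# Hypothetical critical levels the same agent picks in two different choice sets:
critical_level = {"choice set 1": 0.0, "choice set 2": 4.0}

for s, c in critical_level.items():
    # Relative utility inherits the choice-set dependence of c:
    print(s, "-> relative utility", w - c)
# choice set 1 -> relative utility 5.0
# choice set 2 -> relative utility 1.0
```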
I honestly don’t yet see how setting a high critical level to avoid the sadistic repugnant conclusion would automatically result in the counter-intuitive lexicality problems of a quasi-negative utilitarianism. Why would striking a compromise be less preferable than going all the way to a sadistic conclusion? (Your example and calculations are still unclear to me: what is the choice set? What is the distribution of utilities in each possible situation?)
By ‘rigidity’ I indeed mean having strong requirements on critical levels. Allowing agents to choose critical levels that depend on the choice set is one way to introduce much more flexibility. But again, I’ll leave it up to everyone to decide for themselves how rigidly they prefer to choose their own critical levels. If you find the choice set dependence of critical levels and relative utilities undesirable, you are allowed to pick your critical level independently of the choice set. That’s fine, but we should accept the freedom of others not to do so.