It seems to me that your proposed theory has severe flaws that are analogous to those of Lexical Threshold Negative Utilitarianism, and that you significantly understate the severity of these flaws in your discussion.
Your characterization of welfare gains for people with above-neutral welfare as “giving [...] welfare to people who don’t truly need it” seems to assume something close to negative utilitarianism and begs the question of how we should weigh happiness gains versus losses, and suffering versus happiness.
Your “It is too implausible” defence is not convincing. It seems theoretically unfounded and ad hoc, and it applies different standards to different theories without justification: it explains away an uncomfortable example for your favored theory while treating the (arguably more ‘extreme’ and ‘implausible’) Repugnant Conclusion as a flaw worth avoiding.
The example you use to illustrate your theory’s flaw arguably isn’t the most problematic one, and it is certainly not the only one. Instead, consider a population A consisting of one person with welfare ω and a very large number of people with welfare 0. Compare this to a population B of the same size in which everyone has welfare ω−ϵ. While your theory does say that A is not better than B, it also denies that B is better than A. So, for instance, your theory denies that we should be willing to inflict a pinprick on Jeff Bezos to lift billions of people out of poverty (unless you think that everyone living in poverty has welfare below zero). In other words, when comparing populations in which everyone has positive welfare, your theory is deeply conservative (in the sense of denying that many intuitively good changes, in particular ones involving ‘redistribution’, are in fact good) and anti-egalitarian (it is in fact closer to being ‘perfectionist’, i.e., valuing the peak welfare level enjoyed by anyone in the population).
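To make this explicit, here is a minimal formalization of the example (my notation; $n$ is the common population size):

$$A = (\omega, \underbrace{0, \dots, 0}_{n-1}), \qquad B = (\underbrace{\omega - \epsilon, \dots, \omega - \epsilon}_{n}).$$

Every welfare level here is at or above neutral, so, on your theory as I read it, neither the loss of $\epsilon$ to one person nor the gain of $\omega - \epsilon$ to each of the other $n-1$ carries any weight: we get both $\neg(A \succ B)$ and $\neg(B \succ A)$.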
It also has various other problems that plague lexical and negative-utilitarian theories, such as involving arguably theoretically unfounded discontinuities that lead to counterintuitive results, and being prima facie inconsistent with happiness/suffering and gains/losses tradeoffs we routinely make in our own lives.
(Also, one at least prima facie flaw that you don’t discuss at all is that your theory involves incomparability – i.e. there are populations A and B such that neither is better than the other.)
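(Spelled out: incomparability means there are populations $A$ and $B$ such that $\neg(A \succ B)$, $\neg(B \succ A)$, and also $\neg(A \sim B)$. The last clause matters: the two populations are not equally good, they are simply not ranked at all, which is a stronger claim than a tie in value.)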
I’ll give my immediate thoughts.
I agree that the theory has significant flaws akin to those of negative utilitarianism. However, one thing I perhaps understated is that the fleshed-out theory was only one example of how the Axiom of Comparison could be deployed. With the axiom in hand, I imagine that better theories could be developed which avoid your compelling Bezos example (and the Repugnant Conclusion). I would note that most of the flaws you point out are not consequences of the axiom itself.
In fact, off the top of my head: if I now said that real people gaining welfare above 0 still counts as real welfare, I think that both the Repugnant Conclusion and the Bezos case are avoided.
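To sketch why (a rough formalization on my part, assuming ‘real people’ means those who exist in both populations being compared): under the amended rule, welfare gains to existing people count at every level, while welfare contributed by newly created people still does not. In your Bezos example, moving from $A$ to $B$ raises $n-1$ existing people from $0$ to $\omega - \epsilon$ at a cost of $\epsilon$ to one person, so the amended theory can say $B \succ A$; the Repugnant Conclusion, by contrast, turns on welfare from added people, which still carries no weight.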
I am not sure why the “it is too implausible” defence is not convincing. The Repugnant Conclusion is not implausible to me: one can imagine that, if we accepted it, we might direct the development of civilisation from A to Z. Isn’t it much more unlikely that we possess the ability to do exactly one of <improve the welfare of many happy people> and <improve the welfare of one unhappy person a bit>?
“It also has various other problems that plague lexical and negative-utilitarian theories, such as involving arguably theoretically unfounded discontinuities that lead to counterintuitive results, and being prima facie inconsistent with happiness/suffering and gains/losses tradeoffs we routinely make in our own lives.”
I would request some clarification on the above.
“Also, one at least prima facie flaw that you don’t discuss at all is that your theory involves incomparability – i.e. there are populations A and B such that neither is better than the other.”
I did derive a pair of such populations A and B. If you meant that I did not discuss this further, then I am not sure why it is a problem at all. Suppose that we are population A. Do we truly need a way to assign a welfare score to our population? Isn’t our primary (and I’d suggest only) consideration how we might improve? For this latter goal, the theory provides all the comparisons you could ask for.
Edit: I have read through the reasoning again, and it now seems to me that the negative-utilitarian aspect can indeed be removed (solving the Bezos case) without reinstating the Repugnant Conclusion. Naturally, I may be wrong, and this may also lead to new flaws. I would be interested to hear your thoughts on this. (The main post has been edited to reflect all new thoughts.)