I’ll give my immediate thoughts.
I agree that the theory has significant flaws akin to those of negative utilitarianism. However, something I perhaps understated is that the fleshed-out theory was only one example of how the Axiom of Comparison could be deployed. With the axiom in hand, I imagine that better theories could be developed which avoid your compelling Bezos example (and the Repugnant Conclusion). I would note that most of the flaws you identify are not consequences of the axiom itself.
In fact, off the top of my head: if I now said that real people gaining welfare above 0 still counts as real welfare, I think both the Repugnant Conclusion and Bezos are avoided.
I am not sure why the “it is too implausible” defence is not convincing. The Repugnant Conclusion is not implausible to me: one can imagine that, if we accepted it, we might deliberately direct the development of civilisation from A to Z. Isn't it much more unlikely that we possess the ability to do exactly one of <improve the welfare of many happy people> and <improve the welfare of one unhappy person a bit>?
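For concreteness, here is the standard total-utilitarian arithmetic behind the A-to-Z move (a textbook setup, not specific to my theory): if population A has \(n\) people each at welfare \(w > 0\), and population Z has \(m\) people each at welfare \(\epsilon\) with \(0 < \epsilon \ll w\), the total view ranks Z above A whenever
\[
m\epsilon > nw,
\]
so for any A there is some large enough Z that counts as better. That is a trajectory a civilisation could actually follow, which is why I do not think this scenario can be waved away as implausible, whereas the Bezos-style choice can.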
“It also has various other problems that plague lexical and negative-utilitarian theories, such as involving arguably theoretically unfounded discontinuities that lead to counterintuitive results, and being prima facie inconsistent with happiness/suffering and gains/losses tradeoffs we routinely make in our own lives.”
I would appreciate some clarification on the above.
“Also, one at least prima facie flaw that you don’t discuss at all is that your theory involves incomparability – i.e. there are populations A and B such that neither is better than the other.”
I did derive a pair of such populations A, B. If you mean that I did not discuss this further, then I am not sure why it is a problem at all. Suppose that we are population A. Do we truly need a way to assign a welfare score to our population? Isn't our primary (and, I'd suggest, only) consideration how we might improve? For that goal, the theory provides every comparison you could ask for, as sketched below.
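To make the structural point precise (a generic sketch, assuming only that “better than” is a strict partial order; the specific relation is whatever the theory defines): incomparability means the betterness relation \(\succ\) is partial rather than total, i.e. there exist populations \(A, B\) with \(A \nsucc B\) and \(B \nsucc A\). But action guidance only requires comparisons anchored at where we are: for our actual population \(A\) and any feasible change yielding \(A'\), the theory still answers whether \(A' \succ A\). A global ranking of all possible populations is never needed for that.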
Edit: I have read through the reasoning again, and it now seems to me that the negative-utilitarian aspect can indeed be removed (solving Bezos) without reinstating the Repugnant Conclusion. Naturally, I may be wrong, and this may also introduce new flaws. I would be interested to hear your thoughts on this. (The main post has been edited to reflect all new thoughts.)