I have a paper about complaints-based theories that may be of interest—https://www.journals.uchicago.edu/doi/abs/10.1086/684707
One argument I advance there is that these theories appear not to be applicable to moral patients who lack rational agency. Suppose that mice have net positive lives. What would it mean to say of them that they have a preference for not putting millions in extreme misery for the sake of their small net positive welfare? If you say that we should nevertheless not put millions in extreme misery for the sake of quadrillions of mice, then it looks like you are appealing to something other than a complaints-based theory to justify your anti-aggregative conclusion. So, the complaints-based theory isn’t doing any work in the argument.
Thanks for the paper! Concerning moral patients and mice: they indeed lack the capacity to determine their own reference values (critical levels) and to express their utility functions (though perhaps we can derive those from their revealed preferences). This means the mice do not have a preference for any particular critical level, nor for any population ethical theory. They don't have a preference for total utilitarianism, negative utilitarianism, or anything else.

That could mean that we may choose a critical level for them, and hence the population-ethical implications, and the mice cannot complain about our choices if they are indifferent to them. If we strongly want total utilitarianism, and hence a zero critical level, fine: we can say that those mice also have a zero critical level. But if we want to avoid the sadistic repugnant conclusion in the mice example, we can instead set the critical levels of those mice higher, such that we choose the situation where the extra quadrillions of mice don't exist. Even the mice who do exist cannot complain about our choice not to bring those extra quadrillions of mice into existence, because they are indifferent to that choice.
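To make that concrete, here is a minimal sketch of how the critical-level move blocks the conclusion, assuming the standard critical-level value function (the symbols and the sign comparisons are illustrative, not taken from either paper):

\[
V = \sum_{i} \left( u_i - c_i \right)
\]

where \(u_i\) is individual \(i\)'s welfare and \(c_i\) is the critical level we choose for \(i\). With \(c_i = 0\) for every mouse (the total-utilitarian choice), quadrillions of small positive terms can outweigh the large negative terms of the millions in extreme misery. If instead we set \(c_i > u_i\) for each potential extra mouse, every added mouse contributes \(u_i - c_i < 0\), so the outcome with the extra quadrillions of mice is ranked lower and we are never pushed to accept the misery for their sake.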