Beware ethical systems without repugnant conclusions
Summary
- In order to tell you something you didn't already know, a model must sometimes be surprising.
- Object level: an ethical system that never differs from your intuition is no more useful than a gut feeling, and gut feelings are unreliable.
- There are several compelling responses to the repugnant conclusion, but I think the most compelling is simply to accept it as both intuitively unappealing and true.
If you’re unfamiliar with the repugnant conclusion, see the Stanford Encyclopedia of Philosophy article on the subject.
Useful models are surprising
Useful models are surprising.[1] If a model is never surprising, then it hasn't told you anything new. (More formally: if it doesn't cause you to update on either your answer or your confidence, then you haven't learned anything.) Evolution, Pangea, and climate change are all somewhat surprising.[2] Because they are both true and surprising, learning them is useful. Utilitarianism is a useful tool for evaluating moral situations because it sometimes surprises you. If it never disagreed with your gut, it wouldn't be helpful.
The repugnant conclusion is surprising, but I think that's actually a good sign: it means that utilitarianism can sometimes surprise you, which means (if it's true) that it's probably useful.
The corollary is more important: beware ethical systems without repugnant conclusions because they’re not telling you anything new. They totally agree with your gut, and your gut is unreliable.
Reality is sometimes surprising
True models are ones that accurately represent reality. Reality is often surprising, so we should expect true models to surprise us often.
If true models are often surprising, then the fact that a model is surprising does not, by itself, mean that it is false. (Of course, in the absence of other evidence, you should be more skeptical of surprising models.)
- ^
This isn’t the same as saying surprising models are useful. I only intend this to rebut the claim “the repugnant conclusion is a counter-example to act utilitarianism because it gives the incorrect answer.” In practice, I think this claim often is actually saying “the repugnant conclusion is a counter-example to act utilitarianism because it gives an answer that doesn’t seem right intuitively.”
- ^
Historically, anyway, and to small children. If these don't surprise you, I suspect it's because you've already accepted them as true (many years or decades ago).
I think it's useful to distinguish between cases where a theory conflicts with intuition, and cases where no single intuition applies on its own, so that the theory has specific implications where there would otherwise have been none. Disagreement with intuition is often evidence against a theory, though possibly defeasible. Indeed, in ethics, what we're doing is fitting theories to our intuitions: generalizing and combining some intuitions, and possibly abandoning others.
Having implications that go beyond each single intuition in isolation without conflicting with any is (basically?) never evidence against a theory, but is still “useful”.
The fact that (total symmetric) utilitarianism has any surprising conclusions at all may be a good sign, but I wouldn't give much weight to any particular surprising conclusion being a good sign. Indeed, the more surprising any particular conclusion, or the more of them there are, plausibly the worse, since disagreement with intuition is exactly what we take to be evidence against a moral theory. So it can't be the case that, in general and all else equal, the more surprising a theory, the better (e.g. "involuntary suffering is good, and the more, the better" would be even more surprising).

Utilitarianism already has many other surprising conclusions, e.g. impartiality, concern for nonhuman animals, demandingness, many tiny harms or benefits outweighing large harms, and the permissibility of actively causing harm (and in principle sometimes the obligation to commit harm for the greater good). I don't think adding one more should count (much or at all) in favour of total symmetric utilitarianism relative to other views, and it may indeed count against it. Of course, the alternatives also have surprising conclusions, other than the RC and the others I mentioned, so the same reasoning applies to them, and this doesn't really tell us anything without being more specific.
Furthermore, because the alternatives that reject the RC also have surprising conclusions that the total view doesn't (based on the impossibility theorems, or by directly considering specific alternatives), rejecting the RC would count in their favour too, by your argument. It's not clear which view your argument supports most without further investigation and comparison.
(Of course, some may not find a particular conclusion surprising even if it’s surprising to many or even most people.)
This is a good point, sorry for getting back to it so late.
One idea I cut from the post: I think scope insensitivity means we should be suspicious of our gut intuitions in situations involving large numbers of people, so that's another point in favor of accepting the RC. My main goal with this point was to suggest this central idea: "sometimes trust your ethical framework in situations where you expect your intuition to be wrong."
That being said, the rest of your point still stands.