I’ve always found Parfit’s response to be pretty compelling. As I summarize it here:
Rather than discounting smaller benefits (or refusing to aggregate them), Parfit suggests that we do better to simply weight harms and benefits in a way that gives priority to the worse-off. Two appealing implications of this view: (1) We generally should not allow huge harms to befall a single person if that leaves them much worse off than the others with competing interests. (2) But we should allow sufficiently many small benefits to the worse-off to outweigh, in sum, a single large benefit to someone better-off.
Since we need aggregation in order to secure verdict (2), and we can secure verdict (1) without having to reject aggregation, it looks like our intuitions are overall best served by accepting an aggregative moral theory.
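To make the structure concrete, here's a minimal numerical sketch of a prioritarian value function. The square-root weighting and all the well-being numbers are hypothetical choices of mine, not Parfit's; any strictly concave transform would illustrate the same two verdicts.

```python
import math

def priority_weight(wellbeing: float) -> float:
    # A strictly concave transform: a unit of benefit counts for more
    # the worse-off its recipient is. sqrt is just one illustrative choice.
    return math.sqrt(wellbeing)

def moral_value(wellbeings: list[float]) -> float:
    # Aggregate value is the sum of priority-weighted individual well-being.
    return sum(priority_weight(w) for w in wellbeings)

# Verdict (1): a huge harm to one person (100 -> 4) is not outweighed by
# tiny gains (100 -> 101) to each of fifty already well-off people.
status_quo = [100.0] * 51
harm_one = [4.0] + [101.0] * 50
assert moral_value(harm_one) < moral_value(status_quo)

# Verdict (2): enough small gains to the badly-off (4 -> 9 for each of
# fifty people) can outweigh one large gain to someone better-off (100 -> 400).
help_many = [9.0] * 50 + [100.0]
help_one_big = [4.0] * 50 + [400.0]
assert moral_value(help_many) > moral_value(help_one_big)
```

Note that the weighting is concave but still fully aggregative: that's exactly why it can deliver verdict (2), which anti-aggregationist views cannot, while still delivering verdict (1) in ordinary cases.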
I’ll just add that it’s a mistake to see the Transmitter Room case as an objection to consequentialism per se. Nobody (afaict) has the intuition that it would be better for the guy to be electrocuted but that we’re just not allowed to let it happen. Rather, the standard intuition is that it wouldn’t even be a good result. But that’s to call for an axiological refinement, not to reject the claim that we should bring about the better outcome.
Thank you! Your article on Parfit is very helpful; I’m looking forward to reading the rest of the series.