Thanks for writing this, Holden! I agree that the potential harms from the naive (mis-)application of maximizing consequentialism are a risk that’s important to bear in mind, and to ward against. It’s an interesting question whether this is best done by (i) raising concerns about maximizing in principle, or (ii) stressing the instrumental reasons why maximizers should be co-operative and pluralistic.
I strongly prefer the latter strategy, myself. It’s something we take care to stress on utilitarianism.net (following the example of historical utilitarians from J.S. Mill to R.M. Hare, who have always urged the importance of wise rules of thumb to temper the risks of miscalculation). A newer move in this vicinity is to bring in moral uncertainty as an additional reason to avoid fanaticism, even if utilitarianism is correct and one could somehow be confident that violating commonsense norms was actually utility-maximizing on this occasion, unlike all the other times that following crude calculations unwittingly leads to disaster. (I’m excited that we have a guest essay in the works by a leading philosopher that will explore the moral uncertainty argument in more detail.)
One reason why I opt for option (ii) is honesty: I really think these principles are right, in principle! We should be careful not to misapply them. But I don’t think that practical point does anything to cast doubt on the principles as a matter of principle. (Others may disagree, of course, which is fine: route (i) might then be an available option for them!)
Another reason to favour (ii) is the risk of otherwise shoring up harmful anti-consequentialist views. I think encouraging more people to think in a more utilitarian way (at least on current margins, for most people—there could always be exceptions, of course) is on average very good. I’ve even argued on this basis that non-consequentialism may be self-effacing.
That said, some sort of loosely utilitarian-leaning meta-pluralism (of the sort Will MacAskill has been endorsing in recent interviews) may well be optimal. (It also seems more reasonable than dogmatic certainty in any one ethical approach.)
First I’ve heard of utilitarian-leaning meta-pluralism! Sounds interesting — have any links?
Will’s conversation with Tyler: “I say I’m not a utilitarian because — though it’s the view I’m most inclined to argue for in seminar rooms because I think it’s most underappreciated by the academy — I think we should have some degree of belief in a variety of moral views and take a compromise between them.”