Great post! On the tension between “maximization” vs “common-sense”, it can be helpful to distinguish two aspects of utilitarianism that are highly psychologically separable:
(1) Acceptance of instrumental harm (i.e. rejection of deontic constraints against this); and
(2) Moral ambition / scope-sensitivity / beneficentrism / optimizing within the range of the permissible. (There may be subtle differences between these different characterizations, but they clearly form a common cluster.)
Both could be seen as contrasting with “common sense”. But I think EA as a project has only ever been about the second. And I don’t think there’s any essential connection between the two—no reason why a commitment to the second should imply the first.
As generously noted by the OP [though I would encourage anyone interested in my views here to read my recent posts instead of the old one from my undergraduate days!], I’ve long argued that utilitarianism is nonetheless compatible with:
(1*) Being guided by commonsense deontic constraints, on heuristic grounds, and distrusting explicit calculations to the contrary (unless it would clearly be best for most people similarly subjectively situated to trust such calculations).
fwiw, my sense is that this is very much the mainstream view in the utilitarian tradition. Strikingly, those who deny that utilitarianism implies this are, overwhelmingly, non-utilitarians. (Of course, there are possible cases where utilitarianism will clearly advise instrumental harm, but the same is true of common-sense deontology; absolutism is very much not commonsensical.)
So when folks like Will affirm the need for EA to be guided by “commonsense moral norms”, I take it they mean something like the specific disjunction of rejecting (1) or affirming (1*), rather than a wholehearted embrace of commonsense morality, including its lax rejection of (2). But yeah, it could be helpful to come up with a concise way of expressing this more precise idea, rather than just relying on contextual understanding to fill that in!