I think part of what's driving the feeling that there is something bad for consequentialism here is something like the following. People don't think they *actually* value integrity (just) because it has good consequences; rather, they think they assign (some) intrinsic value to integrity. And then they find it a suspicious coincidence if a view which assigns *zero* *intrinsic* value to integrity is claimed to get the same results as their own intuitions about all the cases we actually care about, given that (they think) their intuitions are being driven partly by the fact that they value integrity intrinsically. Obviously, this doesn't show that in any particular case the claim that acting without integrity doesn't really maximize utility is wrong. But I think it contributes to people's sense that utilitarians are trying to cheat here somehow.
Interesting diagnosis! But unless they're absolutists, shouldn't they be equally suspicious of themselves? That is, nobody (but Kant) thinks the intrinsic value of integrity is so high that you should never tell a lie even if the entire future of humanity depended on it. So I don't really see how they could think that the intrinsic value of integrity makes any practical difference to what a longtermist really ought to do.
(Incidentally, I think this is also a reason to be a bit suspicious of Will MacAskill's appeals to "normative uncertainty" in these contexts. Every reasonable view converges with utilitarian verdicts when the stakes are high.)
I'm inclined to agree with both those claims, yes.
Preference utilitarianism is perfectly compatible with people preferring to have integrity and preferring others to behave with integrity.