Classically, deontology is the view that some actions are intrinsically right or wrong, regardless of the consequences. It holds that there are certain things that we ought never to do, even if doing them would lead to good consequences. For example, it is always wrong to torture an innocent person, regardless of whether or not doing so would lead to some greater good.
I think that what you describe is better termed “absolute deontology”. Arguably, absolute deontology is (under most precisifications) a pretty radical view that most people would reject (at least if they really thought about it). More common is what Zamir and Medina call “moderate” (or “threshold”) deontology. Moderate deontology allows one to do things like lie, cheat, steal, or cause direct bodily harm to innocents when the benefits of doing so massively outweigh the costs. It prohibits one from doing so when the benefits only moderately outweigh the costs.
Does stealing from the rich and giving to the poor count as a case where the benefit massively outweighs the costs?
It should be noted that, under these views, the ‘benefits and costs’ will not be just total welfare, but may instead involve things like deservingness, autonomy, rights, equality, and justice.
Of course, there are different degrees of moderate deontology.
How best to think about moderate deontology under uncertainty may depend on the case, but notions of ex-ante and ex-post harm can be used.
Although I agree with your general point that thinking in somewhat non-consequentialist ways can often be a more effective way to pursue good consequences (if that’s what you want to do), this is not always true. If you are a real utilitarian, there is a non-trivial chance that at some point in your life, you’re going to have to do something which is seriously at odds with conventional morality.
These issues about how to discipline one’s mind to effectively pursue good consequences (directly or indirectly) are very important in practice for consequentialists, but I think are probably not fully resolvable in theory. Ultimately, pursuing good consequences is an art.
Yes / I mostly tried to describe a “pure” version of the theory, not moderated by applications of other types of reasoning.
I don’t think the way I’m using ‘deontology’ and ‘virtue ethics’ reduces to ‘conventional morality’ either.
For example, I currently have something like a deontic commitment not to do or say things which would predictably damage epistemics, whether mine or other people’s. (Even if the epistemic damage would have an upside, like someone taking some good actions.)
I think this is ultimately more of an ‘attempt at approximating true consequentialism’ rather than ‘conventional morality’, which does not care about it.
The theory & current state of cognitive science & applied rationality certainly don’t fully resolve the problem, but I’m optimistic that they provide at least some decent approximate guesses.