Richard—excellent post—it’s clear, compelling, reasonable, and actionable.
A key question concerning your three options is more psychological than philosophical: which kinds of people, with which cognitive, personality, and moral traits, should adopt option 1, 2, or 3, so as to keep them from using bad utilitarian reasoning (e.g. self-serving, biased, unempirical, convenient moral reasoning) that violates other people’s ‘rights’ or deters them from pursuing ‘virtues’? (Just to be clear, I endorse utilitarianism as a normative ethical theory; the question here is just how to weave some good norms and rules into the prescriptive morality that we use ourselves in day-to-day life and promote to others.)
I suspect that many deontologists assume that most people can’t handle options 1 or 2, in the sense that those options wouldn’t reliably protect us against rights-violating faulty-utilitarian reasoning of the sort that humans evolved to be very good at (according to Hugo Mercier’s ‘argumentative theory of reasoning’). So these deontologists see it as their job to promote option 3 as if it’s true, even though they might, in their heart of hearts, know that they’re really promoting option 2. However, I suspect that, following science, Nietzsche, secularism, and the collapse of traditional theological and metaphysical bases for deontology, lots of intelligent people simply can’t buy into option 3 any more. So, option-3-style arguments just can’t carry as much weight as they used to, and can’t motivate rule-like constraints on faulty-utilitarian reasoning.
Conversely, if most people adopt option 1 (prudent two-level consequentialism), I think they might be too tempted to engage in self-serving faulty-utilitarian reasoning. (Arguably this is what we saw with the FTX debacle—‘it’s OK to steal clients’ crypto deposits if it’s for the greater good’). However, that’s an empirical question, and I’m open to updating.
My hunch is that for most people most of the time, option 2 (deontic fictionalism) strikes the best balance between evidence-based consequentialism and fairly strong guardrails against self-serving faulty-utilitarian reasoning. So, I think it’s worth developing further as a sort of psychologically pragmatic meta-ethics that could work pretty well for our species, given human nature.
Thanks! Yes, agreed it’s an open empirical question how well people (in general, or particular individuals) can pull off the specified options.
I wouldn’t be terribly surprised if something like (2) turned out to be best for most people most of the time. But I guess I’m sufficiently Aristotelian to think that if we’re raised since childhood to abide by good norms, later learning that they’re instrumentally justified shouldn’t really undermine them that much. (They certainly haven’t for me—my wife finds it funny how strongly averse I am to any kind of dishonesty, “despite” my utilitarian beliefs!)