A few comments:
Although doing something because it is the intuitive, traditional, habitual, or whatever way of doing things doesn’t necessarily have a great record of getting good results, many philosophers (particularly those in the virtue ethics tradition, but also “virtue consequentialists” and the like) argue that cultivating good intuitions, traditions, habits, and so on is probably more effective at actually producing good consequences in the world than evaluating each act individually. This is probably partly due to quirks of human psychology, but partly due to the general limitations of finite beings of any sort—we need to operate under heuristics rather than unboundedly complex rules or calculations. (You’re probably getting at something like this point towards the end.)
On the Harsanyi results—I think there’s a bit more flexibility than your discussion suggests. I don’t think there’s any solid argument that rules out non-Archimedean value scales, where some things count infinitely more than others. I’m not convinced that there are such things, but I don’t think they cause all the problems for utilitarianism and related views that they are sometimes said to. Also, I don’t think the arguments for expected-value reasoning and for equal-weight consideration of all individuals are quite as knock-down as is sometimes suggested—Lara Buchak’s work on risk aversion is very interesting to me, and it is formally analogous (through the same Harsanyi/Rawls veil-of-ignorance thought experiment) to one standard form of inequality aversion. (I always forget whether it’s “prioritarianism” or “egalitarianism”—one says that value counts for more at lower points on the value scale, and is formally like “diminishing marginal utility of utility,” if that weren’t a contradiction; the other says that improvements for people who are relatively low in the social ordering count for more than improvements for people who are relatively high, and this one is analogous to Buchak’s risk aversion, where improvements in the worst outcomes matter more than improvements in the best outcomes, regardless of the absolute level at which those improvements occur.)
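To make the formal parallel concrete, here is a rough sketch in my own notation (just the standard textbook formulations, nothing taken from your post). For a gamble with outcomes ordered from worst to best, probabilities p_i, and a risk function r with r(0) = 0 and r(1) = 1, ordinary expected utility and Buchak-style risk-weighted expected utility are:
\[
\mathrm{EU} = \sum_{i=1}^{n} p_i\, u_i,
\qquad
\mathrm{REU} = u_1 + \sum_{j=2}^{n} r\!\Big(\sum_{i=j}^{n} p_i\Big)\,(u_j - u_{j-1}),
\qquad u_1 \le u_2 \le \dots \le u_n .
\]
With r(p) = p this is just expected utility; with a convex r (say r(p) = p^2), increments above the worst outcome get discounted, which is the risk-averse case. Behind the veil of ignorance, replace the n outcomes with n equiprobable individuals ranked from worst-off to best-off; the same rank-dependent formula then becomes a rank-weighted social welfare function, while the “diminishing marginal utility of utility” view instead applies a concave transform g to each person’s welfare:
\[
W_{\mathrm{rank}} = u_{(1)} + \sum_{j=2}^{n} r\!\Big(\tfrac{n-j+1}{n}\Big)\,\big(u_{(j)} - u_{(j-1)}\big),
\qquad
W_{\mathrm{priority}} = \sum_{i=1}^{n} g(u_i),
\]
where u_(1) ≤ … ≤ u_(n) are the welfare levels ranked from worst-off to best-off and g is increasing and strictly concave. If I have the labels right, the second of these is the prioritarian one, and the first, rank-weighted one is the form of egalitarianism that lines up with Buchak’s risk aversion.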
You endorse sentientism on the grounds that “the key question is the extent to which they’re sentient: capable of experiencing pleasure and suffering.” It seems like it might be a friendly amendment to define “sentient” as “capable of preferring some states to others.” That seems to get away from some of the deeper metaphysical questions of consciousness, and it allows us to consider pleasure and pain as preference-like states, but not the only ones.
That seems reasonable re: sentientism.
I agree that there’s no knockdown argument against lexicographic preferences, though I find them unappealing for reasons gestured at in this dialogue.
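For concreteness, the standard picture I have in mind (in my own notation, not anything from the dialogue itself) is a lexicographic ordering over pairs of real-valued quantities, where the first dimension dominates absolutely:
\[
(a_1, a_2) \succ (b_1, b_2)
\quad\Longleftrightarrow\quad
a_1 > b_1 \ \text{ or }\ \big(a_1 = b_1 \text{ and } a_2 > b_2\big).
\]
No amount of the second component can compensate for any shortfall in the first, and the classic result is that no single real-valued utility function can represent this ordering over the plane, which is part of why non-Archimedean value scales sit awkwardly with the expected-value machinery behind the Harsanyi-style results.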
Thanks for this, Kenny. I’ve always thought Rawls’ Veil of Ignorance can do a lot of heavy lifting.
https://www.mattball.org/2017/03/a-theory-of-ethics.html