Although I grant that this position has some initial intuitive appeal, I find it difficult to endorse—or, frankly, really understand—upon reflection. For this position to succeed, there would have to exist some sort of unbridgeable value gap between small interests and big interests. And while the mere existence of such a gap is perhaps not so strange, the placement of the gap at any particular point on a welfare or status scale seems unjustifiably arbitrary. It’s not clear what could explain the fact that the slight happiness of a sufficient number of squirrels never outweighs the large happiness of a single chimpanzee. If happiness is all that non-instrumentally matters, as Kazez assumes for the sake of argument, we can’t appeal to any qualitative differences in chimpanzee versus squirrel happiness.[76] (It’s not as if, for example, chimpanzee happiness is deserved while squirrel happiness is obtained unfairly.) And how much happier must chimpanzees be before their happiness can definitively outweigh the lesser happiness of other creatures? What about meerkats, who we might assume for the sake of argument are generally happier than squirrels but not so happy as chimpanzees? There seems to be little principled ground to stand on. Hence, while we should acknowledge the possibility of non-additivity here, we should probably assign it a fairly low credence.
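To fix ideas, here is a minimal sketch of what such a value gap would have to look like formally; the two-coordinate representation and the symbols $c$ and $s$ are illustrative assumptions of mine, not anything the non-additive view is committed to:

\[
(c_1, s_1) \succ (c_2, s_2) \iff c_1 > c_2 \ \text{ or } \ \big(c_1 = c_2 \ \text{and} \ s_1 > s_2\big),
\]

where $c$ counts units of chimpanzee-level happiness and $s$ counts units of squirrel-level happiness. On this lexicographic ranking, $(1, 0) \succ (0, n)$ for every $n$, so no number of slightly happy squirrels ever outweighs a single happy chimpanzee. The arbitrariness worry is precisely that nothing seems to determine where the cut between the two coordinates falls: for instance, whether meerkat happiness belongs with $c$ or with $s$.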
“Consent-based” approaches might work. They’ve been framed in the case of suffering, but could possibly work for happiness, too. Actually, I suppose this is similar to Mill’s higher and lower pleasures (EDIT: as you mention in footnote 76), but without being dogmatic about what counts as a higher or lower pleasure, even to the point of rejecting the preferences of those who have experienced both. See:
https://reducing-suffering.org/happiness-suffering-symmetric/#Consent-based_negative_utilitarianism
http://centerforreducingsuffering.org/clarifying-lexical-thresholds/
And, indeed, if we want to determine levels of suffering and pleasure based on the tradeoffs people would make, we will get lexicality unless we reject some tradeoffs, because some people have lexical views (myself included: if I had a very long life, I’d prefer many pin pricks, one at a time and spread out across days, to a full day of torture with no long-term effects). How else could we ground cardinal degrees of suffering and pleasure except through individual tradeoffs?
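As a rough sketch of that cardinalization point (the notation is my own, and the specific indifference claim is a purely illustrative assumption): if a person is indifferent between one episode of moderate pain $m$ and $k$ pin pricks, their tradeoff fixes a cardinal disvalue $d(m) = k \cdot d(\text{pin prick})$. But if there is no finite number of pin pricks they would accept a full day of torture $T$ in order to avoid, then no finite multiplier exists for $T$, and the only faithful representation of their preferences is lexical:

\[
n \cdot d(\text{pin prick}) < d(T) \quad \text{for every finite } n.
\]

So taking individuals’ tradeoffs at face value yields cardinal degrees wherever finite ratios exist and lexicality wherever they don’t; ruling out lexicality means overriding some of those tradeoffs.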
And while it might be the case that nonhuman animals act lexically, since they aren’t as future-oriented and reflective as we are, their behaviour on its own might not be a good indication of moral lexicality. If we establish that an animal is suffering to an extent similar to how we suffer when we suffer lexically, then that’s a reason to believe its suffering matters lexically; and if we establish that an animal is suffering to an extent similar to how we suffer when we don’t suffer lexically, then that’s a reason to believe its suffering doesn’t matter lexically. In this way, it could turn out that insects act lexically, but their suffering doesn’t matter lexically. Of course, it could also turn out that insects do suffer in ways that matter lexically.
Hi Michael,
Thanks for the comment. The question of value lexicality is a big issue, and I can’t possibly do it justice in these comments alone, so if you want to schedule a call to discuss in more detail, I’m happy to do so.
That caveat aside, I’m pretty skeptical consent-based views can ground the relevant thresholds in a way that escapes the arbitrariness worry. The basic concern is that we can expect differences in ability to consent across circumstances and species that don’t track morally relevant facts. A lot hangs on the exact nature of consent, which is surprisingly hard to pin down. See recent debates about the nature of consent in clinical trials, political legitimacy, human organ sales, sex, and general decision-making capacity.
I think the word “consent” might have been a somewhat poor choice, since it has more connotations than we need. Rather, the concept is closer to “bearability”, or just the fact that an individual’s personal preferences seem to involve lexicality, which the two articles I linked to get into. For suffering, it’s when someone wants to make it stop at any cost (or at any cost within certain kinds of experiences, e.g. any number of sufficiently mild pains, or any amount of pleasure).
There are objections to this, too, of course:
1. We have unreliable intuitions/preferences involving large numbers (e.g. a large number of pin pricks vs torture).
2. We may be trying to generalize from imagining ourselves in situations, like sufficiently intense suffering, in which we can’t possibly be reflective or rational, so any intuitions coming out of this would be unreliable. Lexicality might happen only (perhaps by definition) when we can’t possibly be reflective or rational. Furthermore, if that’s the case, then it counts against the combination of trusting our own lexicality directly while not directly trusting the lexicality of nonhuman animals, including simpler ones like insects.
3. We mostly have unreliable intuitions about the kinds of intense suffering people have lexical preferences about, since few of us actually experience it.
That being said, I think each of these objections cuts both ways: they only tell us our intuitions are unreliable in these cases; they don’t tell us whether lexicality should be accepted or rejected. I can think of arguments in each direction:
1. We should trust personal preferences (at least when informed by personal experience), even when they’re unreliable, unless they are actually inconsistent with intuitions we think are more important and less unreliable, which isn’t the case for me, but might be for others.
2. We should reject unreliable personal preferences that cost us uniformity in our theory. (The personal preferences are unreliable either way, but accommodating lexical ones makes our theory less uniform, assuming we want to accept aggregating in certain ways in our theory in the first place, which itself might be contentious.)
I would be happy to discuss over a call, but it might actually be more productive to talk to Magnus Vinding if you can, since he’s read and thought much more about this.