I want to respond to a small but salient part of this post: its treatment of moral uncertainty. I first heard this observation from Carl Shulman, though I believe that both Brian and Buck are familiar with it.
You write:
There are roughly 10^18 insects in the world. Suppose we give insects a .1% chance of being sentient, with their sentience being .1% of a human’s… [then] 10^18 insects comes out to 1 trillion human equivalents
Compare:
There are roughly 10^10 humans in the world. Suppose we give each human a 10% chance of mattering 10^9 times as much as an insect… [then] 10^10 humans comes out to 10^18 insect equivalents.
Needless to say, many thoughtful people endorse a multiplicative factor considerably larger than 10^9 for the ratio of human to insect moral relevance, and assigning their view an infinitesimal probability seems to require overconfidence. (10^8 is about the ratio between the number of synapses in a human brain and the number in an insect brain.)
As you might guess from their differing conclusions, neither of these arguments works. This is not an appropriate way to reason under any kind of uncertainty, either moral or empirical.
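For concreteness, here is a minimal sketch of the arithmetic behind the two quoted wagers, using only the illustrative figures quoted above (10^18 insects, 10^10 humans, and the stated probabilities and ratios):

```python
insects = 1e18
humans = 1e10

# Pro-insect wager: hold one human fixed at value 1 and take the expectation over insects.
# 0.1% chance of sentience, with sentience worth 0.1% of a human's.
insect_total_in_human_units = insects * 1e-3 * 1e-3    # 1e12 "human equivalents"
print(insect_total_in_human_units > humans)             # True: insects dominate

# Pro-human wager: hold one insect fixed at value 1 and take the expectation over humans.
# 10% chance that each human matters 1e9 times as much as an insect.
human_total_in_insect_units = humans * 0.10 * 1e9       # 1e18 "insect equivalents"
print(human_total_in_insect_units >= insects)           # True: humans match the entire
# insect population, and any probability on a ratio above 1e9 tips it toward humans.
```

Each framing favors its own side, which is exactly the tension at issue.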
Setting aside the technical issues, the simplest reason I reach a different conclusion is this:
On total utilitarian grounds, the suffering of the insects doesn’t seem to matter much compared to changes in the long-term trajectory of civilization.
The effect of reductions in wild animal suffering on the long-term trajectory of civilization is probably much smaller than more straightforward changes.
I often consider moral consequences in the near term based on other altruistic considerations and frameworks, for example out of respect for other moral agents or potential moral agents. But very few of these alternative considerations recommend overwhelming concern for insect suffering.
Thanks, Paul. :) It’s worth noting that if the uncertainty is regarded as wholly factual (which is unrealistic, of course), then the insect wager does work, since then there’s only a single unit of moral value in play. For example, if the uncertainty were only a 0.1% chance that insects in fact possess specific neural structures that I know for certain I care about equally with the brains of humans, then the argument works. This sort of reasoning would be most applicable to, say, those property dualists who think there’s a single, ontological property of consciousness that’s either present or not in a binary way, since then there’s little moral uncertainty (consciousness matters and unconsciousness doesn’t) but still lots of factual uncertainty (is ontologically primitive phenomenal consciousness present or not?).
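For concreteness, a toy sketch of this purely-factual case, assuming the 0.1% figure and the population counts discussed above:

```python
insects, humans = 1e18, 1e10
p_structures = 1e-3   # 0.1% chance that insects in fact have the structures I care about

# Because there is a single unit of value (the structure itself), the expectation does
# not depend on which species' value we hold fixed: an insect is either worth exactly
# one human or worth nothing, so there is no uncertain exchange rate to flip around.
expected_insect_value = p_structures * insects   # 1e15 structure-units
expected_human_value = humans                    # 1e10 structure-units
print(expected_insect_value > expected_human_value)   # True: the wager goes through
```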
I’m a little unclear about what this means. I think what you are saying is something like: if I’m certain that it’s the number of neurons which matter morally, but I’m just uncertain how many neurons this organism has, then a 2 envelope problem doesn’t apply.
But why can’t I leverage that trick to avoid the 2 envelope problem in general? I’m confident that utilons are what matter morally, but I’m uncertain how many utilons an insect has.
The problem to me doesn’t seem to be “factual” versus “moral”, but rather if there are some nonarbitrary units.
I think what you are saying is something like: if I’m certain that it’s the number of neurons which matter morally, but I’m just uncertain how many neurons this organism has, then a 2 envelope problem doesn’t apply.
Yeah, that’s a cartoon version of what I was saying.
I’m confident that utilons are what matter morally, but I’m uncertain how many utilons an insect has.
The problem is that, no, Virginia, there really is no such thing as a utilon in any non-arbitrary sense. Happiness and suffering are not actually cardinal numbers that live in the physics of the universe that we can measure; rather, we use numbers to express how much we care about an experience.
If moral realism were true and the moral truth were utilitarianism, then I suppose there would be a “right answer” for how many utilons a given system possessed (up to positive affine transformation). But I don’t take moral realism seriously. In the moral-realism case, the two-envelopes problem for moral uncertainty would be analogous to the ordinary two-envelopes problem, where it’s also the case that there’s a right answer. For moral non-realists, the two-envelopes problem is just a way to describe the paradox that calculations over moral uncertainty depend on one’s unit of measurement.
Okay, I think I understand now, thanks for the explanation.
For what it’s worth, the “factual” versus “moral” contrast you are drawing seems to me to be a distinction without a difference. Both the moral realist and the moral non-realist are looking for “nonarbitrary” units of measurement, and an argument that a certain unit was nonarbitrary seems like it would probably be persuasive to both the realist and the non-realist.
Well, the moral realist just assumes there exist non-arbitrary units by faith, since moral realism implies non-arbitrariness. The non-realist believes no such thing. :)
It’s worth noting that if the uncertainty is regarded as wholly factual (which is unrealistic, of course), then the insect wager does work, since then there’s only a single unit of moral value in play.
Under some assumptions, that might be correct, but not in the general case. In general, the ratio of a human’s degree of sentience to an insect’s could plausibly be anywhere from 1 to 10^9 or more.
To push the wager through, you need an extra assumption: that none of your probability mass has the human being much more important (or sentient) than the insect. In other words, you need to enforce an upper bound on the ratio of human to insect importance. And that requirement is not specific to either the moral or the empirical question, unless you argue from other philosophical assumptions, like certain types of property dualism, and even then you ought to apply some uncertainty over those assumptions as well.
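A toy illustration of how much that upper bound matters, with hypothetical priors over the ratio (only the 10^18 and 10^10 population figures are taken from the discussion above):

```python
insects, humans = 1e18, 1e10

# Prior A: the human:insect importance ratio R is capped at 1e3.
expected_ratio_a = 1e3
print(humans * expected_ratio_a > insects)   # False (1e13 vs 1e18): insects dominate

# Prior B: identical, except 2% of the probability mass sits at R = 1e10.
expected_ratio_b = 0.98 * 1e3 + 0.02 * 1e10  # ~2e8
print(humans * expected_ratio_b > insects)   # True (~2e18 vs 1e18): humans dominate
```

A thin upper tail on the ratio is enough to reverse the conclusion.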
Yeah. My point was that the “two envelopes problem for moral uncertainty” doesn’t apply to factual uncertainty. But certainly what conclusion results from applying factual uncertainty depends on one’s priors.
I agree that the two-envelope problem doesn’t apply to factual uncertainty. The whole thing gets significantly complicated by situations where it’s hard to specify how much of the uncertainty is moral and how much is factual.
I regard getting some good and usable answers to this as an important research topic within philosophy. Given that questions of pure moral uncertainty are still unresolved, you’d probably need to make assumptions about that half of things in order to get going.
I agree that the reasoning based on epsilon chance of a high multiplier is suspicious, for the reasons that you say. However, I think the pro-insect version is more likely to work out than the pro-human version. In the pro-insect version, you’re taking the moral value of humans as fixed, and taking the expectation of the moral value of insects, whereas in the pro-human version, you’re taking the moral value of insects as fixed, and taking the expectation of the moral value of humans. My uncertainty seems a lot closer to the first: I have a pretty good handle on what being a human is like, and how much I care about humans, but I’m not so sure about insects.
I don’t think you can avoid the problem that way, since your statement is logically equivalent when re-cast in terms of insect value units?
I think you at least get an edge on the problem from this (related to the point you make about factual uncertainty—familiarity with humans means we have more factual uncertainty about insect experience). But I wouldn’t want to rest the whole of my argument on this edge.
I agree that “I assign probability p that humans are 10^3 times as important as insects and probability q that humans are 10^10 times as important as insects” is logically equivalent to “I assign probability p that insects are 10^-3 times as important as humans and probability q that insects are 10^-10 times as important as humans”. However, when people turn these into the argument that “you should care about insects/humans much more than about humans/insects”, they’re implicitly doing an expected utility calculation.
In one version of the calculation, you’re saying that insect lives have utility 1, and human lives have utility 10^3 with probability p and 10^10 with probability q. When you take the expectation (for certain values of p and q), you end up with the human population being more important than the insect population.
In another version of the calculation, you’re saying that human lives have utility 1, and insect lives have utility 10^-3 with probability p and 10^-10 with probability q. Then, when you take the expectation (for values of p and q compatible with the pro-human conclusion above), you end up with the insect population being more important than the human population.
My point is that although both these probability assignments are compatible with the description “humans are 10^3 times as important with probability p and 10^10 times as important with probability q”, they are not equivalent distributions over utilities. Furthermore, I think my probability distribution looks more like the second than the first.
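A quick numerical check of these two calculations, with hypothetical p = q = 0.5 and the population figures from the quoted wagers:

```python
p, q = 0.5, 0.5             # hypothetical values for illustration
insects, humans = 1e18, 1e10

# Version 1: insect lives fixed at utility 1; expectation taken over human utility.
human_total_1 = humans * (p * 1e3 + q * 1e10)       # ~5e19
insect_total_1 = insects * 1.0                      # 1e18  -> humans come out ahead

# Version 2: human lives fixed at utility 1; expectation taken over insect utility.
human_total_2 = humans * 1.0                        # 1e10
insect_total_2 = insects * (p * 1e-3 + q * 1e-10)   # ~5e14 -> insects come out ahead

print(human_total_1 > insect_total_1, insect_total_2 > human_total_2)   # True True
```

Same probability assignment over the ratio, opposite conclusions, depending only on which side’s utility is treated as the fixed unit.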