Thanks, Paul. :) It’s worth noting that if the uncertainty is regarded as wholly factual (which is unrealistic, of course), then the insect wager does work, since then there’s only a single unit of moral value in play. For example, if the only uncertainty were a 0.1% chance that insects in fact possess the specific neural structures that I know for certain I care about equally with human brains, then the argument goes through. This sort of reasoning would be most applicable to, say, property dualists who think there’s a single ontological property of consciousness that’s either present or absent in a binary way, since then there’s little moral uncertainty (consciousness matters and unconsciousness doesn’t) but still lots of factual uncertainty (is ontologically primitive phenomenal consciousness present or not?).
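As a rough sketch of that case (using the 0.1% figure from above, plus order-of-magnitude guesses at the insect and human counts that are mine and purely illustrative), the expected-value calculation is straightforward because there’s only one unit of value in play:

```python
# Sketch: the insect wager under purely factual uncertainty.
# There is one fixed unit of value (the neural structures I'm certain I care
# about), so an insect is worth either 1 or 0 human-equivalents, and expected
# values are well-defined. The counts below are illustrative guesses only.

p_has_structures = 0.001   # 0.1% chance insects actually have the relevant structures
num_insects = 1e19         # very rough order-of-magnitude guess
num_humans = 8e9

expected_value_per_insect = p_has_structures * 1.0           # in human-equivalents
expected_value_all_insects = expected_value_per_insect * num_insects

print(expected_value_all_insects)   # ~1e16 human-equivalents
print(num_humans)                   # 8e9, so in expectation the insects dominate
```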
I’m a little unclear about what this means. I think what you are saying is something like: if I’m certain that it’s the number of neurons that matters morally, but I’m just uncertain how many neurons this organism has, then a two-envelopes problem doesn’t apply.
But why can’t I leverage that trick to avoid the two-envelopes problem in general? I’m confident that utilons are what matter morally, but I’m uncertain how many utilons an insect has.
The problem to me doesn’t seem to be “factual” versus “moral”, but rather whether there are non-arbitrary units.
I think what you are saying is something like: if I’m certain that it’s the number of neurons that matters morally, but I’m just uncertain how many neurons this organism has, then a two-envelopes problem doesn’t apply.
Yeah, that’s a cartoon version of what I was saying.
I’m confident that utilons are what matter morally, but I’m uncertain how many utilons an insect has.
The problem is that, no, Virginia, there really is no such thing as a utilon in any non-arbitrary sense. Happiness and suffering are not actually cardinal numbers that live in the physics of the universe and that we can measure. Rather, we use numbers to express how much we care about an experience.
If moral realism were true and the moral truth were utilitarianism, then I suppose there would be a “right answer” for how many utilons a given system possessed (up to positive affine transformation). But I don’t take moral realism seriously. In the moral-realism case, the two-envelopes problem for moral uncertainty would be analogous to the ordinary two-envelopes problem, where there is likewise a right answer. For moral non-realists, the two-envelopes problem is just a way to describe the paradox that calculations over moral uncertainty depend on one’s unit of measurement.
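To make the unit dependence concrete, here’s a minimal sketch with made-up credences and a made-up 1:100 ratio; nothing hangs on the particular numbers:

```python
# Two-envelopes problem for moral uncertainty: the same credences recommend
# different tradeoffs depending on which experience we fix as the unit of value.
# The credences and the 1:100 ratio are made up purely for illustration.

p_equal = 0.5    # credence: an insect matters as much as a human
p_ratio = 0.5    # credence: a human matters 100x more than an insect
ratio = 100

# Fix the human as the unit: how many "humans" is one insect worth in expectation?
insect_in_human_units = p_equal * 1 + p_ratio * (1 / ratio)   # 0.505

# Fix the insect as the unit: how many "insects" is one human worth in expectation?
human_in_insect_units = p_equal * 1 + p_ratio * ratio         # 50.5

print(insect_in_human_units)       # 0.505  -> an insect is worth ~half a human
print(1 / human_in_insect_units)   # ~0.02  -> an insect is worth ~1/50 of a human
```

Both calculations use the same credences, yet the first makes insects look enormously important and the second makes them look nearly negligible; which answer you get depends only on the choice of unit.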
Okay, I think I understand now, thanks for the explanation.
For what it’s worth, the “factual” versus “moral” contrast you are drawing seems to me to be a distinction without a difference. Both the moral realist and the moral non-realist are looking for “non-arbitrary” units of measurement, and an argument that a certain unit is non-arbitrary would probably be persuasive to both the realist and the non-realist.
Well, the moral realist just assumes by faith that non-arbitrary units exist, since moral realism implies non-arbitrariness. The non-realist believes no such thing. :)
It’s worth noting that if the uncertainty is regarded as wholly factual (which is unrealistic, of course), then the insect wager does work, since then there’s only a single unit of moral value in play.
Under some assumptions that might be correct, but not in the general case: the degree of sentience of a human compared to an insect could plausibly be anywhere from 1 to 10^9 or more.
To push the wager through, you need an extra assumption: that none of your probability mass has the human being much more important (or sentient) than the insect. In other words, you need to enforce an upper bound on the ratio of a human’s importance to an insect’s. And that requirement isn’t specific to the moral or the empirical question, unless you argue from other philosophical assumptions, like certain types of property dualism; but then you ought also to apply some uncertainty over those.
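As a sketch of why that upper bound matters (the credences and the 10^9 tail below are invented for illustration), even a small amount of probability mass on a huge human-to-insect ratio flips the conclusion depending on which unit you compute in:

```python
# Sketch: why the wager needs an upper bound on the human:insect importance ratio.
# The credences and the 1e9 tail ratio below are invented for illustration.

# ratio (human worth / insect worth) -> credence
credences = {1: 0.50, 100: 0.49, 1e9: 0.01}

# In human units, the 1e9 tail barely affects an insect's expected worth:
insect_in_human_units = sum(p / r for r, p in credences.items())   # ~0.505

# In insect units, that same tail dominates a human's expected worth:
human_in_insect_units = sum(p * r for r, p in credences.items())   # ~1e7

print(insect_in_human_units)       # ~0.505 -> insects look hugely important
print(1 / human_in_insect_units)   # ~1e-7  -> insects look nearly negligible
```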
Yeah. My point was that the “two-envelopes problem for moral uncertainty” doesn’t apply to factual uncertainty. But certainly the conclusion that results from applying factual uncertainty depends on one’s priors.
I agree that the two-envelopes problem doesn’t apply to factual uncertainty. The whole thing gets significantly complicated by situations where it’s hard to specify how much of the uncertainty is moral and how much is factual.
I regard getting some good and usable answers to this as an important research topic within philosophy. Given that questions of pure moral uncertainty are still unresolved, you’d probably need to make assumptions about that half of things in order to get going.