Similarly in this case we could set up an (admittedly construed) situation where you start by doing a bunch of reasoning about what’s best, under a veil of ignorance about whether you’re human or alien. Then it’s revealed which you are, you remember all your experiences and can reason about how big a deal they are — and then you will predictably pay some utility in order to benefit the other species more.
In this case, assuming you have no first-person experience with suffering to value directly (or memory of it), you would develop your concept of suffering third-personally — based on observations of and hypotheses about humans, aliens, chickens and others, say — and could base your ethics on that concept. This is not how humans or aliens would typically understand and value suffering, which they do largely first-personally. The human has their own vague, revisable placeholder concept of suffering on which they ground value, and the alien has their own (and the chicken might have their own). Each also differs from the hypothetical third-personal concept.
Technically, we could say the humans and aliens have developed different ethical theories from each other, even if everyone’s a classical utilitarian, say, because they’re picking out different concepts of suffering on which to ground value.[1] And your third-personal account would give a different ethical theory from each, too. All three (human, alien, third-personal) ethical theories could converge under full information, though, if the concepts of suffering would converge under full information (and if everything else would converge).[2]
With the third-personal concept, I doubt there’d be a good solution to this two envelopes problem that actually gives you exactly one common moral scale and corresponding prior when you have enough uncertainty about the nature of suffering. You could come up with such a scale and prior, but you’d have to fix something pretty arbitrarily to do so. Instead, I think the thing to do is to assign credences across multiple scales (and corresponding priors) and use an approach to moral uncertainty that doesn’t depend on comparisons between them. (EDIT: And these could be the alien stance and the human stance, which each relatively prioritize the other species and so give rise to a two envelopes problem.) But what I’ll say below applies even if you use a single common scale and prior.
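To illustrate, here is a minimal sketch of one approach of this kind (a simple credence-weighted voting rule; I'm not committing to this particular rule, and the scales, options and credences are made up). Each scale only ranks the options in its own units, so no magnitudes are ever compared across scales:

```python
# Hypothetical scales and credences, purely for illustration. Each scale
# assigns values to options in its own units; those units are never compared
# across scales, only the within-scale rankings are used.
scales = {
    "human-relative": {"help_humans": 1.0, "help_aliens": 0.3},
    "alien-relative": {"help_humans": 0.2, "help_aliens": 1.0},
}
credences = {"human-relative": 0.5, "alien-relative": 0.5}

def vote_share(option_a, option_b):
    """Credence-weighted share of scales that rank option_a above option_b."""
    return sum(credences[name]
               for name, values in scales.items()
               if values[option_a] > values[option_b])

# With these made-up numbers the two scales disagree, so neither option wins.
print(vote_share("help_humans", "help_aliens"))  # 0.5
print(vote_share("help_aliens", "help_humans"))  # 0.5
```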
When you have first-person experience with suffering, you can narrow down the common moral scales under consideration to ones based on your own experience. This would also have implications for your credences compared to the hypothetical third-person perspective.
If you started from no experience of suffering and then became a human, alien or chicken and experienced suffering as one of them, you could then rule out a bunch of scales (and corresponding priors). This would also result in big updates from your prior(s). You’d end up in a human-relative, alien-relative or chicken-relative account (or multiple such accounts, but for one species only).
[1] A typical chicken very probably couldn’t be a classical utilitarian.

[2] A typical chicken’s concept of suffering wouldn’t converge, but we could capture/explain it. Their apparent normative stances wouldn’t converge either, unless you imagine radically different beings.
I understand that you’re explaining why you don’t really think it’s well modelled as a two-envelope problem, but I’m not sure whether you’re biting the bullet that you’re predictably paying some utility in unnecessary ways (in this admittedly convoluted hypothetical), or if you don’t think there’s a bullet there to bite, or something else?
Alternatively, you might assume you actually already are a human, alien or chicken, have (and remember) experience with suffering as one of them, but are uncertain about which you in fact are. For illustration, let’s suppose human or alien. Because you’re uncertain about whether you’re an alien or human, your concept of suffering points to one that will turn out to be human suffering with some probability, p, and alien suffering with the remaining probability, 1-p. You ground value relative to your own concept of suffering, which could turn out to be (or be revised to) the human concept or the alien concept with those respective probabilities.
Let H_H be the moral weight of human suffering according to a human concept of suffering, directly valued, and A_H be the moral weight of alien suffering according to a human concept of suffering, indirectly valued. Similarly, let A_A and H_A be the moral weights of alien suffering and human suffering according to the alien concept of suffering. A human would fix H_H, build a probability distribution for A_H relative to H_H and evaluate A_H in terms of it. An alien would fix A_A, build a probability distribution for H_A relative to A_A and evaluate H_A in terms of it.
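To make that structure concrete, here is a small sketch with made-up numbers and distributions (only the symbols H_H, A_H, A_A and H_A come from the setup above). Each perspective fixes its own quantity and puts a distribution over the other, and taking expectations on each side's fixed scale reproduces the two envelopes problem: each side ends up relatively prioritizing the other species.

```python
# Made-up numbers, only to show the structure of the two perspectives.

# Human perspective: fix H_H = 1 and put a distribution on A_H relative to it.
H_H = 1.0
A_H_dist = [(0.5, 0.1), (0.5, 10.0)]     # (probability, value) pairs
E_A_H = sum(p * v for p, v in A_H_dist)  # 5.05

# Alien perspective: fix A_A = 1 and put a distribution on H_A relative to it.
A_A = 1.0
H_A_dist = [(0.5, 0.1), (0.5, 10.0)]
E_H_A = sum(p * v for p, v in H_A_dist)  # 5.05

# On the scale each side has fixed, the human-fixed scale favours the alien
# (E_A_H > H_H) and the alien-fixed scale favours the human (E_H_A > A_A):
# the two envelopes problem.
print(E_A_H > H_H, E_H_A > A_A)  # True True
```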
You’re uncertain about whether you’re an alien or human. Still, you directly value your direct experiences. Assume A_A and H_H specifically represent the moral value of an experience of suffering you’ve actually had,[1] e.g. the moral value of a toe stub, and you’re doing ethics relative to your toe stubs as the reference point. You therefore set A_A = H_H. You can think of this as a unit conversion, e.g. 1 unit of alien toe stub-relative suffering = 10 units of human toe stub-relative suffering.
This solves the two envelopes problem. You can either use A_A or H_H to set your common scale, and the answer will be the same either way, because you’ve fixed the ratio between them. The moral value of a human toe stub, H, will be H_H with probability p, and H_A with probability 1-p. The moral weight of an alien toe stub, A, will be A_H with probability p and A_A with probability 1-p. You can just take expected values in either the alien or human units and compare.
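Here is a sketch of that calculation, with made-up values for p, A_H and H_A (the conversion factor of 10 just mirrors the example above). Because A_A = H_H fixes the exchange rate between the two units, switching units rescales both expected values by the same factor, so every comparison comes out the same:

```python
# Made-up numbers illustrating the A_A = H_H normalization.
p = 0.5          # probability you're human
H_H = 1.0        # your toe stub under the human concept of suffering
A_A = H_H        # the same toe stub under the alien concept: fixed equal

# Stipulated valuations of the *other* species' toe stub under each concept.
A_H = 3.0        # alien toe stub under the human concept
H_A = 0.2        # human toe stub under the alien concept

# Expected moral value of a human toe stub (H) and an alien toe stub (A),
# over whether you turn out to be human (probability p) or alien (1 - p).
E_H = p * H_H + (1 - p) * H_A
E_A = p * A_H + (1 - p) * A_A

# Re-expressing everything in different units (e.g. 1 alien unit = 10 human
# units) multiplies both expected values by the same conversion factor, so
# their ratio, and hence any comparison, is unchanged.
k = 10.0
print(E_A / E_H, (k * E_A) / (k * E_H))  # identical
```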
We could also allow you to have some probability of being a chicken under this thought experiment. Then you could set A_A = H_H = C_C, with C_C representing the value of a chicken toe stub to a chicken, and C_A, C_H, A_C and H_C defined as above.
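The same sketch extends directly to three possible identities, again with made-up numbers throughout: set each being’s own toe stub to 1 as the common reference point and take expectations over which being you turn out to be.

```python
# Hypothetical extension to a third possible identity, with C_C = A_A = H_H.
identity_probs = {"human": 0.4, "alien": 0.4, "chicken": 0.2}

# value[x][y]: moral value of a y toe stub under an x concept of suffering,
# in units where each being's own toe stub is 1 (the common reference point).
value = {
    "human":   {"human": 1.0, "alien": 3.0,  "chicken": 0.1},
    "alien":   {"human": 0.2, "alien": 1.0,  "chicken": 0.05},
    "chicken": {"human": 0.0, "alien": 0.0,  "chicken": 1.0},  # very partial
}

def expected_value(species):
    """Expected moral value of a `species` toe stub, over who you turn out to be."""
    return sum(identity_probs[i] * value[i][species] for i in identity_probs)

print({s: round(expected_value(s), 3) for s in ("human", "alien", "chicken")})
```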
But if you’re actually a chicken, then you’re valuing human and alien welfare as a chicken, which is presumably not much, since chickens are very partial (unless you idealize). Also, if you’re a human, it’s hard to imagine being uncertain about whether you’re a chicken. There’s way too much information you need to screen off from consideration, like your capacities for reasoning and language and everything that follows from these. And if you’re a chicken, you couldn’t imagine yourself as a human or being impartial at all.
So, maybe this doesn’t make sense, or we have to imagine some hypothetically cognitively enhanced chicken or an intelligent being who suffers like a chicken. You could also idealize chickens to be impartial and actually care about humans, but then you’re definitely forcing them into a different normative stance than the ones chickens actually take (if any).
[1] It would have to be something “common” to the beings under consideration, or you’d have to screen off information about who does and doesn’t have access to or make use of that information, because otherwise you’d be able to rule out some possibilities for what kind of being you are. This will look less reasonable with more types of beings under consideration, in case there’s nothing “common” to all of them. For example, not all moral patients have toes to stub.