One important difference is that we're never in this situation if (and because) we've already committed to human-based units, so there's no risk of such a money pump or such irrational behaviour.
And there's good reason for this. We have direct access to our own experiences, and understand, study and conceptualize consciousness, suffering, desires, preferences and other kinds of welfare in reference to our own and via conservative projections, e.g. assuming typical humans are similar to each other.
To be in the kind of position this thought experiment requires, I think you'd need to study and conceptualize welfare third-personally and fairly independently of human experiences, the only cases we have direct access to and the ones we're most confident in.
Probably no human has ever started conceptualizing consciousness and welfare in the third person without first experiencing welfare-relevant states themselves. Luke Muehlhauser also illustrated how he understood animal pain in reference to his own in his report for Open Phil:

I sprain my ankle while playing soccer, don't notice it for 5 seconds, and then feel a "rush of pain" suddenly "flood" my conscious experience, and I think "Gosh, well, whatever this is, I sure hope nothing like it happens to fish!" And then I reflect on what was happening prior to my conscious experience of the pain, and I think "But if that is all that happens when a fish is physically injured, then I'm not sure I care." And so on.
It might be possible to conceptualize consciousness and welfare entirely third-personally, but it's not clear we'd even be talking about the same things anymore. That also seems to be throwing out or underusing important information: our direct impressions from our own experiences. That might be epistemically irrational.
I also discuss this thought experiment here, here (the section that immediately follows) and in the comments on that post with Owen.
FWIW, I could imagine an AI in the position of your thought experiment, though, and then it could use a moral parliament or some other approach to moral uncertainty that doesn't depend on common units or intertheoretic comparisons. But we humans are starting from somewhere else.
Also, notably, in chickens vs humans, say, a factory-farmed chicken doesn't actually hold a human-favouring position, like the alien does. We could imagine a hypothetical rational moral agent with hedonic states and felt desires like a chicken's, although their specific reasoned desires and preferences wouldn't be found in chickens. And this is also very weird.
If our reasoning about chickens is correct, it should also scale up to aliens without causing problems. If your framework doesn't work for aliens, that's an indication that something is wrong with it.
Chickens don't hold a human-favouring position because they are not hedonic utilitarians, and aren't intelligent enough to grasp the concept. But your framework explicitly does not weight the worth of beings by their intelligence, only their capacity to feel pain.
I think it's simply wrong to switch in the case of the human vs alien tradeoff, because of the inherent symmetry of the situation. And if it's wrong in that case, what is it about the elephant case that has changed?