One important difference is that we’re never in this situation if (and because) we’ve already committed to human-based units, so there’s no risk of such a money pump or of such irrational behaviour.
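For concreteness, here’s a minimal sketch of the two-envelopes-style switching behind the money-pump worry. The 50/50 credences and the 0.5×/2× capacity ratios are illustrative assumptions, not figures from the thought experiment:

```python
# Toy two-envelopes-style calculation (illustrative numbers only).
# Assumption: we're 50/50 between the alien's welfare capacity being
# 0.5x or 2x a human's.

p = 0.5               # credence in each hypothesis
ratios = [0.5, 2.0]   # alien capacity / human capacity under each hypothesis

# Fix HUMAN-based units (1 human experience = 1 unit under every hypothesis):
alien_in_human_units = sum(p * r for r in ratios)        # 0.25 + 1.0 = 1.25 humans

# Fix ALIEN-based units (1 alien experience = 1 unit under every hypothesis):
human_in_alien_units = sum(p * (1 / r) for r in ratios)  # 1.0 + 0.25 = 1.25 aliens

print(alien_in_human_units)  # 1.25 -> in human units, helping the alien looks better
print(human_in_alien_units)  # 1.25 -> in alien units, helping the human looks better
```

Whichever side’s units you adopt, the other side looks better in expectation, so an agent who re-denominates midway can be led around in circles. Committing to human-based units up front just means always using the first calculation, so the recommendation can’t be flipped by switching units.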
And there’s good reason for this. We have direct access to our own experiences, and we understand, study and conceptualize consciousness, suffering, desires, preferences and other kinds of welfare in reference to our own, and via conservative projections, e.g. assuming typical humans are similar to each other.
To be in the kind of position this thought experiment requires, I think you’d need to study and conceptualize welfare third-personally and fairly independently of human experiences, the only cases we have direct access to and the ones we’re most confident in.
Probably no human has ever started conceptualizing consciousness and welfare in the third person without first experiencing welfare-relevant states themselves. Luke Muehlhauser also illustrated how he understood animal pain in reference to his own in his report for Open Phil:

I sprain my ankle while playing soccer, don’t notice it for 5 seconds, and then feel a “rush of pain” suddenly “flood” my conscious experience, and I think “Gosh, well, whatever this is, I sure hope nothing like it happens to fish!” And then I reflect on what was happening prior to my conscious experience of the pain, and I think “But if that is all that happens when a fish is physically injured, then I’m not sure I care.” And so on.
It might be possible to conceptualize consciousness and welfare entirely third-personally, but it’s not clear we’d even be talking about the same things anymore. That also seems to be throwing out or underusing important information: our direct impressions from our own experiences. That might be epistemically irrational.
I also discuss this thought experiment here, here (the section that immediately follows) and in the comments on that post with Owen.
FWIW, I could imagine an AI in the position of your thought experiment, though, and it could then use a moral parliament or some other approach to moral uncertainty that doesn’t depend on common units or intertheoretic comparisons. But we humans are starting from somewhere else.
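To make “doesn’t depend on common units” concrete, here’s a toy sketch (not a full moral parliament, which involves bargaining among delegates): each theory contributes only an ordinal ranking of the options, and the rankings are aggregated with credence-weighted Borda scores, so nothing ever has to be converted into a shared welfare unit. The theories, credences and rankings are made-up placeholders:

```python
# Toy ordinal aggregation across moral theories (all inputs are placeholders).
# Each theory only ranks the options; no common welfare units are needed.

options = ["help_humans", "help_aliens", "split_resources"]

# theory name -> (credence, ranking from best to worst)
theories = {
    "human_unit_view":  (0.4, ["help_humans", "split_resources", "help_aliens"]),
    "alien_unit_view":  (0.4, ["help_aliens", "split_resources", "help_humans"]),
    "egalitarian_view": (0.2, ["split_resources", "help_humans", "help_aliens"]),
}

# Credence-weighted Borda count: an option gets (n - 1 - rank) points per theory.
scores = {option: 0.0 for option in options}
for credence, ranking in theories.values():
    for rank, option in enumerate(ranking):
        scores[option] += credence * (len(options) - 1 - rank)

print(max(scores, key=scores.get))  # "split_resources" with these made-up inputs
```

A real moral parliament would have delegates bargain rather than just vote, but the point is the same: only within-theory comparisons are ever used.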
Also, notably, in chickens vs humans, say, a factory-farmed chicken doesn’t actually hold a human-favouring position the way the alien does. We could imagine a hypothetical rational moral agent with hedonic states and felt desires like a chicken’s, although their specific reasoned desires and preferences wouldn’t be found in chickens. And this is also very weird.
If our reasoning about chickens is correct, it should also scale up to aliens without causing problems. If your framework doesn’t work for aliens, that’s an indication that something is wrong with it.
Chickens don’t hold a human-favouring position because they are not hedonic utilitarians, and aren’t intelligent enough to grasp the concept. But your framework explicitly does not weight the worth of beings by their intelligence, only their capacity to feel pain.
I think it’s simply wrong to switch in the case of the human vs alien tradeoff, because of the inherent symmetry of the situation. And if it’s wrong in that case, what is it about the elephant case that has changed?