If our reasoning about chickens is correct, it should also scale up to aliens without causing problems. If your framework doesn’t work for aliens, that’s an indication that something is wrong with it.
Chickens don’t hold a human-favouring position because they are not hedonic utilitarians, and aren’t intelligent enough to grasp the concept. But your framework explicitly does not weight the worth of beings by their intelligence, only their capacity to feel pain.
I think it’s simply wrong to switch in the case of the human vs alien tradeoff, because of the inherent symmetry of the situation. And if it’s wrong in that case, what is it about the elephant case that has changed?