Interesting application of SIA, but I wonder if it proves too much to help average utilitarianism.
SIA seems to support metaphysical pictures in which more people actually exist. This is how you discount the probability of solipsism. But do you think you can simultaneously avoid the conclusion that there are an infinite number of people?
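Roughly, on one common way of making SIA quantitative (this is just my gloss), you weight each hypothesis by the number of observers it says exist:

$$P(H_i \mid \text{I exist}) \;\propto\; P(H_i)\, N_i,$$

where $N_i$ is the observer count under hypothesis $H_i$. With 1:1 prior odds between solipsism ($N = 1$) and a world containing $10^{10}$ people, the posterior odds become $1 : 10^{10}$, and solipsism is effectively ruled out. But the same weighting seems to push your credence toward hypotheses with ever larger, and ultimately infinite, populations.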
This would be problematic: if you’re sure that there are an infinite number of people, average utilitarianism won’t offer much guidance because you almost certainly won’t have any ability to influence the average utility.
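To spell out why, take one natural way of defining the average over an infinite population (a limiting average over larger and larger finite sub-populations; other definitions raise their own problems):

$$\bar{u} \;=\; \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} u_i.$$

If your actions change only finitely many of the $u_i$, each by a bounded amount, then the total change to any partial sum is bounded by some constant $C$, so the change to the average is at most $C/N$, which vanishes as $N \to \infty$. Nothing you can do moves the average at all.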
Very interesting point; I had not thought of this.
I do think, however, that SIA, utilitarianism, SSA, and average utilitarianism all kind of break down once we have an infinite number of people. People like Bostrom have thought about infinite ethics, but I have not read anything on that topic.
I agree that there are challenges for each of them in the case of an infinite number of people. My impression is that total utilitarianism can handle infinite cases pretty respectably, by supplementing the standard maxim of maximizing utility with a dominance principle to the effect of ‘do what’s best for the finite subset of everyone that you’re capable of affecting’, though it isn’t something I’ve thought about much either. I was initially thinking that average utilitarians can’t make a similar move without undermining its spirit, but maybe they can. However, if they can, I suspect they can make the same move in the finite case (‘just focus on the average among the population you can affect’), and that will throw off your calculations. Maybe in that case, if you can only affect a small number of individuals, the threat from solipsism can’t even get going.
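To illustrate that worry about restricting the average with some made-up numbers: suppose the global average utility is $5$, while the three people you can actually affect have utilities $1$, $2$, and $3$ (local average $2$). Bringing a new person into existence at utility $4$ raises the local average, $(1+2+3+4)/4 = 2.5$, but drags the global average of $5$ down. The restricted and unrestricted maxims then give opposite verdicts, which is the sense in which the move would throw off the calculations in the finite case.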
In any case, I would hope that SIA is at least able to accommodate an infinite number of possible people, or the possibility of an infinite number of people, without becoming useless. I take it that there are an infinite number of epistemically possible people, and so this isn’t just an exercise.