I just don’t think total sum utilitarianism maps well onto the kind of intuitions I’d like a functional moral system to match. Ideally, a good aggregation system for utility should not be vulnerable to being gamed via utility monsters.
In practice, I think smaller beings will produce welfare more efficiently. For example, I guess bees produce it 5,000 times as effectively as humans. To the extent that producing welfare in the most efficient way involves a single being or just a few beings, they would have to be consuming the energy of many, many galaxies. So it would be more like the universe itself being a single being or organism, which is arguably more compelling than what the term “utility monster” suggests.
I mean, ok, one can construct these hypothetical scenarios, but the one you suggested wasn’t about preventing deaths, but about ensuring the existence of more lives in the future. And those are very different things.
How about this scenario. There is a terrorist who is going to release a very infectious virus which will infect all humans on Earth. The virus makes people infertile forever, thus effectively leading to human extinction, but it also makes people fully lose their desire to have children, and have much better self-assessed lives. Would it make sense to kill the terrorist? Killing the terrorist would worsen the lives of all humans alive, but it would also prevent human extinction.
But obviously if you count future beings too—as you are—then it becomes inevitable that this approach does justify genocide.
Yes, it can justify genocide, although I am sceptical it would in practice. Genocide involves suffering, and suffering is bad, so I assume there would be a better option for maximising impartial welfare. For example, ASI could arguably persuade humans that their extinction was for the better, or just move everyone into a simulation without anyone noticing, and then shut down the simulation in a way that causes no suffering.
See the problem with the logic? As long as you have better technology and precommit to high population densities you can justify all sorts of brutal colonization efforts as a net good, if not maximal good.
I agree you can justify replacing beings who produce welfare less efficiently with beings who produce welfare more efficiently. For example, replacing rocks with humans is fine, and so might be replacing humans with digital minds. However, the replacement process itself should maximise welfare, and I am very sceptical that “brutal colonization efforts” would be the most efficient way for ASI to perform the replacement.
I think smaller beings will produce welfare more efficiently. For example, I guess bees produce it 5,000 times as effectively as humans.
I just don’t think it makes any sense to have an aggregated total measure of “welfare”. We can describe what is the distribution of welfare across the sentient beings of the universe, but to simply bunch it all up has essentially no meaning. In what way is a world with a billion very happy people any worse than a world with a trillion merely okay ones? I know which one I’d rather be born into! How can a world be worse for everyone individually yet somehow better, if the only meaning of welfare is that it is experienced by sentient beings to begin with?
There is a terrorist who is going to release a very infectious virus which will infect all humans on Earth. The virus makes people infertile forever, thus effectively leading to human extinction, but it also makes people fully lose their desire to have children, and have much better self-assessed lives. Would it make sense to kill the terrorist? Killing the terrorist would worsen the lives of all humans alive, but it would also prevent human extinction.
It’s moral because the terrorist is infringing on the wishes of those people right now, and violating their self-determination. If the people decided to infect themselves, then it would be ok.
Genocide involves suffering, and suffering is bad, so I assume there would be a better option for maximising impartial welfare. For example, ASI could arguably persuade humans that their extinction was for the better, or just move everyone into a simulation without anyone noticing, and then shut down the simulation in a way that causes no suffering.
I disagree that the genocide is made permissible by making the death a sufficiently painless euthanasia. Sure, the suffering is an additional evil, but the killing is an evil unto itself. Honestly, consider where these arguments could lead in realistic situations, and whether you would be okay with that, or whether relying on a circumstantial “well, but actually in reality this would always come out negative net utility due to the suffering” is protection enough. If you get conclusions like these from your ethical framework, it’s probably a sign that it has some flaws.
For example, replacing rocks with humans is fine, and so might be replacing humans with digital minds. However, the replacement process itself should maximise welfare, and I am very sceptical that “brutal colonization efforts” would be the most efficient way for ASI to perform the replacement.
Rocks aren’t sentient, they don’t count. And your logic still doesn’t work. What if you can instantly vaporize everyone with a thermonuclear bomb, as they are all concentrated within the radius of the fireball? Death would then be instantaneous. Would that make it acceptable? Very much doubt it.
I just don’t think it makes any sense to have an aggregated total measure of “welfare”. We can describe what is the distribution of welfare across the sentient beings of the universe, but to simply bunch it all up has essentially no meaning.
I find it hard to understand this. I think 10 billion happy people is better than no people. I guess you disagree with this?
It’s moral because the terrorist is infringing on the wishes of those people right now, and violating their self-determination. If the people decided to infect themselves, then it would be ok.
I think respecting people’s preferences is a great heuristic for doing good. However, I still endorse hedonistic utilitarianism rather than preference utilitarianism, because it is possible for someone to have preferences which are not ideal for maximising one’s own welfare. (As an aside, Peter Singer used to be a preference utilitarian, but is now a hedonistic utilitarian.)
Sure, the suffering is an additional evil, but the killing is an evil unto itself.
No killing is necessary given an ASI. The preferences of humans could be modified such that everyone is happy with ASI taking over the universe. In addition, even if you think killing without suffering is bad in itself (and note ASI may even make the killing pleasant to humans), do you think that badness would outweigh an arbitrarily large happiness?
Rocks aren’t sentient, they don’t count.
I think rocks are sentient in the sense they have a non-null expected welfare range, but it does not matter because I have no idea how to make them happier.
What if you can instantly vaporize everyone with a thermonuclear bomb, as they are all concentrated within the radius of the fireball? Death would then be instantaneous. Would that make it acceptable? Very much doubt it.
No, it would not be acceptable. I am strongly against negative utilitarianism. Vaporising all beings without any suffering would prevent all future suffering, but it would also prevent all future happiness. I think the expected value of the future is positive, so I would rather not vaporise all beings.
I find it hard to understand this. I think 10 billion happy people is better than no people. I guess you disagree with this?
“No people” is a special case—even if one looks at e.g. average utilitarianism, that’s a division by zero. I think a universe with no sentient beings in it does not have a well-defined moral value: moral value only exists with respect to sentients, so without any of them, the categories of “good” or “bad” stop even making sense. But obviously any path from a universe with sentients to a universe without implies extermination, which is bad.
However, given an arbitrary number of sentient beings of comparable happiness, I don’t think the precise number matters to how good things are, no. No one experiences all that good at once, hence 10 billion happy people are as good as 10 million—if they are indeed just as happy.
because it is possible for someone to have preferences which are not ideal for maximising one’s own welfare
I think any moral philosophy that leaves the door open to too much of “trust me, it’s for your own good; even though it’s not your preference, you’ll enjoy the outcome far more” is ripe for dangerous derailments.
No killing is necessary given an ASI. The preferences of humans could be modified such that everyone is happy with ASI taking over the universe. In addition, even if you think killing without suffering is bad in itself (and note ASI may even make the killing pleasant to humans), do you think that badness would outweigh an arbitrarily large happiness?
Yes, because I don’t care if the ASI is very very happy, it still counts for one. I also don’t think you can reasonably conceive of unbounded amounts of happiness felt by a single entity, so much as to compensate for all that suffering. Also try to describe to anyone “hey what if a supercomputer that wanted to take over the universe brainwashed you to be ok with it taking over the universe”, see their horrified reaction, and consider whether it makes sense for any moral system to reach conclusions that are obviously so utterly, instinctively repugnant to almost everyone.
I think rocks are sentient in the sense they have a non-null expected welfare range, but it does not matter because I have no idea how to make them happier.
I’m… not even sure how to parse that. Do you think rocks have conscious experiences?
No, it would not be acceptable. I am strongly against negative utilitarianism. Vaporising all beings without any suffering would prevent all future suffering, but it would also prevent all future happiness. I think the expected value of the future is positive, so I would rather not vaporise all beings.
The idea was that the vaporization is required to free the land for a much more numerous and technologically advanced populace, who can then go on to live a much more leisurely life off its resources, with less hard work, less child mortality, less disease, etc. So you replace, say, 50,000 vaporised indigenous people living like hunter-gatherers with 5 million colonists living like we do now in the first world (and I’m talking new people, children that can only be born thanks to the possibility of expanding into that space). Does that make the genocide any better? If not, why? And how do those same arguments not apply to the ASI too?
No one experiences all that good at once, hence 10 billion happy people are as good as 10 million—if they are indeed just as happy.
This is the crux. Some questions:
Would 10^100 happy people be just as good as 1 happy person (assuming everyone is just as happy individually)?
Would 10^100 people being tortured be just as bad as 1 person being tortured (assuming everyone is feeling just as bad individually)?
Would you agree that a happy life of 100 years is just as good as a happy life of 1 year (assuming the annual happiness is the same in both cases)? If not, why is a person the relevant unit of analysis, and not a person-year?
Would 10^100 happy people be just as good as 1 happy person (assuming everyone is just as happy individually)?
Hypothetically, yes, if we take it in a vacuum. I find the scenario unrealistic in any real-world circumstance, though, because people’s happiness obviously tends to depend on having other people around, and also because any trajectory from the current situation that ended in there being only one person, happy or not, seems likely bad.
Would 10^100 people being tortured be just as bad as 1 person being tortured (assuming everyone is feeling just as bad individually)?
Pretty much the same reasoning applies. For “everyone has it equally good/bad” worlds, I don’t think sheer numbers make a difference. What makes things more complicated is when inequality is involved.
Would you agree that a happy life of 100 years is just as good as a happy life of 1 year (assuming the annual happiness is the same in both cases)? If not, why is a person the relevant unit of analysis, and not a person-year?
I think length of life matters a lot; if I know I’ll have just one year of life, my happiness is kind of tainted by the knowledge of imminent death, you know? We experience all of our life ourselves. For an edge scenario, there’s one character in “Permutation City” (a Greg Egan novel) who is a mind upload and puts themselves into a sort of mental state loop; after a period T their mental state maps exactly to itself and repeats identically, forever. If you considered such a fantastical scenario, then I’d argue the precise length of the loop doesn’t matter much.
I strongly believe 10^100 people being tortured is much much worse than 1 person being tortured (even assuming everyone is feeling just as bad individually). I do not know what to say more, but thanks for the chat!
I think the reason for those intuitions is that (reasonably enough!) we can’t imagine there being 10^100 people without there also being a story behind that situation. A world in which e.g. some kind of entity breeds humans on purpose to then torture them, leading to those insane amounts, sounds indeed absolutely hellish! But the badness of it is due to the context; a world in which there exists only one person, and that person is being horribly tortured, is also extremely upsetting and sad, just in a different way and for different reasons (and all paths to there are also very disturbing; but we’ll maybe think “at least everyone else just died without suffering as much” so it feels less horrible than the 10^100 humans torture world).
But my intuition on the situation alone is more along the lines of: imagine you know you’re going to be born into this world. Would you like your odds? And in both the “one tortured human” and the “10^100 tortured humans” worlds, your odds would be exactly the same: 100% chance of being tortured.
But all of these are just abstract thought experiments. In any realistic situation, torture worlds don’t just happen—there is a story leading to them, and for any kind of torture world, that story is godawful. So in practice the two things can’t be separated. I think it’s fair to say that in all realistic scenarios the 10^100 world will be worse in practice, or have a worse past, though both worlds would be awful.
I think the reason for those intuitions is that (reasonably enough!) we can’t imagine there being 10^100 people without there also being a story behind that situation.
At least for me, that does not matter. I would always pick a world where 1 person is tortured over a world where 10^100 are tortured (holding the amount of torture per person constant), regardless of the past history. Here is another question. You are at the beginning of the universe, so there is no past history, and you can either click one button which would create 10^100 people who would be tortured for 100 years, or another button which would create 1 person who would be tortured for 100 years. If you then had to pick one world to live in, as an individual, you would suffer the same in both worlds, as you would have a 100% chance of being tortured for 100 years either way. So you could conclude that which button you chose does not really matter. However, I certainly think such a choice would matter! I care not only about my own welfare, but also about that of others, so I would pick the button leading to less total torture.
You said before that total utilitarianism is problematic because it can, at least in principle, lead one to endorse situations where a population is made extinct in order for its resources to be used more efficiently to produce welfare. However, average utilitarianism is way more problematic. It can, at least in principle, lead to a situation where the average welfare is kept constant, but we arbitrarily expand the amount of torture by increasing the number of beings. Even in practice this would be possible. For example, net global welfare accounting for animals may well be negative due to wild animal suffering (or even just farmed animal suffering; see this analysis), which means just replicating Earth’s ecosystem on Earth-like planets across the universe may well be a way of expanding suffering (and average suffering per being can be kept roughly the same for the sake of the thought experiment). If net global welfare is indeed negative, I would consider this super bad!
I don’t know if it makes a lot of sense because yes, in theory from my viewpoint all “torture worlds” (N agents, all suffering the same amount of torture) are equivalent. I feel like that intuition is more right than just “more people = more torture”. I would call them equally bad worlds, and if the torture is preternatural and inescapable I have no way of choosing between them. But I also feel like this is twisting ourselves into examples that are completely unrealistic, to the point of almost uselessness; it is no wonder that our theories of ethics break down, same as most physics does at a black hole singularity.
Thanks for elaborating further!