I just don't think total sum utilitarianism maps well onto the kind of intuitions I'd like a functional moral system to match. I think ideally a good aggregation system for utility should not be vulnerable to being gamed via utility monsters.
In practice, I think smaller beings will produce welfare more efficiently. For example, I guess bees produce it 5,000 times as effectively as humans. To the extent that the most efficient way of producing welfare involves a single being or just a few beings, they would have to be consuming the energy of many, many galaxies. So it would be more like the universe itself being a single being or organism, which is arguably more compelling than what the term "utility monster" suggests.
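To sketch the aggregation I have in mind (the exact units are not crucial; take energy as the limiting resource, and note the 5,000-times figure is my own rough guess):

$$ W_{\text{total}} \;=\; \sum_i u_i \;\approx\; \eta \times E, $$

where $E$ is the energy devoted to sentient beings and $\eta$ is the welfare produced per unit of energy. The bee guess is a claim about $\eta$, so on this picture a welfare-maximising allocation favours very many small beings; a single "utility monster" could only dominate $W_{\text{total}}$ by also monopolising a galaxy-scale share of $E$.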
I mean, ok, one can construct these hypothetical scenarios, but the one you suggested wasn't about preventing deaths, but about ensuring the existence of more lives in the future. And those are very different things.
How about this scenario. There is a terrorist who is going to release a very infectious virus which will infect all humans on Earth. The virus makes people infertile forever, thus effectively leading to human extinction, but it also makes people fully lose their desire to have children, and have much better self-assessed lives. Would it make sense to kill the terrorist? Killing the terrorist would worsen the lives of all humans alive, but it would also prevent human extinction.
But obviously if you count future beings too, as you are, then it becomes inevitable that this approach does justify genocide.
Yes, it can justify genocide, although I am sceptical it would in practice. Genocide involves suffering, and suffering is bad, so I assume there would be a better option to maximise impartial welfare. For example, ASI could arguably persuade humans that their extinction was for the better, or just move everyone into a simulation without anyone noticing, and then shut down the simulation in a way that no suffering is caused in the process.
See the problem with the logic? As long as you have better technology and precommit to high population densities, you can justify all sorts of brutal colonization efforts as a net good, if not a maximal good.
I agree you can justify the replacement of beings who produce welfare less efficiently by beings who produce welfare more efficiently. For example, replacing rocks with humans is fine, and so might be replacing humans with digital minds. However, the replacement process itself should maximise welfare, and I am very sceptical that "brutal colonization efforts" would be the most efficient way for ASI to perform the replacement.
I think smaller beings will produce welfare more efficiently. For example, I guess bees produce it 5,000 times as effectively as humans.
I just don't think it makes any sense to have an aggregated total measure of "welfare". We can describe the distribution of welfare across the sentient beings of the universe, but to simply bunch it all up has essentially no meaning. In what way is a world with a billion very happy people any worse than a world with a trillion merely okay ones? I know which one I'd rather be born into! How can a world be worse for everyone individually yet somehow better, if the only meaning of welfare is that it is experienced by sentient beings to begin with?
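To put made-up numbers on that comparison: say "very happy" is 90 and "merely okay" is 10 on some welfare scale. Then the totals are

$$ 10^{9} \times 90 = 9\times10^{10} \qquad\text{vs}\qquad 10^{12} \times 10 = 10^{13}, $$

so a total view calls the trillion-person world over a hundred times better, even though every single person in it is far worse off. That aggregate is exactly the quantity I am saying has no experiential meaning.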
There is a terrorist who is going to release a very infectious virus which will infect all humans on Earth. The virus makes people infertile forever, thus effectively leading to human extinction, but it also makes people fully lose their desire to have children, and have much better self-assessed lives. Would it make sense to kill the terrorist? Killing the terrorist would worsen the lives of all humans alive, but it would also prevent human extinction.
It's moral because the terrorist is infringing the wishes of those people right now, and violating their self-determination. If the people decided to infect themselves, then it would be ok.
Genocide involves suffering, and suffering is bad, so I assume there would be a better option to maximise impartial welfare. For example, ASI could arguably persuade humans that their extinction was for the better, or just move everyone into a simulation without anyone noticing, and then shut down the simulation in a way that no suffering is caused in the process.
I disagree that the genocide is made permissible by making the death a sufficiently painless euthanasia. Sure, the suffering is an additional evil, but the killing is an evil unto itself. Honestly, consider where these arguments could lead in realistic situations and consider whether you would be okay with that, or whether you feel like relying on a circumstantial "well, but actually in reality this would always come out negative net utility due to the suffering" is protection enough. If you get conclusions like these from your ethical framework, it's probably a sign that it might have some flaws.
For example, replacing rocks with humans is fine, and so might be replacing humans with digital minds. However, the replacement process itself should maximise welfare, and I am very sceptical that "brutal colonization efforts" would be the most efficient way for ASI to perform the replacement.
Rocks aren't sentient; they don't count. And your logic still doesn't work. What if you can instantly vaporize everyone with a thermonuclear bomb, as they are all concentrated within the radius of the fireball? Death would then be instantaneous. Would that make it acceptable? Very much doubt it.
I just don't think it makes any sense to have an aggregated total measure of "welfare". We can describe the distribution of welfare across the sentient beings of the universe, but to simply bunch it all up has essentially no meaning.
I find it hard to understand this. I think a world with 10 billion happy people is better than a world with no people. I guess you disagree with this?
It's moral because the terrorist is infringing the wishes of those people right now, and violating their self-determination. If the people decided to infect themselves, then it would be ok.
I think respecting people's preferences is a great heuristic for doing good. However, I still endorse hedonic utilitarianism rather than preference utilitarianism, because it is possible for someone to have preferences which are not ideal for achieving their own goals. (As an aside, Peter Singer used to be a preference utilitarian, but is now a hedonistic utilitarian.)
Sure, the suffering is an additional evil, but the killing is an evil unto itself.
No killing is necessary given an ASI. The preferences of humans could be modified such that everyone is happy with ASI taking over the universe. In addition, even if you think killing without suffering is bad in itself (and note ASI may even make the killing pleasant to humans), do you think that badness would outweigh an arbitrarily large happiness?
Rocks aren't sentient; they don't count.
I think rocks are sentient in the sense that they have a non-null expected welfare range, but it does not matter because I have no idea how to make them happier.
What if you can instantly vaporize everyone with a thermonuclear bomb, as they are all concentrated within the radius of the fireball? Death would then be instantaneous. Would that make it acceptable? Very much doubt it.
No, it would not be acceptable. I am strongly against negative utilitarianism. Vaporising all beings without any suffering would prevent all future suffering, but it would also prevent all future happiness. I think the expected value of the future is positive, so I would rather not vaporise all beings.
I find it hard to understand this. I think a world with 10 billion happy people is better than a world with no people. I guess you disagree with this?
"No people" is a special case: even if one looks at, e.g., average utilitarianism, that's a division by zero. I think a universe with no sentient beings in it does not have a well-defined moral value: moral value only exists with respect to sentients, so without any of them, the categories of "good" or "bad" stop even making sense. But obviously any path from a universe with sentients to a universe without implies extermination, which is bad.
However, given an arbitrary number of sentient beings of comparable happiness, I don't think the precise number matters to how good things are, no. No one experiences all that good at once, hence 10 billion happy people are as good as 10 million, if they are indeed just as happy.
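To state that position with a formula (just to be explicit about what I am and am not aggregating): if everyone has the same welfare $u$, then

$$ \bar{u} \;=\; \frac{1}{N}\sum_{i=1}^{N} u_i \;=\; u \quad\text{for any } N \geq 1, $$

which does not depend on $N$ and is simply undefined at $N = 0$, whereas the total $\sum_i u_i = N u$ scales with $N$. It is that last step, multiplying by headcount, that I do not accept.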
because it is possible for someone to have preferences which are not ideal for achieving their own goals
I think any moral philosophy that leaves the door open to too much of "trust me, it's for your own good; even though it's not your preference, you'll enjoy the outcome far more" is ripe for dangerous derailments.
No killing is necessary given an ASI. The preferences of humans could be modified such that everyone is happy with ASI taking over the universe. In addition, even if you think killing without suffering is bad in itself (and note ASI may even make the killing pleasant to humans), do you think that badness would outweigh an arbitrarily large happiness?
Yes, because I don't care if the ASI is very, very happy; it still counts for one. I also don't think you can reasonably conceive of unbounded amounts of happiness felt by a single entity, so much as to compensate for all that suffering. Also try to describe to anyone "hey, what if a supercomputer that wanted to take over the universe brainwashed you to be ok with it taking over the universe", see their horrified reaction, and consider whether it makes sense for any moral system to reach conclusions that are obviously so utterly, instinctively repugnant to almost everyone.
I think rocks are sentient in the sense that they have a non-null expected welfare range, but it does not matter because I have no idea how to make them happier.
I'm… not even sure how to parse that. Do you think rocks have conscious experiences?
No, it would not be acceptable. I am strongly against negative utilitarianism. Vaporising all beings without any suffering would prevent all future suffering, but it would also prevent all future happiness. I think the expected value of the future is positive, so I would rather not vaporise all beings.
The idea was that the vaporization is required to free the land for a much more numerous and technologically advanced populace, who can then go on to live off its resources a much more leisurely life, with less hard work, less child mortality, less disease, etc. So you replace, say, 50,000 vaporized indigenous people living like hunter-gatherers with 5 million colonists living like we do now in the first world (and I'm talking new people, children that can only be born thanks to the possibility of expanding into that space). Does that make the genocide any better? If not, why? And how do those same arguments not apply to the ASI too?
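Just to spell out the ledger such a colonization argument appeals to, with invented welfare numbers: if a hunter-gatherer life scores 50 and a first-world life 70, the totals are

$$ 50{,}000 \times 50 = 2.5\times10^{6} \qquad\text{vs}\qquad 5{,}000{,}000 \times 70 = 3.5\times10^{8}, $$

so the total view declares the post-genocide world over a hundred times "better". My point is that an answer like that should make us distrust the ledger, not accept the genocide.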
No one experiences all that good at once, hence 10 billion happy people are as good as 10 million, if they are indeed just as happy.
This is the crux. Some questions:
Would 10^100 happy people be just as good as 1 happy person (assuming everyone is just as happy individually)?
Would 10^100 people being tortured be just as bad as 1 person being tortured (assuming everyone is feeling just as bad individually)?
Would you agree that a happy life of 100 years is just as good as a happy life of 1 year (assuming the annual happiness is the same in both cases)? If not, why is a person the relevant unit of analysis, and not a person-year?
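To make the last question concrete: writing $u_{i,t}$ for the welfare of person $i$ in year $t$, the aggregation I have in mind sums over person-years,

$$ W \;=\; \sum_{i}\sum_{t} u_{i,t}, $$

so a happy life of 100 years contributes 100 times as much as a happy life of 1 year at the same annual welfare. If instead the person is the unit, it is unclear to me why duration should count while the number of people does not.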
Would 10^100 happy people be just as good as 1 happy person (assuming everyone is just as happy individually)?
Hypothetically, yes, if we take it in a vacuum. I find the scenario unrealistic in any real-world circumstance, though, because people's happiness obviously tends to depend on having other people around, and also because any trajectory from the current situation that ended with there being only one person, happy or not, seems likely to be bad.
Would 10^100 people being tortured be just as bad as 1 person being tortured (assuming everyone is feeling just as bad individually)?
Pretty much the same reasoning applies. For "everyone has it equally good/bad" worlds, I don't think sheer numbers make a difference. What makes things more complicated is when inequality is involved.
Would you agree that a happy life of 100 years is just as good as a happy life of 1 year (assuming the annual happiness is the same in both cases)? If not, why is a person the relevant unit of analysis, and not a person-year?
I think length of life matters a lot; if I know I'll have just one year of life, my happiness is kind of tainted by the knowledge of imminent death, you know? We experience all of our life ourselves. For an edge scenario, there's one character in "Permutation City" (a Greg Egan novel) who is a mind upload and puts themselves into a sort of mental-state loop; after a period T their mental state maps exactly to itself and repeats identically, forever. If you considered such a fantastical scenario, then I'd argue the precise length of the loop doesn't matter much.
I strongly believe 10^100 people being tortured is much, much worse than 1 person being tortured (even assuming everyone is feeling just as bad individually). I do not know what more to say, but thanks for the chat!
I think the reason for those intuitions is that (reasonably enough!) we can't imagine there being 10^100 people without there also being a story behind that situation. A world in which e.g. some kind of entity breeds humans on purpose to then torture them, leading to those insane numbers, sounds indeed absolutely hellish! But the badness of it is due to the context; a world in which there exists only one person, and that person is being horribly tortured, is also extremely upsetting and sad, just in a different way and for different reasons (and all paths to there are also very disturbing; but we'll maybe think "at least everyone else just died without suffering as much", so it feels less horrible than the 10^100 humans torture world).
But my intuition on the situation alone is more along the lines of: imagine you know you're going to be born into this world. Would you like your odds? And in both the "one tortured human" and the "10^100 tortured humans" worlds, your odds would be exactly the same: a 100% chance of being tortured.
But all of these are just abstract thought experiments. In any realistic situation, torture worlds don't just happen: there is a story leading to them, and for any kind of torture world, that story is godawful. So in practice the two things can't be separated. I think it's fairly correct to say that in all realistic scenarios the 10^100 world will in practice be worse, or have a worse past, though both worlds would be awful.
I think the reason for those intuitions is that (reasonably enough!) we can't imagine there being 10^100 people without there also being a story behind that situation.
At least for me, that does not matter. I would always pick a world where 1 person is tortured over a world where 10^100 are tortured (holding the amount of torture per person constant), regardless of the past history. Here is another question. You are at the very beginning of the universe, so there is no past history, and you can either click one button which would create 10^100 people who would be tortured for 100 years, or another button which would create 1 person who would be tortured for 100 years. If you then had to pick one world to live in, as an individual, you would suffer the same in both worlds, as you would have a 100% chance of being tortured for 100 years either way. So you could conclude that which button you choose does not really matter. However, I certainly think such a choice would matter! I care not only about my own welfare, but also about that of others, so I would certainly pick the button leading to less total torture.
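In person-year terms (holding the intensity of the torture fixed), the two buttons differ by an astronomical factor:

$$ 10^{100} \times 100 \text{ years} = 10^{102} \text{ tortured person-years} \qquad\text{vs}\qquad 1 \times 100 \text{ years} = 100 \text{ tortured person-years}, $$

even though, from the inside, any given individual's experience is the same in either world.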
You said before that total utilitarianism is problematic because it can, at least in principle, lead one to endorse situations where a population is made extinct in order for its resources to be used more efficiently to produce welfare. However, average utilitarianism is way more problematic. It can, at least in principle, lead to a situation where the average welfare is kept constant, but we arbitrarily expand the amount of torture by increasing the number of beings. Even in practice this would be possible. For example, net global welfare accounting for animals may well be negative due to wild animal suffering (or even just farmed animal suffering; see this analysis), which means just replicating Earth's ecosystem on Earth-like planets across the universe may well be a way of expanding suffering (and the average suffering per being can be kept roughly the same for the sake of the thought experiment). If net global welfare is indeed negative, I would consider this super bad!
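As a sketch with made-up magnitudes: suppose Earth's net welfare is $-X$ (negative, driven by wild or farmed animal suffering) spread over $N$ beings. Replicating the ecosystem on $k$ Earth-like planets gives

$$ \text{average} \approx \frac{k\,(-X)}{k\,N} = \frac{-X}{N} \;\;(\text{unchanged}), \qquad \text{total} \approx k\,(-X) \;\;(k \text{ times as bad}), $$

so average utilitarianism is indifferent to the replication even as the total amount of suffering is multiplied by $k$.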
I don't know if it makes a lot of sense, because yes, in theory, from my viewpoint all "torture worlds" (N agents, all suffering the same amount of torture) are equivalent. I feel like that intuition is more right than just "more people = more torture". I would call them equally bad worlds, and if the torture is preternatural and inescapable I have no way of choosing between them. But I also feel like this is twisting ourselves into examples that are completely unrealistic, to the point of almost uselessness; it is no wonder that our theories of ethics break down, just as most physics does at a black hole singularity.