“Bob: agree, to make lots of suffering, it needs pretty human-like utility functions that lead to simulations or making many sentient beings.”
I’m pretty sure this is false. Superintelligent singletons that don’t specifically disvalue suffering will make lots of it (relative to the current amount, i.e. one planetful) in pursuit of other ends. (They’ll make ancestor simulations, for example, for a variety of reasons.) The amount of suffering they’ll make will be far less than the theoretical maximum, but far more than what e.g. classical utilitarians would do.
If you disagree, I’d love to hear that you do—because I’m thinking about writing a paper on this anyway, it will help to know that people are interested in the topic.
And I think normal humans, if given command of the future, would make even less suffering than classical utilitarians.
Can you elaborate on this?
Sure, sorry for the delay.
The ways that I envision suffering potentially happening in the future are these:
—People deciding that obeying the law and respecting the sovereignty of other nations is more important than preventing the suffering of people inside them
—People deciding that doing scientific research (simulations are an example of this) is well worth the suffering of the people and animals experimented on
—People deciding that the insults and microaggressions that affect some groups are not as bad as the inefficiencies that come from preventing them
—People deciding that it’s better to have a few lives without suffering than many, many lives with suffering (even when the many lives are all still, all things considered, good)
—People deciding that AI systems should be designed in ways that make them suffer in their daily jobs, because it’s most efficient that way.
Utilitarianism comes down pretty strongly in favor of these decisions, at least in many cases. My guess is that in post-scarcity conditions, ordinary people will be more inclined to resist these decisions than utilitarians will. The big exception is the sovereignty case; there I think utilitarians would cause less suffering than average humans would. But those cases will only arise for a decade or so and will be relatively small-scale.