Nice post. :) My question “Human-inspired colonization of space will cause net suffering if it happens”, which I, Pablo, and you all answered, was worded poorly. I later rewrote it to be clearer: “Human-inspired colonization of space will cause more suffering than it prevents if it happens”. As he explains in his post, Pablo (a classical utilitarian) interpreted my original wording to refer to the net balance of happiness minus suffering, while I (a negative utilitarian) meant the net balance of suffering alone. Which way did you read it?
While Pablo gave 1% probability of more suffering than happiness, he gave 99% probability that suffering itself would increase, saying: “But maybe Brian meant that colonization will cause a surplus of suffering relative to the amount present before colonization. I think this is virtually certain; I’d give it a 99% chance.”
I interpreted it as the balance of happiness minus suffering.
Ok. :) For that question I might give a slightly lower than 50% chance that human-inspired space colonization would create more suffering than happiness (where the numerical magnitudes of happiness and suffering are as judged by a typical classical utilitarian). I think the default should be around 50% because, for a typical classical utilitarian, it seems unclear whether a random collection of minds contains more suffering or happiness. There are some scenarios in which a human-inspired future is either relatively altruistic with wide moral circles, or relatively egalitarian, such that even selfishness alone produces a significant surplus of happiness over suffering. However, there are also many possible futures in which a powerful few oppressively control a powerless many with little concern for their welfare. Such political systems were very common historically and are still widespread today. And there may also be situations analogous to today's animal suffering, in which most of the sentience that exists goes largely ignored.
The expected value of human-inspired space colonization may be less symmetric than this because it may be dominated by a few low-probability scenarios in which the future is very good or very bad, with very good futures plausibly being more likely.
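The point about expected value being dominated by rare extreme scenarios can be illustrated with a toy calculation (all probabilities and magnitudes here are made-up numbers for illustration, not estimates from this discussion): even when the mild outcomes are split roughly 50/50, the sign of the expected value is driven almost entirely by the tails.

```python
# Toy illustration with hypothetical numbers: each scenario is
# (probability, net happiness-minus-suffering in arbitrary units).
scenarios = [
    (0.49,  +1),     # mildly good future
    (0.49,  -1),     # mildly bad future
    (0.015, +1000),  # rare, very good future (e.g. wide moral circles)
    (0.005, -1000),  # rarer, very bad future (e.g. oppressive lock-in)
]

# Expected value over all scenarios; the two mild outcomes cancel,
# so the result is dominated by the two low-probability tails.
expected_value = sum(p * v for p, v in scenarios)
print(expected_value)  # dominated by the tail scenarios
```

On these assumed numbers the mild scenarios contribute nothing on net, while the tails contribute +15 and −5, so whether very good futures are in fact more likely than very bad ones is what settles the sign.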
Just to say that I would put very little weight on my responses in that post, many of which are highly unstable, and some of which I no longer endorse (including the 1% and 99% estimates quoted above). I hope to revise it soon, adding measures of resilience as Greg Lewis suggests here.