Thanks for the post! If lazy solutions reduce suffering by reducing consciousness, they also reduce happiness. So, for example, a future civilization optimizing for very alien values relative to what humans care about might not have much suffering or happiness (if you don’t think consciousness is useful for many things; I think it is), and the net balance of welfare would be unclear (even relative to a typical classical-utilitarian evaluation of net welfare).
Personally I find it very likely that the long-run future of Earth-originating intelligence will optimize for values relatively alien to human values. This has been the historical trend whenever one dominant life form replaces another. (Human values are relatively alien to those of our fish ancestors, for example.) The main way out of this conclusion is if humans’ abilities for self-understanding and cooperation make our own future evolution an exception to the general trend.
Thanks Brian!
I think you are describing two scenarios:
1. Post-humans will become something completely alien to us (e.g. mindless outsourcers). In this case, arguments that these post-humans will not have negative states equally imply that they won't have positive states, so we might expect some (perhaps very strong) regression towards neutral moral value.
2. Post-humans will have capacities and values that are influenced by current humans' values. In this case, it seems likely that these post-humans will have good lives (at least as measured by our current values).
This still seems asymmetric to me – as long as you assign some positive probability to scenario (2), isn't the expected value greater than zero?
I think maybe what I had in mind with my original comment was something like: “There’s a high probability (maybe >80%?) that the future will be very alien relative to our values, and it’s pretty unclear whether alien futures will be net positive or negative (say 50% for each), so there’s a moderate probability that the future will be net negative: namely, at least 80% × 50% = 40%.” This is a statement about P(future is positive), but probably what you had in mind was the expected value of the future, counting the IMO unlikely scenarios where human-like values persist. Relative to the values of many people on this forum, that expected value does seem plausibly positive, though there are many scenarios where the future could be strongly and not just weakly negative. (Relative to my values, almost any scenario where space is colonized is likely negative.)
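To make the distinction concrete, here is a toy version of that calculation. The probabilities are the ones above (80% alien future, split 50/50 on sign, 20% chance that human-like values persist); the welfare magnitudes +1, −1, and +10 are purely illustrative assumptions, not anything either of us has argued for:

$$
\mathbb{E}[V] \;=\; \underbrace{(0.8)(0.5)}_{\text{alien, net positive}}(+1) \;+\; \underbrace{(0.8)(0.5)}_{\text{alien, net negative}}(-1) \;+\; \underbrace{(0.2)}_{\text{human-like values}}(+10) \;=\; 0.4 - 0.4 + 2 \;=\; 2 > 0,
$$

so the expected value can come out positive even though $P(\text{future is net negative}) \ge 40\%$. But if the bad alien outcomes can be strongly rather than weakly negative (say −10 instead of −1), the same calculation gives $0.4 - 4 + 2 = -1.6$, which is why the distribution over how negative those scenarios are does most of the work.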