I think one concrete complexity-increasing ingredient that many (but not all) people would want in a utopia is for one's interactions with other minds to be authentic – that is, to involve the right kind of "contact with reality."
So, something that would already seem significantly suboptimal (to some people at least) is a scenario with lots of private experience machines where everyone is living a varied and happy life, but everyone's life follows pretty much the same template and the other characters in one's simulation aren't genuine, in the sense that they don't exist independently of one's interactions with them. (That is, the simulation is solipsistic: the other characters may be computed to give the most exciting responses to you, but their memories from "off-screen time" are fake.) While this scenario would already be a step up from "rats on heroin" or "brains in a vat with their pleasure hotspots wireheaded," it's still probably not the type of utopia many of us would find ideal. Instead, as social creatures who value meaning, we'd want worlds (whether simulated/virtual or not doesn't seem to matter) where the interactions we have with other minds are genuine – where these other minds aren't just characters programmed to react to us, but real minds with real memories and "real" (insofar as this is a coherent concept) choices. Utopian world setups that allow for this sort of "contact with reality" presumably cannot be packed too tightly with sentient minds.
Dystopias, by contrast, can be packed tightly. For a dystopia, it matters less whether it is repetitive, lacking in options/freedom, or solipsistic in some respects. (If anything, those features can make a particular dystopia more horrifying.)
To summarize, here’s an excerpt from my post on alignment researchers arguably having a comparative advantage in reducing s-risks:
Asymmetries between utopia and dystopia. It seems that we can "pack" more bad things into dystopia than we can "pack" good things into utopia. Many people presumably value freedom, autonomy, and some kind of "contact with reality." The opposites of these values are easier to implement and easier to stack together: dystopia can be repetitive, solipsistic, lacking in options/freedom, etc. For these reasons, it feels like there's at least some type of asymmetry between good things and bad things – even if someone were to otherwise see them as completely symmetric.