I don’t understand the point about the complexity of value being greater than the complexity of suffering (or disvalue). Can you possibly motivate the intuition here? It seems to me like I can reverse the complex valuable things that you name and get their “suffering equivalents” (e.g. friendship → hostility, happiness → sadness, love → hate, etc.), and they don’t feel significantly less complicated.
I don’t know exactly what it means for these things to be less complex; I’m imagining something like writing a Python program that simulates the behaviour of two robots in a way that is recognisable to many people as “friends” or “enemies” and measuring the length of the program.
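To make the thought experiment a bit more concrete, here is a minimal sketch of what I have in mind. The specific behaviours (sharing vs. taking energy) are just illustrative assumptions on my part, not a serious complexity measure; the point is only that the “friendly” and “hostile” programs come out roughly the same length.

```python
import inspect

def simulate_friends(steps=10):
    a, b = 10, 2  # energy levels of robots A and B
    for _ in range(steps):
        if a > b:
            a, b = a - 1, b + 1  # the better-off robot shares energy
        elif b > a:
            a, b = a + 1, b - 1
    return a, b

def simulate_enemies(steps=10):
    a, b = 10, 2
    for _ in range(steps):
        if a > b:
            a, b = a + 1, b - 1  # the stronger robot takes energy from the weaker
        elif b > a:
            a, b = a - 1, b + 1
    return a, b

# Crude proxy for "complexity": the length of each program's source code.
print(len(inspect.getsource(simulate_friends)))
print(len(inspect.getsource(simulate_enemies)))
```

On this (admittedly crude) proxy, the two programs are essentially mirror images, which is why the asymmetry claim puzzles me.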
It’s not that there aren’t similarly complex reverses; it’s that there’s a type of bad that basically everyone agrees can be extremely bad, namely extreme suffering, and there’s no (or much less) consensus on a good of similar complexity that can be as good as extreme suffering is bad. For example, many would discount pleasure/joy on the basis of false beliefs, like being happy that your partner loves you when they actually don’t, whether because they just happen not to love you and are deceiving you, or because they’re a simulation with no feelings at all. Extreme suffering wouldn’t get discounted (much) if it were based on inaccurate beliefs.
A torturous solipsistic experience machine is very bad, but a happy solipsistic experience machine might not be very good at all, if people’s desires aren’t actually being satisfied and they’re only deceived into believing they are.