It is not necessarily bad for future sentients to be different. However, it is bad for them to be devoid of properties that make humans morally valuable (love, friendship, compassion, humor, curiosity, appreciation of beauty...). The only definition of “good” that makes sense to me is “things I want to happen” and I definitely don’t want a universe empty of love. A random UFAI is likely to have none of the above properties.
For the sake of argument I will start with your definition of good and add that what I want to happen is for all sentient beings to be free from suffering, or for all sentient beings to be happy (personally I don’t see a distinction between these two propositions, but that is a topic for another discussion).
Being general in this way allows me to let go of my attachment to specific human qualities I think are valuable. Considering how different most people’s values are from my own, and how different my needs are from Julie’s (my canine companion), I think our rationality and imagination are too limited for us to know what will be good for more evolved beings in the far future.
A slightly better, though still far from complete, definition of “good” (in my opinion) would run along the lines of: “what is happening is what the beings it is happening to want to happen”. A future world may be one that is completely devoid of all human value and still be better (morally and in many other ways) than the current world. At least better for the beings living in it. In this way even happiness, or lack of suffering, can be tossed aside as mere human endeavors. John Stuart Mill famously wrote:
“It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question.”
And compared with the Super-Droids of tomorrow, we are the pigs...
Hi Uri, thanks for the thoughtful reply!

If your only requirement is for all sentient beings to be happy, you should be satisfied with a universe completely devoid of sentient beings, since an empty universe satisfies that requirement vacuously. However, I suspect you wouldn’t be (?)
Regarding the definition of good, it’s pointless to argue about definitions; we should only make sure both of us know what each word we use means. So, let’s define “koodness(X)” to mean “the extent to which things X wants to happen actually happen” and “gudness” to mean “the extent to which what is happening to all beings is what they want to happen” (although the latter notion requires clarification: how do we average between the beings? Do we take non-existing beings into account? How do we define “happening to X”?).
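To make the averaging question concrete, here is one purely illustrative way gudness could be formalized (the set $B$, the satisfaction function $s$, and the choice of a simple mean are my own assumptions here, not anything either of us has committed to):

$$\text{gudness} = \frac{1}{|B|} \sum_{x \in B} s(x)$$

where $B$ is the set of currently existing beings and $s(x) \in [0, 1]$ is the extent to which what is happening to $x$ is what $x$ wants to happen. Whether $B$ should include potential beings, and how to delimit “what is happening to $x$”, are exactly the clarifications still needed.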
So, by the definition of koodness, I want the future world to have high koodness(Squark). I also want the future world to have high gudness, among other things (that is, gudness is a component of koodness(Squark)).
I disagree with Mill. It is probably better for a human being not to become a pig, in the sense that a human being prefers not to become a pig. However, I’m not at all convinced that a pig prefers to become a human being. Certainly, I wouldn’t want to become a “Super-Droid” if it came at the cost of losing my essential human qualities.