Such lives wouldn’t be human or even “lives” in any real, biological sense, and so yes, I consider them to be of low value compared to biological sentient life (humans, other animals, even aliens should they exist). These “digital persons” would be AIs, machines, with some heritage from humanity, yes, but let’s be clear: they aren’t us. To be human is to be biological, mortal, and Earthbound—those three things are essential traits of Homo sapiens. If those traits aren’t there, one isn’t human, but something else, even if one was once human. “Digitizing” humanity (or even the entire universe, as suggested in the Newberry paper) would be destroying it, even if it is an evolution of sorts.
If there’s one issue with the EA movement that I see, it’s that our dreams are far too big. We are rationalists, but our ultimate vision for the future of humanity is no less esoteric than the visions of Heavens and Buddha fields written by the mystics—it is no less a fundamental shift in consciousness, identity, and mode of existence.
Am I wrong for being wary of this on a more than instrumental level (I would argue that even Yudkowsky’s objections are merely instrumental, centered on x- and s-risk alone)? I mean, what would be suboptimal about a sustainable, Earthen existence for us and our descendants? Is it just the numbers (can the value of human lives necessarily be measured mathematically, much less in numbers)?