The universe can probably support a lot more sentient life if we convert everything that we can into computronium (optimized computing substrate) and use it to run digital/artificial/simulated lives, instead of just colonizing the universe with biological humans. To conclude that such a future doesn’t have much more potential value than your 2010 world, we would have to assign zero value to such non-biological lives, or value each of them much less than a biological human, or make other very questionable assumptions. The Newberry 2021 paper that Vasco Grilo linked to has a section about this:
If a significant fraction of humanity’s morally-relevant successors were instantiated digitally, rather than biologically, this would have truly staggering implications for the expected size of the future. As noted earlier, Bostrom (2014) estimates that 10^35 human lives could be created over the entire future, given known physical limits, and that 10^58 human lives could be created if we allow for the possibility of digital persons. While these figures were not intended to indicate a simple scaling law, they do imply that digital persons can in principle be far, far more resource efficient than biological life. Bostrom’s estimate of the number of digital lives is also conservative, in that it assumes all such lives will be emulations of human minds; it is by no means clear that whole-brain emulation represents the upper limit of what could be achieved. For a simple example, one can readily imagine digital persons that are similar to whole-brain emulations, but engineered so as to minimise waste energy, thereby increasing resource efficiency.
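To make the implied efficiency gap concrete, here is the back-of-the-envelope ratio of the two estimates above (my own arithmetic, not a figure quoted from the paper):

$$\frac{10^{58}}{10^{35}} = 10^{23}$$

In other words, taking Bostrom’s numbers at face value, a digital future could support roughly twenty-three orders of magnitude more lives from the same physical resources than a purely biological one.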
Such lives wouldn’t be human or even “lives” in any real, biological sense, and so yes, I consider them to be of low value compared to biological sentient life (humans, other animals, even aliens should they exist). These “digital persons” would be AIs, machines, with some heritage from humanity, yes, but let’s be clear: they aren’t us. To be human is to be biological, mortal, and Earthbound—those three things are essential traits of Homo sapiens. If those traits aren’t there, one isn’t human, but something else, even if one was once human. “Digitizing” humanity (or even the entire universe, as suggested in the Newberry paper) would be destroying it, even if it is an evolution of sorts.
If there’s one issue with the EA movement that I see, it’s that our dreams are far too big. We are rationalists, but our ultimate vision for the future of humanity is no less esoteric than the visions of Heavens and Buddha fields written by the mystics—it is no less a fundamental shift in consciousness, identity, and mode of existence.
Am I wrong for being wary of this on a more than instrumental level (I would argue that even Yudkowsky’s objections are merely instrumental, centered on x- and s-risk alone)? I mean, what would be suboptimal about a sustainable, Earthen existence for us and our descendants? Is it just the numbers (and can the value of human lives really be measured mathematically, much less reduced to numbers)?