If we have advanced AI that is capable of constructing a digital human simulation, wouldn’t it also, by extension, be advanced enough to be conscious on its own, without the need for anything approximating human beings? I can imagine humans wanting to create copies of themselves for various purposes, but isn’t it much more likely for completely artificial, silicon-first entities to take over the galaxy? Those entities wouldn’t need any human pleasures and could thus conquer the universe much more efficiently than any “digital humans” ever could.
It does seem likely to me that advanced AI would have the capabilities needed to spread through the galaxy on its own. Where digital people might come in is that—if advanced AI systems remain “aligned” / under human control—digital people may be important for steering the construction of a galaxy-wide civilization according to human-like (or descended-from-human-like) values. It may therefore be important for digital people to remain “in charge” and to do a lot of work on things like reflecting on values, negotiating with each other, designing and supervising AI systems, etc.
If we get to a point where “digital people” are possible, can we expect to be able to tweak the underlying circuitry to eliminate pain and suffering altogether, creating “humans” incapable of experiencing anything but joy, no matter what happens to them? It’s really hard to imagine from a biological human perspective, but anything is possible in a digital world, and this wouldn’t necessarily make these “humans” any less productive.
“Tweaking the underlying circuitry” wouldn’t automatically be possible just as a consequence of being able to simulate human minds. But I’d guess the ability to do this sort of tweak would follow pretty quickly.
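One way to see why it isn’t automatic: a brute-force emulation just steps opaque neural state forward in time without labeling any of it. Here’s a toy Python sketch of that idea; every detail (the state vector, the tanh update rule, the dimensions) is my own illustrative assumption, not a claim about how an actual emulation would be built:

```python
import numpy as np

# Toy stand-in for a whole-brain emulation: an opaque vector of
# neural state updated by fixed "scanned" connection weights.
# Nothing in this loop exposes a labeled "pain" variable to edit.
rng = np.random.default_rng(0)
n_neurons = 1_000  # illustrative; a real brain has ~86 billion
weights = rng.normal(0, 1 / np.sqrt(n_neurons), (n_neurons, n_neurons))
state = rng.normal(size=n_neurons)  # "scanned" initial brain state

def step(state):
    """Advance the emulation one tick: faithful, but uninterpreted."""
    return np.tanh(weights @ state)

for _ in range(100):
    state = step(state)
```

To “eliminate suffering” here you’d first need to figure out which directions in that state space encode negative valence, which is an interpretability problem the emulation itself doesn’t solve. Though, as I said, I’d guess that problem gets solved fairly soon after emulation works.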
As a corollary, do we have a reason to believe that “digital humans” will want to experience anything in their “down time” other than 24/7 heroin-like euphoria, such as complex experiences like zero-g? Real-life humans cannot do that, as our bodies quickly break down from heroin exposure, but digital ones won’t have such arbitrary limitations.
I think a number of people (including myself) would hesitate to experience “24/7 heroin-like euphoria” and might opt for something else.
After reading your latest post on temporary copies, I’m thinking that this sort of circuitry tweak would quickly become the #1 priority for brain simulation research. As a real-life analogy, humans very quickly abandoned horses in favor of cars, since a tool that works 24/7 without complaint is much better than a temperamental living being. So the phase of copies being treated with dignity would be relatively short-lived, lasting only until the underlying circuitry could be tweaked to make it morally okay to force simulations to work 24/7 without them “suffering” in any way, as they would be incapable of negative emotion.
Now, allowing for unlimited tweaking of brain circuitry does make for bad science fiction (e.g., the MMAcevedo short story breaks down in a world where it’s possible), but I suspect it would be the ultimate endpoint for virtual workers.