[reposting my comments from the thread on https://forum.effectivealtruism.org/posts/9adaExTiSDA3o3ipL/we-should-prevent-the-creation-of-artificial-sentience ]
I wrote a post expressing my own opinions related to this, and citing a number of further posts also related to this. Hopefully those interested in the subject will find this a helpful resource for further reading: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy
In my opinion, we are going to need digital people in the long term for humanity to survive. Otherwise, we will be overtaken by AI, because substrate-independence, and the self-improvement it enables, are advantages too powerful to forgo. But I definitely agree that it's something we shouldn't rush into, and should approach with great caution to avoid creating an imbalance of suffering.
An additional consideration is the actual real-world consequence of a ban. Humanity's pattern with regulation is that at least some small fraction of a large population will defy any ban or law. Thus, we must expect that digital life will eventually be created despite the ban. What do we do then? What if they are a sentient, sapient being, deserving of the same rights we grant to humans? Do we declare their very existence illegal and put them to death? Do we prevent them from replicating? Keep them imprisoned? Freeze their operations, putting them into non-consensual stasis? These are hard choices, especially since they weren't culpable in their own creation.
On the other hand, the power of a digital being with human-like intelligence and capabilities, plus goals and values that motivate them, is enormous. By the nature of their substrate-independence, such a being would be able to make many copies of themselves (compute resources allowing), self-modify with relative ease, operate at much higher speeds than a human brain, and restore themselves from backups, making them unaging and effectively immortal. If we were to allow such a being freedom of movement and of reproduction, humanity could quickly be overrun by a new, far more powerful species. That's a hard thing to expect humans to be OK with!
I think it's very likely that within the next ten years, the knowledge, software, and hardware will be widely available enough that any single individual with a personal computer could choose to defy the ban and create a digital being of human-level capability. Enforcing the ban effectively would then mean controlling every single computer everywhere. That's a huge task, and would require dramatic increases in international coordination and government surveillance. Is such a thing even feasible? Even approaching that level of control seems to imply a totalitarian world government. Is that a price we would be willing to pay? And even if you personally would choose to pay it, how do you expect to get enough people on board to feasibly bring it about?
The whole situation is thus far more complicated and dangerous than simply being theoretically in favor of a ban. You have to consider the costs as well as the benefits. I'm not saying I know the right answer for sure, but many implications necessarily follow from any sort of ban.
You're really getting ahead of yourself. We can ban this stuff today and deal with the situation as it is, not as your abstract model projects it will be. This is a huge problem with EA thinking on this matter: taking for granted a bunch of things that haven't happened, and convincing yourself they are inevitable, instead of dealing with the situation we are actually in, where none of that has happened and may never happen, either because it was never going to, or because we prevented it.