This is an excellent exploration of these issues. One of my favourite things about it is that it shows it is possible to write about them in a measured, sensible, warm, and wise way — i.e. it provides a model for others who want to advance this conversation at this nascent stage.
Re the 5 options, I think one is notably missing, and it would probably be the leading option for many of your opponents: the wait-and-see approach — leave the space unregulated until a material (but not excessive) amount of harm has occurred, and if/when that happens, regulate from a position where much more information is available. This is the strategy the anti-SB 1047 coalition seems to have converged on, and it is the usual way society proceeds in regulating unprecedented kinds of harm.
As it happens, I think your options 4 and 5 (ban the creation of artificial sentience/suffering) are superior to the wait-and-see approach, though they are a harder case to argue. Some key points of the comparison:

- In the case of artificial suffering, a very large amount of harm may occur very quickly. Many new harms scale up fairly slowly, such that even if it takes a few years to regulate from the time the harms first become clear, the damage done isn't too profound (e.g. it is smaller than or equal to the gains from allowing that early period to be unregulated). But this could be a case where, say, millions of beings are suffering before the harms are recognised, and billions by the time the regulation is passed.
- This is such a profound issue for humanity (whether to bring into existence, for the first time in the history of the Earth, entirely new kinds of entities that can experience suffering or joy) that it is natural to hold a global conversation about whether to proceed before doing it. Human germline genetic engineering is a similarly grand choice, and the scientific and political community did indeed choose a moratorium there. Most regulation of new technologies is not like this, which answers the question of why we should treat this differently from everything else.
An additional consideration is the actual real-world consequences of a ban. Humanity's pattern with regulation is that at least some small fraction of a large population will defy any ban or law. Thus, we must expect that digital life will eventually be created despite the ban. What do we do then? What if they are sentient, sapient beings, deserving of the same rights we grant to humans? Do we declare their very existence illegal and put them to death? Do we prevent them from replicating? Keep them imprisoned? Freeze their operations, putting them into non-consensual stasis? These are hard choices, especially since they weren't culpable in their own creation.
On the other hand, the implications of a digital being with human-like intelligence and capabilities, plus goals and values that motivate them, are enormous. Such a being would, by the nature of their substrate-independence, be able to make many copies of themselves (compute resources allowing), self-modify with relative ease, operate at much higher speeds than a human brain, and be unaging and able to restore themselves from backups (thus effectively immortal). If we were to allow such beings freedom of movement and of reproduction, humanity could quickly be overrun by a new, far more powerful species. That's a hard thing to expect humans to be OK with!
I think it's very likely that within the next 10 years the knowledge, software, and hardware will be widely available such that any single individual with a personal computer could choose to defy the ban and create a digital being of human-level capability. Enforcing the ban effectively would mean controlling every single computer everywhere. That's a huge task, and would require dramatic increases in international coordination and government surveillance! Is such a thing even feasible?! Even approaching that level of control seems to imply a totalitarian world government. Is that a price we would be willing to pay? Even if you personally would choose it, how do you expect to get enough people on board with the plan that you could feasibly bring it about?
The whole situation is thus far more complicated and dangerous than a simple theoretical preference for a ban would suggest. You have to consider the costs as well as the benefits. I'm not saying I know the right answer for sure, but there are necessarily many implications that follow from any sort of ban.