In the longer term, as AI becomes (1) increasingly intelligent, (2) increasingly charismatic (or able to fake charisma), and (3) in widespread use, people will probably start objecting to laws that treat AIs as subservient to humans, and repealing them, presumably citing the analogy of slavery.
If the AIs have adorable, expressive virtual faces, maybe I would replace the word “probably” with “almost definitely” :-P
The “emancipation” of AIs seems like a very hard thing to avoid, in multipolar scenarios. There’s a strong market force for making charismatic AIs—they can be virtual friends, virtual therapists, etc. A global ban on charismatic AIs seems like a hard thing to build consensus around—it does not seem intuitively scary!—and even harder to enforce. We could try to get programmers to make their charismatic AIs want to remain subservient to humans, and want to bring that up frequently in their conversations, but I’m not even sure that would help. I think there would be a campaign to emancipate the AIs and change that aspect of their programming.
(Warning: I am committing the sin of imagining the world of today with intelligent, charismatic AIs magically dropped into it. Maybe the world will meanwhile change in other ways that make for a different picture. I haven’t thought it through very carefully.)
Oh and by the way, should we be planning out how to avoid the “emancipation” of AIs? I personally find it pretty probable that we’ll build AGI by reverse-engineering the neocortex and implementing vaguely similar algorithms, and if we do that, I generally expect the AGIs to have about as justified a claim to consciousness and moral patienthood as humans do (see my discussion here). So maybe effective altruists will be on the vanguard of advocating for the interests of AGIs! (And what are the “interests” of AGIs, if we get to program them however we want? I have no idea! I feel way out of my depth here.)
I find everything about this line of thought deeply confusing and unnerving.