The problem with “anthropomorphic AI” approaches is twofold:

- The human mind is complicated and poorly understood.
- Safety degrades fast with respect to errors.
Let’s say you are fairly successful: you produce an AI that is, in the space of all possible minds, very close to a human mind. Specifically, a mind that wouldn’t be particularly out of place at a mental institution. It can produce paranoid ravings about the shapeshifting lizard conspiracy millions of times faster than any biological human.
Ok, so you make it a bit smarter. The paranoid conspiracies get more complicated and somewhat more plausible. At some point, it is sane enough to attempt AI research and produce useful results, but its alignment plan is totally insane.
In order to be useful, an anthropomorphic AI approach needs to do more than make an AI that thinks similarly to humans. It needs to be able to target the more rational, smart, and ethical portion of mind space.
Human AI labs can chuck out the odd insane person, because sane people are more common and tend to think faster. Likewise, a team of humans can stop any one of their number from crowning themselves world king.
In reality, I think your anthropomorphic AI approach gets you an AI that is arguably humanlike in some ways, and that takes over the world, because it didn’t resemble the right parts of the right humans closely enough in the ways that matter.