I recognise that the brains of any AI will have been designed by humans, but the gap in puissance between humans and the type of AGI imagined and feared by people in EA (as outlined in this blog post, for example) is so extreme the fact of us having designed the AGI doesn’t seem hugely relevant.
Like if a colony of ants arranged its members to spell out in English “DONT HURT US WE ARE GOOD,” humans would probably be like huh, wild, and for a few days or weeks there would be a lot of discussion about it, and vegans would feel vindicated, and Netflix would greenlight a ripoff of The Bachelor where the bachelor was an ant, but in general I think we would just continue as we were and not take it very seriously. Because the ants would not be communicating in a way that made us believe they were worthy of being taken seriously. And I don’t see why it would be different between us and an AGI of the type described at the link above.
The thing is, the author of that post kind of agrees with you. In other places he has put the probability of AI extinction at 1, and is desperately trying to come up with any way to prevent it.
On the other hand, I think the model of AI put forward in that post is absurdly unlikely, and the risk of AI extinction is orders of magnitude lower. AI will not be a single-minded, fanatically utilitarian agent focused on a fixed goal, and it is likely to absorb at least a little bit of our values.
Oh, no, to be clear I find the post extremely unpersuasive—I am interested in it only insofar as it seems to represent received wisdom within the EA community.