The first thing to note is that we haven’t killed all the ants. We haven’t even tried. We kill ants only when they are inconvenient to our purposes. There is an argument that any AGI would always kill us all in order to tile the universe or whatever, but this is unproven and, IMO, false, for reasons I will explore in an upcoming post.
Secondly, we cannot communicate with ants. If we could, we could actually engage in mutually beneficial trade with them, as this post notes.
But the most important difference between the ant situation and the AI situation is that the ants didn’t design our brains. Imagine if ants had managed to program our brains in such a way that we found ants as cute and loveable as puppies, and found causing harm to ants to be as painful as touching a hot stove. Would ants really have much to fear from us in such a world? We might kill some ants when it was utterly and completely necessary, but mostly we would just do our own thing and leave the ants alone.
I recognise that the brains of any AI will have been designed by humans, but the gap in puissance between humans and the type of AGI imagined and feared by people in EA (as outlined in this blog post, for example) is so extreme that the fact of our having designed the AGI doesn’t seem hugely relevant.
Like if a colony of ants arranged its members to spell out in English “DONT HURT US WE ARE GOOD” humans would probably be like huh, wild, and for a few days or weeks there would be a lot of discussion about it, and vegans would feel vindicated, and Netflix would greenlight a ripoff of the Bachelor where the bachelor was an ant, but in general I think we would just continue as we were and not take it very seriously. Because the ants would not be communicating in a way that made us believe they were worthy of being taken seriously. And I don’t see why it would be different between us and an AGI of the type described at the link above.
The thing is, the author of that post kind of agrees with you. Elsewhere he has put the probability of AI extinction at 1, and is desperately trying to come up with any way to prevent it.
On the other hand, I think the model of AI put forward in that post is absurdly unlikely, and that the risk of AI extinction is orders of magnitude lower. AI will not be a single-minded fanatical utilitarian focused on a fixed goal, and is likely to absorb at least a little bit of our values.
Oh, no, to be clear I find the post extremely unpersuasive—I am interested in it only insofar as it seems to represent received wisdom within the EA community.