Any atom that isn’t being used in service of the AI’s goal could instead be used in service of the AI’s goal. Which particular atoms are easiest to access isn’t relevant; it will just use all of them.
My point is that the immediate cause of death for humans will most likely not be that the AI wants to use human atoms in service of its goals, but that the AI wants to use the atoms that make up survival-relevant infrastructure to build something, and humans die as a result of that (and their atoms may later be used for something else). Perhaps a practically irrelevant nitpick, but I think this mistake can make AI risk worries less credible among some people (including myself).
It depends on takeoff speed. I’ve always imagined the “atoms…” thing in the context of a fast takeoff, where, say, the Earth is converted to computronium by nanobot swarms / grey goo in a matter of hours.
Hmm yeah, good point; I assign [EDIT: fairly but not very] low credence to takeoffs that fast.
Why?
Seems to me like a thing that’s hard to be confident about. Misaligned AGI will want to kill humans because we’re potential threats (e.g., we could build a rival AGI), and because we’re using matter and burning calories that could be put to other uses. It would also want to use the resources that we depend on to survive (e.g., food, air, water, sunlight). I don’t understand the logic of fixating on exactly which of these reasons is most mentally salient to the AGI at the time it kills us.
I’m not confident, sorry for implying otherwise.
After this discussion (and especially based on Greg’s comment), I would revise my point as follows:
The AI might kill us because 1) it sees us as a threat (most likely), 2) it uses up our resources/environment for its own purposes (somewhat likely), or 3) it converts all matter into whatever it deems useful almost instantly (seems less likely to me, but not implausible).
I think common framings typically omit point 2, and overemphasize and overdramatize point 3 relative to point 1. We should fix that.
Is this an overly pedantic nitpick? If you’re making claims that strongly violate most people’s priors, it’s not sufficient to be broadly correct. People will look at what you say and spot-check your reasoning. If the spot-check fails, they won’t believe what you’re saying, and it doesn’t matter if the spot-check concerns a practically irrelevant detail, as long as they perceive the detail to be sufficiently important to the overall picture.
I also have a bit of an emotional reaction along the lines of: Man, if you go around telling people how they personally are going to be killed by AGI, you better be sure that your story is correct.