Okay. But what then stops someone from creating an AGI that does? The game doesn’t end after one turn.
This seems extremely unlikely. Of the approaches I can think of, if I were an AGI, I'd just infect every device with malware and then control the entire flow of information. This could easily be done by reality-warping humans (giving them a false feed of information that completely distorts their model of reality) or simply by shutting them out; humans can't coordinate if they can't communicate on a mass scale. This would be really, really easy. Human society is incredibly fragile. We don't realize this only because we've never had a concerted, competent attempt to actually break it.
The current trend is that generalization is superior, and that will probably continue to be the case.
With the way current models are designed, this seems extremely unlikely.
This also seems unlikely, given how many have tried and that we're still nowhere close to solving it.
This is the most plausible of these. But the scale of the tragedy might still be extremely high: breakdown of communication, loss of supply chains, mass starvation, economic collapse, international warfare, etc. Even if it's not extinction, I'm not sure how many shocks current civilization can endure.
This could maybe slow it down, but not for long. I imagine there are far more efficient ways of running an AGI, and someone would eventually figure out how to implement them.
Have you heard of this thing called “all of human history before the computer age”? Human coordination and civilization do not require hackable devices to operate. This plan would be exposed in 10 seconds flat as soon as people started talking to each other and comparing notes.
In general, I think the main issue is a ridiculously overinflated idea of what "AGI" will actually be capable of. When something doesn't exist yet, it's easy to imagine it as having no flaws. But that invariably turns out not to be the case.
Yeah, once upon a time, but now our civilization is interconnected and dependent on the computer age. And they wouldn't even realize they needed to coordinate.
This plan would be exposed in 10 seconds flat as soon as people started talking to each other and comparing notes.
How would they do that? They’ve lost control of communication. Expose it to who?
This sounds like summoning Godzilla to fight MechaGodzilla: https://www.lesswrong.com/posts/DwqgLXn5qYC7GqExF/godzilla-strategies