If you think it’s like a nuclear weapon but better (less indiscriminate, potentially offering a defense against the nukes of rival countries), what choice do you have? Notably, Israel has a nuclear arsenal for precisely this reason, and now they see a need to get the next weapons upgrade.
I’m skeptical of it offering an effective defense against other countries’ AIs, which is where that reasoning breaks down.
Can you elaborate? Note that geopolitically Israel doesn’t need to beat superpowers. It has hostile neighbors it is concerned about.
It is also a small, isolated country with a low population and few natural resources, locked in an endless low-level war over a small amount of land. So it needs high-value industries to survive, and getting a share of the possible near-future AI boom is a way to do that. Intel owns an Israeli company, Habana Labs, whose inference accelerator is competitive, and there is also Mobileye.
So it needs AI as a weapon to defend itself against the AIs of Syria, Egypt, Iran, and other nearby threats. And it needs it as a revenue source, to afford the endless weapons purchases required to deal with lower-level attacks.
What is your disagreement and how do you know it’s a valid reason?
I suspect the offence-defence balance massively favours the attacker, and so if AI is widely distributed we’re all screwed.
I think you’re right about it favoring offense. Why would we be screwed?
Are you thinking of a situation where superpowers with reasonably stable governments have massive arsenals of AI-built, AI-guided weapons?
(What is in those arsenals depends on future tech, but I sort of abstractly imagine endless rows of automated single-engine stealth fighter-bombers whose munitions usually drop clouds of suicide drones. The key thing is that embedded AI handles the piloting, and general-purpose robots made all the weapons and parts, allowing an enormous arsenal for a modest cost. Human officers select the target areas to attack or defend and the rules of engagement, and human-written software restricts the AI models to those rules at a low level. For example, the embedded arming controller in a munition must believe it is in a target area and have the correct code from an OTP generated by a hardware key in the weapons console, or it will not be able to deploy.)
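That arming-controller idea can be sketched in a few lines. This is a hypothetical illustration, not any real weapons interface: the function names, the bounding-box geofence, and the use of RFC 4226 HOTP for the console code are all my assumptions. The point is just that release requires two independent checks, position and a code derived from a hardware key.

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def may_arm(lat, lon, box, code, secret, counter) -> bool:
    """Hypothetical arming gate: both the geofence check and the
    console-supplied OTP code must pass, or the munition stays inert."""
    lat_min, lat_max, lon_min, lon_max = box
    in_area = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    code_ok = hmac.compare_digest(code, hotp(secret, counter))
    return in_area and code_ok
```

A real controller would use signed position fixes and tamper-resistant key storage rather than a plain bounding box, but the AND of "where am I" and "who authorised this" is the core of the restriction described above.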
The main idea is that while a smaller country or terrorist group may be able to attack and do damage (same concept as nuclear terrorism), they will be wiped out by the reprisal.
Is that how you see it, or something else?
I believe that AI is much more subject to proliferation than nukes or even bioweapons. And when it proliferates widely enough, we can’t rely on mutually assured destruction to dissuade actors.
Ok. That’s correct; I am sure you are aware of how MAD breaks down when there are more than 2 nuclear-armed factions.
On the other hand, I don’t know if your assumptions about the lack of defense are correct. You could: (1) have embodied general AI; (2) invoke many temporary instances and drive robots to build robots; (3) with the expanded industrial capacity, manufacture bunkers and space suits; (4) use air gaps as often as possible, assuming anything can be hacked.
We can do this now; the delta is scale. There aren’t the resources to dig enough bunkers and equip them for the whole population of the Western world, and space suits have a similar problem.
The bunkers protect against drone swarms and nukes; the space suits (with people living most of the time in the bunkers) protect against bioweapons and hostile airborne nanotech.
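The "robots build robots" point is really an arithmetic claim, so here is a toy model of it. Every figure is invented for illustration: a self-replicating fleet first spends some years only copying itself, then digs bunker capacity at a fixed per-robot rate.

```python
def years_to_shelter(people, initial_robots, capacity_per_robot_year,
                     growth=2.0, replicate_years=0):
    """Toy model: robots spend `replicate_years` only replicating (the
    fleet multiplies by `growth` each year), then all dig bunker capacity
    at `capacity_per_robot_year` people per robot per year."""
    robots = initial_robots * growth ** replicate_years
    sheltered, years = 0.0, replicate_years
    while sheltered < people:
        sheltered += robots * capacity_per_robot_year
        years += 1
    return years

# With a fixed fleet of 100k robots, sheltering a billion people takes
# a century; ten doublings first, and the whole job finishes in ~11 years.
slow = years_to_shelter(1e9, 1e5, 100)                      # 100 years
fast = years_to_shelter(1e9, 1e5, 100, replicate_years=10)  # 11 years
```

The specific numbers are meaningless; the shape of the result is the point made above, that the delta between "we can do this now" and "for the whole population" is closed by exponential industrial growth, not by bigger budgets.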
Any comments?
I mean, a world where humans live underground and the surface is littered with the aftermath of battles between machines doesn’t sound super optimal. It’s just that I don’t know if we have a choice.
If you told people in 1950 where the nuclear arms buildup would lead—grim-faced men and women in bunkers prepared for the battle they expect to leave the silo fields cratered and radioactive and every major city a smouldering ruin—I mean, that’s awful, but the technology and the rivalry forced humans to do it. There was no “choice”. No pause agreement was possible. Like now.
Okay, so maybe suits would defend against bio. Generic protection against nano seems a lot harder though as it seems like there could be many attacks against the suit, but I could be wrong here.
However, an attacker could also win via hacking, although maybe you can defend by producing an unhackable system.
Or it could use manipulation, but perhaps we deploy an AI to monitor all communications for signs of manipulation.
And even then, I’m not sure I got all of the possibilities.
The challenge is that there are many different ways to neutralise an enemy and an AI will pick whichever path is weakest. And I’m pretty sure at least one path will end up looking pretty weak.
It’s AI vs AI. Human world powers have their own AIs and enormously more physical resources. So “attack them where they are weak” can be tried, but it doesn’t pay off if the weaker party fails to win immediately and is crushed by the retaliation.
That’s what makes a world where multiple parties have powerful means of attack semi stable. It’s the one we exist in.
Space suits and bunkers are just an expansion of what we already did to prepare for a nuclear war. It’s a way for most of the population to survive if the weaker party gets the first shot.
Same concept as a submarine loaded with ICBMs.