Israeli Prime Minister, Musk, Tegmark and Brockman discuss existential risk from AI.
Nothing truly revolutionary was said. I think the most interesting bits are that the Prime Minister seems to be taking AI risk seriously, is thinking in terms of exponential progress, wants to prevent monopolies, and thinks we have roughly six years before things change drastically.
Some quotes from the prime minister:
I had a conversation with one of your colleagues, Peter Thiel, and he said to me, “Oh, it’s all scale advantages. It’s all monopolies.” I said, well, yeah, I believe that, but we have to stop it at a certain point because we don’t want to depress competition.
AI is producing, you know, this wall [hands movement gesture some exponential progress wall]. And you have these trillion dollar companies that are produced what, overnight? And they concentrate enormous wealth and power with smaller and smaller number of people. And the question is, what do you do about that monopoly power?
With such power comes enormous responsibility. That’s the crux of what we’re talking about here, is how do we inject a measure of responsibility and ethics into this, into this exponentially changing development?
Max [Tegmark]’s book takes you to the existential question of whether, you know, you project basically machine intelligence or human intelligence into the cosmos. Human intelligence turned into machine intelligence, into the cosmos and so on. That’s a big philosophical question. I’d like to think we have about six years for that.
I think we have to conduct a robust discussion with the other powers of the world based on their self-interest as you began to do. And I think that’s a pioneering work. And I think we have a shot maybe at getting to some degree of control over our future, which could be amazing.
Damn. Given his focus on competition, it wouldn’t surprise me if this ends up being net-negative. Instead we need to be limiting the proliferation of capabilities.
If you think it’s like a nuclear weapon but better (less indiscriminate, potentially offering a defense against the nukes of rival countries), what choice do you have? Notably, Israel built a nuclear arsenal for precisely this reason; now they see a need for the next weapons upgrade.
I’m skeptical of it offering an effective defense against other countries’ AIs, which is where that reasoning breaks down.
Can you elaborate? Note that geopolitically Israel doesn’t need to beat superpowers. It has hostile neighbors it is concerned about.
It is also a small, isolated country with a low population and few natural resources, locked in an endless low-level war over a small amount of land, so it needs high-value industries to survive. Getting a share of a possible near-future AI boom is one way to do that. Intel owns an Israeli company, Habana Labs, which makes a competitive inference accelerator, and there is also Mobileye.
So it needs AI as a weapon to defend itself against the AIs of Syria, Egypt, Iran, and other nearby threats, and as a revenue source to keep affording the endless weapons needed to deal with lower-level attacks.
What is your disagreement and how do you know it’s a valid reason?
I suspect the offence-defence balance massively favours the attacker, so if AI is widely distributed we’re all screwed.
I think you’re right about it favoring offense. Why would we be screwed?
Are you thinking of a situation where superpowers with reasonably stable governments have massive arsenals of AI-built and AI-guided weapons?
(What is in those arsenals depends on future tech, but I abstractly imagine endless rows of automated single-engine stealth fighter-bombers whose munitions drop clouds of suicide drones. The key thing is that embedded AI handles the piloting, and general-purpose robots made all the weapons and parts, allowing an enormous arsenal at modest cost. Human officers select the target areas to attack or defend and the rules of engagement, and human-written software restricts the AI models to those rules at a low level. For example, the embedded arming controller in a munition must both determine that it is in a target area and have received the correct code from a one-time pad generated by a hardware key in the weapons console, or it will not be able to deploy.)
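The two-condition gate in that parenthetical (a location check AND a one-time authorization code) is just a generic authorization pattern. Here is a minimal sketch of the idea; everything in it (the function names, the HMAC-derived code scheme, the rectangular target area) is a hypothetical illustration, not a description of any real system:

```python
import hmac
import hashlib

def one_time_code(secret: bytes, mission_id: bytes) -> str:
    """Hypothetical: the hardware key derives a short per-mission code."""
    return hmac.new(secret, mission_id, hashlib.sha256).hexdigest()[:8]

def may_deploy(position, target_area, presented_code, secret, mission_id) -> bool:
    """Authorize only if inside the target area AND the one-time code verifies."""
    lat, lon = position
    lat_min, lat_max, lon_min, lon_max = target_area
    in_area = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    # Constant-time comparison, as one would use for any authentication code.
    code_ok = hmac.compare_digest(presented_code, one_time_code(secret, mission_id))
    return in_area and code_ok
```

The point of the sketch is only that the low-level gate is dumb, deterministic software: the AI model can be arbitrarily capable, but the final yes/no is conjunctive, so failing either condition blocks the action.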
The main idea is that while a smaller country or terrorist group may be able to attack and do damage (same concept as nuclear terrorism), they will be wiped out by the reprisal.
Is that how you see it or?
I believe that AI is much more subject to proliferation than nukes or even bioweapons. And once it proliferates widely enough, we can’t rely on mutually assured destruction to dissuade actors.
Ok, that’s correct. I am sure you are aware of how MAD breaks down when there are more than two nuclear-armed factions.
On the other hand, I don’t know if your assumptions about the lack of defense are correct. You could: (1) have embodied general AI; (2) invoke many temporary instances and drive robots to build robots; (3) with the expanded industrial capacity, manufacture bunkers and space suits; (4) use air gaps as often as possible, assuming anything can be hacked.
We could do this now; the delta is scale. There aren’t the resources to dig and equip enough bunkers for the entire population of the Western world, and space suits have a similar problem.
The bunkers protect against drone swarms and nukes; the space suits (with people living most of the time in the bunkers) protect against bioweapons and hostile airborne nanotech.
Any comments?
I mean a world where humans live underground and the surface is littered with the aftermath of various battles between machines doesn’t sound super optimal. It’s just that I don’t know if we have a choice.
If you told people in 1950 where the nuclear arms buildup would lead (grim-faced men and women in bunkers, prepared for a battle they expect to leave the silo fields covered in radioactive craters and every major city a smouldering ruin), that’s awful, but the technology and the rivalry forced humans to do it. There was no “choice”; no pause agreement was possible. Like now.
Okay, so maybe suits would defend against bio. Generic protection against nano seems a lot harder, though, as there could be many possible attacks against the suit itself, but I could be wrong here.
However, an attacker could also win via hacking, although maybe you can defend by producing an unhackable system.
Or it could use manipulation, but perhaps we deploy an AI to monitor all communications for signs of manipulation.
And even then, I’m not sure I got all of the possibilities.
The challenge is that there are many different ways to neutralise an enemy and an AI will pick whichever path is weakest. And I’m pretty sure at least one path will end up looking pretty weak.
It’s AI vs. AI. Human world powers have their own AIs and enormously more physical resources. So “attack them where they are weak” can be done, but it doesn’t pay off unless the weaker party wins immediately; otherwise it is crushed by the retaliation.
That’s what makes a world where multiple parties have powerful means of attack semi-stable. It’s the one we exist in.
Space suits and bunkers are just an expansion of what we already did to prepare for a nuclear war. It’s a way for most of the population to survive if the weaker party gets the first shot.
Same concept as a submarine loaded with ICBMs.
Some context on Netanyahu:
He’s in the middle of three personal corruption trials and a so-called “judicial overhaul” by his coalition (which those of us opposed to it call an autocratic coup attempt). He might say he’s interested in doing this or that, but in reality his coalition is solely focused on this judicial overhaul and on the narrow needs of Ultra-Orthodox party leaders.
He’s been a politician for a few decades, and he’s the longest-serving prime minister in Israeli history. During that time he’s earned a reputation as someone whose words and actions are utterly disconnected from each other. Moreover, even his messages to foreign and domestic media are often very different from each other.
He’s a hard neoliberal from the Chicago school. He’s not randomly mentioning competition—it’s the only thing he cares about. “[Concentrating] enormous wealth and power with smaller and smaller number of people” on the other hand is something he has absolutely no problem with. He’s done that for himself in our political context, and he loves befriending billionaires whose money and influence he can use for personal gain.
As I mentioned, he’s the longest serving prime minister in our history. None of his governments has ever tried to deal with existential or catastrophic risks of any kind. It’s not in his dictionary. The only ‘existential threat’ from his perspective is Iran.
Israel is extremely reluctant to do anything that’s not for its own benefit. I guess it might be a remnant of Europe trying to kill all our grandparents.
So, even though there’s some activity starting here in academic circles (mostly thanks to David Manheim), I wouldn’t count on the Israeli government to move even a finger about it anytime soon.
AI restrictions and bans thus create market opportunities for Israel et al., right? For example, the government of Japan has stated it will not enforce copyright law against LLMs. This means that countries with a permissive regulatory framework could become the world’s centers of excellence for AGI and robotics development, with only production-grade products exported to the EU, after they comply with all the regulatory requirements and the AI product has been proven effective.
Or, in more extreme cases, AI may be used as a tool to do high-value engineering work, and polities like the EU would have to trade lower-value resources and land for access to the results of that work. Engineers in the EU wouldn’t even have access to the best models, since even the best models will occasionally make an error the model developers do not want to be liable for.
Not a good economic position to be in.
(I point out the EU specifically because they appear to be the most likely polity to enact onerous AI restrictions. This would kill their domestic industry.)
The stream cut out, but there are longer versions available e.g. https://www.youtube.com/watch?v=Dg-rKXi9XYg