Hi Rob, thanks for responding.
I agree that eventually, individuals may be able to train (or, more importantly, run exfiltrated) advanced AI models that are very dangerous. I expect that before that, it will be within the reach of richer, bigger groups. Today, it requires more compute and better techniques than anyone has available. At some point in the coming years or decades it will be within the reach of major states' budgets, then smaller states and large companies, and then smaller and smaller groups, until it's within the reach of individuals. That's the same process that many, many other technologies have followed. If that's right, what does that suggest we need? Agreement between the major states, then non-proliferation agreements, then regulation and surveillance banning corporate and individual development.
On governments not being major players in cutting-edge AI research today: this is certainly true. I think cyber might be a relevant analogy here. Much of the development and deployment of cyberattacks has been done by the private sector (companies and contractors in the US, often criminals for some autocracies). Nevertheless, the biggest cyberattacks (Stuxnet, NotPetya, etc.) were directed by the governments of major states, i.e. the P5 of the US, Russia, UK, France and China. It's possible that something similar happens with AI.
In terms of how long international agreements take, I think 50 years is a bit pessimistic. I would take arms control agreements as possible comparisons. Take 1972's nuclear and biological weapons agreements. The ideas behind deterrence were largely developed around 1960 (Schelling 1985; Adler 1992), and then made into an international agreement in 1972. It might even have happened sooner, under LBJ, had the USSR not invaded Czechoslovakia on 20th August 1968, a day before SALT was supposed to start. On biological weapons, the UK proposed the BWC in August 1968, and it too was signed in 1972. New START took about 2 years. So in general, bilateral arms-control-style agreements with monitoring and verification can be agreed in less than 5 years.
To take the nuclear 1960s analogy, we could loosely think of ourselves as being in early 1962: we've come up with the concerns, if not the specific agreements, and some decision-makers and politicians are on board. We haven't yet had a major AI warning shot like the Cuban Missile Crisis (which began 60 years ago yesterday!), we haven't yet had confidence-building measures like the 1963 Hotline Agreement, and we haven't yet proposed or begun the equivalent of SALT. All that might still be to come in the next few years or decades.
This won't be an easy project by any means, but I don't think we can yet say it's completely infeasible: more research, and the attempt itself, is needed.