Haydn has been a Research Associate and Academic Project Manager at the University of Cambridge’s Centre for the Study of Existential Risk since Jan 2017.
It’s really important that there is public, good-faith, well-reasoned critique of this important chapter in a central book in the field. You raise some excellent points that I’d love to see Ord (and/or others) respond to. Congratulations on your work, and thank you!
More than Philip Tetlock (author of Superforecasting)?
Does that particular quote from Yudkowsky not strike you as slightly arrogant?
There are also a bunch of more under-the-radar wins in DC and Brussels that, for obvious reasons, can’t be talked about so much.
There’s a whole AI ethics and safety field that would have been much smaller and less influential.
From my paper Activism by the AI Community: Analysing Recent Achievements and Future Prospects.
“2.2 Ethics and safety
There has been sustained activism from the AI community to emphasise that AI should be developed and deployed in a safe and beneficial manner. This has involved Open Letters, AI principles, the establishment of new centres, and influencing governments.
The Puerto Rico Conference in January 2015 was a landmark event to promote the beneficial and safe development of AI. It led to an Open Letter signed by over 8,000 people calling for the safe and beneficial development of AI, and a research agenda to that end. The Asilomar Conference in January 2017 led to the Asilomar AI Principles, signed by several thousand AI researchers. Over a dozen sets of principles from a range of groups followed.
The AI community has established several research groups to understand and shape the societal impact of AI. AI conferences have also expanded their work to consider the impact of AI. New groups include:
OpenAI (December 2015)
Centre for Human-Compatible AI (August 2016)
Leverhulme Centre for the Future of Intelligence (October 2016)
DeepMind Ethics and Society (October 2017)
UK Government’s Centre for Data Ethics and Innovation (November 2017)”
Great post! Mass extinctions and historical societal collapses are important data sources—I would also suggest ecological regime shifts. My main takeaway is actually about multicausality: several ‘external’ shocks typically occur in a similar period. ‘Internal’ factors matter too—very similar shocks can affect societies very differently depending on their internal structure and leadership. When complex adaptive systems shift equilibria, several causes are normally at play.
Luke Kemp, Anders Sandberg and I (and many others!) have three separate chapters touching on these topics in a forthcoming book on ‘Historical Systemic Collapse’ edited by Princeton’s Miguel Centeno et al. Hopefully coming out this year.
Thanks for this. I’m more counselling “be careful about secrecy” than “don’t be secret”. Be especially careful about secret sprints, about being told you’re in a race but not being shown the secret information explaining why, and about being told “you have to take part in this secret project”.
On the capability side, the shift in AI/ML publication and release norms towards staged release (not releasing the full model immediately, but carefully checking for misuse potential first), structured access (through APIs) and so on has been positive, I think.
On the risks/analysis side, MIRI have their own “nondisclosed-by-default” policy on publication. CSER and other academic research groups tend towards more of a “disclosed-by-default” policy.
Yes, behavioural science isn’t a topic I’m super familiar with, but it seems very important!
I think most of the focus so far has been on shifting norms/behaviour at top AI labs, for example nudging Publication and Release Norms for Responsible AI.
Recommender systems are a great example of a broader concern. Another is lethal autonomous weapons, where a big focus is “meaningful human control”. Automation bias is an issue even up to the nuclear level—the concern is that people will more blindly trust ML systems, and won’t disbelieve them as people did in several Cold War close calls (eg Petrov not believing his computer warning of an attack). See Autonomy and machine learning at the interface of nuclear weapons, computers and people.
Jess Whittlestone’s PhD was in Behavioural Science, now she’s Head of AI Policy at the Centre for Long-Term Resilience.
This was very much Ellsberg’s view on eg the 80,000 Hours podcast:
“And it was just a lot better for Boeing and Lockheed and Northrop Grumman and General Dynamics to go that way than not to have them, then they wouldn’t be selling the weapons. And by the way what I’ve learned just recently by books like … A guy named Kofsky wrote a book called Harry Truman And The War Scare of 1947.
Reveals that at the end of the war, Ford and GM, who had made most of our bombers, went back to making cars very profitably. But Boeing and Lockheed didn’t make products for the commercial market, only for commercial air, except there wasn’t a big enough market to keep them from bankruptcy. They had suddenly lost their vast orders for military planes in mid 1945. The only way they could avoid bankruptcy was to sell a lot of planes to the government, military planes. But against who? Not Germany, we were occupying Germany; not Japan, we were occupying Japan. Who was our enemy that you needed a lot of planes against? Well Russia had been our ally during the war, but Russia had enough targets to justify, so they had to be an enemy and they had to be the enemy, and we went off from there.
I would say that having read that book and a few others I could say, I now see since my book was written nine months ago, that the Cold War was a marketing campaign for selling war planes to the government and to our allies. It was a marketing campaign for annual subsidies to the aerospace industry, and the electronics industry. And also the basis for a protection racket for Europe, that kept us as a major European power. Strictly speaking we’re not a European power. But we are in effect because we provide their protection against Russia the super enemy with nuclear weapons, and for that purpose it’s better for the Russians to have ICBM, and missiles, and H-bombs, as an enemy we can prepare against. It’s the preparations that are profitable. All wars have been very profitable for the arms manufacturers, nuclear war will not be, but preparation for it is very profitable, and therefore we have to be prepared.”
Hi, yes good question, and one that has been much discussed—here are three papers on the topic. I’m personally of the view that there shouldn’t really be much conflict/contradiction—we’re all pushing for the safe, beneficial and responsible development and deployment of AI, and there’s lots of common ground.
Bridging near- and long-term concerns about AI
Bridging the Gap: the case for an Incompletely Theorized Agreement on AI policy
Reconciliation between Factions Focused on Near-Term and Long-Term Artificial Intelligence
Apologies! LAWS = Lethal Autonomous Weapons. Have edited the text.
This is how I’ve responded to positive funding news before, seems right.
Thanks! And thanks for this link. Very moving on their sense of powerlessness.
Thanks Rohin. Yes I should perhaps have spelled this out more. I was thinking about two things—focussed on those two stages of advocacy and participation.
1. Don’t just get swept up in race rhetoric and join the advocacy: “oh there’s nothing we can do to prevent this, we may as well just join and be loud advocates so we have some chance to shape it”. Well no, whether a sprint occurs is not just in the hands of politicians and the military, but also to a large extent in the hands of scientists. Scientists have proven crucial to advocacy for, and participation in, sprints. Don’t give up your power too easily.
2. You don’t have to stay if it turns out you’re not actually in a race and you don’t have any influence on the sprint program. There were several times in 1945 when it seems to me that scientists gave up their power too easily—over when and how the bomb was used, and what information was given to the US public. It’s striking that Rotblat was the only one to resign—and he was leant on to keep his real reasons secret.
One can also see this later in 1949 and the decision to go for the thermonuclear bomb. Oppenheimer, Conant, Fermi and Bethe all strongly opposed that second ‘sprint’ (“It is necessarily an evil thing considered in any light.”). They were overruled, and yet continued to actively participate in the program. The only person to leave the program (Ellsberg thinks, pp. 291-296) was Ellsberg’s own father, a factory designer—who also kept it secret.
Exit or the threat of exit can be a powerful way to shape outcomes—I discuss this further in Activism by the AI Community. Don’t give up your power too easily.
Thanks Pablo for those thoughts and the link—very interesting to read in his own words.
I completely agree that stopping a ‘sprint’ project is very hard—probably harder than not beginning one. The US didn’t slow down on ICBMs in 1960-2 either.
We can see some of the mechanisms by which this occurs around biological weapons programs. Nixon unilaterally ended the US one; Brezhnev increased the size of the secret Soviet one. So in the USSR there was a big political/military/industrial complex with a stake in the growth of the program and substantial lobbying power, and it shaped Soviet perceptions of ‘sunk costs’, precedent, doctrine, strategic need for a weapons technology, identities and norms; while in the US the opposite occurred.
I don’t think it’s a hole at all; I think it’s quite reasonable to focus on major states. The private sector approach is a different one with a whole different set of actors/interventions/literature—it completely makes sense that it’s outside the scope of this report. I was just doing classic whatabouterism, wondering about your take on a related but separate approach.
Btw I completely agree with you about cluster munitions.
Great report! Looking forward to digging into it more.
It definitely makes sense to focus on (major) states. However, a different intervention I don’t think I saw in the piece is targeting the private sector—those actually developing the tech. E.g. Reprogramming war by PAX for Peace, a Dutch NGO. They describe the project as follows:
“This is part of the PAX project aimed at dissuading the private sector from contributing to the development of lethal autonomous weapons. These weapons pose a serious threat to international peace and security, and would violate fundamental legal and ethical principles. PAX aims to engage with the private sector to help prevent lethal autonomous weapons from becoming a reality. In a series of four reports we look into which actors could potentially be involved in the development of these weapons. Each report looks at a different group of actors, namely states, the tech sector, universities & research institutes, and arms producers. This project is aimed at creating awareness in the private sector about the concerns related to lethal autonomous weapons, and at working with private sector actors to develop guidelines and regulations to ensure their work does not contribute to the development of these weapons.”
It follows fairly successful investor campaigns on e.g. cluster munitions. This project could form the basis for shareholder activism or divestment by investors, and/or wider activism by the AI community by students, researchers, employees, etc—building on eg FLI’s “we won’t work on LAWS” pledge.
I’d be interested in your views on that kind of approach.
Thanks for these questions! I tried to answer your first in my reply to Christian.
On your second, “delaying development” makes it sound like the natural outcome/null hypothesis is a sprint—but it’s remarkable how the more ‘natural’ outcome was not to sprint, and how much effort it took to make the US sprint.
To get initial interest at the beginning of the war required lots of advocacy from top scientists, like Einstein. Even then, the USA didn’t really do anything from 1939 until 1941, when an Australian scientist went to the USA, persuaded US scientists and promised that Britain would share all its research and resources. Britain was later cut out by the Americans, and didn’t have a serious independent program for the rest of the war. Germany considered it in the early war, but decided against in 1942. During the war, neither the USSR nor Japan had serious programs (and France was collaborating with Germany). All four major states (UK, Germany, USSR, Japan) realised it would cost a huge amount in terms of money, people and scarce resources like iron, and probably not come in time to affect the course of the war.
The counterfactual is just “The US acts like the other major powers of the time and decides not to launch a sprint program that costs 0.4% of GDP during a total war, and that probably won’t affect who wins the war”.
Thanks for the kind words Christian—I’m looking forward to reading that report, it sounds fascinating.
I agree with your first point—I say “They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons.” Actions in 1939-42 or around 1957-1959 are defensible. However, I think this highlights that 1) accurate information in 1942-3 (and 1957) would have been useful, and 2) when they found out the accurate information (in 1944 and 1961), it’s very interesting that it didn’t stop the arms buildup.
The question of whether over-, under- or calibrated confidence is more common is an interesting one that I’d like someone to research. It could perhaps be usefully narrowed to WWII and postwar USA. I offered some short examples, but this could easily be a paper. There are some theoretical reasons to expect overconfidence, I’d think, such as paranoia and risk-aversion, or political economy incentives for the military-industrial complex to overemphasise risk (to get funding). But yes, an interesting open empirical question.
Whoops, thanks for catching that—have cut.