Haydn has been a Research Associate and Academic Project Manager at the University of Cambridge’s Centre for the Study of Existential Risk since Jan 2017.
Hi, yes good question, and one that has been much discussed—here are three papers on the topic. I'm personally of the view that there shouldn't really be much conflict or contradiction—we're all pushing for the safe, beneficial and responsible development and deployment of AI, and there's lots of common ground.
Bridging near- and long-term concerns about AI
Bridging the Gap: the case for an Incompletely Theorized Agreement on AI policy
Reconciliation between Factions Focused on Near-Term and Long-Term Artificial Intelligence
Apologies! LAWS = Lethal Autonomous Weapons. Have edited the text.
This is how I’ve responded to positive funding news before, seems right.
Thanks! And thanks for this link. Very moving on their sense of powerlessness.
Thanks Rohin. Yes I should perhaps have spelled this out more. I was thinking about two things—focussed on those two stages of advocacy and participation.
1. Don’t just get swept up in race rhetoric and join the advocacy: “oh there’s nothing we can do to prevent this, we may as well just join and be loud advocates so we have some chance to shape it”. Well no, whether a sprint occurs is not just in the hands of politicians and the military, but also to a large extent in the hands of scientists. Scientists have proven crucial to advocacy for, and participation in, sprints. Don’t give up your power too easily.
2. You don't have to stay if it turns out you're not actually in a race and you don't have any influence on the sprint program. There were several times in 1945 when it seems to me that scientists gave up their power too easily—over when and how the bomb was used, and what information was given to the US public. It's striking that Rotblat was the only one to resign—and he was leant on to keep his real reasons secret.
One can also see this later in 1949 and the decision to go for the thermonuclear bomb. Oppenheimer, Conant, Fermi and Bethe all strongly opposed that second 'sprint' ("It is necessarily an evil thing considered in any light."). They were overruled, and yet continued to actively participate in the program. The only person to leave the program (Ellsberg thinks, pp. 291-296) was Ellsberg's own father, a factory designer—who also kept it secret.
Exit or the threat of exit can be a powerful way to shape outcomes—I discuss this further in Activism by the AI Community. Don’t give up your power too easily.
Thanks Pablo for those thoughts and the link—very interesting to read in his own words.
I completely agree that stopping a ‘sprint’ project is very hard—probably harder than not beginning one. The US didn’t slow down on ICBMs in 1960-2 either.
We can see some of the mechanisms by which this occurs around biological weapons programs. Nixon unilaterally ended the US one; Brezhnev increased the size of the secret Soviet one. So in the USSR there was a big political/military/industrial complex with a stake in the growth of the program and substantial lobbying power, and it shaped Soviet perceptions of 'sunk costs', precedent, doctrine, strategic need for a weapons technology, identities and norms; while in the US the opposite occurred.
I don't think it's a hole at all; I think it's quite reasonable to focus on major states. The private sector approach is a different one with a whole different set of actors/interventions/literature—it completely makes sense that it's outside the scope of this report. I was just doing classic whataboutism, wondering about your take on a related but separate approach.
Btw I completely agree with you about cluster munitions.
Great report! Looking forward to digging into it more.
It definitely makes sense to focus on (major) states. However, a different intervention I don't think I saw in the piece is about targeting the private sector—those actually developing the tech. E.g. Reprogramming war by Pax for Peace, a Dutch NGO. They describe the project as follows:
“This is part of the PAX project aimed at dissuading the private sector from contributing to the development of lethal autonomous weapons. These weapons pose a serious threat to international peace and security, and would violate fundamental legal and ethical principles. PAX aims to engage with the private sector to help prevent lethal autonomous weapons from becoming a reality. In a series of four reports we look into which actors could potentially be involved in the development of these weapons. Each report looks at a different group of actors, namely states, the tech sector, universities & research institutes, and arms producers. This project is aimed at creating awareness in the private sector about the concerns related to lethal autonomous weapons, and at working with private sector actors to develop guidelines and regulations to ensure their work does not contribute to the development of these weapons.”
It follows fairly successful investor campaigns on e.g. cluster munitions. This project could form the basis for shareholder activism or divestment by investors, and/or wider activism by the AI community (students, researchers, employees, etc.), building on e.g. FLI's "we won't work on LAWS" pledge.
I’d be interested in your views on that kind of approach.
Thanks for these questions! I tried to answer your first in my reply to Christian.
On your second, "delaying development" makes it sound like the natural outcome/null hypothesis is a sprint—but it's remarkable how the more 'natural' outcome was to not sprint, and how much effort it took to make the US sprint.
To get initial interest at the beginning of the war required lots of advocacy from top scientists, like Einstein. Even then, the USA didn’t really do anything from 1939 until 1941, when an Australian scientist went to the USA, persuaded US scientists and promised that Britain would share all its research and resources. Britain was later cut out by the Americans, and didn’t have a serious independent program for the rest of the war. Germany considered it in the early war, but decided against in 1942. During the war, neither the USSR nor Japan had serious programs (and France was collaborating with Germany). All four major states (UK, Germany, USSR, Japan) realised it would cost a huge amount in terms of money, people and scarce resources like iron, and probably not come in time to affect the course of the war.
The counterfactual is just “The US acts like the other major powers of the time and decides not to launch a sprint program that costs 0.4% of GDP during a total war, and that probably won’t affect who wins the war”.
Thanks for the kind words Christian—I’m looking forward to reading that report, it sounds fascinating.
I agree with your first point—I say "They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons." Actions in 1939-42 or around 1957-1959 are defensible. However, I think this highlights that 1) accurate information in 1942-3 (and 1957) would have been useful, and 2) when they found out the accurate information (in 1944 and 1961), it's very interesting that it didn't stop the arms buildup.
The question of whether overconfidence, underconfidence or calibrated confidence is more common is an interesting one that I'd like someone to research. It could perhaps be usefully narrowed to WWII and postwar USA. I offered some short examples, but this could easily be a paper. There are some theoretical reasons to expect overconfidence, I'd think: such as paranoia and risk-aversion, or political economy incentives for the military-industrial complex to overemphasise risk (to get funding). But yes, an interesting open empirical question.
Whoops, thanks for catching that—have cut it.
Pretty sure jackva is responding to the linked article, not just this post, as e.g. they quote footnote 25 in full.
On the first point, I think that kind of argument could be found in Jonathan B. Wiener's work on "'risk-superior moves'—better options that reduce multiple risks in concert." See e.g.
On the second point, what about climate change in India-Pakistan? E.g. an event worse than the current terrible heatwave—heat stress and an agricultural/economic shock lead to migration, instability, a rise in tension and accidental use of nuclear weapons. The recent modelling papers indicate that would lead to 'nuclear autumn' and probably be a global catastrophe.
Note that “humanity is doomed” is not the same as ‘direct extinction’, as there are many other ways for us to waste our potential.
I think it's an interesting argument, but I'm unsure that we can get to a rigorous, defensible distinction between 'direct' and 'indirect' risks. I'm also unsure how this framework fits with the "risk/risk factor" framework, or the 'hazard/vulnerability/exposure' framework that's common across disaster risk reduction, business and government planning, etc. I'd be interested in hearing more in favour of this view, and in favour of the two claims I picked out above.
We’ve talked about this before, but in general I’ve got such uncertainty about the state of our knowledge and the future of the world that I incline towards grouping together nuclear, bio and climate as being in roughly the same scale/importance ‘tier’ and then spending most of our focus seeing if any particular research strand or intervention is neglected and solvable (e.g. your work flagging something underexplored like cement).
On your food production point, as I understand it the issue is more shocks than averages. Food system shocks can lead to "economic shocks, socio-political instability as well as starvation, migration and conflict" (from the 'causal loop diagram' paper). However, I'm not a food systems expert; the best people to discuss this with further are our Catherine Richards and Asaf Tzachor, authors of e.g. Future Foods For Risk-Resilient Diets.
For other readers who might be similarly confused—there's more in the profile on 'indirect extinction risks' and on other long-run effects on humanity's potential.
Seems a bit odd to me to just post the ‘direct extinction’ bit, as essentially no serious researcher argues that there is a significant chance that climate change could ‘directly’ (and we can debate what that means) cause extinction. However, maybe this view is more widespread amongst the general public (and therefore worth responding to)?
On ‘indirect risk’, I’d be interested in hearing more on these two claims:
“it’s less important to reduce upstream issues that could be making them worse vs trying to fix them directly” (footnote 25); and
“our guess is that [climate change’s ‘indirect’] contribution to other existential risks is at most an order of magnitude higher — so something like 1 in 1,000”—which “still seems more than 10 times less likely to cause extinction than nuclear war or pandemics.”
If people are interested in reading more about climate change as a contributor to GCR, here are two CSER papers from last year (and we have a big one coming out soon):
Thanks for this Jeffrey and Lennart! Very interesting, and I broadly agree. Good area for people to gain skills/expertise, and private companies should beef up their infosec to make it harder for them to be hacked and stop some adversaries.
However, I think it's worth being humble/realistic. IMO a small/medium tech company (even Big Tech themselves) is not going to be able to stop a motivated state-linked actor from the P5. Would you broadly agree?
AGI Safety Fundamentals has the best resources and reading guides. The best short intros are the very short (500 words) one and a slightly longer one, both from Kelsey Piper.
You might find a lecture of mine useful:
Apologies, Kim Stanley Robinson
GLAAD is a really useful case study, thanks for highlighting it. Participant Media was another model I had in mind—they produced Contagion, Spotlight, Green Book, An Inconvenient Truth, Citizenfour, Food Inc, and The Post amongst others.
Hell yeah, I can’t wait to watch this and get really depressed. Have you read or watched When The Wind Blows? Seems a similar tone.
Thanks!
This was very much Ellsberg's view on e.g. the 80,000 Hours podcast: