Health, technology and catastrophic risk—New Zealand https://adaptresearchwriting.com/blog/
Matt Boyd
Thanks for posting this. I’ll comment on the bit about New Zealand’s food production in nuclear winter conditions. Although the paper cited concludes there is potential for production to feed NZ’s population, this depends on there being sufficient liquid fuel to run agricultural equipment, and NZ imports 100% of its refined fuels. Trade in fuel would almost certainly collapse in a major nuclear war. Without diesel, or imported fertiliser and agrichemicals, yields would be much lower. Distribution would be difficult too. See this paper: https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.14297 Ideally, places like NZ would establish the potential to produce fuel locally, eg biofuels, in case of this scenario. If use were restricted to agriculture and food transport, with optimised cropping, surprisingly little biofuel would be needed. This kind of contingency planning could avert famine, and any associated disease and potential conflict. I agree that the existential risk is very low. But it is probably slightly higher when considering these factors.
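To illustrate the scale involved, here is a very rough sketch of the kind of back-of-envelope calculation that claim rests on. Every number below is a placeholder assumption for illustration only, not a figure from the cited paper; real NZ values would need to be substituted:

```python
# Purely illustrative sketch: fuel needed if biofuel use is restricted to
# cropping and food transport. All numbers are placeholder assumptions.

def annual_fuel_need(cropped_hectares, litres_per_hectare, transport_litres):
    """Litres of fuel per year for an optimised cropping plan plus food distribution."""
    return cropped_hectares * litres_per_hectare + transport_litres

litres = annual_fuel_need(
    cropped_hectares=500_000,     # placeholder: optimised cropped area (ha)
    litres_per_hectare=100,       # placeholder: diesel per hectare per year
    transport_litres=50_000_000,  # placeholder: food distribution fuel
)
print(f"~{litres / 1e6:.0f} million litres/year")
```

With placeholders like these the total comes to roughly 100 million litres a year, which gives a sense of why fuel restricted to food production and distribution can be described as ‘surprisingly little’.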
Interesting juxtaposition:
It promotes the idea of spending considerable state resources, i.e. taxpayer money, on building massive computing clusters in the US, while at the same time denying non-American people and companies the knowledge required to build AI models.
With the following here:
I’m Leopold Aschenbrenner. I recently founded an investment firm focused on AGI.
As you say, the whole set of writings has a propaganda (marketing) tone to it, and a somewhat naive worldview. And the temporal coincidence of the essays and the investment startup is likely not accidental. I’m surprised it got the attention it did given, as you say, the sort of red-flag writing style of which we are taught to be skeptical. Any presentation of these essays should be placed alongside the kind of systematic skepticism of eg Gary Marcus et al., so readers can draw their own conclusions.
This all seems extremely optimistic. I don’t see the words ‘environment’, ‘externalities’, ‘biodiversity’, or ‘pollution’ mentioned at all, let alone ‘geopolitics’, ‘fragmentation’, ‘onshoring’, ‘deglobalisation’ or ‘supply chains’. And nothing about energy demands and the cost of extraction. Is this based on upbeat consultancies’ biased models that always conclude things are good for business? I’ll be extremely surprised if this ‘lower bound’ scenario even turns out to be the upper bound.
Hopefully everyone who thinks that AI is the most pressing issue takes the time to write (or collaborate and write) their best solution in 2000 words and submit it to the UN’s recent consultation call: https://dig.watch/updates/invitation-for-paper-submissions-on-worldwide-ai-governance A chance to put AI in the same global governance basket as biological and nuclear weapons, and potentially high leverage from a relatively small task (deadline 30 Sept).
Difficult to interpret a lot of this, as it seems to be a debate between potentially biased pacifists and a potentially biased military blogger. As with many disagreements, the truth is likely somewhere in the middle (as Rodriguez noted). We need new, independent studies on this that are divorced from the existing pedigrees. That said, much of the catastrophic risk from nuclear war may lie in the more than likely catastrophic trade disruptions, which alone could lead to famines, given that nearly 2⁄3 of countries are net food importers and almost no one makes their own liquid fuel to run their agricultural equipment.
Thanks for this post. Reducing risks of great power war is important, but also consider reducing risks from great power war. In particular, work on how non-combatant nations can ensure their societies survive the potentially catastrophic ensuing effects on trade/food/fuel etc. A disadvantage of this approach is that it does not prevent the massive global harms in the first place; an advantage is that building the resilience of eg relatively self-sufficient island refuges may also reduce existential risk from other causes (bio-threats, nuclear war/winter, catastrophic solar storm, etc).
One approach is our ongoing project, the Aotearoa New Zealand Catastrophe Resilience Project.
Also, 100,000 deaths sounds about right for the current conflict in Ukraine, given that recent excess mortality analysis puts Russian deaths at about 50,000.
100% agree regarding catastrophe risk. This is where I think advocacy resources should be focused. Governments and people care about catastrophe; as you say, even 1% would be an immense tragedy. And if we spell out exactly how (with one or three or ten examples) AI development leads to a 1% catastrophe, then this can be the impetus for serious institution-building, global cooperation, regulations, research funding, and public discussion of AI risk. And packaged within all that activity can be resources for x-risk work. Focusing on x-risk alienates too many people, and focusing on risks like bias and injustice leaves too much tail risk out. There’s so much middle ground here. The extreme near/long term division in this debate has really surprised me. As someone noted with climate, in 1990 we could care about present-day particulate pollution killing many people, AND care about 1.5C scenarios, AND care about 6C scenarios, all at once; they’re not mutually exclusive. (Noting that the topic of the debate was ‘extinction risk’, so perhaps the topic wasn’t ideal for actually getting agreement on action.)
Hi Steven, thanks for what I consider a very good post. I was extremely frustrated with this debate for many of the reasons you articulate. I felt that the affirmative side really failed to concretely articulate the x-risk concerns in a way that was clear and intuitive to the audience (people, we need good, clear scenarios of how exactly, step by step, this happens!). Despite years (decades!) of good research and debate on this (including in the present Forum), the words coming out of x-risk proponents’ mouths still seem to be ‘exponential curve, panic panic, [waves hands] boom!’ Yudkowsky is particularly prone to this, and unfortunately this style doesn’t land effectively and may even harmfully shift the Overton window. Both Bengio and Tegmark tried to avoid this, but the result was a vague and watered-down version of the arguments (or omission of key arguments).
On the negative side, Melanie seemed either (a) uninformed of the key arguments (she just needs to listen to one of Yampolskiy’s recent podcast interviews to get a good accessible summary), or (b) unwilling to engage with such arguments. I think (like a similar recent panel discussion on the lab-leak theory of Covid-19) this is a case of very defensive scientists feeling threatened by regulation, but then responding with a very naive and arrogant attack. No, science doesn’t get to decide policy. Communities do, whether rightly or wrongly. Both sides need to work on clear messages, because this debate was an unhelpful mess. The debate format possibly didn’t help because it set up an adversarial process, whereas there is actually common ground. Yes, there are important near-term risks of AI; yes, if left unchecked such processes could escalate (at some point) to existential risk.
There is a general communication failure here. More use needs to be made of scenarios and consequences. Nuclear weapons (nuclear weapons research) are not necessarily an ‘existential risk’ but a resulting nuclear winter, crop failures, famine, disease, and ongoing conflict could be. In a similar way ‘AI research’ is not necessarily the existential risk, but there are many plausible cascades of events stemming from AI as a risk factor and its interaction with other risks. These are the middle ground stories that need to be richly told, these will sway decision makers, not ‘Foom!’
Risk and Resilience in the Face of Global Catastrophe: A Closer Look at New Zealand’s Food Security [link(s)post]
More recent works than those cited above:
Famine after a range of nuclear winter scenarios (Xia et al 2022, Nature Food): https://www.nature.com/articles/s43016-022-00573-0
Resilient foods to mitigate likely famines (Rivers et al 2022, preprint): https://www.researchsquare.com/article/rs-1446444/v1
Likelihood of New Zealand collapse (Boyd & Wilson 2022, Risk Analysis): https://onlinelibrary.wiley.com/doi/10.1111/risa.14072
New Zealand agricultural production post-nuclear winter (Wilson et al 2022, in press): https://www.medrxiv.org/content/10.1101/2022.05.13.22275065v3
Optimising frost-resistant crops for a NZ nuclear winter (Wilson et al, preprint): https://www.researchsquare.com/article/rs-2670766/v1
Project examining New Zealand’s resilience to nuclear war (with focus on trade disruption):
Thanks for this great post mapping out the problem space! I’d add that trade disruption appears to be one of the most significant impacts of nuclear war, and plausibly amplifies both the ‘famine’ aspect of nuclear winter and a range of potential civilisation-collapse risk factors significantly; see my earlier post here: https://forum.effectivealtruism.org/posts/7arEfmLBX2donjJyn/islands-nuclear-winter-and-trade-disruption-as-a-human Trade disruption disappears into the ‘various risk factor mechanisms’ category above, but I think it’s worth more consideration.

Here’s a report on a workshop we recently ran on nuclear winter risk and New Zealand, and on how the impact of trade disruption pushes nuclear war into the very severe regions of a risk matrix: https://adaptresearchwriting.com/2023/02/20/workshop-on-nuclear-war-winter-nz-wellbeing-of-millions-and-1-trillion-plus-at-risk-strategic-resilience-must-become-bread-butter-nz-policy/

We now have a survey piloting across a range of sectors to better understand the cascading impacts of such disruption on NZ’s technological/industrial society (and how to avoid collapse). The full survey will be deployed soon. A lot of likely resilience measures against nuclear winter will have co-benefits across a range of other ‘ordinary’ and catastrophic risks; we hope to identify those with Delphi processes later this year. Project outline here: https://adaptresearchwriting.com/2022/09/13/introducing-the-aotearoa-nz-catastrophe-resilience-project/ I’d be interested to chat with anyone at Rethink Priorities who is continuing your work.
Thanks. I guess this relates to your point about democratically acceptable decisions of governments. If a government is choosing to neglect something (eg because its probability is low, or because they have political motivations for doing so, vested interests etc), then they should only do so if they have information suggesting the electorate has/would authorize this. Otherwise it is an undemocratic decision.
Revolutionising National Risk Assessment (NRA): improved methods and stakeholder engagement to tackle global catastrophe and existential risks
Thanks for this, great paper.
I 100% agree on the point that longtermism is not a necessary argument to achieve investment in existential/GCR risk reduction (and indeed might be a distraction). We have recently published on this (here). The paper focuses on the process of National Risk Assessment (NRA). We argue: “If one takes standard government cost-effectiveness analysis (CEA) as the starting point, especially the domain of healthcare where cost-per-quality-adjusted-life-year is typically the currency and discount rates of around 3% are typically used, then existential risk just looks like a limiting case for CEA. The population at risk is simply all those alive at the time and the clear salience of existential risks emerges in simple consequence calculations (such as those demonstrated above) coupled with standard cost-utility metrics.” (Look for my post on this paper in the Forum; I’m about to publish it in the next 1-2 days >> update, here’s the link.)
We then turn to the question of why governments don’t see things this way, and note: “The real question then becomes, why do government NRAs and CEAs not account for the probabilities and impacts of GCRs and existential risk? Possibilities include unfamiliarity (i.e., a knowledge gap, to be solved by wider consultation), apparent intractability (i.e., a lack of policy response options, to be solved by wider consultation), conscious neglect (due to low probability or for political purposes, but surely to be authorized by wider consultation), or seeing some issues as global rather than national (typically requiring a global coordination mechanism). Most paths point toward the need for informed public and stakeholder dialog.”
We then ask how wider consultation might be effected and propose a two-way communication approach between governments and experts/populations. Noting that NRAs are based on somewhat arbitrary assumptions we propose letting the public explore alternative outcomes of the NRA process by altering assumptions. This is where the AWTP line of your argument could be included, as discount rate and time-horizon are two of the assumptions that could be explored, and seeing the magnitude of benefit/cost people might be persuaded that a little WTP for altruistic outcomes might be good.
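To make the ‘explore alternative assumptions’ idea concrete, here is a minimal illustrative sketch (not from our paper; every parameter value is a placeholder assumption) of how the present value of the expected benefit from a risk-reduction measure shifts as the discount rate and time horizon are varied:

```python
# Illustrative sketch: present value of QALYs saved by reducing an annual
# catastrophe risk, under standard CEA-style discounting. All parameter
# values are placeholders for illustration only.

def discounted_expected_qalys(annual_risk, risk_reduction, qalys_lost_if_event,
                              discount_rate, horizon_years):
    """Present value of expected QALYs saved over the assessment horizon."""
    total = 0.0
    for t in range(1, horizon_years + 1):
        expected_gain = annual_risk * risk_reduction * qalys_lost_if_event
        total += expected_gain / (1 + discount_rate) ** t
    return total

for rate in (0.0, 0.015, 0.03):           # discount-rate assumptions
    for horizon in (20, 50, 100):         # time-horizon assumptions
        pv = discounted_expected_qalys(annual_risk=0.001, risk_reduction=0.1,
                                       qalys_lost_if_event=100_000_000,
                                       discount_rate=rate, horizon_years=horizon)
        print(f"discount {rate:.1%}, horizon {horizon}y: ~{pv:,.0f} QALYs (present value)")
```

Even this toy version shows the point: the estimated benefit varies severalfold depending on the assumptions chosen, which is exactly why letting people vary them is informative.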
Overall, CEA/CBA is a good approach, and NRA is a method by which it could be formalized in government processes around catastrophe (provided current shortcomings, where NRA is often not connected to a capabilities analysis of solutions, are overcome).
Refuges: sometimes the interests in securing a refuge and protecting the whole population align, as in the case of island refuges, where investment in the refuge is also protecting the entire currently alive population. So refuges may not always be left of the blue box in your figure.
We transform ourselves all the time, and very powerfully. The entire field of cognitive niche construction is dedicated to studying how the things we create/build/invent/change lead to developmental scaffolding and new cognitive abilities that previous generations did not have. Language, writing systems, education systems, religions, syllabi, external cognitive supports, all these things have powerfully transformed human thought and intelligence. And once they were underway the take-off speed of this evolutionary transformation was very rapid (compared to the 200,000 years spent being anatomically modern with comparatively little change).
Yes, feel free to translate whatever you like. And ahh, I’m a bit selective about what I post on here. It’s just the way I’ve decided to curate things. I don’t mind people linking to it though.
The GCRMA was included in the final National Defense Authorization Act for FY2023, which became law in December 2022. The text is altered a little from the draft version, but can be read here: https://www.congress.gov/117/bills/hr7776/BILLS-117hr7776enr.pdf#page=1290 I have blogged about it here: https://adaptresearchwriting.com/2023/02/05/us-takes-action-to-avert-human-existential-catastrophe-the-global-catastrophic-risk-management-act-2022/ Not sure why there isn’t much discussion about it. It seems like something every country could replicate, and then the Chairs of each nation’s risk assessment committee could meet to coordinate.
Hi Ross, here’s the paper that I mentioned in my comment above (this pre-print uses some data from Xia et al 2022 in its preprint form, and their paper has just been published in Nature Food with some slightly updated numbers, so we’ll update our own once the peer review comes back, but the conclusions etc won’t change): https://www.researchsquare.com/article/rs-1927222/v1
We’re now starting a ‘NZ Catastrophe Resilience Project’ to more fully work up the skeleton details that are listed in Supplementary Table S1 of our paper. Engaging with public sector, industry, academia etc. Australia could do exactly the same.
Note that in the Xia paper, NZ’s food availability is vastly underestimated due to quirks of the UNFAO dataset. For an estimate of NZ’s export calories see our paper here: https://www.medrxiv.org/content/10.1101/2022.05.13.22275065v1
And we’ve posted about all this here on the Forum: https://forum.effectivealtruism.org/posts/7arEfmLBX2donjJyn/islands-nuclear-winter-and-trade-disruption-as-a-human
I generally think that all these kinds of cost-effectiveness analyses around x-risk are wildly speculative and susceptible to small changes in assumptions. There is literally no evidence that the $250b would change bio-x-risk by 1% rather than, say, 0.1% or 10%, or even 50%, depending on how it was targeted and what developments it led to. On the other hand, if you do successfully reduce the x-risk by, say, 1%, then you most likely also reduce the risk/consequences of all kinds of other non-existential bio-risks, again depending on the actual investment/discoveries/developments, so the benefit of all the ‘ordinary’ cases must be factored in.

I think that the most compelling argument for investing in x-risk prevention without consideration of future generations is simply to calculate the deaths in expectation (eg using Ord’s probabilities if you are comfortable with them) and to rank risks accordingly. It turns out that at 10% this century, AI risks 8 million lives per annum in expectation (obviously less than that early in the century, perhaps greater late in the century), and bio-risk is 2.7 million lives per annum in expectation (ie 8 billion x 0.0333 x 0.01). This can be compared to ALL natural disasters, which Our World in Data reports kill ~60,000 people per annum. So there is an argument that we should focus on x-risk to at least some degree purely on expected consequences. I think it’s basically impossible to get robust cost-effectiveness estimates for this kind of work, and most of the estimates I’ve seen appear implausibly cost-effective. Things never go as well as you thought they would in risk mitigation activities.
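For transparency, here is the same back-of-envelope calculation as a small sketch (it spreads Ord’s century-level probabilities evenly across 100 years and holds the population at 8 billion, which is obviously a simplification):

```python
# Annual expected deaths implied by Ord-style century-level risk estimates,
# spread evenly over 100 years, with population fixed at 8 billion.
population = 8_000_000_000
century_risk = {"AI": 0.10, "engineered pandemics": 1 / 30}  # ~10% and ~3.3%

for risk, p_century in century_risk.items():
    annual_expected_deaths = population * p_century / 100
    print(f"{risk}: ~{annual_expected_deaths / 1e6:.1f} million deaths/year in expectation")

# For comparison, all natural disasters combined kill roughly 60,000 people
# per year (the Our World in Data figure cited above).
```

This reproduces the 8 million and 2.7 million figures above, and makes it easy to see how sensitive the ranking is to the century-level probabilities you are willing to accept.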
Similarly to Owen’s comment, I also think that AI and nuclear interact in important ways (via various pathways to destabilisation that do not necessarily depend on AGI). It seems that many (most?) pathways from AI risk to extinction lead via other GCRs, eg pandemic, nuclear war, great power war, global infrastructure failure, catastrophic food production failure, etc. So I’d suggest quite a bit more hedging, with some focus on these risks, rather than putting all resources into ‘solving AI’, in case that fails and we need to deal with these other risks.