Health, technology and catastrophic risk—New Zealand https://adaptresearchwriting.com/blog/
Matt Boyd
Thanks for this, great paper.
I 100% agree on the point that longtermism is not a necessary argument to achieve investment in existential/GCR risk reduction (and indeed might be a distraction). We have recently published on this (here). The paper focuses on the process of National Risk Assessment (NRA). We argue: “If one takes standard government cost-effectiveness analysis (CEA) as the starting point, especially the domain of healthcare where cost-per-quality-adjusted-life-year is typically the currency and discount rates of around 3% are typically used, then existential risk just looks like a limiting case for CEA. The population at risk is simply all those alive at the time and the clear salience of existential risks emerges in simple consequence calculations (such as those demonstrated above) coupled with standard cost-utility metrics.” (Look for my post on this paper in the Forum, as I’m about to publish it in the next 1-2 days >> update: here’s the link.)
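To make the quoted point concrete, here is a rough back-of-envelope sketch. All numbers are my own illustrative assumptions (population, life expectancy, risk probability, mitigation cost and effect), not figures from the paper; the point is only that a standard cost-per-life-year calculation with a conventional 3% discount rate already makes existential risk mitigation look salient.

```python
# Illustrative sketch: existential risk as a limiting case of standard
# health-sector CEA. All parameter values below are assumptions for
# demonstration only.

def discounted_life_years(years_remaining: float, rate: float = 0.03) -> float:
    """Present value of one person's remaining life-years, discounted annually."""
    return sum(1 / (1 + rate) ** t for t in range(int(years_remaining)))

population = 8e9           # everyone alive today (the population at risk)
avg_years_remaining = 40   # rough average remaining life expectancy
p_catastrophe = 0.001      # assumed annual probability of an extinction-level event

# Expected discounted life-years lost per year of exposure to the risk
expected_loss = population * p_catastrophe * discounted_life_years(avg_years_remaining)

# A mitigation spend looks cost-effective if cost per expected life-year saved
# falls below a typical health-sector threshold (tens of thousands of dollars).
mitigation_cost = 1e9      # hypothetical annual spend
risk_reduction = 0.1       # hypothetical 10% relative reduction in p_catastrophe
cost_per_life_year = mitigation_cost / (expected_loss * risk_reduction)
print(f"${cost_per_life_year:.2f} per discounted life-year")  # ≈ $52, orders of magnitude below threshold
```

Even with a heavy 3% discount rate and only the present generation counted, the implied cost per life-year is trivially small relative to ordinary healthcare interventions, which is exactly the “limiting case” claim.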
We then turn to the question of why governments don’t see things this way, and note: “The real question then becomes, why do government NRAs and CEAs not account for the probabilities and impacts of GCRs and existential risk? Possibilities include unfamiliarity (i.e., a knowledge gap, to be solved by wider consultation), apparent intractability (i.e., a lack of policy response options, to be solved by wider consultation), conscious neglect (due to low probability or for political purposes, but surely to be authorized by wider consultation), or seeing some issues as global rather than national (typically requiring a global coordination mechanism). Most paths point toward the need for informed public and stakeholder dialog.”
We then ask how wider consultation might be effected and propose a two-way communication approach between governments and experts/populations. Noting that NRAs are based on somewhat arbitrary assumptions, we propose letting the public explore alternative outcomes of the NRA process by altering those assumptions. This is where the AWTP line of your argument could be included: discount rate and time-horizon are two of the assumptions that could be explored, and on seeing the magnitude of the benefits and costs, people might be persuaded that a little WTP for altruistic outcomes is worthwhile.
Overall, CEA/CBA is a good approach, and NRA is a method by which it could be formalized in government processes around catastrophe, provided current shortcomings are overcome, notably that NRAs are often not connected to a capabilities (i.e., solutions) analysis.
Refuges: sometimes the interests in securing a refuge and protecting the whole population align, as in the case of island refuges, where investment in the refuge is also protecting the entire currently alive population. So refuges may not always be left of the blue box in your figure.
Thanks Nick, interesting thoughts, great to see this discussion, and appreciated. Is there a timeline for when the initial (21 March deadline) applications will all be decided? As you say, it takes as long as it takes, but has some implications for prioritising tasks (eg deciding whether to commit to less impactful, less-scalable work being offered, and the opportunity costs of this). Is there a list of successful applications?
I really liked this episode, because of Carl’s no-nonsense moderate approach. Though I must say that I’m a bit surprised that it appears that some in the EA community see the ‘commonsense argument’ as some kind of revelation. See for example the 80,000 Hours email newsletter that comes via Benjamin Todd (“Why reducing existential risk should be a top priority, even if you don’t attach any value to future generations”, 16 Oct, 2021). I think this argument is just obvious, and is easily demonstrated through relatively simple life-year or QALY calculations. I said as much in my 2018 paper on New Zealand and Existential Risks (see p.63 here). I thought I was pretty late to the party at that point, and Carl was probably years down the track.
However, if this argument is not widely understood (and that’s a big ‘if’, because I think it really should be easy for anyone to deduce), then I wonder why? Maybe this is because the origins of the EA focus on x-risk hark back to papers like the ‘Astronomical Waste’ arguments etc, which basically take longtermism as the starting point and then argue for the importance of existential risk reduction. Whereas if you take government cost-effectiveness analysis (CEA) as the starting point, especially in the domain of healthcare where cost-per-QALY is the currency, then existential risk just looks like a limiting case of these CEAs, and the priority it holds simply emerges in the calculation (even when considering only THE PRESENT generation).
The real question then becomes: WHY don’t government risk assessments and CEAs plug in the probabilities and impacts for x-risk? Two key suppositions are unfamiliarity (ie a knowledge gap) and intractability (ie a lack of policy response options). Yet both of these fronts have now progressed substantially.
The reason all this is important is that in the eyes of government policymakers, and more importantly Ministers with power to make decisions about resource allocation, longtermism (especially in its strong form) is seen as somewhat esoteric and disconnected from day-to-day business. Whereas it seems the objectives of strong longtermism (if indeed it stands up to empirical challenges, eg how Fermi’s paradox is resolved will have implications for the strength of strong longtermism) can be met through simple, ordinary CEA arguments. Or at least such arguments can be used for leverage. To actually achieve the goals of longtermism it seems like MUCH more work needs to be happening in translational research to communicate academic x-risk work into policymakers’ language for instrumental ends, not necessarily in strictly ‘correct’ ways.
Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the xrisk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of xrisk work are not always obvious to decision makers and that gap means that the field can be ignored as ‘far fetched’. Democratisation is a critical component (along with apoliticisation).
I must say that it was a bit of a surprise to me that TUA is seen as the paradigm approach to ERS. I’ve worked in this space for about 5-6 years and never really felt that I was drawn to strong-longtermism or transhumanism, or technological progress. ERS seems like the limiting case of ordinary risk studies to me. I’ve worked in healthcare quality and safety (risk to one person at a time), public health (risk to members of populations) and extinction risk just seems like the important and interesting limit of this. I concur with the calls for grounding in the literature of risk analysis, democracy, and pluralism. In fact in peer reviewed work I’ve previously called for citizen juries and public deliberation and experimental philosophy in this space (here), and for apolitical, aggregative processes (here), as well as calling for better publicly facing national risk (and xrisk) communication and prioritisation tools (under review with Risk Analysis).
Some key points I appreciated or reflected on in your paper were:
The fact that empirical and normative assumptions are often masked by tools and frameworks
The distinction between extinction risk and existential risk.
The questioning of total utilitarianism (I often prefer a maximin approach, also with consideration of important [not necessarily maximising] value obtained from honouring treaties, equity, etc)
I’ve never found that the ‘astronomical waste’ claims hold up particularly well under certain resolutions of Fermi’s paradox (basically, I doubt the moral and empirical claims of the TUA and strong longtermism, and yet I am fully committed to ERS)
The point about equivocating over near-term nuclear war and billion year stagnation
Clarity around Ord’s 1 in 6 (extinction/existential) - I’m guilty of conflating this
I note that failing to mitigate ‘mere’ GCRs could also derail certain xrisk mitigation efforts.
Again, great work. This is a useful and important broad survey/stimulus, not every paper needs to take a single point and dive to its bottom. Well done.
Thanks for this great post mapping out the problem space! I’d add that trade disruption appears to be one of the most significant impacts of nuclear war, and plausibly amplifies both the ‘famine’ aspect of nuclear winter and a range of potential civilisation-collapse risk factors; see my earlier post here: https://forum.effectivealtruism.org/posts/7arEfmLBX2donjJyn/islands-nuclear-winter-and-trade-disruption-as-a-human Trade disruption disappears into the ‘various risk factor mechanisms’ category above, but I think it’s worth more consideration.

Here’s a report on a workshop we recently ran on nuclear winter risk and New Zealand, and how the impact of trade disruption pushes nuclear war into the very severe regions of a risk matrix: https://adaptresearchwriting.com/2023/02/20/workshop-on-nuclear-war-winter-nz-wellbeing-of-millions-and-1-trillion-plus-at-risk-strategic-resilience-must-become-bread-butter-nz-policy/ We now have a survey across a range of sectors in pilot to better understand the cascading impacts of such disruption on NZ’s technological/industrial society (and how to avoid collapse). The full survey will be deployed soon.

A lot of likely resilience measures against nuclear winter will have co-benefits across a range of other ‘ordinary’ and catastrophic risks; we hope to identify those with Delphi processes later this year. Project outline here: https://adaptresearchwriting.com/2022/09/13/introducing-the-aotearoa-nz-catastrophe-resilience-project/ I’d be interested to chat with anyone at Rethink Priorities who is continuing your work.
The GCRMA was included in the final National Defense Authorization Act for FY2023, which became law in December 2022. The text is altered a little from the draft version, but can be read here: https://www.congress.gov/117/bills/hr7776/BILLS-117hr7776enr.pdf#page=1290 I have blogged about it here: https://adaptresearchwriting.com/2023/02/05/us-takes-action-to-avert-human-existential-catastrophe-the-global-catastrophic-risk-management-act-2022/ Not sure why there isn’t much discussion about it. It seems like something every country could replicate, and then the Chairs of each nation’s risk assessment committee could meet to coordinate.
More recent works than those cited above:
Famine after a range of nuclear winter scenarios (Xia et al 2022, Nature Food): https://www.nature.com/articles/s43016-022-00573-0
Resilient foods to mitigate likely famines (Rivers et al 2022, preprint): https://www.researchsquare.com/article/rs-1446444/v1
Likelihood of New Zealand collapse (Boyd & Wilson 2022, Risk Analysis): https://onlinelibrary.wiley.com/doi/10.1111/risa.14072
New Zealand agricultural production post-nuclear winter (Wilson et al 2022, in press): https://www.medrxiv.org/content/10.1101/2022.05.13.22275065v3
Optimising frost-resistant crops NZ nuclear winter (Wilson et al, preprint): https://www.researchsquare.com/article/rs-2670766/v1
Project examining New Zealand’s resilience to nuclear war (with focus on trade disruption):
Hopefully everyone who thinks that AI is the most pressing issue takes the time to write (or collaborate and write) their best solution in 2000 words and submit to the UN’s recent consultation call: https://dig.watch/updates/invitation-for-paper-submissions-on-worldwide-ai-governance A chance to put AI in the same global governance basket as biological and nuclear weapons. And potential high leverage from a relatively small task (Deadline 30 Sept).
Hi, I have quite a lot to say about this, but I’m actually currently writing a research paper on exactly this issue, and will write a full forum post/link-post once it’s completed (ETA June-ish). However, a couple of key observations:
Cost of living is likely to be irrelevant in a nuclear aftermath, as global finance and economics will be in tatters (the value of assets will jump around unpredictably, eg mansions becoming less important than electric vehicles if the global oil trade ceases), and prices will change dramatically according to scarcity, eg food prices.
Energy independence and food security are probably the most important (>50% combined index value) because without energy food production is slashed to pre-industrial yields, and without food security the risk of unrest is very high.
Latitude and temperature are less important than the impact on specific countries: what matters is temperature change, not mean temperature, and tropical crops like rice will die in a single frost. Europe could suffer a −20 C or −30 C temperature change according to climate models, which would make agriculture impossible. Yet Iceland, with vast fish resources, could potentially increase food production.
Rainfall could have a massive impact. The tropical monsoons could be very disrupted and are essential for agriculture in many areas.
There could very well be almost no trade taking place in a severe nuclear aftermath as nations struggle internally, or due to fuel shortages (many countries are dependent on oil for agriculture at scale). Without trade many countries are fragile in areas of energy and manufacturing. Many component parts of power generation facilities, electricity and food distribution, and communications infrastructure are manufactured in only a few places, and within a few months without imports/exports such infrastructure may fail (eg lubricants, spark plugs, transformers, fibre optics, etc). Expect most things to grind to a halt without trade.
There is a lot more that could be said, but you’re right that the large South American food producers (Argentina etc) look relatively more promising, as well as the usual suspects NZ & Australia. Though each will have severe problems in an actual nuclear winter, and organising food/fuel rationing and distribution from rural to urban areas will be immensely problematic. Not to mention the need for public communication processes to ensure people know there is a plan and survival is possible, again to avoid societal mayhem. Social cohesion and stability indicators are probably very important.
One problem with composite indices is that very low scores on one dimension can be masked by reasonable scores on others. Countries should be ruled out if they fail on a critical dimension.
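A toy sketch of that masking effect, with entirely invented scores (no real country data): a plain average rewards a country with one disqualifying weakness, while a rule-out screen on critical dimensions catches it.

```python
# Hypothetical scores on a 0-1 scale, for illustration only.

def average_index(scores: dict) -> float:
    """Naive composite: the arithmetic mean across all dimensions."""
    return sum(scores.values()) / len(scores)

def passes_screen(scores: dict, critical: set, floor: float = 0.2) -> bool:
    """Rule a country out if any critical dimension falls below the floor."""
    return all(scores[d] >= floor for d in critical if d in scores)

country_a = {"food": 0.9, "energy": 0.05, "stability": 0.9}  # fatal energy weakness
country_b = {"food": 0.6, "energy": 0.6, "stability": 0.6}   # middling but viable

critical = {"food", "energy"}

# A scores higher on the naive composite (~0.62 vs 0.60)...
print(average_index(country_a), average_index(country_b))
# ...but only B survives the critical-dimension screen.
print(passes_screen(country_a, critical), passes_screen(country_b, critical))
```

The screening step could sit in front of any weighted index, so rankings only compare countries that clear the critical floors.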
Finally, the act of ‘escaping to’ the ‘most promising’ location is not generalisable, and so the ethics of it are questionable. As Kant notes, the test is ‘what if everyone did the same as me, would that undermine the institution in question?’ and in this case it seems like the answer is yes. 8 billion people fleeing to Argentina would defeat the purpose of acting ahead of war to maximise the chances of each particular country. Carrying capacity calculations are important here too. I haven’t even considered HEMP yet, which could very much complicate matters.
The following case study is particularly illuminating of the problems even ‘good’ locations like NZ might suffer: https://www.jstor.org/stable/4313623?refreqid=excelsior%3A166e17f569637767a9caded49a1ced42 contact me if you want the full text.
I am also surprised that there are few comments here. Given the long and detailed technical quibbles that often append many of the rather esoteric EA posts it surprises me that where there is an opportunity to shape tangible influences at a global scale there is silence. I feel that there are often gaps in the EA community in the places that would connect research and insight with policy and governance.
Sean is right, there has been accumulating interest in this space. Our paper on the UN and existential risks in ‘Risk Analysis’ (2020) was awarded ‘best paper’ by that journal, and I suspect these kinds of sentiments from the editors and many others in the risk community have finally weighed upon the UN sufficiently, marshalled by the SG’s generally sympathetic disposition.
The UN calls for futures and foresight capabilities across countries and there is much scope for pressure on policy makers in every nation to act and establish such institutions. We have a forthcoming paper (November) in the New Zealand journal ‘Policy Quarterly’ that calls for a Parliamentary Commissioner for Extreme Risks to be supported by a well-resourced office and working in conjunction with a Select Committee. The Commissioner could offer support to CEOs of public sector organisations as they complete the newly legislated ‘long-term insights briefings’ that are to be tabled in Parliament from 2022.
I advocate for more work of this kind, but projects that ‘merely’ translate technical philosophical and ethical academic products into policy advocacy pieces don’t seem to generate funding. Yet, they may have the greatest impact. It matters not whether a paper is cited 100 times, it matters very much if the Minister with decision making capability is swayed by a well argued summary of the literature.
Thanks for these. Super interesting credences here, 19% (that health organisations will conclude lab origin) to 83% (that gain of function was in fact contributory). I guess the strikingly wide range suggests genuine uncertainty. Watch this space with interest.
Hi Steven, thanks for what I consider a very good post. I was extremely frustrated with this debate for many of the reasons you articulate. I felt that the affirmative side really failed to concretely articulate the x-risk concerns in a way that was clear and intuitive to the audience (people, we need good, clear scenarios of how exactly, step by step, this happens!). Despite years (decades!) of good research and debate on this (including in the present Forum), the words coming out of x-risk proponents’ mouths still seem to be ‘exponential curve, panic panic, [waves hands] boom!’ Yudkowsky is particularly prone to this, and unfortunately this style doesn’t land effectively and may even harmfully shift the Overton window. Both Bengio and Tegmark tried to avoid this, but the result was a vague and watered-down version of the arguments (or omission of key arguments).
On the negative side, Melanie seemed either (a) uninformed of the key arguments (she need only listen to one of Yampolskiy’s recent podcast interviews for a good accessible summary), or (b) unwilling to engage with such arguments. I think (like a similar recent panel discussion on the lab leak theory of Covid-19) this is a case of very defensive scientists feeling threatened by regulation, but then responding with a very naive and arrogant attack. No, science doesn’t get to decide policy. Communities do, whether rightly or wrongly. Both sides need to work on clear messages, because this debate was an unhelpful mess. The debate format possibly didn’t help, because it set up an adversarial process, whereas there is actually common ground. Yes, there are important near-term risks of AI, and yes, if left unchecked such processes could escalate (at some point) to existential risk.
There is a general communication failure here. More use needs to be made of scenarios and consequences. Nuclear weapons (nuclear weapons research) are not necessarily an ‘existential risk’ but a resulting nuclear winter, crop failures, famine, disease, and ongoing conflict could be. In a similar way ‘AI research’ is not necessarily the existential risk, but there are many plausible cascades of events stemming from AI as a risk factor and its interaction with other risks. These are the middle ground stories that need to be richly told, these will sway decision makers, not ‘Foom!’
Book review EA Forum post here
A bunker on an island is probably a robust set-up; at least two, given the volcanic nature of eg Iceland and New Zealand: https://adaptresearchwriting.com/island-refuges/ Synergies/complementarities in island and bunker work should be explored. We’re currently exploring the islands/nuclear winter strand (EA LTFF), and have put in for FTX too.
Thanks for collating all of this here in one place. I should have read the later posts before I replied to the first one. Thank you too for your bold challenge. I feel like Kant waking from his ‘dogmatic slumber’. A few thoughts:
Humanity is an ‘interactive kind’ (to use Hacking’s term). Thinking about humanity can change humanity, and the human future.
Therefore, Ord’s ‘Long Reflection’ could lead to there being no future humans at all (if that was the course that the Long Reflection concluded).
This simple example shows that we cannot quantify over future humans, quadrillions or otherwise, or make long term assumptions about their value.
You’re right about trends, and in this context the outcomes are tied up with ‘human kinds’, as humans can respond to predictions and thereby invalidate the prediction. Makes me think of Godfrey-Smith’s observation that natural selection has no inertia: change the selective environment and the observable ‘trend’ towards some adaptation vanishes.
Cluelessness seems to be some version of the Socratic Paradox (I know only that I know nothing).
RCTs don’t just falsify hypotheses, but also provide evidence for causal inference (in spite of hypotheses!)
We transform ourselves all the time, and very powerfully. The entire field of cognitive niche construction is dedicated to studying how the things we create/build/invent/change lead to developmental scaffolding and new cognitive abilities that previous generations did not have. Language, writing systems, education systems, religions, syllabi, external cognitive supports, all these things have powerfully transformed human thought and intelligence. And once they were underway the take-off speed of this evolutionary transformation was very rapid (compared to the 200,000 years spent being anatomically modern with comparatively little change).
‘Partitioning’ is another concept that might be useful.
Islands as refuge (basically same idea as the city idea above), this paper specifically mentions pandemic as threat and island as solution (ie risk first approach) and also considers nuclear (and other) winter scenarios too (see the Supplementary material): https://pubmed.ncbi.nlm.nih.gov/33886124/
I note Alexey’s comment here too, broadly agree with his islands/refuge thinking.
The literature on group selection and species selection in biology might prove useful. You seem to be on to it tangentially with the butterfly example.
The infographic could perhaps have a ‘today’ and an ‘in 2050’ version, with the bubbles representing the risks being very small for AI ‘today’ compared to eg suicide, or cancer, or heart disease, but then becoming much bigger in the 2050 version, illustrating the trajectory. Perhaps the standard medical cause-of-death bubbles shrink by 2050, illustrating medical progress.
We can quibble over the numbers, but I think the point here is basically right, and if not right for AI then probably right for biorisk or some other risks. That point being: even if you only look at probabilities in the next few years and only care about people alive today, then these issues appear to be the most salient policy areas. I’ve noted in a recent draft that the velocity of increase in risk (eg from some 0.0001% risk this year to eg 10% per year in 50 years) means such probability trajectories are invisible to eg 2-year national risk assessments at present, even though the area under the curve is greater in aggregate than every other risk, and in a sense potentially ‘inevitable’ (for the demonstration risk profiles I dreamed up) over a human lifetime. This then raises the question of how to monitor the trajectory (surely this is one role of national risk assessment, to invest in ‘fire alarms’, but this requires these risks to be included in the assessment so the monitoring can be prioritized). Persuading policymakers is definitely going to be easier by leveraging decade-long actuarial tables than by having esoteric discussions about total utilitarianism.
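The trajectory point can be sketched numerically. These are invented demonstration numbers in the spirit of the risk profiles described above, not published estimates: an annual risk growing exponentially from 0.0001% to 10% per year over 50 years is essentially invisible at a 2-year assessment horizon, yet accumulates to a large probability over a human lifetime.

```python
# Demonstration risk profile with invented parameters (p0, p50, horizon).

def annual_risk(year: int, p0: float = 1e-6, p50: float = 0.10, horizon: int = 50) -> float:
    """Exponential interpolation from p0 in year 0 to p50 at `horizon`, then capped."""
    growth = (p50 / p0) ** (1 / horizon)
    return min(p0 * growth ** year, p50)

def cumulative_probability(years: int) -> float:
    """Probability the event occurs at least once within `years` years."""
    survival = 1.0
    for y in range(years):
        survival *= 1 - annual_risk(y)
    return 1 - survival

print(cumulative_probability(2))   # negligible: what a 2-year NRA horizon sees
print(cumulative_probability(70))  # large (well over 80%): what a human lifetime sees
```

The same comparison could be run for any assumed trajectory; the policy-relevant quantity is the lifetime cumulative probability, which a short fixed assessment window simply cannot register.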
Additionally, in the recent FLI ‘World Building Contest’, the winning entry from Mako Yass made quite a point of the fact that in the world he built, the impetus for AI safety and global cooperation on this issue came from very clear and very specific scenario development of how exactly AI could come to kill everyone. This is analogous to Carl Sagan and Turco’s work on nuclear winter in the early 1980s: a specific picture changed minds. We need this for AI.
Difficult to interpret a lot of this, as it seems to be a debate between potentially biased pacifists and a potentially biased military blogger. As with many disagreements, the truth likely lies somewhere in the middle (as Rodriguez noted). We need new independent studies on this, divorced from the existing pedigrees. That said, much of the catastrophic risk from nuclear war may lie in the more-than-likely catastrophic trade disruptions, which alone could lead to famines, given that nearly two-thirds of countries are net food importers, and almost no country makes its own liquid fuel to run its agricultural equipment.