Health, technology and catastrophic risk—New Zealand https://adaptresearchwriting.com/blog/
Matt Boyd
I enjoyed this. It would seem to work well as an argument for preventing existential risk from Scheffler's 'human project' point of view, ie the continuation of transgenerational undertakings that we each contribute a tiny piece to, as opposed to the total-utility-maximising approach. Persistence of the whole seems to have emergent merit beyond the lives of the individuals.
On the other hand, it also made me think of the line Chigurh says in 'No Country for Old Men': "If the rule you followed brought you to this, of what use was the rule?" Rule = eg not eating meat, being compassionate, etc. [Note: I believe there IS use in the rules, but the line still haunts me.]
Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the xrisk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of xrisk work are not always obvious to decision makers, and that gap means the field can be dismissed as 'far-fetched'. Democratisation is a critical component (along with apoliticisation).
I must say it was a bit of a surprise to me that the TUA is seen as the paradigm approach to ERS. I've worked in this space for about 5-6 years and never really felt drawn to strong longtermism, transhumanism, or technological progress. ERS seems to me like the limiting case of ordinary risk studies. I've worked in healthcare quality and safety (risk to one person at a time) and public health (risk to members of populations), and extinction risk just seems like the important and interesting limit of this. I concur with the calls for grounding in the literatures of risk analysis, democracy, and pluralism. In fact, in peer-reviewed work I've previously called for citizen juries, public deliberation, and experimental philosophy in this space (here), and for apolitical, aggregative processes (here), as well as for better publicly facing national risk (and xrisk) communication and prioritisation tools (under review with Risk Analysis).
Some key points I appreciated or reflected on in your paper were:
The fact that empirical and normative assumptions are often masked by tools and frameworks
The distinction between extinction risk and existential risk.
The questioning of total utilitarianism (I often prefer a maximin approach, also with consideration of important [not necessarily maximising] value obtained from honouring treaties, equity, etc)
I've never found that the 'astronomical waste' claims hold up particularly well under certain resolutions of Fermi's paradox (basically, I doubt the moral and empirical claims of the TUA and strong longtermism, and yet I am fully committed to ERS)
The point about equivocating over near-term nuclear war and billion year stagnation
Clarity around Ord's '1 in 6' (an existential risk, not extinction risk, estimate) - I'm guilty of conflating the two
I note that failing to mitigate ‘mere’ GCRs could also derail certain xrisk mitigation efforts.
Again, great work. This is a useful and important broad survey/stimulus; not every paper needs to take a single point and dive to its bottom. Well done.
Thanks for these comments Noumero, much appreciated!
Be a Stoic and build better democracies: an Aussie-as take on x-risks (review essay)
Many thanks Jackson :)
[Creative Writing Contest] The Sequence Matters
I really liked this episode because of Carl's no-nonsense, moderate approach. Though I must say I'm a bit surprised that some in the EA community appear to see the 'commonsense argument' as some kind of revelation. See, for example, the 80,000 Hours email newsletter that comes via Benjamin Todd ("Why reducing existential risk should be a top priority, even if you don't attach any value to future generations", 16 Oct 2021). I think this argument is just obvious, and is easily demonstrated through relatively simple life-year or QALY calculations. I said as much in my 2018 paper on New Zealand and Existential Risks (see p.63 here). I thought I was pretty late to the party at that point, and Carl was probably years down the track.
However, if this argument is not widely understood (and that's a big 'if', because really it should be easy for anyone to deduce), then I wonder why. Maybe it is because the origins of the EA focus on x-risk hark back to papers like the 'Astronomical Waste' argument, which basically take longtermism as the starting point and then argue for the importance of existential risk reduction. Whereas if you take government cost-effectiveness analysis (CEA) as the starting point, especially in healthcare where cost-per-QALY is the currency, then existential risk just looks like a limiting case of these CEAs, and its priority simply emerges in the calculation (even when considering only THE PRESENT generation).
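To make that concrete, here is a minimal back-of-the-envelope sketch of the present-generation CEA argument in Python. Every figure in it (population, life expectancy, extinction probability, risk-reduction fraction, programme cost) is a hypothetical assumption chosen purely for illustration, not a number from my paper or anyone's actual estimates:

```python
# Minimal sketch of the "present generation only" cost-effectiveness argument.
# All figures below are illustrative assumptions, not estimates from the post.

population = 8e9                  # people alive today
avg_remaining_life_years = 40     # assumed average remaining life expectancy
annual_extinction_prob = 1e-4     # assumed annual probability of extinction
risk_reduction_fraction = 0.01    # assumed share of that risk a programme removes
programme_cost = 1e9              # assumed annual programme cost (USD)

# Expected life-years saved this year, counting only people alive now
expected_life_years = (population * avg_remaining_life_years
                       * annual_extinction_prob * risk_reduction_fraction)

cost_per_life_year = programme_cost / expected_life_years
print(f"Expected life-years saved: {expected_life_years:,.0f}")
print(f"Cost per life-year: ${cost_per_life_year:,.0f}")
```

On these made-up numbers the programme comes out at roughly $3,000 per expected life-year, ie within the range health systems routinely pay per QALY, with no appeal to future generations at all.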
The real question then becomes: WHY don't government risk assessments and CEAs plug in the probabilities and impacts for x-risks? Two key candidate explanations are unfamiliarity (ie a knowledge gap) and intractability (ie a lack of policy response options), yet both the knowledge and the policy options have now progressed substantially.
The reason all this is important is that, in the eyes of government policymakers and, more importantly, Ministers with the power to make decisions about resource allocation, longtermism (especially in its strong form) is seen as somewhat esoteric and disconnected from day-to-day business. Whereas it seems the objectives of strong longtermism (if indeed it stands up to empirical challenges, eg how Fermi's paradox is resolved will have implications for its strength) can be met through simple, ordinary CEA arguments, or at least such arguments can be used for leverage. To actually achieve the goals of longtermism, it seems like MUCH more work needs to happen in translational research to communicate academic x-risk work into policymakers' language for instrumental ends, not necessarily in strictly 'correct' ways.
Existential Risk Conference 7-8 Oct 2021: videos and timestamps
I am also surprised that there are so few comments here. Given the long and detailed technical quibbles appended to many of the rather esoteric EA posts, it surprises me that where there is an opportunity to shape tangible influence at a global scale there is silence. I feel that there are often gaps in the EA community in the places that would connect research and insight with policy and governance.
Sean is right: there has been accumulating interest in this space. Our paper on the UN and existential risks in 'Risk Analysis' (2020) was awarded 'best paper' by that journal, and I suspect sentiments like these, from the editors and many, many others in the risk community, have finally leaned upon the UN with sufficient weight, marshalled by the SG's generally sympathetic disposition.
The UN calls for futures and foresight capabilities across countries and there is much scope for pressure on policy makers in every nation to act and establish such institutions. We have a forthcoming paper (November) in the New Zealand journal ‘Policy Quarterly’ that calls for a Parliamentary Commissioner for Extreme Risks to be supported by a well-resourced office and working in conjunction with a Select Committee. The Commissioner could offer support to CEOs of public sector organisations as they complete the newly legislated ‘long-term insights briefings’ that are to be tabled in Parliament from 2022.
I advocate for more work of this kind, but projects that 'merely' translate technical philosophical and ethical academic products into policy advocacy pieces don't seem to attract funding. Yet they may have the greatest impact. It matters little whether a paper is cited 100 times; it matters very much whether the Minister with decision-making capability is swayed by a well-argued summary of the literature.
Thanks for collating all of this here in one place. I should have read the later posts before I replied to the first one. Thank you too for your bold challenge. I feel like Kant waking from his ‘dogmatic slumber’. A few thoughts:
Humanity is an ‘interactive kind’ (to use Hacking’s term). Thinking about humanity can change humanity, and the human future.
Therefore, Ord's 'Long Reflection' could lead to there being no future humans at all (if that were the course the Long Reflection concluded on).
This simple example shows that we cannot quantify over future humans, quadrillions or otherwise, or make long term assumptions about their value.
You're right about trends, and in this context the outcomes are tied up with 'human kinds', as humans can respond to predictions and thereby invalidate them. Makes me think of Godfrey-Smith's observation that natural selection has no inertia: change the selective environment and the observable 'trend' towards some adaptation vanishes.
Cluelessness seems to be some version of the Socratic Paradox (I know only that I know nothing).
RCTs don’t just falsify hypotheses, but also provide evidence for causal inference (in spite of hypotheses!)
Hi Vaden,
I'm a bit late to the party here, I know, but I really enjoyed this post. I thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and have long-term sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (eg from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's necessarily what comes through, but I think there is important middle ground (and this middle ground may actually, instrumentally, lead to the outcomes that strong longtermists favour, without the need to accept the strong longtermist position).
I think it is just obvious that we should care about the welfare of people here and now. However, the worst thing that can happen to people existing now is for all of them to be killed. So it seems clear that funnelling some resources into x-risk mitigation, here and now, is important. And the primary focus should always be those x-risks that are most threatening in the near term (the target risks will no doubt change with time: eg I would say biotechnology in the next 5-10 years, then perhaps climate or nuclear, then AI, followed by rarer natural risks and emerging technological risks, all the while building cross-cutting defences such as institutions and resilience). As you note, every generation becomes the present generation, and every x-risk will have its time. We can't ignore future x-risks, for this very reason. Each future risk 'era' will become present, and we had better be ready. So resources should be invested in future x-risks too, or at least in understanding their timing.
The issue I have with strong longtermism lies in the utility calculations. The Greaves/MacAskill paper presents a table of future human lives based on the carrying capacity of the Earth, solar system, etc. However, even here today we do not advocate an imperative that humans must reproduce right up to the carrying capacity of the Earth; in fact many of us think this would be wrong for a number of reasons. To factor 'quadrillions', or any definite number at all, into the calculations is to miss the point that we (the moral agents) get to determine (morally speaking) the right number of future people, and we might not yet know how many that is. Uncertainty about moral progress means that we cannot know what the morally correct number is, because theory and argument might evolve across time (and yes, it's probably obvious, but I don't accept that non-actual, never-actual people can be harmed, nor that non-existence is a harm).
However, there seems to be value in SOME humans persisting so that these projects might be continued and hopefully resolved. Therefore, I don't think we should be putting speculative utilities into our 'in expectation' calculations. There are arguments for preventing x-risk that are independent of strong longtermism, and the emotional response strong longtermism generates in many people, potentially including averse policymakers, makes it a risky strategy to push. Even if EA is to be motivated by strong longtermism, it may be useful to advocate an 'instrumental' theory of value in order to achieve the strong-longtermist agenda. There is a possibility that some of EA's views can themselves be an information hazard. Being right is not always being effective, and therefore not always altruistic.
Thanks for this response. I guess the motivation for me writing this yesterday was a comment from a member of NZ's public sector, who said basically 'the Atomic Scientists article falls afoul of the principle of parsimony'. So I wanted to give the other side, ie that there actually are some reasons to favour a lab leak over the parsimonious natural explanation. I completely take your point about balance, but the piece is part of a dialogue rather than a comprehensive analysis; I could have made that clearer. Cheers.
Thanks for these. Super interesting credences here, ranging from 19% (that health organisations will conclude lab origin) to 83% (that gain-of-function work was in fact contributory). I guess the strikingly wide range suggests genuine uncertainty. Watch this space with interest.
Is SARS-CoV-2 a modern Greek Tragedy?
Great additional detail, thanks!
Are Humans ‘Human Compatible’?
Another one to consider, assuming you see it at the same level of analysis as the 8 above, is the spatial trajectory through which the catastrophe unfolds. Eg a pandemic will spread from its origin(s) and, I'm guessing, is statistically likely to impact certain well-connected regions of the world first. Or a lethal command to a robot army will radiate outward from the army's storage facility. Or nuclear winter will impact certain regions sooner than others. Or ecological collapse due to an unstoppable biological novelty will devour certain kinds of environment more quickly (the same possibly holds for grey goo), etc. There may be systematic regularities to which spaces on Earth are affected, and when. These are currently completely unknown, but knowledge of such patterns could help target certain kinds of resilience and mitigation measures to where they are likely to have time to succeed before themselves being impacted. A toy sketch of this 'first-hit' idea follows below.
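Purely as a toy illustration of the 'systematic regularities' point, here is a minimal sketch assuming an invented travel network; the regions, links, transmission probability, and trial counts are all hypothetical, not data. It estimates the average step at which each region is first hit by something spreading from an origin:

```python
import random
from collections import defaultdict

# Toy spread model on a hypothetical travel network; everything here is invented.
links = {
    "hub_A": ["hub_B", "region_1", "region_2", "region_3"],
    "hub_B": ["hub_A", "region_3", "region_4"],
    "region_1": ["hub_A"],
    "region_2": ["hub_A"],
    "region_3": ["hub_A", "hub_B"],
    "region_4": ["hub_B"],
    "island_5": [],  # no links in or out: a poorly connected refuge
}

def first_hit_times(origin, p=0.5, steps=20, trials=2000):
    """Average step at which each region is first affected, over many runs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _ in range(trials):
        hit = {origin: 0}
        frontier = [origin]
        for t in range(1, steps + 1):
            new_frontier = []
            for node in frontier:
                for nbr in links[node]:
                    # each link transmits with probability p per step
                    if nbr not in hit and random.random() < p:
                        hit[nbr] = t
                        new_frontier.append(nbr)
            frontier = new_frontier
        for node, t in hit.items():
            totals[node] += t
            counts[node] += 1
    return {n: totals[n] / counts[n] for n in sorted(totals)}

print(first_hit_times("region_1"))
```

On this made-up network the well-connected hubs are typically reached before the peripheral regions, and the unconnected 'island_5' is never reached at all, loosely echoing the island-refuge idea below.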
‘Partitioning’ is another concept that might be useful.
Islands as refuge (basically the same idea as the city idea above). This paper specifically mentions pandemic as the threat and islands as the solution (ie a risk-first approach), and also considers nuclear (and other) winter scenarios (see the Supplementary material): https://pubmed.ncbi.nlm.nih.gov/33886124/
I note Alexey’s comment here too, broadly agree with his islands/refuge thinking.
The literature on group selection and species selection in biology might prove useful. You seem to be on to it tangentially with the butterfly example.