There are various problems with this.
Firstly, roughly 3C is generally considered the likely warming if current policies continue, not the 2C you claim. Even if the world achieves decarbonisation leading to 550ppm (in line with current policies, implying around 3C rather than the 2C you claim), there is still a fat-tail risk: there is about a 10% probability of 6C of warming, due to our remaining uncertainty about equilibrium climate sensitivity (ECS). This doesn't meaningfully account for tipping points either, which we would very likely hit at such levels of warming. If you want to read more on this, read Wagner & Weitzman 2015 (it's a little old but still very relevant) or the broader literature on fat-tailed climate risks. A 10% chance of above 6C in a very plausible scenario seems an unacceptably high risk. And this doesn't even account for the possibility (small, but very far from negligible) that we end up following an RCP8.5 pathway, which would be considerably more devastating.
Even if we do reach the agreed-upon target of roughly 450ppm (CO2 concentrations consistent with 2C), there is still a 5% chance of 4 degrees of warming and a 1% chance of 5 degrees. The fat tails really matter (data from Quiggin 2017).
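To illustrate why those tails matter, here is a minimal sketch: the probabilities are the ones quoted above, but the placement of the remaining probability mass on 2C and the convex damage function are assumptions of mine, not Quiggin's.

```python
# Illustrative only: why a small-probability fat tail can carry a large
# share of expected climate damages. Probabilities are the ones quoted
# above (Quiggin 2017, ~450ppm scenario), with the remaining mass placed
# on 2C; the cubic damage function is a pure assumption for this sketch.
outcomes = {2.0: 0.94, 4.0: 0.05, 5.0: 0.01}  # warming (C) -> probability

def damage(t: float) -> float:
    return t ** 3  # hypothetical convex damage function

expected = sum(p * damage(t) for t, p in outcomes.items())
tail = sum(p * damage(t) for t, p in outcomes.items() if t >= 4.0)
print(f"Share of expected damage from the 6% tail: {tail / expected:.0%}")  # ~37%
```

Under any sufficiently convex damage function, a few percent of probability mass can contribute a third or more of expected damages; that is the core of the fat-tail argument.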
Moreover, to suggest 2C is "very unlikely" to lead to a GCR state somewhat ignores the problems I raise in the above response: that the chief issues with climate change are its increase in societal vulnerabilities, its potential to trigger cascading failure, and its potential to convert civilisational collapse into irreversible civilisational collapse. Obviously a lot of this rests on what probabilities you mean; for instance, if you mean "very unlikely" in the IPCC sense, that would imply a 0-10% chance, which seems awfully high. I might put the probability of 2C being highly significant in leading to a GCR in roughly 1% territory, but certainly not territory that can be ignored, although I do think most of the GCR risk comes from the heavy-tailed scenarios detailed above.
I think this makes far too strong a claim for the evidence you provide. Firstly, under the standard ITN (Importance, Tractability, Neglectedness) framework, you only focus on importance. If there are orders-of-magnitude differences in, say, tractability (which seems most important here), then longtermists maybe shouldn't work on AI. Secondly, your claim that there is a low possibility AGI isn't possible needs to be fleshed out more. The terms AGI and general intelligence are notoriously slippery, and many argue we simply don't understand intelligence well enough to clarify the concept of general intelligence. If we don't understand what general intelligence is, one might suggest the problem is intractable enough for present actors that, no matter how important or unimportant AGI is, under an ITN framework it's not the most important thing to work on. On the other hand, I am not clear this claim about AGI is necessary; TAI (transformative AI) is clearly possible and potentially very disruptive without the AI being generally intelligent. Thirdly, your section on other X-Risks takes an overly single-hazard approach, which probably leads to an overly narrow interpretation of what might pose an X-Risk. I also think the dismissal of climate change and nuclear war seems to imply that human extinction = X-Risk. This isn't true (definitionally); while you may be able to make an argument that nuclear war and climate change aren't X-Risks, that argument is not made here. I can clarify or provide evidence for these points if you think it would be useful, but I think the claims you make about AI vs other priorities are too strong for the evidence you provide. I am not here claiming you are wrong, but rather that you need stronger evidence to support your conclusions.
"I think this makes sense when we're in the domain of non-existential areas. I think that in practice when you're confident on existential outcomes and don't know how to solve them yet, you probably should still focus on it though" - I think this somewhat misinterprets what I said. This is only the case if you are CERTAIN that biorisk, climate, nuclear etc. aren't X-Risks. Otherwise it matters. If (toy numbers here) AI risk is two orders of magnitude more likely to occur than biorisk, but four orders of magnitude less tractable, then it doesn't seem that AI risk is the thing to work on.
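To make that toy arithmetic explicit (all numbers purely illustrative, not estimates), suppose marginal impact scales roughly with probability of catastrophe times tractability:

$$\underbrace{10^{-1}}_{P(\text{AI catastrophe})} \times \underbrace{10^{-4}}_{\text{tractability}} = 10^{-5} \qquad \text{vs.} \qquad \underbrace{10^{-3}}_{P(\text{bio catastrophe})} \times \underbrace{10^{0}}_{\text{tractability}} = 10^{-3}$$

On these toy numbers, marginal biorisk work beats marginal AI work by two orders of magnitude, despite the AI catastrophe being 100x more likely.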
"Not sure what you mean by "this isn't true (definitionnally". Do you mean irrecoverable collapse, or do you mean for animals?" - Sorry, I worded this badly. What I meant is that the argument assumes that X-Risk and human extinction are identical. They are of course not, as irrecoverable collapse, s-risks, and permanent curtailment of human potential (which I think is a somewhat problematic concept) are all X-Risks as well. Apologies for the lack of clarity.
"The posts I linked to were meant to have that purpose." - I think my problem is that I don't think the articles do a great job of evidencing the claims they make. Take the 80K one. It ignores the concept of vulnerabilities and exposures, instead going for a hazard-centric approach. Secondly, it ignores a lot of important material in the climate discussion, for example what is discussed here (https://www.pnas.org/doi/10.1073/pnas.2108146119) and here (https://www.cser.ac.uk/resources/assessing-climate-changes-contribution-global-catastrophic-risk/). Basically, I think it fails to adequately address systemic risk, cascading risk and latent risk. It also (mostly) equates X-Risk with human extinction without seriously exploring the question of whether, if civilisation collapses, we WILL recover, not just whether we could. The Luisa Rodriguez piece doesn't do this either (that isn't a critique of her piece; as far as I can tell it didn't intend to).
"An intuition for why it's hard to kill everyone till only 1000 persons survive: for humanity to die, you need an agent. Humans are very adaptive in general, plus you might expect that at least the richest people on this planet have plans and will try to survive at all costs. So for instance, even if viruses infect 100% of people (almost impossible if people are aware there are viruses) and literally kill 99% of people (again, almost impossible), you still have 70 million people alive. And no agent on earth has ever killed 70 million people. So even if you had a malevolent state that wanted to do that (very unlikely), they would have a hard time doing it until there are below 1000 people left. Same goes for nuclear power. It's not too hard to kill 90% of people with a nuclear winter, but it's very hard to kill the remaining 10%, 1%, 0.1%, etc." - Again, this comes back to the idea that for something to be an X-Risk it needs to wipe out humanity, or most of it, in one single event. But an X-Risk may be a collapse we don't recover from. Note this isn't the same as a collapse we can't recover from: because "progress" (itself a very problematic term) seems highly contingent, even if we COULD recover, that doesn't mean there isn't a significant probability that we WON'T. Moreover, if we retain this loss of complexity for a long time, ethical drift (making s-risks far more likely even given recovery) becomes more likely. So do other catastrophes wiping us out, even ones that would be recoverable alone, whether in concert, by cascades, or by discontinuous local catastrophes. Assigning a very high probability that a civilisation we think is valuable would recover from a collapse, even one that leaves hundreds of millions of people alive, needs a lot more justification. The discussion of how likely a collapse or GCR is to be converted into an X-Risk is still very open, as is the discussion of contingency vs convergence. But for your position to hold, you need very high certainty on this point, which I think is highly debatable and perhaps at this point premature and unjustified. Sorry I can't link the papers I need right now, as I am on my phone, but I will link them later.
The problem is that, given the strength of the claims made here (that longtermists should work on AI above all else, with something like 95% of longtermists working on it), you need a tremendous amount of certainty that each of these assumptions holds. As your uncertainty grows, the strength of the argument made here shrinks.
I have a slight problem with the "tell me a story" framing. Scenarios are useful, but they generally lend themselves to crude rather than complex risks. In asking this question, you implicitly downplay complex risks. For a more thorough discussion, the "Democratising Risk" paper by Cremer and Kemp has some useful ideas in it (I disagree with parts of the paper, but still). The framing also continues to prioritise epistemically neat and "sexy" risks, which, whilst possibly the most worrying, are not the only ones. Also, probabilities on scenarios can in many contexts be somewhat problematic, and the methodologies used to come up with very high X-Risk values for AGI relative to other X-Risks carry very high uncertainties. For these reasons, I think the certainty you have is somewhat problematic.
I actually think our big crux here is the amount of uncertainty. Each of the points I raise, and each new assumption you are adding, should raise your uncertainty. Given you claim 95% of longtermists should work on AI, high uncertainties do not seem to weigh in favour of your argument. Note I am not saying, and have not said, that AI isn't the most important X-Risk or that we shouldn't work on it; I am just arguing against the certainty in your post.
I would be interested in your uncertainties on all of this. If we are basing our ITN analysis on priors, then given the limitations and biases of those priors, I would again be highly uncertain, once more leaning away from the certainty you present in this post.
NB: I strongly disagree with this take, but I think it's useful to share anyway.
One other way you might be able to generate a time of perils (or, more generally, save the X-Risk pessimist) is by denying that X-Risks exist as a class of things. Rather, you take X-Risk to be an arbitrarily small number of hazards, such that once we solve them, X-Risk reduces massively until the next hazard comes along. Because there are few enough existential hazards, solving one could be very significant in reducing total X-Risk, generating only a short period (or multiple very short periods) of heightened X-Risk before we solve it.
I think this is problematic for many reasons. It requires an immense amount of epistemic confidence that we won't come across even larger X-Risks in the future. It also seems really strange, at a time when multiple X-Risks are present, to treat X-Risk not as a class of things with a common cause but as separate hazards unrelated to each other. Finally, as discussed in many papers, a hazard-centric viewpoint of X-Risk may significantly oversimplify the problem.
Good idea, will do!
If I am honest, I have struggled to work out how to do this for free, and I certainly don't have enough cash to pay out of pocket for it, so I might try at some point to transcribe it myself, but I wouldn't hold my breath! Thanks for the suggestion though!
Thank you for doing this and congratulations!
I haven't managed to read the full report yet unfortunately, but I have a few questions/criticisms already. Sorry to move onto these so quickly, but I do think it's important. (I tried to write these in a more friendly way, but I keep failing to do so, so please don't take the tone as too aggressive; I really am not intending it that way! Sorry (: )
There are no mentions of systemic or cascading risks in the report. Why is this?
You don't seem to engage with much of the peer-reviewed literature already written on climate change and GCRs, for example Beard et al. 2021, Kemp et al. 2022, and Richards et al. 2021. Don't get me wrong, you might disagree with or have strong arguments against these papers, but to some degree it seems like you have failed to engage with them.
You don’t seem to engage with much of the more complex systems aspects of civilisation collapse/ existential risk theory. Why is this?
There are no mentions of existential vulnerabilities and exposures, and you seem essentially to buy into a broadly hazard-based account. The subdivision into direct and indirect effects further supports this impression. In this way you seem to ignore complex risk analysis. Why is this?
You seem to broadly ignore the work that went on around “sexy vs unsexy risks” and “boring apocalypses” and the more expansive work done to diversify views of how X-Risks may come about. Why is this?
Thanks for the report, and I am sure I will have more questions as I go through it. I guess my major concern is that this work is likely to go down as a "canonical" work in EA (unrelated to its quality, and I am not saying it's bad). So I think you have a responsibility, even if you ultimately reject some of this scholarship, to engage with the (peer-reviewed) scholarship on GCRs and X-Risks that has occurred in the "third wave" research paradigm of Existential Risk Studies, and I am slightly concerned that you appear not to have engaged with this literature!
I'm interested to see your in-depth response.
Perhaps that's fair, certainly the asking-too-many-questions part. I am less sure about the claim that it doesn't expand enough, because I would like to give John the credit of assuming he knew which bits of the literature he was excluding. More generally, my concern is that a post like this may quickly establish itself as "orthodoxy", so I wanted to raise my concerns as early as possible; but perhaps I should have waited and done a more comprehensive response. Perhaps I will learn from this for next time.
Given the review process was not like normal peer review, would it be possible to have a public copy of all the reviewers' comments, like we get with the IPCC? This seems like it may be important for epistemic transparency.
I think this level of accusation is problematic and to some degree derails an important conversation. Given the role a report like this may play in EA in the future, ad hominem and false attacks on critics seem somewhat problematic.
Can you make your model of indirect risks accessible to the public? It's asking for access. Thanks a lot.
Also, why do you assume that "most of the risk of existential catastrophe stems from AI, biorisk and currently unforeseen technological risks"? My impression from earlier in the chapter is that you are drawing from The Precipice the idea that you can essentially ignore other potential causes. Is this correct?
Moreover, this assumption only seems true if you assume an X-Risk will come as a single hazard. If it is instead, say, a cascading risk, cascading to civilisational collapse and then extinction, then the idea that these are the biggest risks should be questioned. Similarly, if you view it as a multi-pulsed process, say civilisational collapse from one hazard, a series of hazards, or a cascade, followed by whatever may (slowly) make us extinct (once civilisation has collapsed, it is easier for smaller hazards to kill us all), then once again the primacy of these hazards is reduced. Only if you take a reductive view that sees extinction as primarily due to direct, single (or near-single) hazards that kill everyone, or basically everyone, can this model be valid.
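To make the structure of the multi-pulse view explicit (a sketch of the argument's logic, not a claim about actual probabilities), extinction risk along a collapse pathway can be bounded as

$$P(\text{existential catastrophe}) \;\geq\; P(\text{collapse}) \times P(\text{never recover} \mid \text{collapse})$$

Hazards like climate change or nuclear war can raise the first factor via cascades and the second via degraded recovery conditions, so neither factor requires a single hazard that kills everyone directly.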
Of course, you do talk a little about multi-pulsed, sub-extinction risks followed by recovery being harder, but not in much detail. In particular, you claim that extreme climate change may make civilisational recovery from collapse much harder, but then don't seem to deal with this question in detail, even though it may be highly important, particularly if we think civilisational collapse is considerably more likely than extinction. Moreover, you suggest that "there is some chance of civilisational collapse due to nuclear war or engineered pandemics", essentially suggesting that other, less direct causes of civilisational collapse, which could be made more likely by climate change, are negligible. This assumption should be stated and evidenced, and yet you seem to include no sources on it. Moreover, you state (uncited) that "the main indirect effect is Great power Conflict." What's your source for this claim, and why are you so certain of it that you are confident you can discount other indirect effects? Once again, it feels like these assumptions should be supported.
If this is the case, then I might say that relying on the (in my opinion) rather reductive, hazard-centric, simple risk assessment model of Ord etc. is our crux of disagreement. From my (still moderately limited, unfortunately) reading of the report, most of your facts are in order; however, much of what I think it is a serious omission to leave unmentioned (systemic risk, cascading risk, vulnerabilities, exposures, complex risk assessments, etc.) stems from this hazard-centric approach. (I don't say this to suggest my way of seeing things is inherently intellectually superior to yours; it is a plausible position that X-Risks may emerge out of epistemically simpler, more direct risks.) I won't try to overthrow a paradigm in a single comment, but please do tell me if you agree that this is the crux of our disagreement. Moreover, whilst in a previous comment you said you argued for this methodology of viewing X-Risks at length in the piece, I am yet to find such an argument. If you could point me to where you think you make it, I will reread that section (apologies if I have simply missed it). If not, this approach needs considerably greater justification.
I have more comments/criticisms, which I will post in other comments, but on the indirect risk question, these are my questions.
Hi John
Given the degree to which you have highlighted how experts have commented on and reviewed the piece, will you, for the sake of intellectual transparency, commit to publishing all of this expert feedback, as the IPCC does? I think this may really help.
I said this in a subcomment, and it (worryingly) got significantly downvoted. It is a worrying sign for a community if a call for intellectual transparency (a key norm in EA) is downvoted just because the writer (i.e. me) has been critical of the piece.
I have great respect for you as an academic and an EA, and I trust that you will agree that such intellectual transparency is a useful norm and, if possible, commit to publishing the comments and reviews sent by those who reviewed the publication! The worry in the above paragraph is certainly not directed at you, and I have every confidence that you are and will remain committed to keeping EA as transparent a space as possible.
All the best
Gideon
Hi John
Thanks so much for this. Did any of the reviewers (Peter Watson, Goodwin Gibbins, or James Ozden, perhaps?) make comments on the overall report, i.e. your methodology, your choice of areas of inquiry, etc.? As this is my major criticism of the work, I would really love to see the reviewers' comments on your overall methodology and the structure of the report.
Best
Gideon
I think this is perhaps quite a simplistic reading of climate change, and whilst it is somewhat in line with the "community orthodoxy", I think both this post and that orthodoxy are somewhat misguided.
Firstly, this post broadly ignores the concept of vulnerabilities and exposures in favour of a pure singular-hazard model, which, whilst broadly in line with the focus of people like Bostrom and Ord, seems overly reductive. Moreover, it seems highly unlikely that even the most dangerous pandemic would actually cause direct human extinction, nor would an ordinary nuclear war; so caring only about direct X-Risk really should lead to a prioritisation of omnicidal actors, AI risk, and other speculative risks like physics experiments and self-replicating nanotechnology. Even if you focus on the broader hazards category, climate's role as a risk factor is certainly not to be ignored, in particular, I think, in increasing the risk of conflict and increasing the number of omnicidal actors. It should be noted, however, that X-Risk doesn't just mean human extinction, but anything which irreparably reduces the potential of humanity.
Once you are dealing with GCRs and societal collapse, and how these might pose an X-Risk (by conversion to irrecoverable societal collapse, a mechanism which still needs more work), climate change rises in priority. Climate change increasing civilisational vulnerability becomes a much more serious issue, and an increase in natural disasters may be enough to cause cascading failures. If you seriously care about the collapse of our complex systems, or about collapses that result in mass death (not necessarily synonymous), I think these more reductionist arguments hold less sway. I won't go into the longtermist argument for this in detail here, but it matters if you think it unlikely that societal recovery would be in line with what is good (you might be particularly susceptible to this view if you are a moral antirealist who thinks your values are mostly arbitrary), or if you think societal recovery is reasonably unlikely. It also appears that societies struggle to recover in unstable climates, so climate change may make societal recovery even harder. In the article you say that the ability of climate to cause societal collapse is instead a reason to focus on the relationship between food systems and societal collapse; however, climate doesn't just impact our food systems but a huge number of our critical systems, and addressing food supply alone may still leave us vulnerable to societal collapse. (NB: I think these societal-collapse tendencies of climate change are generally low probability, probably <10%.) Climate-change-related vulnerabilities likely make more probable the conversion of a GCR into societal collapse, of societal collapse into irreversible societal collapse, and of a shock into a GCR. Moreover, the literature on systemic risk would probably further elevate the importance of climate change. If you only care about fully wiping humanity out, because you think that under almost all GCR/collapse scenarios we recover to the same technological levels and in line with values you agree with, then maybe you can ignore most of this, but I tend to think such an argument is mostly implausible (I won't make that case here).
On the topic of neglectedness, it is true that climate change as a whole is not neglected. Nonetheless, potentially high-impact interventions on climate may (and "may" is important) still be available and neglected. So don't let this general EA advice dissuade you if you think you have found something promising. In relation to the funding given to climate change, a lot of it goes into energy generation technologies, and much of that pays for itself, although general climate investment is outside my area of expertise. Moreover, it is unclear how much extra money on AI Safety would massively help us, although this is once again outside my expertise and I know there is a lot of disagreement on it, so take this paragraph with a pinch of salt.
Finally, this article generally presupposes that X-Risk is high at present and that we are at "the hinge of history", presenting X-Risk work as the only outcome of longtermism. Whilst this may be a common sentiment in the community, it certainly isn't the only perspective. If, for example, you think X-Risk in general is low, then from other longtermist perspectives the destabilising effects of climate change on the globe and the global economy may indeed be highly important, and then you get into the neglectedness question: is it easier to reduce the negative effect of climate change on GDP growth (many climate interventions probably increase GDP growth as well) or to just focus on GDP growth directly? This is certainly not a settled question, although I think John Halstead did some work on it which I should probably check.
Whilst I certainly think your argument is useful in parts, including the claim that climate change is probably overhyped, I nonetheless feel you unreasonably suggest climate change is less of an issue than it is. Less focus on Bostrom/Ord-esque existential hazards may be beneficial, as would a greater diversification of viewpoints, including better integration of some of the arguments made by the references you cite.
However, please don't let the overall critical tone of this comment dissuade you: it's awesome to see people new to EA writing such genuinely well-researched and well-written posts on the forum (I certainly haven't had the bravery to post something here yet!). Keep up the good work despite my criticisms.