Beyond Simple Existential Risk: Survival in a Complex Interconnected World

This is the script of a talk I gave at EAGx Rotterdam, with some citations and references linked throughout. I lay out an argument challenging the relatively narrow focus EA has in existential risk studies, and in favour of more methodological pluralism. This isn't a finalised thesis, nor should it be taken as anything except a conversation starter. I hope to follow this up with more rigorous work exploring the questions I pose over the next few years, and hope others do too, but I thought to post the script to give everyone an opportunity to see what was said. Note, however, that the tone of this is obviously the tone of a speech, not that of a forum post. I hope to link the video when it's up. Mostly, this is really a synthesis of the work of others; very little of it is my own original thought. If people are interested in talking to me about this, please DM me on here.


Existential Risk Studies: the interdisciplinary "science" of studying existential and global catastrophic risk. So, what is the object of our study? There are many definitions of Existential Risk, including an irrecoverable loss of humanity's potential, or a major loss of the expected value of the future, both of these essentially from a transhumanist perspective. In this talk, however, I will be using Existential Risk in the broadest sense, taking my definition from Beard et al 2020: Existential Risk is risk that may result in the very worst catastrophes, "encompassing human extinction, civilizational collapse and any major catastrophe commonly associated with these things."

X-Risk is a risk, not an event. It is defined, therefore, by potentiality, and thus is inherently uncertain. We can thus clearly distinguish between global and existential catastrophes (nuclear winters, pandemics) and drivers of existential risk, and there is no one-to-one mapping between the two. The IPCC commonly, and helpfully, splits drivers of risk into hazards, vulnerabilities, exposures, and responses, and through this lens it is clear that risk isn't something exogenous, but is reliant on decision-making and governance failures, even if that failure is merely a failure of response.

The thesis I present here is not original, and draws on the work of a variety of thinkers, although I accept full blame for anything that may be wrong. I will argue that there are two different paradigms for studying X-Risk: a simple paradigm and a complex paradigm. I will argue that EA unfairly neglects the complex paradigm, and that this is dangerous if we want a complete enough understanding of X-Risk to be able to combat it. I am not suggesting the simple paradigm is "wrong", but that, alone, it currently doesn't, and never truly can, capture the full picture of X-Risk. I think the differences between the two paradigms are diverse: some are "intellectual", resting on fundamentally different assumptions about the nature of the world we live in, and some are "cultural", more contingent on which thinkers' work gains prominence. I won't try too hard to distinguish between these kinds of difference, as I think doing so would make everything a bit too complicated. This presentation is merely a start, a challenge to the status quo, not asking for it to be torn down, but arguing for more epistemic and methodological pluralism. This call for pluralism is the core of my argument.

The "simple" paradigm of existential risk is at present dominant in EA circles. It tends to assume that the best way to combat X-Risk is to identify the most important hazards, find the most tractable and neglected solutions to those, and work on that. It often relies on a relatively narrow range of epistemic tools: forecasting and toy models, such as game-theoretic approaches, thought experiments, or well-worked-out "kill mechanism" causal chains. These are taken to be fundamentally useful tools for examining a future which is itself taken to be fundamentally understandable and, to a degree, predictable, if only we were rational enough and had enough information. It's a methodology that, given the relative lack of evidence on X-Risk, is based more on rationality than empiricism; a methodology that emerges more from analytic philosophy than from empirical science. Thus, risks are typically treated quasi-independently, so the question "what is the biggest X-Risk?" makes sense, and we can approach X-Risk by focusing on quasi-discrete "cause areas" such as AGI, engineered pandemics or nuclear warfare.

Such an approach can be seen in published works by the community and in the assumptions around which programmes and more are set up. The Precipice finds the separation of X-Risks into the somewhat arbitrary categories of "Natural", "Anthropogenic" and "Future" risks to be useful, and quantifies those risks based on what each of those quasi-independent hazards contributes. The Cambridge Existential Risk Initiative summer research fellowship, which I was lucky to participate in this summer, separated its fellows into categories based broadly on these separate, discrete risks: AI, Climate Change, Biosecurity, Nuclear Weapons and Misc+Meta. Once again, this promotes a siloed approach that sees these things as essentially independent, or at least assumes that treating them independently is the best way of understanding them. Even on the Swapcard for this conference, there is no category of interest for "Existential Risk", "Vulnerabilities" or "Systemic Risk", whilst there are two categories for AI, a category for Nuclear Security, a category for Climate Change, and a category for Biosecurity. The "simple" approach to existential risk permeates almost all the discussions we have in EA about existential risk; it is the sea in which we swim. Thus, it profoundly affects the way we think about X-Risk. I think it could accurately be described, in the sense Kuhn discusses it, as a paradigm.

That's the simple approach. A world which, at its core, we can understand. A world where the pathways to extinction are to some degree definable, identifiable, quantifiable. Or at least, if we are rational enough and research enough, we can understand what the most important X-Risks are, prioritise them and deal with them. It's no wonder that this paradigm has been attractive to Effective Altruists; this stuff is our bread and butter. The idea that we can use rational methodologies in good-doing is what we were founded on, and it retains its power and strength through the ITN framework. The problem is, I'm not sure this is very good at capturing the whole picture of X-Risk, and we ignore the whole picture at our peril.

Because maybe the world isn't so simple, and the future not so predictable. Every facet of our society is increasingly interconnected: our ecological-climatic system coupling to our socio-economic system, global supply chains tied to our financial system tied to our food system. A future emerging from such complexity will be far from simple, or obvious, or predictable. Risk that threatens humanity in such a world will likely interact in emergent ways, or emerge in ways that are not predictable by simple analysis. Rather than predictable "kill mechanisms", we might worry about tipping thresholds beyond which unsafe system transitions may occur; compounding, "snowballing" effects; worsening cascades; the spread mechanisms of collapse; and where in a complex system we have the most leverage. Arguably, we can only get the whole picture by acknowledging irreducible complexity, and acknowledging that the tools we currently use to give us relatively well-defined credences, and a sense of understanding and predictability about the future, are woefully insufficient.

I think it's important to note that my argument here is not "there is complexity, therefore risk", but rather that the sort of globally interconnected and interdependent systems we have in place make the risks we are likely to face inherently unpredictable, and so not as easily definable as the simple paradigm likes to make out. Even Ord acknowledges this unpredictability, putting the probability of "unforeseen anthropogenic risk" at 1 in 30; in fact, whilst I have constantly attacked the core of Ord's approach in this talk, I think he acknowledges many of these issues anyway. And it's not as if this approach, focusing on fuzzy mechanisms emerging out of feedback loops, thresholds and tipping points, is wholly foreign to EA; it's arguable that the risk from AGI is motivated by the existence of a tipping threshold which, when passed, may lead to magnifying impacts in a positive feedback loop (the intelligence explosion), which will lead to unknown but probably very dangerous effects that, due to the complexity of all the systems involved, we probably can't predict. This is rarely dismissed as pure hand-waviness, because we acknowledge we are dealing with a system our reasoning can't fully comprehend. Whilst EAs tend to utilise a few of the concepts of the complex approach with AGI, elsewhere these are ignored, which is slightly strange; more on this later.

It is arguable that the complex paradigm's focus on the complexity of the world is somewhat axiomatic, based on a different set of assumptions about the way the world functions than those of the simple approach: one that sees the world as a complex network of interconnected nodes, and risk as primarily emerging from the relatively well-known fragility and vulnerability of such a system. I don't think I can fully prove this to you, because I think it is a fundamental worldview shift, not just a change in the facts, but in the way you experience and understand the world. However, if you want to be convinced, I would look at much of the literature on complexity, on the coupled socio-technical-ecological-political system, the literature on risk such as the IPCC's, or texts like The Risk Society on how we conceptualise risk. I'm happy to talk more about this in the Q&A, but right now I hope that you're willing to come along for the ride even if you don't buy it.

This is why I treat this as an entirely different paradigm to the current EA paradigm. The complexity approach is fundamentally different. It sees the world as inherently complex: whilst facets of it are understandable, at its core the system is so chaotic that we can never fully, or even nearly fully, understand it. It sees the future as not just unpredictable but inherently undefined. It sees risk as mostly emerging from our growing, fragile, interconnected system, and typically sees existential hazards as only one part of the equation, with vulnerabilities, exposures and responses perhaps at least as important. It takes seriously our uncertainty about the topography of the epistemic landscape, holding that uncertainty should be baked into any understanding or approach, and thus favours foresight over forecasting. The epistemic tools that serve the simple approach are simply not adequate for the complexity this paradigm takes as central to X-Risk, and thus new epistemic tools and frameworks must be developed; whether these have been successful is debatable.

A defender of the "simple" paradigm might argue that this is unfair: after all, thinkers like Ord discuss "direct" and "indirect" risks. This is helpful. The problem is, it's very unclear what constitutes a "direct" versus an "indirect" existential risk. If a nuclear war kills almost everyone, but the last person alive trips on a rock and falls off a cliff, which was the direct existential risk? The nuclear war or the rock? This example could rightfully be considered absurd (after all, if only one person is alive, humanity will go extinct when that person dies anyway), but I hope the idea still broadly stands: very few "direct" existential risks actually wipe the last person out. What about a very deadly pandemic that can only spread due to the global system of international trade, where the response of reducing transport, combined with climate change, causes major famines across the world, and only both combined cause collapse and extinction? Which is the direct risk? Suddenly, the risk stops looking so neat and simple, but remains just as worrying.

This logic of direct and indirect doesn't work, because it still favours a quasi-linear, mechanistic worldview. Often, something is only considered a "risk factor" if it leads to something that is a direct risk. Such arguments can be seen in John Halstead's enormous climate Google Doc, which I think is a relatively good canonical example of the "simple" approach. Here, he argues climate change is not a large contributor to existential risk because it can't pose a direct risk, and isn't a major contributor to things that would then wipe us out. So it's not a direct risk, nor a first-order indirect risk; so it's not really a major risk. In fact, because of the simplicity of merely needing to answer whether something is a direct risk or a first-order indirect risk, there is not even a need for a methodology, or that slippery word "theory"; one can merely answer the question by thinking about it and making a best guess. The type of system and causal chain dealt with is within the realm where one person can make such a judgement; but if you acknowledge the complexity of the global network, such reliance on individual reasoning looks like dangerous overconfidence.

You might then say that the simple approach can still deal with these issues by looking at second-order indirect risks, third-order, fourth-order and so on. But what happens when you get to nth-order indirect risks? This mechanistic, predictable worldview simply cannot deal with that complexity. A reply may be that direct risks are just so much larger in expectation; however, this doesn't fit with our understanding from the study of complex adaptive networks, and work done by scholars like Lara Mani on volcanoes further shows that cascading nth-order impacts of volcanic eruptions may be far larger than the primary direct impacts. Even take the ship stuck in the Suez Canal: the ripple effects seem far larger than the initial, direct effect. The same may turn out to be true of the long-term impacts of COVID-19 as well.

Thus it seems the simple approach struggles to deal with the ways most risks tend to manifest in the real, complex, interconnected world: through vulnerabilities and exposures, through systemic risks, and through cascades. In fact, the simple approach tends to take Existential Risk to be synonymous with existential hazards, relegating other contributors to risk, like vulnerabilities, exposures and responses, to the background. It has no real theory of systemic risk, hence the lack of need for defined methodologies, and when I mentioned cascading risk to John Halstead in the context of his climate report, he said he simply didn't think it worth investigating. I don't think this is a problem with John; despite our disagreements he is an intelligent and meticulous scholar who put a lot of effort into that report. I think this is a problem with simple existential risk analysis: it is not capable of handling the complexity of the real world.

So we need a complex risk analysis, one that acknowledges the deep interconnectedness, emergence and complexity of the global system we are in, to truly analyse risk. But here we are faced with a dilemma. On the one hand, we have a recognition of the irreducible complexity of the world, and the inherent uncertainty of the future. On the other, we need to act within this system and understand the risks so we can combat them. So the question is: how?

The first step towards a more complex risk analysis picks up the baton from the simple approach, emphasising compounding risk: how different hazards interact. More on this later.

Secondly, risk is expanded beyond the concept of existential hazards, which is what the simple paradigm focuses on, to include vulnerabilities and exposures, as well as responses. To explain vulnerabilities and exposures, imagine someone with a peanut allergy: the peanut is the hazard, the allergy the vulnerability, and being in the same room as the peanut the exposure. The hazard is what kills you, the vulnerability is how you die, and the exposure is the interface between the two (see the toy sketch below). So we can expand what we do to combat existential risk from just "putting out fires", which is what the hazard-centric approach focuses on, to a more systemic approach focused on making our overall system more resilient to existential risk. We might identify key nodes where systemic failure could occur and try to increase their resilience, such as the work Lara Mani has been doing identifying global pinch points where small-magnitude volcanic eruptions may cause cascading impacts resulting in a global catastrophe.
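To make the decomposition concrete, here is a minimal sketch, entirely my own illustration rather than anything from the talk. The multiplicative form is one common stylisation of the IPCC-style framing; the point is simply that the same hazard produces wildly different risk depending on exposure and vulnerability.

```python
def risk(hazard_severity, exposure, vulnerability):
    """All inputs scaled to [0, 1]; risk vanishes if any component does."""
    return hazard_severity * exposure * vulnerability

peanut = 0.9  # a severe hazard, for someone with the allergy

# The same hazard under different exposure/vulnerability profiles:
for exposure, vulnerability, label in [
    (0.0, 1.0, "not in the room"),
    (1.0, 0.1, "mild allergy, exposed"),
    (1.0, 1.0, "severe allergy, exposed"),
]:
    print(f"{label}: risk = {risk(peanut, exposure, vulnerability):.2f}")
```

Nothing about the numbers matters; what matters is that a hazard-only analysis would assign all three cases the same score, whilst almost all of the action is in the other two terms.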

In doing this, we are abandoning the nice, neat categories the simple approach creates. In many ways, it no longer makes sense to talk about risks, as though these were quasi-independent "fires" to put out. Rather, it makes sense to speak about contributors to overall risk, with attempts made to shift the system towards greater security by identifying and reducing sources of risk. This doesn't just include hazards, but other contributors as well; not just acknowledging the initial effect, but everything that made each cascade more likely. These cascades are not predictable, and the threshold beyond which a feedback loop kicks in is not knowable; thus foresight, where we may get a sample of what could occur, will be far more useful than forecasting, where we try to predict what will occur. This simple linguistic shift, from risks to risk, can be surprisingly powerful at highlighting the difference between the simple and complex approaches.

Acknowledging that we don't know the pathways to extinction actually opens up new approaches to combatting risk. We may see reducing systemic vulnerability as more impactful than under the simple approach, or see reducing the probability of feedbacks, and of passing thresholds beyond which we may reasonably assume catastrophe would follow, as appropriate courses of action. Or, even if we are unsure about what exactly will kill us, we might want to focus on what is driving risk in general rather than on specific hazards, be it work on "agents of doom" or Bostrom's vulnerable world emerging out of a semi-anarchic default condition. Whilst the complex approach acknowledges the difficulties that nonlinearities and complexities bring, in other ways it allows for a broader repertoire of responses to risk, as Cotton-Barratt et al.'s work on defence in depth shows, for example.

Another approach to complexity may be what might be called the "Planetary Boundaries" approach. Here, we identify thresholds within which we know the system is safe, and try to avoid crossing into the unknown. It's like standing at the edge of a dark forest: it may be safe to walk in, but better safe than sorry. It applies a precautionary principle: in such a complex system, we should have the epistemic humility to simply say "better the devil you know". This approach has rightfully been critiqued by many who favour a more "simple" approach: it is very hand-wavy, with no clear mechanism to extinction or even collapse, and with boundaries chosen somewhat arbitrarily. Nevertheless, it may be argued that lines had to be drawn somewhere, and wherever they were drawn would be arbitrary; this is a "play it safe" approach, adopted because we don't know what lies beyond these points, rather than an "avoid knowable catastrophe" approach. However, such an approach is very problematic if we want to prioritise between approaches, something I will briefly discuss later.

Something similar could be said of a solution to Bostrom's "Vulnerable World" and Manheim's "Fragile World". If increasing technological development and complexity puts us in danger, then maybe we should make every effort to stop this; after all, these things are not inevitable. Of course, Bostrom would never accept this (to him, halting development alone poses an X-Risk) and instead proposes a global surveillance state, but that is slightly beside the point.

However, we are still faced with a number of problems. We are constantly moving into unprecedented territory. And sometimes we are not left with an option that is nice and free of trade-offs. MacAskill somewhat successfully argues that technological stagnation would still leave us in danger from many threats. Sometimes we have already gone into the forest, and we can hear howling, and we have no idea what is going on, and we are posed with a choice of things to do, but no option is safe. We are stuck between a rock and a hard place. Under such deep uncertainty, how can we act if we refuse to reduce the complexity of the world? We can't just play it safe, because every option fails a precautionary principle. What do we do in such cases?

This is the exact dilemma that faces me in my research. I'm researching the interactions of solar radiation modification and existential risk: both how it increases risk and how it decreases it. Since it simultaneously combats a source of risk and is itself a contributor to risk, the "play it safe" approach to complexity just doesn't necessarily work here. But before I explain how I am attempting to unpick this, I ought to explain exactly what I'm on about.

Solar Radiation Modification (SRM), otherwise known as solar geoengineering, is a set of technologies that aim to reflect a small amount of sunlight to reduce warming. Sunlight enters the Earth system, and some is reflected. What isn't reflected is absorbed by the Earth and re-emitted as long-wave infrared radiation. Some of this escapes to space, and some is absorbed by greenhouse gases in the atmosphere, warming it. As we increase GHG concentrations, we increase the warming. SRM tries to reduce this warming by decreasing the amount of light the Earth absorbs: by injecting aerosols into the stratosphere, mimicking the natural effects of volcanoes; by brightening clouds; or by a related, though not identical, technique that involves thinning certain other clouds. This would likely reduce temperatures globally, and the climate would generally be closer to preindustrial, but it comes with its own risks that may make it more dangerous.
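As a toy illustration of the mechanism (my own sketch, not anything from the talk), a zero-dimensional energy-balance model captures the basic logic: equilibrium temperature comes from balancing absorbed sunlight against outgoing long-wave radiation, and SRM is stylised, very crudely, as a small increase in planetary albedo. The greenhouse_factor parameter is an assumed effective emissivity that I have tuned by hand; none of the numbers should be taken seriously beyond order of magnitude.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2

def equilibrium_temp(albedo, greenhouse_factor=0.61):
    """Temperature where absorbed shortwave equals emitted longwave.

    greenhouse_factor is a crude effective emissivity: lower values mean
    a stronger greenhouse effect (more outgoing radiation trapped). The
    value 0.61 is tuned so that albedo=0.30 gives roughly 288 K.
    """
    absorbed = S0 * (1 - albedo) / 4   # incoming flux averaged over the sphere
    return (absorbed / (greenhouse_factor * SIGMA)) ** 0.25

baseline = equilibrium_temp(albedo=0.30)
with_srm = equilibrium_temp(albedo=0.31)   # ~1% more sunlight reflected
print(f"baseline: {baseline:.1f} K, with SRM: {with_srm:.1f} K, "
      f"cooling: {baseline - with_srm:.2f} K")   # roughly 1 K of cooling
```

A roughly one-percentage-point albedo increase buys on the order of a degree of cooling in this stylisation, which is why relatively small interventions are discussed at all; everything interesting and dangerous about SRM lives in what this zero-dimensional picture leaves out.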

Those working from the simple paradigm have tended to reject risks from climate change as being especially large. Toby Ord estimates the risk at 0.1%. Will MacAskill, in What We Owe the Future, suggests "it's hard to see how even [7-10 degrees of warming] could cause collapse". Both have tended to use proxies for what would cause collapse, trying their best to come up with simple, linear models of catastrophe: Toby Ord looks at whether heat stress will cause the world to become uninhabitable, and Will at whether global agriculture will entirely collapse. These simple proxies, whilst making it easier to reason through simple causal chains, are just not demonstrative of how risk manifests. Some have then attempted to argue about whether climate change poses a first-order indirect existential risk, which is mostly John Halstead's approach in his climate report, but once again, I think this misses the point.

From a more complex paradigm, I think climate change becomes something to be taken more seriously, because not only does it make hazards more likely and stunt our responses, it also, and perhaps more keenly, makes us more vulnerable, and may act to compound risk in ways that make catastrophe far more likely. A variety of scenarios in which a "one hazard to kill us all" approach doesn't work were explored in the recent "Climate Endgame" paper. One area where that paper strongly disagrees with the status quo is "systemic risk". In The Precipice, Ord argues that a single risk is more likely than two or more occurring in unison; Climate Endgame, however, explores how climate change has the ability to trigger widespread, synchronous, systemic failure via multiple indirect stressors: food system failures, economic damage, water insecurity and so on, coalescing and reinforcing until you get system-wide failure. A similar but slightly different risk is that of a cascade, with vulnerabilities increasing until one failure sets off another, and another, the whole system snowballing; in the case of climate, this may not just refer to our socio-economic system, as evidence of tipping cascades in the physical system shows there is a non-negligible chance of major, near-synchronous collapse of major elements of the Earth system. Such spread of risk is well documented in the literature, as occurred in the 2008 financial crisis, but has been almost entirely neglected by the simple paradigm of existential risk. The potential for such reinforcing, systemic risk to arise from initial hazards far smaller than anything the simple paradigm would consider "catastrophic" should really worry us: lower-magnitude hazards are generally more common, and we are likely severely neglecting them. If one takes such systemic failures seriously, climate change suddenly looks a lot more dangerous than the simple approach allows it to be.
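The flavour of this kind of synchronous, reinforcing failure is easy to show with a toy model. The sketch below is my own illustration, loosely in the spirit of threshold-cascade models on networks (e.g. Watts 2002), not a model from the talk or the paper: nodes fail once enough of their neighbours have failed, and a climate-style background stressor is stylised as lowering every node's failure threshold. The same small initial shock then flips from fizzling out to taking down most of the system.

```python
import random

def simulate_cascade(n=1000, k=4, threshold=0.35, seed_failures=5, seed=0):
    """Fraction of nodes failed after a threshold cascade on a random graph."""
    rng = random.Random(seed)
    # Random graph: each node is linked to k random others (degree ~2k).
    neighbours = [set() for _ in range(n)]
    for i in range(n):
        for j in rng.sample(range(n), k):
            if j != i:
                neighbours[i].add(j)
                neighbours[j].add(i)
    failed = set(rng.sample(range(n), seed_failures))  # small initial shock
    changed = True
    while changed:   # propagate until the cascade settles
        changed = False
        for i in range(n):
            if i in failed or not neighbours[i]:
                continue
            frac = sum(1 for j in neighbours[i] if j in failed) / len(neighbours[i])
            if frac >= threshold:   # too many failed neighbours: i fails too
                failed.add(i)
                changed = True
    return len(failed) / n

# A background stressor lowers every node's failure threshold. The failed
# fraction jumps sharply once thresholds drop below roughly 1/degree:
for stress in (0.0, 0.15, 0.25):
    frac = simulate_cascade(threshold=0.35 - stress)
    print(f"stress={stress:.2f} -> failed fraction: {frac:.2f}")
```

Exact numbers depend on the random graph, but the nonlinearity is the point: the risk lives in the structure and state of the system, not in the size of the initial hazard, which is exactly what a hazard-centric analysis is built not to see.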

So, a technology like SRM that can reduce climate damage may seriously reduce the risk of catastrophe. There is a significant amount of evidence to suggest that SRM moderates climate impacts at relatively "median" levels of warming. However, one thing that has hardly been explored is the capacity of SRM to stop us hitting those Earth system tipping thresholds, which, whilst not essential for spreading systemic risk, are certainly one key contributor to the existential risk from climate change being higher. So, alongside some colleagues at Utrecht and Exeter, we are starting to investigate the literature, models and expert elicitations to make a start at understanding this question. This is one way one can deal with complexity: make a start with things we know contribute to systemic risk in ways that could plausibly be catastrophic, and observe whether these can be reduced.

However, SRM also acts as a contributor to risk. In one sense, this contribution is easier to understand from the simple paradigm, as it is a direct contribution to great power conflict, which is often itself considered a first-order indirect risk. So here we can perhaps agree! This has been explored in many people's work: some of it simple, two-variable analyses of the interaction of SRM and volcanic hazards, whilst some of it tries to highlight how SRM may change geopolitics and tensions in ways which change how other risk spreads and compounds. One key way it does this is by coupling our geopolitical and socio-political system to the ecological-climatic system, allowing risk to spread from our human system to the climatic system that supports us much faster than before. This should really worry us, given how our climatic system then feeds back into our human system, and so on.

A second manner in which it contributes is through so-called latent risk: a risk that lies "dormant" until activated. Here, if you stop carrying out SRM, you get rapid warming, what is often called "termination shock", and faster rates of warming likely raise risk through all the pathways discussed for climate change. To add another wrinkle, such termination is most plausible because of another global catastrophe, so what would occur is what Seth Baum calls a "Double Catastrophe", again highlighting how synchronous failure might be more likely than single failure! To get a better understanding of the physical effects of such a double catastrophe under different conditions, I have been exploring how SRM would interact with another catastrophe with climatic effects, namely one involving the injection of soot into the stratosphere after a nuclear exchange. Here, it's very unclear that the "termination shock" and the other effects of SRM actually make the impacts of such an exchange worse; it is likely that SRM would actually act to slightly moderate the effects. I think this shows we cannot simply say "interacting hazards and complex risk = definitely worse", but I also think it shows that the simple approach's neglect of such complex risk loses a hell of a lot of the picture.
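To give a feel for why termination shock is about the rate of warming, here is a one-box toy model, again entirely my own sketch with made-up parameter values, not anything from the talk: temperature relaxes toward an equilibrium set by radiative forcing, SRM masks part of that forcing, and abrupt termination returns the masked warming over the fast response timescale rather than over decades of emissions.

```python
LAMBDA = 0.8   # K per (W m^-2): assumed climate sensitivity parameter
TAU = 8.0      # years: assumed fast response timescale
DT = 0.1       # years: integration step

def run(forcing_ghg=4.0, srm_masking=3.0, srm_end_year=50, years=100):
    """Temperature trajectory under SRM that is abruptly terminated."""
    t, temp, history = 0.0, 0.0, []
    while t < years:
        srm = srm_masking if t < srm_end_year else 0.0   # abrupt termination
        equilibrium = LAMBDA * (forcing_ghg - srm)       # where temp is headed
        temp += (equilibrium - temp) / TAU * DT          # relax towards it
        history.append(temp)
        t += DT
    return history

history = run()

def warming_rate(h, year):
    i = int(year / DT)
    return h[i + int(1 / DT)] - h[i]   # K per year

print(f"rate just before termination: {warming_rate(history, 48):.3f} K/yr")
print(f"rate just after termination:  {warming_rate(history, 51):.3f} K/yr")
# ~0.00 K/yr before vs ~0.25 K/yr after: decades of masked warming
# returning within a few years of the masking being switched off.
```

The absolute temperatures matter less than the step change in the warming rate; in this stylisation, the system lurches from near-equilibrium to warming faster than it ever did under emissions alone, which is precisely the pathway through which a latent risk activates.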

The other thing I am trying to explore is the plausible cascades and spread mechanisms of risk which SRM encourages. In part, I am doing this through foresight exercises like ParEvo, where experts are brought together to generate collaborative and creative storylines of diverging futures. Unlike forecasts, such scenarios don't have probabilities attached; in fact, due to the specificity needed, a good scenario should have a probability of zero, like a single point on a probability distribution, but it can hopefully give us a little bit of a map of what could occur. So we highlight a whole load of plausible scenarios, acknowledging that none of them is likely to come to fruition, on the premise that they should highlight some of the key areas on which good action should focus. For example, my scenarios will focus on different SRM governance schemes' responses to different catastrophic shocks, hopefully highlighting common failures of governance systems under more heavy-tailed shocks. Scenarios are useful in many other areas too, such as the use of more "game-like" scenarios like Intelligence Rising to highlight the interactions between the development of AGI, international tensions and geopolitics.

Nonetheless, ultimately what is needed is a risk-risk-risk-risk-risk analysis, comparing the ways SRM reduces and contributes to risk, and what leverage we might have to reduce each of those contributors. This is a way off, and I am unsure whether we have good methodologies for it yet. Nonetheless, by acknowledging the large complexities, and utilising methods to uncover how SRM can contribute to and reduce risk in the global interconnected system, we get a far better picture of the risk landscape than under the simple approach. Many who take the simple approach have been quite happy to reject SRM as a risky technology without major benefit in mitigating X-Risk, and have been happy to do a "quick and dirty" risk-risk analysis based on simple models of how risk propagates. As we explore the more complex feedbacks, interactions and cascades of risk, the validity of such simple analyses is, I think, brought into question, highlighting the need for the complex paradigm in this field.

Finally, it's important to note that it is not obvious how any given upstream action, like research, contributes to risk. Even doing research helps to rearrange the risk landscape in unpredictable ways, and so even answering whether research makes deployment of the technology more likely is really hard, as research spurs governance, affects tensions, affects our ability to discover different technologies and to make the system more resilient, and so on. Once again, the complex web of impacts of research needs to be untangled. This is tricky, and needs to be done very carefully. But given EA dominates the X-Risk space, it's not something we can shirk.

I also think it's important to note that these approaches, whilst often working from different intellectual assumptions, have their differences manifest predominantly culturally rather than intellectually. In many ways, the two approaches converge; this is perhaps surprising, and maybe acts as a caution against my grandstanding about fundamental axiomatic differences. For example, the worry about an intelligence explosion is, at its core, I think, a worry about a threshold beyond which we move into a dangerous unknown, with systems smarter than us that may, likely by some unknown mechanism, kill us. In many ways it should sit more comfortably inside the "complex" paradigm, with no well worked-out kill mechanism, an acknowledgement of the irreducible underdetermination of the future, and a recognition of how powerful technologies, phenomena and structures within our complex interconnected system are likely to contribute hugely to risk, than inside the simple paradigm. Conversely, the work that thinkers like Luke Kemp, who ostensibly aligns more with the "complex" paradigm, have done on "agents of doom", which tries to identify the key actors that drive risk (mostly drivers of hazards), probably fits more neatly in the "simple" paradigm than the complex one. I think these cultural splits are important as well, and they probably imply that a lot of us, from across the spectrum, are missing potentially important contributors to existential risk, irrespective of our paradigm.

As a coda to this talk, I would like to briefly summarise Adrian Currie's arguments in his wonderful paper "Existential Risk, Creativity and Well-Adapted Science". This operates perhaps a level above what I have been talking about, concerning what "meta-paradigm" we should take. He suggests that all research has a topographical landscape, with "peaks" representing important findings; research is thus a trade-off between exploring and exploiting this landscape. I think the simple approach is very good at exploiting certain peaks, but particularly bad at understanding the topography of the whole landscape, which I think the complexity paradigm is much better at. But as Currie convincingly argues, this probably isn't sufficient. X-Risk studies is in a relatively novel epistemic situation: the risks it deals with are unique, in the words of Carl Sagan "not readily amenable to experimental verification… at least not more than once". The systems are wild and thus don't favour systematic understanding. We are uncertain not just about the answers to key questions, but about what to ask. It is a crisis field, centred around a goal rather than a discipline; in fact, we are uncertain which disciplines matter most. All of this leaves us in an epistemic situation where uncertainty, and thus creativity, should be at the core of our approach, both to get an understanding of the topography of the landscape and because it stops us getting siloed. On the spectrum of exploring versus exploiting, exploratory approaches should be favoured, because we should reasonably see ourselves as deeply uncertain about nearly everything in X-Risk. Even if people haven't managed to "change our minds" on a contributor to risk, experience should tell us that we are likely to be wrong in ways no one yet understands, and that there are quite probably even bigger peaks out there. We should also be methodological omnivores, happy to use many methodologies, tailoring them to local contexts, with a pluralistic approach to techniques and evidence, increasing the epistemic tools at our disposal. Both of these imply the need for pluralism, rather than the hegemony of any one approach. I am very worried that EA's culture and financial resources are pushing us away from creativity and towards conservatism in the X-Risk space.

In conclusion, this talk hasn't shown that the simple approach is wrong, just that it provides a thoroughly incomplete picture of the world, one insufficient for dealing with the complexity of many drivers of existential risk. This is why I, and many others, call for greater methodological diversity and pluralism, including a concerted effort to come up with better approaches to complex and systemic risk. The simple approach is clearly problematic, but it is far easier to make progress on problems using it; it's like the Newtonian physics of existential risk studies. But to get a more complete picture of the field, we need a more complex approach. Anders Sandberg put this nicely, seeing the "risk network" as having a deeply interconnected core, where the approach of irreducible complexity must dominate; a periphery with fewer connections, where a compounding-risk approach can dominate; and a far-off periphery, with relatively few connections between hazards, where the simple, hazard-centric approaches dominate. The question, probably more axiomatic than anything else, is where the greatest source of risk lies. But both methods of analysis clearly have their place.

As EA dominates the existential risk field, it is our responsibility to promote pluralism, through our discourse and our funding. Note, as a final thing, that this isn't the same as openness to criticism, based on "change my mind" within a set of rules that a narrow range of funders and community leaders set. Rather, we need pluralism, where ideas around existential risk coexist rather than compete, encouraging exploration, creativity and, in the terms of Adrian Currie, "methodological omnivory". There are so few evidentiary feedback loops that a lot of our answers to methodological or quasi-descriptive questions tend to be based on our prior assumptions, as there often isn't enough evidence to hugely shift these, or the evidence can be explained in multiple ways. This means our values and assumptions hugely impact everything, so having a very small group of thinkers and funders dominate and dictate the direction of the field is dangerous, essentially no matter how intelligent and rational we think they are. So we need to be willing not just to tolerate but to fund and promote work on X-Risk that we individually may think is a dead end, and to cede power to increase the creativity possible in the field, because under such uncertainty, promoting creativity and diversity is the correct approach, as hard as that is to accept. How we square this circle with our ethos of prioritisation and effectiveness is a very difficult question, one I don't have the answer to. Maybe it's not possible; but EAs seem to be very good at expanding the definition of what is possible in combatting the world's biggest problems. This is a question that we must pose, or we risk trillions of future lives. Thank you.