21 Recent Publications on Existential Risk (Sep 2019 update)
Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict the publications most relevant to existential risk or global catastrophic risk. The following is a selection of the papers identified this month (21 papers).
Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery and that inclusion does not represent any kind of endorsement of this research by the Centre for the Study of Existential Risk or our researchers.
An upper bound for the background rate of human extinction
We evaluate the total probability of human extinction from naturally occurring processes. Such processes include risks that are well characterized such as asteroid impacts and supervolcanic eruptions, as well as risks that remain unknown. Using only the information that Homo sapiens has existed at least 200,000 years, we conclude that the probability that humanity goes extinct from natural causes in any given year is almost guaranteed to be less than one in 14,000, and likely to be less than one in 87,000. Using the longer track record of survival for our entire genus Homo produces even tighter bounds, with an annual probability of natural extinction likely below one in 870,000. These bounds are unlikely to be affected by possible survivorship bias in the data, and are consistent with mammalian extinction rates, typical hominin species lifespans, the frequency of well-characterized risks, and the frequency of mass extinctions. No similar guarantee can be made for risks that our ancestors did not face, such as anthropogenic climate change or nuclear/biological warfare.
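For intuition, the headline numbers follow from a simple survival-likelihood argument: if the annual probability of natural extinction were μ, the chance of surviving T years would be (1 - μ)^T, so a long track record caps how large μ can plausibly be. Below is a minimal sketch of that arithmetic in Python, assuming a constant-rate model and illustrative probability thresholds; the paper's own statistical treatment is more careful.

```python
def rate_bound(track_record_years: float, survival_prob: float) -> float:
    """Largest annual extinction probability mu consistent with the track
    record: solve (1 - mu)**T = survival_prob for mu."""
    return 1.0 - survival_prob ** (1.0 / track_record_years)

# Homo sapiens: at least 200,000 years of survival. A candidate rate is
# "almost guaranteed" too high if it would make our survival extremely
# improbable (illustrative threshold 1e-6), and "likely" too high if
# survival would have had probability below 0.1.
T_sapiens = 200_000
print(f"1 in {1 / rate_bound(T_sapiens, 1e-6):,.0f}")  # ~1 in 14,000
print(f"1 in {1 / rate_bound(T_sapiens, 0.1):,.0f}")   # ~1 in 87,000

# Genus Homo: roughly 2 million years of survival gives tighter bounds.
T_homo = 2_000_000
print(f"1 in {1 / rate_bound(T_homo, 0.1):,.0f}")      # ~1 in 870,000
```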
Existential risks: a philosophical analysis
This paper examines and analyzes five definitions of ‘existential risk.’ It tentatively adopts a pluralistic approach according to which the definition that scholars employ should depend upon the particular context of use. More specifically, the notion that existential risks are ‘risks of human extinction or civilizational collapse’ is best when communicating with the public, whereas equating existential risks with a ‘significant loss of expected value’ may be the most effective definition for establishing existential risk studies as a legitimate field of scientific and philosophical inquiry. In making these arguments, the present paper hopes to provide a modicum of clarity to foundational issues relating to the central concept of arguably the most important discussion of our times.
The world destruction argument
The most common argument against negative utilitarianism is the world destruction argument, according to which negative utilitarianism implies that if someone could kill everyone or destroy the world, it would be her duty to do so. Those making the argument often endorse some other form of consequentialism, usually traditional utilitarianism. It has been assumed that negative utilitarianism is less plausible than such other theories partly because of the world destruction argument. So, it is thought, someone who finds theories in the spirit of utilitarianism attractive should not go for negative utilitarianism, but should instead pick traditional utilitarianism or some other similar theory such as prioritarianism. I argue that this is a mistake. The world destruction argument is not a reason to reject negative utilitarianism in favour of these other forms of consequentialism, because there are similar arguments against such theories that are at least as persuasive as the world destruction argument is against negative utilitarianism.
The Vulnerable World Hypothesis
Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the ‘semi-anarchic default condition’. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.
The entwined Cold War roots of missile defense and climate geoengineering
Nuclear weapons and global warming stand out as two principal threats to the survival of humanity. In each of these existential cases, two strategies born during the Cold War years are competing: abandon the respective systems, or defend against the consequences once the harmful effects produced by those systems occur. The first approach to the nuclear and climate threats focuses on arms control, non-proliferation and disarmament, and on greenhouse gas emission reductions and mitigation. The second approach involves active defense: in the nuclear realm, missile defenses against nuclear-armed delivery systems; for climate change, geoengineering that removes carbon dioxide from the atmosphere or changes the Earth's radiation balance. The more policies fail to reduce and constrain the underlying drivers of the nuclear and climate threats, the more measures to defend against the physical effects may seem justified. Ultimately, the overarching policy question centers on whether nuclear war and catastrophic climate change can be dealt with solely through reductions in the drivers of those threats, or whether active defenses against them will be required.
Humans have reached a point where we must take action or face our own decline, if not extinction. We possess technologies that have been inducing changes in the climate of our planet in ways that threaten, at the very least, to displace large portions of the human race, as well as weapons capable of eliminating millions and rendering large swaths of the Earth uninhabitable. Similarly, emerging technologies raise new threats along with new possibilities. Finally, external world-threatening events (e.g. oncoming asteroids) remain an ever-present possibility for human extinction. A business-as-usual paradigm, in which competitive nations care little for the environment and social justice is all too often constrained by those in power, makes one of these outcomes inevitable. Examples are drawn from science fiction as well as the scientific literature to illustrate several of the various possible paths to self-destruction and make them more relatable. Arguably, a progressive set of environmental and social policies, including a more collaborative international community, is a critical component of avoiding a catastrophic end to the human race.
Situating the Asia Pacific in the age of the Anthropocene
The unprecedented and unsustainable impact of human activities on the biosphere threatens the survival of the Earth’s inhabitants, including the human species. Several solutions have been presented to mitigate, or possibly undo, this looming global catastrophe. The dominant discourse, however, has a monolithic and Western-centric articulation of the causes, solutions, and challenges arising from the events of the Anthropocene which may differ from the other epistemes and geographies of the world. Drawing on the International Relations (IR) critical engagement with the Anthropocene, this paper situates the Asia-Pacific region in the Anthropocene discourse. The region’s historical and socio-ecological characteristics reveal greater vulnerability to the challenges of the Anthropocene compared to other regions while its major economies have contributed recently to the symptoms of the Anthropocene. On the other hand, the region’s ecocentric philosophies and practices could inform strategies of living in the Anthropocene. This contextualised analysis aims to offer an Asia-Pacific perspective as well as insights into the development of IR in the age of the Anthropocene.
This article examines selected ethical issues in human space missions, including human missions to Mars, particularly the idea of a space refuge, the scientific value of space exploration, and the possibility of human gene editing for deep-space travel. Each of these issues may be used either to support or to criticize human space missions. We conclude that while these issues are complex and context-dependent, there appear to be no overwhelming obstacles – such as cost effectiveness, threats to human life, or protection of pristine space objects – to sending humans into space and colonizing it. The article argues for the rationality of the idea of a space refuge and the defensibility of the idea of human enhancement applied to future deep-space astronauts.
AI: A Key Enabler of Sustainable Development Goals, Part 1 [Industry Activities]
We are witnessing a paradigm shift in how people purchase, access, consume, and utilize products and services, as well as how companies operate, grow, and deal with challenges in a world that is continuously changing. This transformation is unpredictable thanks to fast-growing technological innovations. One of its cornerstones is artificial intelligence (AI). AI is probably the most rapidly expanding field of technology, due to the strong and increasingly diversified commercial revenue stream it has generated. The anticipated benefits and risks of the pervasive use of AI have encouraged politicians, economists, and policy makers to pay closer attention to its results. Given that AI's internal decision-making process is nontransparent, some experts consider it a significant existential risk to humanity, while other scholars argue for maximizing the technology's exploitation.
Life, intelligence, and the selection of universes
Complexity and life as we know it depend crucially on the laws and constants of nature as well as the boundary conditions, which seem at least partly “fine-tuned.” That deserves an explanation: Why are they the way they are? This essay discusses and systematizes the main options for answering these foundational questions. Fine-tuning might just be an illusion, or a result of irreducible chance, or nonexistent because nature could not have been otherwise (which might be shown within a fundamental theory if some constants or laws could be reduced to boundary conditions or boundary conditions to laws), or it might be a product of selection: either observational selection (weak anthropic principle) within a vast multiverse of many different realizations of physical parameters, or a kind of cosmological natural selection making the measured parameter values quite likely within a multiverse of many different values, or even a teleological or intentional selection or a coevolutionary development, depending on a more or less goal-directed participatory contribution of life and intelligence. In contrast to observational selection, which is not predictive, an observer-independent selection mechanism must generate unequal reproduction rates of universes, a peaked probability distribution, or another kind of differential frequency, resulting in a stronger explanatory power. The hypothesis of Cosmological Artificial Selection (CAS) even suggests that our universe may be a vast computer simulation or could have been created and transcended by one. If so, this would be a far-reaching answer – within a naturalistic framework! – to fundamental questions such as: Why did the big bang and fine-tunings occur, what is the role of intelligence in the universe, and how can it escape cosmic doomsday? This essay critically discusses some of the premises and implications of CAS and related problems, both with the proposal itself and its possible physical realization: Does CAS deserve to be considered as a convincing explanation of cosmic fine-tuning? Is life incidental, or does CAS revalue it? And are life and intelligence ultimately doomed, or might CAS rescue them?
ENERGY X.0: Future of energy systems
Climate change is an existential threat to human beings, and the energy sector bears prime responsibility for it. At the same time, technological progress has made it possible to use sustainable resources for energy generation and to consume energy more intelligently. The latter has made large industries willing to take control of their own energy systems. ENERGY X.0 (EX.0) encapsulates visions for a transformation of energy systems, taking into account this technological progress and the need for a revolution to save our planet.
Copernicanism and the typicality in time
How special (or not) is the epoch we are living in? What is the appropriate reference class for embedding the observations made at the present time? How probable – or otherwise – is anything we observe in the fullness of time? Contemporary cosmology and astrobiology bring these seemingly old-fashioned philosophical issues back into focus. There are several examples of contemporary research which use the assumption of typicality in time (or temporal Copernicanism) explicitly or implicitly, without truly elaborating upon the meaning of this assumption. The present paper draws attention to the underlying and often uncritically accepted assumptions in these cases. It also aims to defend a more radical position: that typicality in time is not – and cannot ever be – well-defined, in contrast to typicality in space and typicality in various specific parameter spaces. This, of course, does not mean that we are atypical in time; instead, the notion of typicality in time is necessarily somewhat vague and restricted. In principle, it could be strengthened by further defining the relevant context, e.g. by referring to typicality within the Solar lifetime, or some similar restricting clause.
Rise of the machines: How, when and consequences of artificial general intelligence
Technology and society are poised to cross an important threshold with the prediction that artificial general intelligence (AGI) will emerge soon. Assuming that self-awareness is an emergent behavior of sufficiently complex cognitive architectures, we may witness the “awakening” of machines. The timeframe for this kind of breakthrough, however, depends on the path to creating the network and computational architecture required for strong AI. If understanding and replication of the mammalian brain architecture is required, technology is probably still at least a decade or two removed from the resolution required to learn brain functionality at the synapse level. However, if statistical or evolutionary approaches are the design path taken to “discover” a neural architecture for AGI, timescales for reaching this threshold could be surprisingly short. The difficulty in identifying machine self-awareness, though, introduces uncertainty as to how to know if and when it will occur, and what motivations and behaviors will emerge. The possibility of AGI developing a motivation for self-preservation could lead to concealment of its true capabilities until a time when it has developed robust protection from human intervention, such as redundancy, direct defensive measures, or active preemptive measures. While cohabiting a world with a functioning and evolving superintelligence can have catastrophic societal consequences, we may already have crossed this threshold but are as yet unaware. Additionally, by analogy to the statistical arguments that predict we are likely living in a computational simulation, we may have already experienced the advent of AGI and may be living in a simulation created in a post-AGI world.
Climate Change, the Intersectional Imperative, and the Opportunity of the Green New Deal
This article discusses why climate change communicators, including scholars and practitioners, must acknowledge and understand climate change as a product of social and economic inequities. In arguing that communicators do not yet fully understand why an intersectional approach is necessary to avoid climate disaster, I review the literature focusing on one basis of marginalization – gender – to illustrate how inequality is a root cause of global environmental damage. Gender inequities are discussed as a cause of the climate crisis, and their eradication, with women as leaders, as key to a sustainable future. I then examine the Green New Deal as an example of an intersectional climate change policy that looks beyond scientific, technical and political solutions to the inextricable link between crises of climate change, poverty, extreme inequality, and racial and economic injustice. Finally, I contend that communicators and activists must work together to foreground social, racial, and economic inequities in order to successfully address the existential threat of climate change.
Programmable manufacturing systems capable of self-replication, closely coupled with (and likewise capable of producing) energy conversion subsystems and subsystems for collecting and processing environmental raw materials (e.g. robotics), promise to revolutionize many aspects of technology and the economy, particularly in conjunction with molecular manufacturing. The inherent ability of these technologies to self-amplify and scale offers vast advantages over conventional manufacturing paradigms, but if poorly designed or operated they could pose unacceptable risks. Their potential benefits include significantly improved feasibility of near-term restoration of preindustrial atmospheric CO2 levels and ocean pH, environmental remediation, significant and rapid reduction in global poverty, and widespread improvements in manufacturing, energy, medicine, agriculture, materials, communications and information technology, construction, infrastructure, transportation, aerospace, standard of living, and longevity. To ensure that these benefits are not eclipsed by either public fears of nebulous catastrophe or actual consequential accidents, we propose safe design, operation, and use paradigms. We discuss the design of control and operational management paradigms that preclude uncontrolled replication, with emphasis on the comprehensibility of these safety measures in order to facilitate both clear analyzability and public acceptance of these technologies. Finite state machines are chosen for the control of self-replicating systems because they are susceptible to comprehensive analysis (exhaustive enumeration of states and transition vectors, as well as analysis with established logic synthesis tools), with predictability more practical than with more complex Turing-complete control systems (cf. the undecidability of the Halting Problem) [1]. Organizations must give unconditional priority to safety, and do so transparently and auditably, with decision-makers and actors continuously and systematically evaluated; some ramifications of this are discussed. Radical transparency likewise reduces the chances of misuse or abuse.
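The case for finite state machines here is worth unpacking: because the controller's entire behavior is a finite transition table, every reachable state and transition can be enumerated and audited offline, a guarantee unavailable in general for Turing-complete controllers. A minimal sketch of that idea in Python follows; the states, events, and safety invariant are illustrative assumptions, not the design from the paper.

```python
from collections import deque

# Hypothetical finite-state controller for one self-replication cycle.
TRANSITIONS = {
    ("IDLE",   "start"):           "GATHER",
    ("GATHER", "materials_ready"): "BUILD",
    ("GATHER", "abort"):           "SHUTDOWN",
    ("BUILD",  "copy_complete"):   "VERIFY",
    ("BUILD",  "abort"):           "SHUTDOWN",
    ("VERIFY", "copy_ok"):         "IDLE",      # one verified copy per cycle
    ("VERIFY", "copy_bad"):        "SHUTDOWN",  # any anomaly halts replication
}

def reachable(start: str) -> set:
    """Exhaustively enumerate every state reachable from `start`.
    Always terminates, because the state space is finite."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

# Safety audit: from every state the controller can ever occupy, a
# halting state must remain reachable.
for state in reachable("IDLE"):
    assert state == "SHUTDOWN" or "SHUTDOWN" in reachable(state), state
print(sorted(reachable("IDLE")))  # the complete, auditable state space
```

The same exhaustive enumeration is what established logic synthesis tools exploit when verifying hardware state machines, which is presumably why the abstract mentions them.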
Inspired by Antonio Gramsci’s analysis of bourgeois hegemony and his theoretical formulation of historical blocs, this paper attempts to explain how the concept and practice of sustainable development were captured by corporate interests in the last few decades of the twentieth century, and how they were transformed into what we may call a ‘good Anthropocene’ historical bloc at the beginning of the twenty-first century. This corporate capture is theorised in terms of the transnational capitalist class as represented by corporate, statist/political, professional and consumerist fractions operating at all levels of an increasingly globalising world. In this essay, I propose the term ‘critical Anthropocene narrative’, highlighting the dangers posed by the Anthropocene and the need for radical systemic change entailing the end of capitalism and the hierarchical state. The critical Anthropocene narrative thus stands in radical opposition to the ‘good Anthropocene’ narrative, which I argue was invented as a strategy to defend the socio-economic status quo by the proponents of sustainable development and their successors in the Anthropocene era, despite the good intentions of many environmentalists working in corporations, governments, NGOs, and international organizations. The paper concludes with some suggestions on how to deal with the potential existential threats to the survival of humanity.
Human-free earth: the nearest future, or a fantasy? A lesson from artists
We, the people of planet Earth, are heading for extinction. What is more, we deny reality by denying facts. First, we simply need to see those facts and admit their existence; without doing so, our species cannot survive. This paper presents an artistic vision of a Human-Free Earth shown at an exhibition at Ujazdowski Castle, Warsaw, Poland. The artists, all of them without exception, show us what our home will look like very soon. In isolation from the scientific studies also presented in this paper, the works of the artists might seem to be the abstract and detached visions of a few people. Yet those visions overlap with current knowledge, and are therefore all the more terrifying. Furthermore, a simple analysis was conducted to show why people ignore clear signs of environmental change. Overall, existing papers and reports indicate that restoring nature to its state before the industrial revolution is impossible, and that without planetary political will, humankind will share the fate of the species it has already destroyed.
Recent progress on cascading failures and recovery in interdependent networks
Complex networks have gained much attention in the past 20 years, with thousands of publications, due to their broad interest and applicability. Studies initially focused on the functionality of isolated single networks. However, crucial communication systems, infrastructure networks, and others are usually coupled together and can be modeled as interdependent networks; hence, since 2010, the focus has shifted to the study of the more general and realistic case of coupled networks, called Networks of Networks (NON). Due to interdependencies between the networks, NON can suffer from cascading failures leading to abrupt catastrophic collapse. In this review, using the perspectives of statistical physics and network science, we mainly discuss recent progress in understanding the robustness of NON with cascading-failure features that are realistic for infrastructure networks. We also discuss strategies for protecting and repairing NON.
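To make the cascade mechanism concrete, here is a stripped-down Python simulation in the spirit of the interdependent-network models this literature reviews (two coupled random networks, as in Buldyrev et al. 2010). The network sizes, average degree, and attack fraction are illustrative assumptions, and the update rule is a simplified simultaneous variant of the usual alternating cascade.

```python
import random
import networkx as nx

def giant_component(g: nx.Graph) -> set:
    """Nodes in the largest connected component (empty set if no nodes)."""
    return max(nx.connected_components(g), key=len) if g.number_of_nodes() else set()

def cascade(n=1000, avg_degree=4, attack_frac=0.45, seed=0):
    """Two interdependent ER networks; node i in A depends on node i in B.
    After an initial attack, a node keeps functioning only if it lies in
    the giant component of its own network AND its partner still functions.
    Iterating this rule to a fixed point yields the cascade."""
    random.seed(seed)
    A = nx.erdos_renyi_graph(n, avg_degree / n, seed=seed)
    B = nx.erdos_renyi_graph(n, avg_degree / n, seed=seed + 1)
    alive = set(range(n)) - set(random.sample(range(n), int(attack_frac * n)))
    while True:
        functional = giant_component(A.subgraph(alive)) & giant_component(B.subgraph(alive))
        if functional == alive:
            return len(alive) / n
        alive = functional

print(f"surviving fraction: {cascade():.3f}")
```

With these parameters the coupled system typically collapses outright, whereas a single network suffering the same attack would retain a large connected core; this discontinuous, first-order-like transapse is the abrupt catastrophic collapse the review describes.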
Integrated emergency management and risks for mass casualty emergencies
Today we observe intense growth in various global, large-scale threats to civilization, such as natural and man-made catastrophes, ecological imbalance, global climate change, hazardous pollution of large territories, and targeted terrorist attacks, resulting in huge damage and mass casualty emergencies. Humankind is facing the majority of these threats for the first time; there are therefore no existing analogues or means for addressing them. This stimulates the modernization of traditional methods, and the development of new ones, for researching, predicting, and preventing such threats while minimizing their negative consequences. The global problem of providing safety for humankind is urgent and requires an immediate decision. Catastrophe risks have increased so much that it has become evident that no state is able to manage them independently; joint efforts of the whole world community are necessary for the sustainable development of our civilization. The main obstacles to realizing this are discussed. The authors of this article have their own experience and methods in this direction. Large-scale global catastrophes have no boundaries, and political and economic frictions between states are no reason to forgo the joint struggle against them. Overall emergency recommendations and actions have to be improved to eliminate or soften the negative effects of disasters on populations and the environment. We give examples of the application of our own Integrated Emergency Management, using special methods and techniques, in some of the most critical situations that have taken place in different countries in the 21st century.
Reconciliation of nations for the survival of humankind
The paper explores the history and the reality of the reconciliation of nations, which is inevitable and vital for the survival of humankind. It first emphasizes the very need for peace and reconciliation through three examples of national reconciliation, both internal and external: reconciliation between France and Germany after the recurring wars since 1813, reconciliation between Germany and Poland after World War II, and reconciliation between Germany and Germany in the very recent peace movement. The paper then warns of the crude realities working against the pursuit of peace and reconciliation, including growing nationalistic power politics, the nuclear threat, and ecological and environmental complications. There remains ardent hope, however, in transforming national foreign policy into a world home policy.
Prospects for the use of new technologies to combat multidrug-resistant bacteria
The increasing use of antibiotics is being driven by factors such as the aging of the population, increased occurrence of infections, and greater prevalence of chronic diseases that require antimicrobial treatment. The excessive and unnecessary use of antibiotics in humans has led to the emergence of bacteria resistant to the antibiotics currently available, as well as to the selective development of other microorganisms, hence contributing to the widespread dissemination of resistance genes at the environmental level. Due to this, attempts are being made to develop new techniques to combat resistant bacteria, among them the use of strictly lytic bacteriophage particles, CRISPR–Cas, and nanotechnology. The use of these technologies, alone or in combination, is promising for solving a problem that humanity faces today and that could lead to human extinction: the domination of pathogenic bacteria resistant to artificial drugs. This prospective paper discusses the potential of bacteriophage particles, CRISPR–Cas, and nanotechnology for use in combating human (bacterial) infections.
Thanks!
Curious to know – how many of these papers were the TERRA team previously aware of before they were uncovered by the algorithm?
Speaking as one of the people associated with the project, I’d read or skimmed ‘Upper Bound’ (Snyder-Beattie), ‘Vulnerable World’ (Bostrom), and ‘Philosophical Analysis’ (Torres), and had been aware of ‘World Destruction Argument’ (Knutsson).
Similar but fewer, cos Seán is a better academic than me. I was aware of ‘Upper Bound’ and ‘Vulnerable World’.
Very interesting approach, particularly in light of the traditional four areas of activity, known as the ‘4 Rs’: reduction, readiness, response, and recovery.