Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019
We have just prepared a Six Month Report for our Management Board. This is a public version of that Report. We send short monthly updates in our newsletter – subscribe here.
Contents:
Overview
Policy Engagement
Academic and Industry Engagement
Public Engagement
Recruitment and research team
Expert Workshops and Public Events
Upcoming activities
Publications
1. Overview
The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to civilizational collapse or human extinction. We study existential risks, develop collaborative strategies to reduce them, and foster a global community of academics, technologists and policy-makers working to tackle them. Our research focuses on Global Catastrophic Biological Risks, Extreme Risks and the Global Environment, Risks from Artificial Intelligence, and Managing Extreme Technological Risks.
Our last Management Board Report was in October 2018. Over the last six months, we have continued to advance existential risk research and grow the field. Highlights include:
Publication of the Extremes book, seven papers in venues such as Nature Machine Intelligence, and a Special Issue of Foresight.
Engagement with global policymakers and industry leaders at conferences and in one-on-one meetings.
Announcement that Prof. Dasgupta will lead the UK Government’s Global Review of the Economics of Biodiversity.
Submission of advice to key US, UN and EU advisory bodies.
Hosting of several expert workshops, helping us, inter alia, to encourage leading machine learning researchers to produce over 20 AI safety papers.
Welcoming of new research staff and visitors.
Production of a report on business school rankings, which contributed to the two leading business school rankers reviewing their methodologies.
Public engagement through media coverage and the exhibition ‘Ground Zero Earth’.
2. Policy Engagement:
We have had the opportunity to speak directly with policymakers and institutions across the world who are grappling with the difficult and novel challenge of how to unlock the socially beneficial aspects of new technologies while mitigating their risks. Through advice and discussions, we have the opportunity to reframe the policy debate and, we hope, to shape the trajectory of these technologies themselves.
Prof. Sir Partha Dasgupta, the Chair of CSER’s Management Board, will lead the UK Government’s comprehensive global review of the link between biodiversity and economic growth. The aim is to “explore ways to enhance the natural environment and deliver prosperity”. The announcement was made by the Chancellor of the Exchequer in the Spring Statement.
Submitted advice to the UN High-level Panel on Digital Cooperation (Luke Kemp, Haydn Belfield, Seán Ó hÉigeartaigh, Zoe Cremer). CSER and FHI researchers laid out the challenges posed by AI and offered some options for the global, international governance of AI. The Secretary-General established the Panel, which Melinda Gates and Jack Ma co-chair. The Panel chose this advice as one of five from over 150 submissions to be highlighted at a ‘virtual town hall’. The advice may influence global policy-makers and help set the agenda. Read Advice.
Submitted advice to the EU High-Level Expert Group on Artificial Intelligence. Haydn Belfield and Shahar Avin responded to the Draft Ethics Guidelines for Trustworthy AI, drawing attention to the recommendations in our report The Malicious Use of Artificial Intelligence. This helped influence the EU’s Ethics Guidelines, affecting behaviour across Europe. Read Advice.
The All-Party Parliamentary Group for Future Generations, set up by Cambridge students mentored by CSER researchers, held an event on Global Pandemics: Is the UK Prepared? in Parliament in November 2018, continuing our engagement with UK parliamentarians on existential risk topics. Speakers: Dr Catherine Rhodes (CSER), Dr Piers Millett (FHI), and Professor David Heymann CBE (London School of Hygiene and Tropical Medicine). Report here. The APPG has also recently hired two Coordinators, Sam Hilton and Caroline Baylon.
Submitted advice to the US Government’s Bureau of Industry and Security on “Review of Controls on Certain Emerging Technologies” (Sam Weiss Evans). The Bureau is the part of the US government that administers the US export control regime. Read Advice.
CSER researchers advised the Centre for Data Ethics and Innovation (the UK’s national AI advisory body). This kind of engagement is crucial to ensuring research papers actually have an impact, and do not just gather dust on the shelf.
Seán Ó hÉigeartaigh was one of only 50 experts invited to participate in the second Global AI Governance Forum at the World Government Summit in Dubai. The Summit is dedicated to shaping the future of governments worldwide.
CSER researchers attended invite-only events on Modern Deterrence (Ditchley Park) and High-Impact Bio-threats (Wilton Park).
At the United Nations, CSER researchers attended the negotiations on Lethal Autonomous Weapons Systems (LAWS) and the Biological Weapons Convention annual meeting of states parties. They also engaged with the United Nations Institute for Disarmament Research (UNIDIR).
CSER researchers continued meetings with top UK civil servants as part of the policy fellows program organized by the Centre for Science and Policy (CSaP).
3. Academic and Industry Engagement:
As an interdisciplinary research centre within Cambridge University, we seek to grow the academic field of existential risk research, so that it receives the rigorous and detailed attention it deserves. Researchers also continued their extensive and deep collaboration with industry. Extending our links improves our research by exposing us to the cutting edge of industrial R&D, and helps to nudge powerful companies towards more responsible practices.
Several researchers participated in the Beneficial AI Puerto Rico Conference, engaging with industry and academic leaders and shaping the agenda of the AI risk community for the next two years. Seán Ó hÉigeartaigh and Shahar Avin gave keynotes. This was the third conference organised by the Future of Life Institute: the first, in 2015, produced a research agenda for safe and beneficial AI, endorsed by thousands of researchers; the second, in 2017, produced the Asilomar AI Principles.
Visiting researchers: Dr Kai Spiekermann from LSE visited January-March to work on a paper on ‘irreversible losses’; Prof Hiski Haukkala, former Foreign Policy Adviser to the President of Finland; and Dr Simona Chiodo and Dr Daniele Chiffi of the €9m Territorial Fragility project at the Politecnico di Milano.
Seán Ó hÉigeartaigh attended the Partnership on AI meeting and contributed to the creation of several AI/AGI safety- and strategy-relevant project proposals with the Safety-Critical AI working group.
Several CSER researchers contributed to the mammoth Ethically Aligned Design, First Edition: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. It was produced by IEEE, the world’s largest technical professional organization. The release culminates a three-year, global iterative process involving thousands of experts.
Luke Kemp and Shahar Avin participated in a high-level AI and political security workshop led by Prof Toni Erskine at the Coral Bell School of Asia Pacific Affairs at the Australian National University.
Shahar Avin continued running ‘scenario exercises’ exploring different possible AI scenarios. He has run over a dozen so far, with some participants from leading AI labs. He aims to explore the realm of possibilities, and educate participants on some of the challenges ahead.
We continued our support for the student-run Engineering Safe AI reading group. The group exposes master’s and PhD students to interesting AI safety research, encouraging them to consider careers in the area.
Catherine Rhodes had several meetings with groups in Washington DC working on global catastrophic biological risks, governance of dual-use research in the life sciences, and extreme technological risk more broadly.
Lalitha Sundaram is working with South African groups to boost their capacity in low-cost viral diagnostics.
We will partner with the Journal of Science Policy & Governance to produce a Special Issue on governance for dual-use technologies. This Special Issue will encourage students to engage with existential risk research and help us identify future talent.
4. Public Engagement:
Luke Kemp, Lauren Holt, and Simon Beard had articles published on the BBC with over 1.5m views: Are we on the road to civilisation collapse? and What are the biggest threats to humanity?
We have published 11 videos of talks given at 2018’s Cambridge Conference on Catastrophic Risk, our major international conference.
Lalitha Sundaram and Simon Beard were featured on the leading BBC Radio 4 programme Analysis, asking: Will humans survive the century?
Seán Ó hÉigeartaigh and other researchers were featured in a Cambridge University video on life in the age of intelligent machines.
We are able to reach far more people with our research online:
14,000 website visitors over the last 90 days.
6,602 newsletter subscribers, up from 4,863 in Oct 2016.
6,343 Twitter followers.
2,208 Facebook followers.
Catherine Rhodes was interviewed on the Future of Life Institute Podcast about Governing Biotechnology.
Simon Beard produced a BBC radio programme on “I love my children but are they the biggest moral mistake I ever made?”
Catherine Rhodes, Des Browne, Bill Sutherland and David Aldridge had a letter published in Nature on Brexit threatening biosecurity.
Lauren Holt, Paul Upchurch and Simon Beard published a Conversation article on global systems failure and the extinction of the dinosaurs.
Lord Martin Rees was interviewed by Christiane Amanpour on CNN and by Stephen Sackur on BBC HARDtalk, as well as by the Economist, Talking Politics, Canadian national radio, Academia Europaea, and the Guardian (video). He gave keynotes at the Long Now Foundation (video), the European Parliament (video) and the House of Lords (video).
5. Recruitment and research team
New Postdoctoral Research Associates:
Dr Ellen Quigley is working on how to address climate change and biodiversity risks through the investment policies and practices of institutional investors. She was previously a CSER Research Affiliate. She also collaborates with the Centre for Endowment Asset Management at the Judge Business School, which jointly funds her work. She recently published the Business School Rankings for the 21st Century report at events in Davos and Shanghai. Four days later, the Financial Times announced a “complete review of their methodology”, supported by a letter in the FT signed by two dozen business leaders.
Dr Jess Whittlestone is working on a research project combining foresight and policy/ethics for AI, in collaboration with the Centre for the Future of Intelligence (CFI), where she is a postdoctoral researcher. She is the lead author of a major new report (and paper), Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research. It surveys the dozen sets of AI principles proposed over the last two years and suggests that the next step for the field of AI ethics is to explore the tensions that arise as we try to implement principles in practice.
Visiting researchers:
Lord Des Browne, former UK Secretary of State for Defence (2006-2008) and Vice Chair of the Nuclear Threat Initiative. Lord Browne is involved with the new Biosecurity Risk Initiative at St Catharine’s College (BioRISC), and will be based at CSER for around a day a week.
Phil Torres, visiting March-June. Author of Morality, Foresight, and Human Flourishing (2017) and The End: What Science and Religion Tell Us about the Apocalypse (2016). He will work on co-authored papers with Simon Beard and on a new book.
Dr Olaf Corry, visiting March-September. Associate Professor in the Department of Political Science, Copenhagen University. With a background in the international politics of climate change, he will be researching solar geoengineering politics.
Rumtin Sepasspour, visiting Spring/Summer (four months). A Foreign Policy Adviser in the Australian Prime Minister’s Office, he will focus on enhancing CSER researchers’ capability to develop policy ideas.
Dr Eva Vivalt, visiting June. Assistant Professor (Economics) at the Australian National University, PI on Y Combinator Research’s basic income study, and founder of AidGrade, a research institute that generates and synthesizes evidence in international development.
6. Expert Workshops and Public Events:
November, January: Epistemic Security Workshops (led by Dr Avin). Part of a series of workshops co-organised with the UK’s Alan Turing Institute, looking at the changing threat landscape of information campaigns and propaganda, given current and expected advances in machine learning.
January: SafeAI 2019 Workshop (led by Dr Ó hÉigeartaigh and colleagues) at the Association for the Advancement of Artificial Intelligence’s (AAAI) Conference. AAAI is one of the four most important AI conferences globally. These regular workshops embed safety in the wider field, and provide a publication venue. The workshop featured over 20 cutting-edge papers in AI safety, and encouraged leading AI researchers to publish on AI safety.
February-March: Ground Zero Earth Exhibition. Curated by CSER Research Affiliate Yasmine Rix, held in collaboration with CRASSH. Five artists explored existential risk. The exhibition was held at the Alison Richard Building, home to the Politics and International Studies Department. The exhibition engaged academics and the public in our research. The launch event was featured on BBC Radio. It closed with a ‘Rise of the Machines’ short film screening. Read overview.
March: Extremes Book Launch. The book, edited by Julius Weitzdörfer and Duncan Needham, draws on the 2017 Darwin College Lecture Series Julius co-organised. It features contributions from Emily Shuckburgh, Nassim Nicholas Taleb, David Runciman, and others. Read more.
28-31 March: Augmented Intelligence Summit. The Summit brought together a multi-disciplinary group of policy, research, and business leaders to imagine and interact with a simulated model of a positive future for our global society, economy, and politics, through the lens of advanced AI. Dr Avin was on the Steering Committee, delivered a keynote, and ran a scenario simulation. More.
3-5 April: EiM 2: The second meeting on Ethics in Mathematics. Dr Maurice Chiodo and Dr Piers Bursill-Hall from the Faculty of Mathematics in Cambridge have been spearheading an effort to teach responsible behaviour and ethical awareness to mathematicians. CSER supported the workshop. More.
5-6 April: Tools for Building Trust in AI Development (co-led by Shahar Avin). This two-day workshop convened some of the world’s top experts in AI, security, and policy to survey existing mechanisms for trust-building in AI and to develop a research agenda for designing new ones.
7. Upcoming activities
Three more books will be published this year:
Fukushima and the Law is edited by Julius Weitzdörfer and Kristian Lauta, and draws upon a 2016 workshop Fukushima – Five Years On, which Julius co-organised.
Biological Extinction is edited by Partha Dasgupta, and draws upon the 2017 workshop with the Vatican’s Pontifical Academy of Sciences he co-organised.
Time and the Generations: Population Ethics for a Diminishing Planet (New York: Columbia University Press), by Partha Dasgupta, based on his Kenneth Arrow Lectures delivered at Columbia University.
Upcoming events:
21 May: Local Government Climate Futures (led by Simon Beard with Anne Miller).
6-7 June: Evaluating Extreme Technological Risks workshop (led by Simon Beard).
26 June: The Centre for Science and Policy (CSaP) Conference. CSER is partnering on a panel at the conference focusing on methods and techniques for forecasting extreme risks.
26-27 August: Decision Theory & the Future of Artificial Intelligence Workshop (led by Huw Price and Yang Liu). The third workshop in a series bringing together philosophers, decision theorists, and AI researchers to promote research at the nexus of decision theory and AI. Co-organised with the Munich Center for Mathematical Philosophy.
Timing to be confirmed:
Summer: The next in the Cambridge² workshop series, co-organised by the MIT-IBM Watson AI Lab and CFI.
Summer: Culture of Science—Security and Dual Use Workshop (led by Dr Evans).
Summer/Autumn: Biological Extinction symposium, around the publication of Sir Partha’s book.
Autumn: Horizon-Scanning workshop (led by Dr Kemp).
April 2020: CSER’s next international conference: the 2020 Cambridge Conference on Catastrophic Risk.
8. Publications
Needham, D. and Weitzdörfer, J. (Eds). (2019). Extremes. Cambridge University Press.
Humanity is confronted by and attracted to extremes. Extreme events shape our thinking, feeling, and actions; they echo in our politics, media, literature, and science. We often associate extremes with crises, disasters, and risks to be averted, yet extremes also have the potential to lead us towards new horizons. Featuring essays by leading intellectuals and public figures (like Emily Shuckburgh, Nassim Nicholas Taleb and David Runciman) arising from the 2017 Darwin College Lectures, this volume explores ‘extreme’ events.
Cave, S. and Ó hÉigeartaigh, S. (2019). Bridging near-and long-term concerns about AI. Nature Machine Intelligence 1:5.
We were invited to contribute a paper to the first issue of the new Nature journal, Nature Machine Intelligence.
“Debate about the impacts of AI is often split into two camps, one associated with the near term and the other with the long term. This divide is a mistake — the connections between the two perspectives deserve more attention.”
Häggström, O. and Rhodes, C. (2019). Special Issue: Existential risk to humanity. Foresight.
Häggström, O. and Rhodes, C. (2019). Guest Editorial. Foresight.
“We are not yet at a stage where the study of existential risk is established as an academic discipline in its own right. Attempts to move in that direction are warranted by the importance of such research (considering the magnitude of what is at stake). One such attempt took place in Gothenburg, Sweden, during the fall of 2017: an international guest researcher program on existential risk at Chalmers University of Technology and the University of Gothenburg, featuring daily seminars and other research activities over the course of two months, with Anders Sandberg serving as scientific leader of the program and Olle Häggström as chief local organizer, and with participants from a broad range of academic disciplines. The nature of this program brought substantial benefits in community building and in building momentum for further work in the field: of which the contributions here are one reflection. The present special issue of Foresight is devoted to research carried out and/or discussed in detail at that program. All in all, the issue collects ten papers that have made it through the peer review process.”
Beard, S. (2019). What Is Unfair about Unequal Brute Luck? An Intergenerational Puzzle. Philosophia.
“According to Luck egalitarians, fairness requires us to bring it about that nobody is worse off than others where this results from brute bad luck, but not where they choose or deserve to be so. In this paper, I consider one type of brute bad luck that appears paradigmatic of what a Luck Egalitarian ought to be most concerned about, namely that suffered by people who are born to badly off parents and are less well off as a result. However, when we consider what is supposedly unfair about this kind of unequal brute luck, luck egalitarians face a dilemma. According to the standard account of luck egalitarianism, differential brute luck is unfair because of its effects on the distribution of goods. Yet, where some parents are worse off because they have chosen to be imprudent, it may be impossible to neutralize these effects without creating a distribution that seems at least as unfair. This, I argue, is problematic for luck egalitarianism. I, therefore, explore two alternative views that can avoid this problem. On the first of these, proposed by Shlomi Segall, the distributional effects of unequal brute luck are unfair only when they make a situation more unequal, but not when they make it more equal. On the second, it is the unequal brute luck itself, rather than its distributional effects, that is unfair. I conclude with some considerations in favour of this second view, while accepting that both are valid responses to the problem I describe.”
Beard, S. (2019). Perfectionism and the Repugnant Conclusion. The Journal of Value Inquiry.
“The Repugnant Conclusion and its paradoxes pose a significant problem for outcome evaluation. Derek Parfit has suggested that we may be able to resolve this problem by accepting a view he calls ‘Perfectionism’, which gives lexically superior value to ‘the best things in life’. In this paper, I explore perfectionism and its potential to solve this problem. I argue that perfectionism provides neither a sufficient means of avoiding the Repugnant Conclusion nor a full explanation of its repugnance. This is because even lives that are ‘barely worth living’ may contain the best things in life if they also contain sufficient ‘bad things’, such as suffering or frustration. Therefore, perfectionism can only fully explain or avoid the Repugnant Conclusion if combined with other claims, such as that bad things have an asymmetrical value relative to many good things. This combined view faces the objection that any such asymmetry implies Parfit’s ‘Ridiculous Conclusion’. However, I argue that perfectionism itself faces very similar objections, and that these are question-begging against both views. Finally, I show how the combined view that I propose not only explains and avoids the Repugnant Conclusion but also allows us to escape many of its paradoxes as well.”
Avin, S. (2018). Mavericks and lotteries. Studies in History and Philosophy of Science Part A.
“In 2013 the Health Research Council of New Zealand began a stream of funding entitled ‘Explorer Grants’, and in 2017 changes were introduced to the funding mechanisms of the Volkswagen Foundation ‘Experiment!’ and the New Zealand Science for Technological Innovation challenge ‘Seed Projects’. All three funding streams aim at encouraging novel scientific ideas, and all now employ random selection by lottery as part of the grant selection process. The idea of funding science by lottery emerged independently in several corners of academia, including in philosophy of science. This paper reviews the conceptual and institutional landscape in which this policy proposal emerged, how different academic fields presented and supported arguments for the proposal, and how these have been reflected (or not) in actual policy. The paper presents an analytical synthesis of the arguments presented to date, notes how they support each other and shape policy recommendations in various ways, and where competing arguments highlight the need for further analysis or more data. In addition, it provides lessons for how philosophers of science can engage in shaping science policy, and in particular, highlights the importance of mixing complementary expertise: it takes a (conceptually diverse) village to raise (good) policy.”
Avin, S. (2019). Exploring artificial intelligence futures. Journal of Artificial Intelligence Humanities Vol.2.
“Artificial intelligence technologies are receiving high levels of attention and ‘hype’, leading to a range of speculation about futures in which such technologies, and their successors, are commonly deployed. By looking at existing AI futures work, this paper surveys, and offers an initial categorisation of, several of the tools available for such futures-exploration, in particular those available to humanities scholars, and discusses some of the benefits and limitations of each. While no tools exist to reliably predict the future of artificial intelligence, several tools can help us expand our range of possible futures in order to reduce unexpected surprises, and to create common languages and models that enable constructive conversations about the kinds of futures we would like to occupy or avoid. The paper points at several tools as particularly promising and currently neglected, calling for more work in data-driven, realistic, integrative, and participatory scenario role-plays.”
Lewis, S.C., Perkins-Kirkpatrick, S.E., Althor, G., King, A.D., and Kemp, L. (2019). Assessing contributions of major emitters’ Paris-era decisions to future temperature extremes. Geophysical Research Letters.
“Temperature extremes can damage aspects of human society, infrastructure, and our ecosystems. The frequency, severity, and duration of high temperatures are increasing in some regions and are projected to continue increasing with further global temperature increases as greenhouse gas emissions rise. While the international Paris Agreement aims to limit warming through emissions reduction pledges, none of the major emitters has made commitments that are aligned with limiting warming to 2 °C. In this analysis, we examine the impact of the world’s three largest greenhouse gas emitters’ (EU, USA, and China) current and future decisions about carbon dioxide emissions on the occurrence of future extreme temperatures. We show that future extremes depend on the emissions decisions made by the major emitters. By implementing stronger climate pledges, major emitters can reduce the frequency of future extremes and their own calculated contributions to these temperature extremes.”
Hernández-Orallo, J., Martínez-Plumed, F., Avin, S., and Ó hÉigeartaigh, S. (2019). Surveying Safety-relevant AI Characteristics. Proceedings of the AAAI Workshop on Artificial Intelligence Safety 2019.
Shortlisted for Best Paper Prize.
“The current analysis in the AI safety literature usually combines a risk or safety issue (e.g., interruptibility) with a particular paradigm for an AI agent (e.g., reinforcement learning). However, there is currently no survey of safety-relevant characteristics of AI systems that may reveal neglected areas of research or suggest to developers what design choices they could make to avoid or minimise certain safety concerns. In this paper, we take a first step towards delivering such a survey, from two angles. The first features AI system characteristics that are already known to be relevant to safety concerns, including internal system characteristics, characteristics relating to the effect of the external environment on the system, and characteristics relating to the effect of the system on the target environment. The second presents a brief survey of a broad range of AI system characteristics that could prove relevant to safety research, including types of interaction, computation, integration, anticipation, supervision, modification, motivation and achievement. This survey enables further work in exploring system characteristics and design choices that affect safety concerns.”
Report: Pitt-Watson, D. and Quigley, E. (2019). Business School Rankings for the 21st Century.
“This paper addresses the question of how business schools, and the courses they offer, are evaluated and ranked. The existing benchmarking systems, many of which are administered by well-respected media institutions, appear to have a strong motivational effect for administrators and prospective students alike. Many of the rankings criteria currently in use were developed years or decades ago, and use simple measures such as salary and salary progression. Less emphasis has been placed on what is taught and learned at the schools. This paper argues that, given the influence of the ranking publications, it is time for a review of the way they evaluate business education. What follows is meant to contribute to a fruitful ongoing discussion about the future of business schools in our current century.”
What is the relevance of “the link between biodiversity and economic growth” to existential risk? It is not immediately obvious to me.
Thanks for the question. Biodiversity loss and associated catastrophic ecosystem shifts are contributors to existential risk. Partha’s review may influence UK and international policy.
See:
https://www.cser.ac.uk/news/dasgupta-lead-uk-review-eco-biodiversity/
https://www.cser.ac.uk/resources/existential-risk-due-ecosystem-collapse/
https://www.cser.ac.uk/resources/biological-extinction-proceedings/
We also have further publications forthcoming on the link between biodiversity and existential risk.
Can you explain what the mechanism is whereby biodiversity loss creates existential risk? And if biodiversity loss is an existential risk, how big a risk is it? Should 80k be getting people to go into conservation science or not?
There are independent reasons to think that the risk is negligible. Firstly, according to Wikipedia, during the Eocene epoch (roughly 56-34m years ago), there were thousands fewer genera than today. We have made ~1% of species extinct, and we would have to continue at current rates of species extinction for at least 200 years to return to Eocene levels of biodiversity. And yet, even though it was significantly warmer than today, the Eocene marked the dawn of thousands of new species. So, why would we expect the world 200 years hence to be inhospitable to humans if it wasn’t inhospitable for all of the species emerging in the Eocene, which are/were significantly less numerous than humans and significantly less capable of a rational response to problems?
Secondly, as far as I am aware, evidence for pressure-induced non-linear ecosystem shifts is very limited. This is true for a range of ecosystems. Linear ecosystem damage seems to be the norm. If so, this leaves more scope for learning about the costs of our damage to ecosystems and correcting any damage we have done.
Thirdly, ecosystem services are overwhelmingly a function of the relations within local ecosystems, rather than of global trends in biodiversity. Upon discovering Hawaii, the Polynesians eliminated so many species that global decadal extinction rates would have been exceptional. This has next to no bearing on ecosystem services outside Hawaii. Humanity is an intelligent species and will be able to see if other regions are suffering from biodiversity loss and make adjustments accordingly. Why would all regions be so stupid as to ignore lessons from elsewhere? Also, is biodiversity actually decreasing in the rich world? I know forest cover is increasing in many places. Population is set to decline in many rich countries in the near future, and environmental impact per person is declining on many metrics.
I also find it surprising that you cite the Kareiva and Carranza paper in support of your claims, for this paper in fact directly contradicts them:
“The interesting question is whether any of the planetary thresholds other than CO2 could also portend existential risks. Here the answer is not clear. One boundary often mentioned as a concern for the fate of global civilization is biodiversity (Ehrlich & Ehrlich, 2012), with the proposed safety threshold being a loss of greater than 0.001% per year (Rockström et al., 2009). There is little evidence that this particular 0.001% annual loss is a threshold—and it is hard to imagine any data that would allow one to identify where the threshold was (Brook, Ellis, Perring, Mackay, & Blomqvist, 2013; Lenton & Williams, 2013). A better question is whether one can imagine any scenario by which the loss of too many species leads to the collapse of societies and environmental disasters, even though one cannot know the absolute number of extinctions that would be required to create this dystopia.
While there are data that relate local reductions in species richness to altered ecosystem function, these results do not point to substantial existential risks. The data are small-scale experiments in which plant productivity, or nutrient retention is reduced as species numbers decline locally (Vellend, 2017), or are local observations of increased variability in fisheries yield when stock diversity is lost (Schindler et al., 2010). Those are not existential risks. To make the link even more tenuous, there is little evidence that biodiversity is even declining at local scales (Vellend et al., 2013, Vellend et al., 2017). Total planetary biodiversity may be in decline, but local and regional biodiversity is often staying the same because species from elsewhere replace local losses, albeit homogenizing the world in the process. Although the majority of conservation scientists are likely to flinch at this conclusion, there is growing skepticism regarding the strength of evidence linking trends in biodiversity loss to an existential risk for humans (Maier, 2012; Vellend, 2014). Obviously if all biodiversity disappeared civilization would end—but no one is forecasting the loss of all species. It seems plausible that the loss of 90% of the world’s species could also be apocalyptic, but no one is predicting that degree of biodiversity loss either. Tragic, but plausible, is the possibility of our planet suffering a loss of as many as half of its species. If global biodiversity were halved, but at the same time locally the number of species stayed relatively stable, what would be the mechanism for an end-of-civilization or even end-of-human-prosperity scenario? Extinctions and biodiversity loss are ethical and spiritual losses, but perhaps not an existential risk.”
Hi John, thanks for these detailed points and considerations. I’d like to add a few comments of my own (disclosure: I’m co-Director of CSER, although quite a bit of what’s below represents my individual opinion, as flagged).
1) I should note that there isn’t a ‘CSER position’ on biodiversity loss directly causing an existential risk to humanity, or on the extent to which it’s a cause for concern as a contributing factor. I don’t know of anyone who holds the former, stronger view on current evidence, and the weight given to the latter differs both between researchers and between advisers.
2) While I note that I’m not a domain expert on biodiversity loss, my own individual view leans towards the Kareiva & Carranza paper you quote above (presented at one of our conferences). I’d note that other experts appear to disagree (e.g. https://www.theguardian.com/environment/2018/nov/03/stop-biodiversity-loss-or-we-could-face-our-own-extinction-warns-un), though I’m disinclined to weight strong statements from agency heads in public media as strong and reliable evidence.
3) Re: “is biodiversity actually decreasing in the rich world?” Recent reports indicate that biodiversity is continuing to decline, and ecosystems continue to be under threat, in Europe, despite this being a (comparatively) rich region and despite strategies intended to combat this; see e.g.
https://www.eea.europa.eu/soer-2015/europe/biodiversity
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52015DC0478
4) Even if it is not an existential risk, these reports indicate a significant negative impact on the global economy from global biodiversity loss: “Policy inaction and failure to halt the loss of global biodiversity could result in annual losses in ecosystem services equivalent to 7% of world GDP, with the greatest impacts being felt by the poorest nations and the rural poor.”
5) A lot of matters remain unclear, such as the interrelationships and possible feedback loops between climate change, biodiversity loss, increased resource use, etc. In my view these would be useful to understand better, in order to better understand the effect of biodiversity loss on human civilisation and how it fits into the bigger picture of global catastrophic risks to humanity in the coming decades. As I understand it, it remains unclear what the plausible worst-case scenario is.
6) My own (non-expert) view is that it’s worthwhile for the GCR community not to ignore global biodiversity loss, given the dynamic and unprecedented-in-human-history nature of the change, its interaction with other pressures of potential GCR significance, and plausible reasons to think it may have large negative consequences for the planet and for human civilisation in its current form (which can have destabilising consequences). To your question to Haydn: I don’t think 80K should be recommending it as a top cause area based on current evidence, though I may update on this in future years in light of further evidence. At CSER it’s a small part of our current portfolio and resource use, which I think is appropriate (as indicated by the fraction of the report above that it takes up; it’s also worth noting that our leading work here is currently being done by a non-grant-supported professor). It is of course a particularly influential part in various regards, given that Prof Dasgupta is in an unusually influential position and can achieve a lot of good on a topic of global significance (with potentially significant effects on global human well-being and productivity, as noted above).
7) >Secondly, as far as I am aware, evidence for pressure-induced non-linear ecosystem shifts is very limited. This is true for a range of ecosystems.
My understanding is that this is correct. A project currently being written up set out to review the evidence on this in order to better understand how concerned to be about this possibility, although it has proven difficult to gather sufficient evidence from the literature. I’ll be better placed to know what to conclude from this once the write-up is complete.
Apologies that these comments aren’t in the same order as the points in your post; I’ll go back and reorder if required. Again, I’d highlight that others associated with CSER may hold stronger (and more expert) views than I do on this topic.
3. I have a sceptical prior against EU studies of scientific issues because the EU has taken an anti-science stance on many issues under pressure from the environmental movement—see e.g. the effective prohibition of GMOs. The fact that the report you cite advocates for increased organic farming adds weight to my scepticism. The report also says that the estimate of the economic costs is extremely uncertain and potentially a massive overestimate.
4. There are many things in the world that impose substantial economic costs, including inefficient taxation, labour market regulation, failure to invest in R&D, etc. While they may indeed create economic costs, I fail to see the connection to existential risk.
5. While it is a small part of your portfolio, there is limited political attention for existential risk, and if CSER starts advocating the view that biodiversity loss deserves serious consideration as a factor relevant to existential risk, that comes at a cost. In this case, the fact that Partha Dasgupta is an influential person is a negative, because he risks distracting policymakers from the genuine risks.
Thanks John. With apologies for brevity: I don’t think I’d agree with such broad-strokes scepticism of EU scientific studies on environment, but this is a topic for a longer conversation. Ditto (4).
Re: 5, I don’t expect this to be the framing that Partha adopts in the review in question; rather I expect it will be in line with the kinds of analysis and framings he has adopted in his work in this space in the past years (on the basis of which he was chosen for this appointment). Thanks!
Re 5: To be honest, I doubt that his framing matters much. Whether it’s “influential person says Y should receive attention” or “influential person says Y should receive attention with a lot of caveats” it’s still a distraction if we think Y is not nearly as relevant as X.
I think this points to a wider issue about risk communication and advocacy: should the x-risk community:
1) advocate for many approaches to x-risk and be opportunistic in where policy-makers are responsive, or
2) advocate for addressing only the biggest risks, and bullishly pursue only opportunities that address those risks?
This seems to depend on ‘how widely is x-risk distributed over various risk factors?’ and different research organizations seem to hold different opinions. Is CSER’s view that x-risk is widely distributed or narrowly?
>Re 5: To be honest, I doubt that his framing matters much. Whether it’s “influential person says Y should receive attention” or “influential person says Y should receive attention with a lot of caveats” it’s still a distraction if we think Y is not nearly as relevant as X.
From my experience of engaging with policymakers on Xrisk/GCR, I disagree with this way of looking at things (and, to an extent, with John’s related concerns). If Partha were directly pushing biodiversity loss as a direct existential risk to humanity needing policy action, without evidence for this, then yes, I would have concerns. But that’s not what’s happening here. At most, some ‘potential worst case scenarios’ might be surfaced, and referred to centres like ours for further research to support or rule out.
A few points:
1) I think it’s wrong to view this as a zero-sum game. There’s a huge, huge space for policymakers to care more about GCR, Xrisk, and the long-term future than they currently do. Working with them on a global risk-relevant topic they’re already planning to work on (biodiversity and economic growth), as Partha is doing, is not going to use up the space that could otherwise be occupied by Xrisk concerns.
2) What we have here is a leading scholar (with a background in economics and, in recent years, biodiversity/sustainability) working in a high-profile fashion on a global risk-relevant topic (biodiversity loss and economics), who also has strong links to an existential risk research centre. This establishes useful links; it demonstrates that scholars associated with existential risk (a flaky-seeming topic not so long ago, and still so in some circles) do good work and are useful and trustworthy for governments on risks already within their ‘attention’ Overton window; and it helps the legitimacy and reputation of existential risk research (e.g. through these links, interactions, and reputable work on related topics, helping to nudge existential risks into the Overton window of risks that policymakers take seriously and act on).
More broadly, and to your later points:
Working on these sorts of processes is also an effective way of understanding how governance and policy around major risks work, and of developing the skillset and positioning needed to engage more effectively around other risks (e.g. existential ones).
We don’t know all the correct actions to take to prevent existential risks right now: (i) in some cases because the xrisks will only come to light in future; (ii) in some cases because we know the problem but don’t yet know how to solve it; (iii) in some cases because we have a sense of the solution but not a good enough sense of how to action it. For all these reasons, some engagement in policy processes where we can work to mitigate global risks currently within the policy Overton window can be useful.
I do think the Xrisk community needs ‘purists’, and there will be points at which the community will need to undertake a hard prioritisation action on a particular xrisk with government. But most within the community would agree it’s not the time with transformative AI; it’s not the time with nano; and there’s disagreement over whether it is the time with nuclear. With bio, a productive approach is expanding the Overton window of risks within current biosecurity and biosafety, which is made easier by being clearly competent and useful within these broader domains.
What it is time for is, internally, doing the research to develop answers; and externally, with policy communities, developing the expertise to engage with the mechanics of the world, building the networks and reputation to be effective, embedding the foresight and risk-scanning/response mechanisms that will allow governments to be more responsive, and so forth. Some of that involves engaging with a wider range of global (but not necessarily existential) risk issues. (As well as other indirect work: e.g. the AI safety/policy community not just working on the control problem and the deployment problem, but also getting into position in a wide range of other ways that often involve broader processes or non-existential-risk issues.)
To your final question, my own individual view is that mitigating xrisk will involve a small number of big opportunities/actions at the right times, underpinned and made possible by a large number of smaller and more widely distributed ones.
Apologies that I’m now out of time for further engagement online due to other deadlines.
Hi John, thanks for the very detailed response. My claim was that ecosystem shift is a “contributor” to existential risk: that it should be examined to assess the extent to which it is a “risk factor” that increases other risks, one of a set of causes that may overwhelm societal resilience, and a mechanism by which other risks cause damage.
As I said in the first link, “humanity relies on ecosystems to provide ecosystem services, such as food, water, and energy. Sudden catastrophic ecosystem shifts could pose equally catastrophic consequences for human societies. Indeed, environmental changes are associated with many historical cases of societal ‘collapse’, though the likelihood of occurrence of such events and the extent of their socioeconomic consequences remain uncertain.”
I can’t respond to your comment at the length it deserves, but we will be publishing papers on the potential link between ecosystem shifts and existential risk in the future, and I hope that they will address some of your points.
I’ll email you with some related stuff.
There are lots of risk factors for societal resilience to catastrophes, including all contemporary political and economic problems. The key question is how much of a risk they are, and I have yet to see any evidence that biodiversity loss is among the top ones.
It isn’t clear to me what the relationship between the business school ranking paper and x-risk is. What is the goal of such research?
Thanks for the question. Climate change is a contributor to existential risk. Changing what business schools teach (specifically to include sustainability) might change the behaviour of the next generation of business leaders.
We also have further publications forthcoming on the link between climate change and existential risk.
This seems like a very long expected causal chain, and therefore—unless each link is specifically supported by evidence—unlikely to produce much effect compared to other approaches. It seems to assume:
1) Climate change is a relatively large x-risk factor. (I interpreted the presentation I saw of your forthcoming article as claiming that “climate change is a non-negligible risk factor, but not a relatively large one”.)
2) Improving the sustainability of businesses and business leaders is a relatively effective way of addressing climate change. (Possibly, but there are many alternatives.)
3) Increasing the amount of sustainability in business school programs will improve the sustainability of businesses and business leaders. (There seem to be more direct ways of influencing business leaders. Examples: what about corporate campaigns, but focused on sustainability? What about carbon taxes?)
4) Affecting business rankings will affect the curriculum. (Yes, this seems to happen.)
It might be the case that this was an opportunity that came Ellen Quigley’s way and was low-effort to give input on. But I’m afraid this was not a great use of time, and furthermore I’m afraid it validates the ‘good-by-association fallacy’, for lack of a better term.
I think this fallacy is a harmful meme that poses a risk to the EA and x-risk brand, because it’s very bad prioritization.
Thank you. Some specific info: Ellen Quigley joined CSER as (part-)salaried staff in January 2019 (previously she was an external collaborator). The report was published in January 2019. It was conducted and mostly completed as part of a Judge Business School project in 2018. I was happy for CSER to co-brand as (a) it’s a good piece of work, (b) it was being published by someone on staff (and others had provided some previous input), (c) it had a well-thought-out strategic aim, with good reasons from people with a lot of expertise in the topic to think it would be effective and timely, (d) it is on a topic within our remit (climate/sustainability), and (e) it offered various potential networking and reputational opportunities.
Since the report launch, Ellen has focused on other projects. The report has high-value follow-up opportunities (by usual postdoctoral project standards), but there are other projects of higher priority from a GCR/Xrisk perspective. Our current thinking is that if non-fungible-for-Xrisk funding becomes available, Ellen may supervise a postdoc/research assistant in designing/actioning follow-ups. Ellen has also accepted a more direct, action-focused part-appointment, advising on the University of Cambridge’s investment and shareholder engagement strategy around climate change (https://www.staff.admin.cam.ac.uk/general-news/two-environmental-appointments-at-the-university), so her research time is more limited.
More broadly, there are a lot of reasons why centres will sometimes engage in projects with indirect impacts or longer causal chains that don’t boil down to ‘failure to understand basic prioritisation for impact’. These include: (1) good intellectual or evidence-based reasons to have confidence that indirect or longer-causal-chain approaches are likely to be effective, either in themselves or as part of a suite of activities; (2) the value of these projects in establishing strong networks and credibility with bodies likely to be relevant for broader Xrisk mitigation; and (3) developing the ability and skillset to engage with the machinery of the world in different regards.
Such choices will sometimes be affected by external constraints, e.g. funding stipulations (not every organisation has full funding from fully xrisk-aligned funders) or the need for researchers to establish and maintain reputation and credibility in their ‘home domains’ in order to remain effective in the roles they play in Xrisk research. This is likely particularly true in academic institutions.
I would expect that most xrisk organisations, particularly those actively engaging with other research communities, policy bodies, etc., will have a suite of outputs where some are very obviously and directly relevant to xrisk, and others are less direct or obvious but have a good rationale within the overall suite of activities.
My apologies in advance that I don’t have time to engage further due to other deadlines.
Thanks for the elaborate response, Seán. It’s valuable for the EA community to understand the internal considerations of x-risk organizations, and I don’t want to disincentivize organisations from publishing updates like these on the forum.
Just to be clear: I was not accusing CSER of ‘failure to understand basic prioritisation for impact’. I meant that it’s hard for outsiders to evaluate the reasons why an organisation chooses to pursue a certain project. When pure/direct x-risk projects are reported together with these indirect projects, that can reinforce the ‘good-by-association fallacy’ among outsiders.
I think you’re right about that, although this does not necessarily mean that the current portfolio equals this ‘realistic ideal’ portfolio. I’m also wondering how much of the indirectness is necessary to make progress. A higher proportion of indirect projects probably makes x-risk organizations mainstream quicker, but at a larger risk that ‘existential risk’ becomes a diluted term, co-opted by other organizations.
Hi Haydn, thanks for the links; I’m looking forward to learning more about CSER’s views on this. I wasn’t aware that CSER was actively doing projects to promote sustainability and action on climate change.