AMA: Seth Baum, Global Catastrophic Risk Institute
I will be online to answer questions morning-afternoon US Eastern time on Friday 17 December. Ask me anything!
About me:
I am co-founder and Executive Director of the Global Catastrophic Risk Institute.
I am also an editor at the journals Science and Engineering Ethics and AI & Society, and an honorary research affiliate at CSER.
I’ve been involved in global catastrophic risk since around 2008 and co-founded GCRI in 2011, and I have seen the field grow and evolve over the years.
My work focuses on bridging the divide between theoretical ideals about global catastrophic risk, the long-term future, outer space, etc. and the practical realities of how to make a positive difference on these issues. This includes research to develop and evaluate viable options for reducing global catastrophic risk, outreach to important actors (policymakers, industry, etc.), and activities to support the overall field of global catastrophic risk.
The topics I cover are a bit eclectic. I have worked across a range of global catastrophic risks, especially artificial intelligence, asteroids, climate change, and nuclear weapons. I also work with a variety of research disciplines and non-academic professions. A lot of my work involves piecing together these various perspectives, communities, etc. This includes working at the interface between EA communities and other communities relevant to global catastrophic risk.
I do a lot of advising for people interested in getting more involved in global catastrophic risk. Most of this is through the GCRI Advising and Collaboration Program. The program is not currently open; it will open again in 2022.
Some other items of note:
Common points of advice for students and early-career professionals interested in global catastrophic risk, a write-up of running themes from the advising I do (originally posted here).
Summary of 2021-2022 GCRI Accomplishments, Plans, and Fundraising, our recent annual report on the current state of affairs at GCRI.
Subscribe to the GCRI newsletter or follow the GCRI website to stay informed about our work, next year’s Advising and Collaboration Program, etc.
My personal website here.
I’m happy to field a wide range of questions, such as:
Advice on how to get involved in global catastrophic risk, pursue a career in it, etc. Also specific questions on decisions you face: what subjects to study, what jobs to take, etc.
Topics I wish more people were working on. There are many, so please provide some specifics of the sorts of topics you’re looking at. Otherwise I will probably say something about nanotechnology.
The details of the global catastrophic risks and the opportunities to address them, and why I generally favor an integrated, cross-risk approach.
What’s going on at GCRI: our ongoing activities, plans, funding, etc.
The intersection of animal welfare and global catastrophic risk/long-term future, and why GCRI is working on nonhumans and AI ethics (see recent publications 1, 2, 3, 4).
The world of academic publishing, which I’ve gotten a behind-the-scenes view of as a journal editor.
One type of question I will not answer is advice on where to donate money. GCRI does take donations, and I think GCRI is an excellent organization to donate to. We do a lot of great work on a small budget. However, I will not engage in judgments about which other organizations may be better or worse.
How hopeful are you that governments will respond effectively and proportionately to catastrophic risks in the future? Does your experience fit with the idea that existential risk is under-served due to it being an ‘intergenerational global public good’?
And bonus: have you seen Cass Sunstein’s recent book ‘Averting Catastrophe’? If so, what do you make of it?
Thanks for your questions. In reply:
I would not ever expect governments to respond to catastrophic risks to a degree that I (for one) think is proportionate to the importance of the risks. This is because I would rate the risks as being more important than most other people would. There are a variety of reasons for this, including the intergenerational and global nature of the risks, and some psychological and institutional factors. Jonathan Wiener’s paper The Tragedy of the Uncommons is a good read on this.
That said, I do see potential for governments to mount some effective responses to catastrophic risks. Indeed, they are already doing a variety of worthwhile things, and there is opportunity to get them to do more. Some of the opportunity lies in persuading governments to care more about the risks, but a lot of it lies in improving their capacity for skilled work on the risks. In my “Common points of advice” write-up, there’s a section on Work across the divide between (A) humanities-social science-policy and (B) engineering-natural science, which addresses a major aspect of the challenge.
And no, I have not seen this book; thanks for suggesting it. At a quick glance here, it seems that the book is advocating for a maximin decision rule and for a frequentist probability theory “in which it is not possible to assign probabilities to various outcomes”. I would disagree with both of those positions. But Sunstein is a distinguished legal scholar, and the book may nonetheless contain a lot of worthy insight.
Thanks!
Hi everyone. Thanks for all the questions so far. I’ll be online for most of the day today and I’ll try to get to as many of your questions as I can.
What are the 3 biggest problems currently slowing GCRI in achieving its goals? What are you currently doing to solve them?
The best way to answer this question is probably in terms of GCRI’s three major areas of activity: research, outreach, and community support, plus the fourth item of organization development.
GCRI’s ultimate goal is to reduce global catastrophic risk. Everything we do is oriented toward that end. Our research develops ideas and reduces uncertainty about how best to reduce global catastrophic risk. Our outreach gets those ideas to important decision-makers and helps us understand what research questions decision-makers would benefit from answers to. Our community support advances the overall population of people working on global catastrophic risk, including people who work with us on research and outreach. Our organization development work provides us with the capacity to do all of these things.
Phrased in terms of three problems: (1) We don’t know the best ways of reducing global catastrophic risk, and so we are advancing research to understand this better. (2) We are not positioned to take all of the necessary actions to reduce global catastrophic risk on our own, so we are doing outreach to other people who are well positioned to have an impact and we are supporting the overall community of people who are working on the risks. (3) We don’t have the capacity to do as much to reduce global catastrophic risk as we could, so we are developing the organization to increase our capacity.
I appreciate that this is all perhaps a bit vague. Because we work across so many topics within global catastrophic risk, it’s hard to specify three more specific problems that we face. Some further detail is available at our Summary of 2021-2022 GCRI Accomplishments, Plans, and Fundraising, and in other comments on this AMA.
Let’s compare the existing initiatives against different catastrophic risks (especially AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity).
Which of these areas do you think <10 individuals could make the most impact in? Those 10 individuals could be the most powerful lawmakers, the most brilliant researchers, the greatest startup founders, whatever.
Interesting question, thanks. To summarize my answer: I believe nuclear weapons have the largest opportunities for a few select individuals to make an impact; climate change has the smallest opportunities; and AI, asteroids, and biosecurity are somewhere in between.
First, please note that I am answering this question without regard for the magnitude of the risks. One risk might offer larger opportunities for an individual to make an impact simply because it is a much larger risk. However, accounting for that turns this into a question about which risks are larger, whereas it seems more fruitful to focus on other aspects of the risks.
Second, all of these risks require a lot more than 10 people to address. Indeed, a lot of important roles involve engaging with lots of other people: lawmakers setting policy that influences the activities of government agencies, private citizens, etc.; researchers who develop ideas that influence other people’s thinking; startup founders who build companies with large numbers of employees; etc. This is an important caveat.
With that in mind, I believe the answer is nuclear weapons. The president of the United States has a very high degree of influence over nuclear weapons risk, including the sole authority to order the launch of nuclear weapons. This is a point of ongoing debate; see e.g. this. I am less familiar with procedures in other countries but at least some of them may be similar. There are significant opportunities for a variety of people to impact nuclear weapons risk (see this for discussion), but I think it’s still the risk in which a few well-placed individuals can have the largest impact, for better or worse.
On the opposite end of the spectrum, a few powerful individuals probably have the least influence over climate change. A central characteristic of climate change is that its solutions are highly distributed. Greenhouse gas emissions are distributed widely across countries and economic sectors. Solutions for reducing emissions must likewise be implemented across countries and economic sectors and must additionally be maintained over extended periods of time. Technological solutions like renewable energy depend less on a single brilliant idea or a single policy enactment and more on sustained investment in research, development, and deployment. The closest thing to an exception I can think of is the idea of a geoengineering “greenfinger”, in which a rogue actor unilaterally implements a geoengineering regime. I’m not up to speed on the research on this idea and I don’t have a good sense for whether it is viable in practice.
For AI, the largest opportunities may involve a research group developing technological solutions that, once developed, would be readily adopted by other groups—though the adoption process can be a limiting factor that requires larger numbers of people.
For asteroids, the largest opportunities may involve leading a program to detect and deflect incoming asteroids; the program itself would require larger numbers of people, though there may be a role for a few well-placed government officials to have a major impact.
For biosecurity, the best example that comes to mind involves an increase in the risk. There are scenarios in which a research lab creates and (intentionally or accidentally) releases a dangerous pathogen. See debates on “gain of function” experiments, “dual-use research of concern”, etc.
Finally, some collective action theory is relevant here. Opportunities for a few individuals to have an impact may be especially large in “single best effort” situations, in which the problem can be solved by one effort: a single best technological solution for AI, a single best detection/deflection effort for asteroids, or even a single effort to launch nuclear weapons or develop a pathogen. In contrast, reducing greenhouse gas emissions is an “aggregate effort” situation, in which results come from the total amount of effort aggregated across everyone who contributes. Geoengineering is more in the direction of a single best effort situation, though perhaps not to the same extent as the other examples. For more on this theory, see my paper Collective action on artificial intelligence: A primer and review, especially Section 2.3, or work by Scott Barrett, especially his book Why Cooperate? The Incentive to Supply Global Public Goods.
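To make the distinction a bit more concrete, here is a minimal formalization along the lines of the public goods literature cited above (the notation is my own, chosen for illustration). If $q_1, \dots, q_n$ are the efforts of the individual contributors and $Q$ is the resulting level of provision, then roughly:

$$
Q_{\text{single best effort}} = \max_i \, q_i,
\qquad
Q_{\text{aggregate effort}} = \sum_{i=1}^{n} q_i .
$$

In the max case, one sufficiently good effort determines the whole outcome, which is why a few well-placed individuals can matter so much; in the sum case, any one contribution is only a small share of the total.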
I’m an undergraduate. I’m quite interested in existential risk (particularly AI) from a research or policy perspective. You mentioned that you’re happy to offer advice on how to get involved.
I think your general thoughts would be valuable to others on this forum, but here is a specific question:
I’ve thought and learned a decent amount about existential risk; how should I get involved?
By the way, thanks for linking this!
Glad to hear that you’re interested in these topics. It’s a good area to pursue work in.
Regarding how to get involved, to a large extent my advice is just general advice for getting involved in any area: study, network, and pursue opportunities as you get them. The networking can often be the limiting factor for people new to something. I would keep an eye on fellowship programs, such as the ones listed here. One of those is the GCRI Advising and Collaboration Program, which to a large extent exists to provide an entry point for people interested in these topics. We try to connect participants to other people in our networks to help them get plugged in. That said, I would encourage you to not restrict yourself to formal programs like these, but instead to try to create your own opportunities. Finally, regarding AI policy specifically, it’s good to monitor ongoing policy initiatives, e.g. this in the US, and research on AI policy, especially on the gcr/xrisk dimensions, e.g. GCRI’s AI research (though definitely look at more than just GCRI). If you can draw connections between ongoing policy initiatives and the ideas being developed in research, that’s a really valuable skill that there will almost certainly be continued demand for over the years.
Let’s compare the existing initiatives against different catastrophic risks (especially AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity).
Which of these areas do you think are most conducive to market oriented solutions? Which do you think are most conducive to government oriented solutions? And which do you think are most conducive to philanthropic solutions?
If that’s too broad, feel free to focus on the most common type of initiative in each area instead of the areas as a whole :D
That’s an interesting question, thanks. To summarize my remarks below: AI and climate change are more market-oriented, asteroids and nuclear weapons are more government-oriented, biosecurity is a mix of both, and philanthropy has a role everywhere.
First, market solutions will be limited for all global catastrophic risks because the risks inevitably involve major externalities. The benefits of reducing global catastrophic risks go to people all over the world and future generations. Markets aren’t set up to handle that sort of value.
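As a stylized illustration of the externality point (a toy setup with my own notation, not from any particular source): suppose reducing a global catastrophic risk costs $c$ and produces a total benefit $B$ spread across everyone worldwide and across future generations, of which any single private actor captures only a small share $b \ll B$. Then:

$$
\text{the market provides the reduction only if } c \le b,
\qquad
\text{it is socially worthwhile whenever } c \le B .
$$

Every opportunity with $b < c \le B$ is worth doing for the world but unprofitable for any individual market actor, which is the gap that governments and philanthropy are left to fill.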
That said, there can still be a role for market activity in certain global catastrophic risks, especially AI and climate change. AI and climate change are distinctive in that both involve highly profitable activity from some of the largest corporations in the world. Per https://companiesmarketcap.com, the current top five largest companies are Apple, Microsoft, Alphabet, Saudi Aramco, and Amazon. My own work on AI corporate governance is largely motivated by my prior experience working on climate change policy, including my PhD dissertation.
There are ways to make money while reducing climate change risk, such as by reducing expenditures on energy or building transit-oriented housing. The climate benefits are more incidental, but they can still be significant. Likewise, for AI, market demand for safe near-term AI technologies can have some incidental benefits for improving the safety of long-term AI technologies like AGI. These are good opportunities to pursue, as are opportunities to influence corporate governance to better align corporate activities with reducing global catastrophic risk.
Second, governments can play important roles in all of the global catastrophic risks. Even for corporate activity related to AI and climate change, governments have important roles as regulators. That said, governments are especially important for nuclear weapons and asteroids. There certainly is a role for a variety of non-governmental actors to reduce nuclear weapons risk (see this for an overview), but it is still the case that governments control the weapons and make the major decisions about them. Governments also play a central role in addressing asteroid impacts, supplemented by a robust scientific community, though I believe the science is also largely funded by governments.
Third, philanthropy can play important roles in all of the global catastrophic risks. Philanthropic and nonprofit activity is highly versatile and can fill roles that markets and governments can’t or won’t. Ten years ago, prior to the deep learning revolution, I believe almost all work on global catastrophic risk from AI was funded by philanthropy; the portfolio of work is more diverse now, though I don’t have a specific breakdown of the current state of affairs.
And briefly, regarding biosecurity, that is a risk in which governments and markets are both quite important. This is seen in the ongoing pandemic, for example in the role of the pharmaceutical industry in developing and manufacturing vaccines and the role of governments in supporting vaccine development and distribution and a variety of other policy responses.
This is a very comprehensive answer! I especially appreciate your summary up top and you linking to sources. Thank you :-)
Let’s compare the existing initiatives against different catastrophic risks (especially AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity).
How would you rank each in terms of tractability? Ex: % reduced risk / unit of work. What is the most tractable effort we could take to reduce risk in each area?
Thanks for the question. To summarize, I don’t have a clear ranking of the risks, and I don’t think it makes sense to rank them in terms of tractability. There are some tractable opportunities across a variety of risks, but how tractable they are can vary a lot depending on one’s background and other factors.
First, tractability of a risk can vary significantly from person to person or from opportunity to opportunity. There was a separate question on which risks a few select individuals could have the largest impact on; my answer to that is relevant here.
Second, this is a good topic to note the interconnections between risks. There is a sense in which AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity are not distinct from each other. For example, nuclear power helps with climate change but can increase nuclear weapons risks, as in international debate over the nuclear program of Iran. Nuclear explosives have been proposed to address asteroid risk, but this could also affect nuclear weapons risks; see discussion in my paper Risk-risk tradeoff analysis of nuclear explosives for asteroid deflection. Pandemics can affect climate change; see e.g. Impact of COVID-19 on greenhouse gases emissions: A critical review. Improving international relations and improving the resilience of civilization helps across a range of risks. This makes it further difficult to compare the tractability of these various risks.
Third, I see tractability and neglectedness as being closely related. When a risk gets a lot of attention, a lot of the most tractable opportunities have already been taken or will be taken anyway.
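As a rough way to see why (a toy model of my own, not a claim about actual numbers): suppose the risk reduction achieved from a total of $x$ units of effort on a risk exhibits diminishing returns, for example

$$
R(x) = R_{\max}\left(1 - e^{-kx}\right),
\qquad
\frac{dR}{dx} = k\,R_{\max}\,e^{-kx} .
$$

The marginal impact of additional effort shrinks as the existing effort $x$ grows, so a heavily worked-on risk tends to offer lower tractability at the margin, all else equal.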
With those caveats in mind, some answers:
Climate change is distinctive in the wide range of opportunities to reduce the risk. On one hand, this makes it difficult for dedicated effort to significantly reduce the overall risk, because so many efforts are needed. On the other hand, it does create some relatively easy opportunities to reduce the risk. For example, when you’re walking out of a room, you might as well turn the lights off. This might not deliver a massive risk reduction, but the unit of work here is trivially small. More significant examples include living somewhere where you don’t need to drive everywhere and eating more of a vegan diet; both are also worth doing for a variety of other reasons. That said, the most significant opportunities involve changes to policy, industry, etc. that are unfortunately generally difficult to implement.
Nuclear weapons opportunities vary a lot in terms of tractability. There is a sense in which reducing nuclear weapons risk is easy: just don’t launch the nuclear weapons! There is a different sense in which reducing the risk is very difficult: at its core, the risk derives from adversarial relations between certain major countries, and reducing the risk may depend on improving these relations, which is difficult. In between, there are a lot of opportunities to influence nuclear weapons policy. These are mostly very high-skill activities that benefit from advanced training in both international security and global catastrophic risk. For people who are able to train in these fields, I think the opportunities are quite good. Otherwise, there still are opportunities, but they are perhaps more limited.
Asteroid risk is an interesting case because the extreme portion of the risk may actually be more tractable. Large asteroids cause more extreme collisions, and because they are larger, they are also easier for astronomical surveys to detect. Indeed, a high percentage of the largest asteroids are believed to have already been detected, and none of those detected are on a collision course with Earth. Much of the residual global catastrophic risk may involve more complex scenarios, such as smaller asteroids triggering inadvertent nuclear war; see my papers on this scenario here and here. My impression is that there may be some compelling opportunities to reduce the risk from these scenarios.
For AI, at the moment I think there are some excellent opportunities related to near-term AI governance. The deep learning revolution has put AI high on the agenda for public policy. There are active high-level initiatives to establish AI policy going on right now, and there are good opportunities to influence these policies. Once these policies are set, they may remain largely intact for a long time. It’s important to take advantage of these opportunities while they still exist. Additionally, I think there is low-hanging fruit in other domains. One example is corporate governance, which has gotten relatively little attention especially from people with an orientation toward long-term catastrophic risks; see my recent post on long-term AI corporate governance with Jonas Schuett of the Legal Priorities Project. Another example is AI ethics, which has gotten surprisingly little attention; see my work with Andrea Owe of GCRI here, here, here, and here. There may also be good opportunities on AI safety design techniques, though I am less qualified to comment on this.
For biosecurity, I am less active on it at the moment, so I am less qualified to comment. Also, COVID-19 significantly changes the landscape of opportunities. So I don’t have a clear answer on this.
Let’s compare the existing initiatives against different catastrophic risks (especially AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity).
What are the most neglected areas of research in each?
Thanks for the question. I see that the question is specifically on neglected areas of research, not other types of activity, so I will focus my answer on that. I’ll also note that my answers to this question map pretty closely to my own research agenda, which may be a bit of a bias, though it’s also the case that I try to focus my research on the most important open questions.
For AI, there are a variety of topics in need of more attention, especially (1) the relation between near-term governance initiatives and long-term AI outcomes; (2) detailed concepts for specific, actionable governance initiatives in both public policy and corporate governance; (3) corporate governance in general (see discussion here); (4) the ethics of what an advanced AI should be designed to do; and (5) the implications of military AI for global catastrophic risk. There may also be neglected areas of research on how to design safe AI, though it is less my own expertise and it already gets a relatively large amount of investment.
For asteroids, I would emphasize the human dimensions of the risk. Prior work on asteroid risk has included a lot of contributions from astronomers and from the engineers involved in space missions, and I think comparatively little attention from social scientists. The possibility of an asteroid collision causing inadvertent nuclear war is a good example of a topic in need of a wider range of attention.
For climate change, one important line of research is on characterizing climate change as a global catastrophic risk. The recent paper Assessing climate change’s contribution to global catastrophic risk by S. J. Beard and colleagues at CSER provides a good starting point, but more work is needed. There is also a lot of opportunity to apply insights from climate change research to other global catastrophic risks. I’ve done this before here, here, here, and here. One good topic for new research would be evaluating the geoengineering moral hazards debate in terms of its implications for other risky technologies, including debates over what ideas shouldn’t be published in the first place, e.g. Was breaking the taboo on research on climate engineering via albedo modification a moral hazard, or a moral imperative?
For nuclear weapons, I would like to see more on policy measures that are specifically designed to address global catastrophic risk. My winter-safe deterrence paper is one effort in that direction, but more should be done to develop this sort of idea.
For biosecurity, I’m less at the forefront of the literature, so I have fewer specific suggestions, though I would expect that there are good opportunities to draw lessons from COVID-19 for other global catastrophic risks.
If you could broadcast one statistic about catastrophic risks to the screens of everyone in this forum, what would it be? :-)
I regret that I don’t have a good answer to this question. Global catastrophic risk doesn’t have much in the way of statistics, due to the lack of prior global catastrophes. (Which is a good thing!)
There are some statistics on the amount of work being done on global catastrophic risk. For that, I would recommend the paper Accumulating evidence using crowdsourcing and machine learning: A living bibliography about existential risk and global catastrophic risk by Gorm Shackelford and colleagues at CSER. It finds that there is a significant body of work on the topic, in contrast with some prior concerns, such as those comparing the amount of research on global catastrophic risk to the amount of research on dung beetles.
I glanced at GCRI’s research you linked. I think AI is a big deal in expectation, but I’m prima facie skeptical about the value of “AI ethics.” My baseline imagination is that we get capabilities first, then figure out what to do with AI. I’m substantially more optimistic about our ability to make good decisions after we have strong AI, and I think the moral importance of the time after we get strong AI dominates the time before (in expectation). Of course, GCRI isn’t the only institution to do AI ethics work, so I might be missing something — what’s the basic case for doing AI ethics now? (Feel free to refer me to something already written rather than writing a reply yourself; there may be good existing writeups.)
Thanks for the question. This is a good thing to think critically about. With respect to strong AI, the short answer is that it’s important to develop these sorts of ideas in advance. If we wait until we already have the technology, it could be too late. There are some scenarios in which waiting is more viable, such as the idea of a long reflection, but this is only a portion of the total scenario space, and even then, the outcomes could depend on the initial setup. Additionally, ethics can also matter for near-term / weak AI, including in ways that affect global catastrophic risk, such as in the context of environmental or military affairs.
Let’s compare the existing initiatives against different catastrophic risks (especially AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity).
In which area of catastrophic risk initiatives do you see:
The most cooperation between different individuals/institutions
The most transparent communications
The most scalable governance (ex: employee management, decision-making systems, etc.)
If it’s too broad to answer that at an area-wide level, feel free to point to examples from specific institutions :D
Thanks for the question.
Asteroid risk probably has the most cooperation and the most transparent communication. Asteroid risk is notable for its high degree of agreement: all parties around the world agree that it would be bad for Earth to get hit by a large rock, that there should be astronomical surveys to detect nearby asteroids, and that if a large Earth-bound asteroid is detected, there should be some sort of mission to deflect it away from Earth. There are some points of disagreement, such as on the use of nuclear explosives for asteroid deflection, but these are more down in the details.
Additionally, the conversation about asteroid risk is heavily driven by scientific communities. Scientists have a strong orientation toward transparency, such as publishing research in the open literature, including details on methods. Relatively few aspects of asteroid risk involve the sorts of information that are less transparent, such as classified government information or proprietary business information. There is some, such as regarding nuclear explosives, but it is overall a small portion of the topic. The result is a relatively transparent conversation about asteroid risk.
The question of scalability is harder to answer. A lot of the relevant governance activities are singular or top-down in a way that makes scalability less relevant. For example, it’s hard to talk about the scalability of initiatives to deflect asteroids or make sound nuclear weapon launch decisions, because these are things that only need to be done in a few isolated circumstances.
It’s easier to talk about the scalability of initiatives for reducing climate change because there is such a broad, ongoing need to reduce greenhouse gas emissions. For example, a notable recent development in the climate change space is the rapid growth of the market for electric bicycles; this is a technology that is rapidly maturing and can be manufactured at scale. Certain climate change governance concepts can also scale, for example urban design concepts that are initially implemented in a few neighborhoods and then expanded. Scaling things like this up is often difficult, but it can at least in principle be done.