Let’s compare the existing initiatives against different catastrophic risks (especially AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity).
What are the most neglected areas of research in each?
Thanks for the question. Since the question is specifically about neglected areas of research, not other types of activity, I will focus my answer on that. I’ll also note that my answers map pretty closely to my own research agenda, which may introduce some bias, though I do try to focus my research on the most important open questions.
For AI, a variety of topics need more attention, especially (1) the relation between near-term governance initiatives and long-term AI outcomes; (2) detailed concepts for specific, actionable governance initiatives in both public policy and corporate governance; (3) corporate governance in general (see discussion here); (4) the ethics of what an advanced AI should be designed to do; and (5) the implications of military AI for global catastrophic risk. There may also be neglected areas of research on how to design safe AI, though that is less my own area of expertise and it already receives a relatively large amount of investment.
For asteroids, I would emphasize the human dimensions of the risk. Prior work on asteroid risk has drawn heavily on contributions from astronomers and from the engineers involved in space missions, with comparatively little attention from social scientists. The possibility of an asteroid collision inadvertently triggering nuclear war is a good example of a topic in need of a wider range of attention.
For climate change, one important line of research is characterizing climate change as a global catastrophic risk. The recent paper Assessing climate change’s contribution to global catastrophic risk by S. J. Beard and colleagues at CSER provides a good starting point, but more work is needed. There is also a lot of opportunity to apply insights from climate change research to other global catastrophic risks; I’ve done this before here, here, here, and here. One good topic for new research would be evaluating the geoengineering moral hazard debate in terms of its implications for other risky technologies, including debates over which ideas shouldn’t be published in the first place, e.g., was breaking the taboo on research on climate engineering via albedo modification a moral hazard, or a moral imperative?
For nuclear weapons, I would like to see more on policy measures that are specifically designed to address global catastrophic risk. My winter-safe deterrence paper is one effort in that direction, but more should be done to develop this sort of idea.
For biosecurity, I’m less at the forefront of the literature, so I have fewer specific suggestions, though I would expect that there are good opportunities to draw lessons from COVID-19 for other global catastrophic risks.