Let’s compare the existing initiatives against different catastrophic risks (especially AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity).
How would you rank each in terms of tractability? E.g., % risk reduced per unit of work. What is the most tractable effort we could take to reduce risk in each area?
Thanks for the question. To summarize, I don’t have a clear ranking of the risks, and I don’t think it makes sense to rank them in terms of tractability. There are some tractable opportunities across a variety of risks, but how tractable they are can vary a lot depending on one’s background and other factors.
First, tractability of a risk can vary significantly from person to person or from opportunity to opportunity. There was a separate question on which risks a few select individuals could have the largest impact on; my answer to that is relevant here.
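To make that point concrete, here is a minimal sketch of the question's metric, % risk reduced per unit of work, as something that depends on who is doing the work. All numbers are hypothetical placeholders for illustration, not estimates of any actual risk.

```python
# Minimal sketch of the tractability metric from the question:
# fraction of risk reduced per unit of work. All numbers below are
# hypothetical placeholders, not estimates of any actual risk.

def tractability(risk_reduced: float, units_of_work: float) -> float:
    """Fraction of total risk reduced per unit of work invested."""
    return risk_reduced / units_of_work

# The same opportunity can score very differently for different people,
# e.g. depending on prior training and position.
trained_specialist = tractability(risk_reduced=1e-3, units_of_work=1.0)
newcomer = tractability(risk_reduced=1e-4, units_of_work=1.0)
print(trained_specialist / newcomer)  # 10.0: an order-of-magnitude gap
```

The point of the sketch is simply that the metric has no single value per risk; it depends on the person and the opportunity.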
Second, this is a good place to note the interconnections between risks. There is a sense in which AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity are not distinct from each other. For example, nuclear power helps with climate change but can increase nuclear weapons risks, as in the international debate over Iran's nuclear program. Nuclear explosives have been proposed to address asteroid risk, but this could also affect nuclear weapons risks; see the discussion in my paper Risk-risk tradeoff analysis of nuclear explosives for asteroid deflection. Pandemics can affect climate change; see e.g. Impact of COVID-19 on greenhouse gases emissions: A critical review. Improving international relations and improving the resilience of civilization both help across a range of risks. All of this makes it even more difficult to compare the tractability of these risks.
Third, I see tractability and neglectedness as being closely related. When a risk gets a lot of attention, a lot of the most tractable opportunities have already been taken or will be taken anyway.
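As an illustration of that relationship, here is a small sketch under an assumed (not established) model in which remaining risk declines exponentially with cumulative effort. The marginal risk reduction per unit of work then shrinks as a risk becomes less neglected; the parameter values are hypothetical.

```python
import math

# Assumed illustrative model, not an established result: remaining risk
# declines exponentially with cumulative effort e, risk(e) = r0 * exp(-k*e).
# The marginal risk reduction per unit of work is r0 * k * exp(-k*e),
# which shrinks as more effort has already gone in -- i.e., tractability
# falls as a risk becomes less neglected.

r0 = 0.01  # hypothetical baseline risk
k = 0.5    # hypothetical returns-to-effort parameter

def marginal_reduction(effort_so_far: float) -> float:
    """Risk reduction from the next unit of work, given effort already spent."""
    return r0 * k * math.exp(-k * effort_so_far)

print(marginal_reduction(0.0))   # neglected risk: largest marginal return
print(marginal_reduction(10.0))  # crowded risk: far smaller marginal return
```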
With those caveats in mind, some answers:
Climate change is distinctive in the wide range of opportunities to reduce the risk. On one hand, this makes it difficult for dedicated effort to significantly reduce the overall risk, because so many separate efforts are needed. On the other hand, it does create some relatively easy opportunities to reduce the risk. For example, when you're walking out of a room, you might as well turn the lights off. This might not produce a massive risk reduction, but the unit of work here is trivially small. More significant examples include living somewhere where you don't need to drive everywhere and eating a more vegan diet; both are also worth doing for a variety of other reasons. That said, the most significant opportunities involve changes to policy, industry, etc., which are unfortunately generally difficult to implement.
Nuclear weapons opportunities vary a lot in terms of tractability. There is a sense in which reducing nuclear weapons risk is easy: just don't launch the nuclear weapons! There is a different sense in which reducing the risk is very difficult: at its core, the risk derives from adversarial relations between certain major countries, and reducing the risk may depend on improving those relations, which is difficult. In between, there are many opportunities to influence nuclear weapons policy. These are mostly very high-skill activities that benefit from advanced training in both international security and global catastrophic risk. For people who are able to train in these fields, I think the opportunities are quite good. Otherwise, there are still opportunities, but they are perhaps more limited.
Asteroid risk is an interesting case because the extreme portion of the risk may actually be more tractable. Large asteroids cause more extreme collisions, but precisely because they are larger, they are also easier for astronomical surveys to detect. Indeed, a high percentage of the largest asteroids are believed to have already been detected, and none of those detected is on a collision course with Earth. Much of the residual global catastrophic risk may therefore involve more complex scenarios, such as smaller asteroids triggering inadvertent nuclear war; see my papers on this scenario here and here. My impression is that there may be some compelling opportunities to reduce the risk from these scenarios.
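A back-of-envelope way to see why the extreme end may be tractable: if survey completeness for the largest asteroids is high, the residual risk in that size class is roughly the undetected share. The figures below are hypothetical placeholders, not actual survey statistics or impact probabilities.

```python
# Back-of-envelope sketch; both numbers are hypothetical placeholders,
# not actual survey statistics or impact probabilities.

p_baseline = 1e-6         # assumed per-century impact probability, largest asteroids
fraction_detected = 0.95  # assumed survey completeness for this size class

residual = p_baseline * (1 - fraction_detected)
print(residual)  # 5e-08: most of the remaining risk sits in other scenarios,
                 # e.g. smaller asteroids triggering inadvertent nuclear war
```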
For AI, at the moment I think there are some excellent opportunities related to near-term AI governance. The deep learning revolution has put AI high on the public policy agenda. There are active high-level initiatives to establish AI policy right now, and there are good opportunities to influence these policies. Once these policies are set, they may remain largely intact for a long time, so it's important to take advantage of these opportunities while they still exist. Additionally, I think there is low-hanging fruit in other domains. One example is corporate governance, which has gotten relatively little attention, especially from people with an orientation toward long-term catastrophic risks; see my recent post on long-term AI corporate governance with Jonas Schuett of the Legal Priorities Project. Another example is AI ethics, which has gotten surprisingly little attention; see my work with Andrea Owe of GCRI here, here, here, and here. There may also be good opportunities in AI safety design techniques, though I am less qualified to comment on this.
For biosecurity, I am less active at the moment, so I am less qualified to comment. Also, COVID-19 significantly changes the landscape of opportunities. So I don't have a clear answer on this.