Interesting question, thanks. To summarize my answer: I believe nuclear weapons offer the largest opportunities for a few select individuals to make an impact; climate change offers the smallest; and AI, asteroids, and biosecurity fall somewhere in between.
First, please note that I am answering this question without regard for the magnitude of the risks. One risk might offer larger opportunities for an individual to make an impact simply because it’s a much larger risk. However, accounting for that turns this into a question about which risks are larger, whereas it seems more fruitful to focus on other aspects of the risks.
Second, all of these risks require far more than 10 people to address. Indeed, many of the most important roles involve engaging with many other people: lawmakers setting policy that influences the activities of government agencies, private citizens, etc.; researchers who develop ideas that influence other people’s thinking; startup founders who build companies with large numbers of employees; etc. This is an important caveat.
With that in mind, I believe the answer is nuclear weapons. The president of the United States has a very high degree of influence over nuclear weapons risk, including the sole authority to order the launch of nuclear weapons. That sole authority is a point of ongoing debate; see e.g. this. I am less familiar with procedures in other countries, but at least some of them may be similar. There are significant opportunities for a variety of people to affect nuclear weapons risk (see this for discussion), but I think it’s still the risk for which a few well-placed individuals can have the largest impact, for better or worse.
On the opposite end of the spectrum, a few powerful individuals probably have the least influence over climate change. A central characteristic of climate change is that its solutions are highly distributed. Greenhouse gas emissions are spread widely across countries and economic sectors, so solutions for reducing emissions must likewise be implemented across countries and economic sectors, and they must be maintained over extended periods of time. Technological solutions like renewable energy depend less on a single brilliant idea or a single policy enactment and more on sustained investment in research, development, and deployment. The closest thing to an exception I can think of is the idea of a geoengineering “greenfinger”, in which a rogue actor unilaterally implements a geoengineering regime. I’m not up to speed on the research on this and I don’t have a good sense of whether it is viable in practice.
For AI, the largest opportunities may involve a research group developing technological solutions that, once available, would be readily adopted by other groups, though the adoption process can be a limiting factor that requires larger numbers of people.
For asteroids, the largest opportunities may involve leading a program to detect and deflect incoming asteroids. The program itself would require larger numbers of people, though a few well-placed government officials may still have a major impact on it.
For biosecurity, the best example that comes to mind involves increasing the risk rather than reducing it: there are scenarios in which a research lab creates and, intentionally or accidentally, releases a dangerous pathogen. See the debates over “gain of function” experiments, “dual-use research of concern”, etc.
Finally, some collective action theory is relevant here. Opportunities for a few individuals to have an impact may be especially large in “single best effort” situations, in which the problem can be solved by one effort: a single best technological solution for AI, a single best detection/deflection effort for asteroids, or even a single effort to launch nuclear weapons or develop a pathogen. In contrast, reducing greenhouse gas emissions is an “aggregate effort” situation, in which results come from the total amount of effort aggregated across everyone who contributes. Geoengineering is more in the direction of a single best effort situation, though perhaps not to the same extent as the other examples. For more on this theory, see my paper “Collective action on artificial intelligence: A primer and review”, especially Section 2.3, or work by Scott Barrett, especially his book “Why Cooperate? The Incentive to Supply Global Public Goods”.
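To make the distinction concrete, here is a minimal formal sketch in the style of the “summation” and “best shot” aggregation technologies from the public goods literature; the notation is my own illustration, not something taken from the works cited above. Suppose $n$ actors each contribute effort $q_i \geq 0$ toward addressing a risk, and let $Q$ be the resulting level of provision (e.g., risk reduction):

$$Q_{\text{aggregate effort}} = \sum_{i=1}^{n} q_i \qquad \text{versus} \qquad Q_{\text{single best effort}} = \max_{i} \, q_i$$

Under the best-shot technology, a single actor with a sufficiently large $q_i$ can supply essentially all of $Q$ on their own (or, in the negative variants above, create the entire risk), whereas under summation any individual’s contribution is just one small term in the total.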