The public bureaucracy that I am most familiar with is the UK’s. Here, the government’s approach to risk, including catastrophic risk, sits within the Cabinet Office, and in particular its Civil Contingencies Secretariat. From what little I know of the US bureaucracy, I think it is similar in that risk management is the responsibility, at least in part, of the White House.
Risks fall under the authority of different government departments, as you put it, in two different ways. Firstly, for the purposes of the annual National Risk Assessment, departments are assigned ‘ownership’ of the risks for which they are perceived to have particular expertise. They are then tasked with producing a ‘reasonable worst case scenario’ for each of these risks, setting out different aspects of the potential impact. In the most recent assessment they were also asked to consider possible impacts beyond this reasonable worst case, which we might see as catastrophic.
These impacts are then collated and evaluated by the Cabinet Office for the purpose of producing the National Risk Assessment, which is classified, and the National Risk Register, which is available here: https://www.gov.uk/government/publications/national-risk-register-of-civil-emergencies-2017-edition.
The second way in which departments are tasked with responding to risks is that each department is asked to consider what effect the reasonable worst case scenario of each risk might have on it, and to consider ways of mitigating that risk. Contrary to the suggestion here, however, this ownership is not one-to-one between departments and risks; rather, all departments are asked to submit plans for how to respond to all risks unless they can show that (1) the impact on them would be minimal, or (2) if things were so bad that the impact on them would be significant, then things would already have become so bad that this would be unlikely to be a significant priority. Obviously some departments will have a lot more to do to respond to some risks than others, but in fact each department plans for the entire breadth of risks that fall within the National Risk Assessment.
I think this process is preferable to having a separate department for catastrophic risks in several respects. Firstly, the central executive department is best placed to coordinate activity across other departments, which I think is something we would all agree is important. As other commenters have suggested, a separate department might well end up having budgetary and other disputes with existing departments, hampering such coordination. Secondly, this arrangement helps to highlight that risk mitigation should be right at the heart of government, as part of its central executive function. Contrast this with climate change, for instance, which has its own department in many countries but is consequently a long way from most executive thinking (although there may be other reasons for this, of course).
There are some downsides to this set-up, of course. First among these is probably that it places catastrophic risks as just one part of the entire selection of risks faced by a country, and to the extent that these risks are global, this may give them less rather than more priority in central government thinking. If catastrophic risk were primarily under the Foreign Office / State Department, it is possible that global risks would get more priority, although it is also possible that exactly the same problem would re-emerge. Another problem is that, even with the introduction of some consideration of scenarios beyond the ‘reasonable worst case’, departments do seem to prefer to limit their considerations to what is reasonably likely, rather than what is potentially most catastrophic, because it is easier to base such scenarios on scientific evidence and to make good contingency plans for them. A final issue is that the Cabinet Office seems to classify its information by default, whereas it would probably be better from a scientific and campaigning perspective if assessments of catastrophic risks, and of how we are mitigating them, were freely available.
Perhaps all of these problems together imply that it would be a net positive if, within the existing risk assessment frameworks, there were a specific office for assessing potentially catastrophic risks, which could provide additional input alongside that from individual departments, do so more openly, and potentially be situated between the Cabinet Office and the Foreign Office (White House and State Department in the US). However, given the benefits of the existing system, I do not see much likelihood that an entire government department for existential and catastrophic risk would be a net improvement over the existing model, at least in the UK, even if such a thing were politically feasible.
This comment was informed by some recent meetings with civil servants, so it is obviously possible that it reflects some of their biases in favour of the existing system, which, by its nature, is hard to evaluate from an external perspective.
This was very informative. Thank you for taking the time to share it.
Do you know of any resources discussing the pros and cons of the introduction of new government agencies?
An idea worth discussing, regardless.
This book is a core text on this subject, which explicitly considers when specific agencies are effective and motivated to pursue particular goals: https://www.amazon.co.uk/Bureaucracy-Government-Agencies-Basic-Classics/dp/0465007856
I’m also reminded of Nate Silver’s interviews with the US hurricane forecasting agency in The Signal and the Noise.
I saw the idea in passing and it caught my eye. I’ll look out for this kind of information over the next week.
I don’t think the people on this forum are qualified to discuss this. Nobody in the post or comments (as of the time I posted my comment, and I am including myself) gives me the impression that they have detailed knowledge of the process and trade-offs involved in creating a new government agency, or any other type of major governmental action on x-risk. As laypeople, I believe we should not be proposing or judging any particular policy, but rather recognizing and supporting people with genuine expertise who are interested in existential risk policy.
I am not sure you are giving governments enough credit. With regard to things like gene drive safety, certain agencies are already working on these problems. I know some researchers who just got a grant to work on how to contain and manage gene drives. US military research also includes plenty of work on bioterrorism, both agricultural and via pathogens. Grantmaking efforts are relatively rapid ways to get this stuff done, I think?
X-risk is so broad and cuts across so many different fields that dedicating an entire agency to it seems difficult, especially if you consider effectiveness.
I think the breadth and interdisciplinary nature of x-risks are the best arguments for a dedicated agency with a mandate to consider any plausible catastrophic risks. It’s too easy to overlook risks without a natural “home” in a particular department right now.
You’d have to get a lot of people with minimal overlap in professional interests and skills to work together in such an agency. And depending on the ‘pet projects’ of the people at the highest levels, you might get a disproportionate focus on one particular risk (similar to what you see in EA right now; it might be difficult to retain biologists working on antimicrobial resistance or gene drive safety). Then the politics of funding within such an agency would be another minefield entirely: “oh, why does the bioterrorism division get x% more funding than the nuclear non-proliferation division?”
How do you figure that x-risks are interdisciplinary in nature? AI safety has little crossover with, say, bioterrorism. Speaking to what I personally know, even within disciplines, antimicrobial resistance research has very little to do with outbreak detection and epidemic control, even if researchers are interested in both. You already have agencies that work on each of these problems, so what’s the point of creating a new one? It seems like unnecessary bureaucracy to have to figure out what specifically falls under the X-risk Agency’s domain vs DARPA vs the CDC vs DHS vs USDA vs NASA.
I think governments ARE interested in catastrophic risk; they’re just not approaching it in the same ways as EA. They don’t see threats to humanity’s existence as all falling under one umbrella; they are separate issues that warrant separate approaches. Deciding which things are more or less deserving of funding is a political game motivated by different base assumptions. For one, the government is probably more likely to be concerned about the risk of a deadly pandemic than about AI security, because the government will naturally weight current humans as higher value than future humans. If the government assigns weights that the EA community doesn’t agree with, how are you going to handle that? Wouldn’t the relative levels of funding that causes are currently getting already kind of reflect that?
Also, what about x-risks that are associated with the government to begin with?
It sounds like your main concerns are creating needless bureaucracy and moving researchers/civil servants from areas where they have a natural fit (eg pandemic research in the Department of Health) to an interdisciplinary group where they can’t easily draw on relevant expertise and might be unhappy over different funding levels.
The part of that I’m most concerned about is moving people from a relevant organisation to a less relevant organisation. It does make sense for pandemic preparation to be under health.
The part of the current system that I’m most concerned about is identification of new risks. In the policy world, things don’t get done unless there’s a clear person responsible for them. If there’s no one responsible for thinking about “What else could go wrong?” no one will be thinking about it. Alternatively, if people are only responsible for thinking “What could go wrong?” in their own departments (Health, Defense, etc) it could be easy to miss a risk that falls outside of the current structure of government departments. Even if an outside expert spots a significant risk (think AI risk), if there’s not a clear department responsible (in the UK: Business, Enterprise, and Innovation? Digital, Culture, Media, and Sport?) then nothing will happen. If we have a clear place to go every time a concern comes up, where concerns can be assessed against each other and prioritised, we would be better at dealing with risks.
In the US, maybe Homeland Security fills this role? Canada has a Minister of Public Safety and Emergency Preparedness, so that’s quite clear. The UK doesn’t have anything as clearcut, and that can be a problem when it comes to advocacy.
About your other points:
- I don’t like needless bureaucracy either, but it seems like bureaucracy is a major part of getting your issue on the agenda. It might be necessary, even if it doesn’t seem incredibly efficient.
- I actually think it would be really good to compare government funding for different catastrophic scenarios. I doubt it’s based on anything so rational as weighting current people more highly than future people. :)
- On government risks: hopefully, if it’s explicitly someone’s job to consider risks, they can put forward policy ideas to mitigate those risks. I want those risks to be easy to notice and make policy around.
“Even if an outside expert spots a significant risk (think AI risk), if there’s not a clear department responsible (in the UK: Business, Enterprise, and Innovation? Digital, Culture, Media, and Sport?) then nothing will happen.”—I disagree with this point. You yourself pointed out the environmental movement. You can get x-risks onto the agenda without creating an entire branch of the government for it. Also, having x-risks identified by experts in the field who can talk about it amongst each other without immediately assuming “this is an x-risk” is useful and leads to better dialogue.
EA people aren’t the only rational people around. Funding decisions aren’t made entirely arbitrarily. Also, weighting current humans as more valuable than future humans is something that governments will naturally tend to do, because governments exist to take care of people who are currently alive. Additionally, governments don’t care about humanity so much as their own people. No government on earth is dedicated to preserving humanity as a whole; each is concerned with the people living within its borders.
Nuclear proliferation is a real threat, and for a government agency to address it and suggest policy could potentially undermine other goals that the government may have. A sensible policy to mitigate the threat of nuclear war would be to get the US to disarm, but this obviously goes against its goal of retaining its status as a military superpower.
—
Government agencies typically have very well-defined roles or domains. X-risk crosses way too many domains to be effective as an agency. X-risk think tanks are great because they have the ability and the freedom to sprawl across many different domains as needed.
But let’s say we get this x-risk agency. In practice, how do you expect this to work out?
It seems to me that you’re mostly concerned about identifying new risks and having a place for people to go if they’ve identified risks. So, let’s say you get this agency, they work on it for a year, and they don’t come up with any new risks. Have they been doing their job? How do we determine whether this agency should continue to get funding?
With regard to your point about identifying new risks: if I’m working in Health or Defense, I’m not even going to be thinking about risks associated with AI. Identifying new risks takes a level of specific expertise and knowledge. I’m a biologist; I’m never going to point out new x-risks that are not associated with biology. I simply don’t have the expertise or understanding to think about AI security, just as an AI security person doesn’t have the knowledge base to determine exactly how much of a risk antimicrobial resistance is. To this point, if you hire specific people to think about “what could go wrong?”, they’re almost certain to be biased toward looking for risks in the areas they already know best. You can see this in EA: lots of computer science people, and all we talk about is AI security, even though other risks pose much bigger near-term threats.
If the goal isn’t to identify new risks but to develop and manage research programs, that’s pretty much already done in other agencies. It doesn’t make sense for an X-risk Agency to fund something that a health agency is also funding. If the idea is to have in-house experts work on it, that also doesn’t make sense, and can be problematic. Even within some x-risk areas, experts disagree on what the biggest issues are; for example, in antimicrobial resistance, lots of people cite agricultural use as a big driver, but plenty of others think that use in hospitals is a much bigger problem. If you only have a set budget, how do you decide what to tackle first when the experts are split?
Given all that, even within agencies, it’s super hard to get people to care about the one thing your office is working on. Just because you have an agency working on something doesn’t mean it’s 1) effective or 2) useful. As a student, I briefly worked with an office in DHHS, and half of our work was just trying to get other people in the same department, presumably working on the same types of problems, to care about our particular issue. In fact, I can absolutely see an x-risk agency being more of a threat than a boon to x-risk research: “oh, the X-risk Agency works on that; that’s not my problem, go to them.”
Maybe you would start with a small part of the defense bureaucracy?
We’d have to think very carefully about how we frame it. The choice is less obvious than it might first appear, largely irreversible, and a major factor in how successful we are at improving government responses to risks overall.
People will expect it to address different issues if it sits under Defense rather than Health and Human Services or Homeland Security. If we make it part of the defense bureaucracy, it’s there forever, which has pros and cons. That would likely be a better approach somewhere like the US, where defense is relatively well funded, than somewhere like Canada, where the defense budget is regularly being cut. It’s also a better approach if we’re very concerned about nuclear war and bioterrorism and want to frame AGI as a hostile power. It’s a worse option if we want to frame dangerous AGI as domestic enterprise gone wrong and focus on issues like pandemics and climate change. If we decide that the creation of government agencies is an important part of our long-term policy strategy, several people should think very hard about where these agencies should be located in each government we lobby.
One issue to consider is whether catastrophic risk is a sufficiently popular issue for an agency to use it to sustain itself. Independent organisations can be vulnerable to cuts. This probably varies a lot by country.
Do you know of any quantitative evidence on the subject? My impression was there is a fair bit of truth to the maxim “There’s nothing as permanent as a temporary government program.”
Both creating and sustaining a government agency will likely take more popular support than we currently have, but I still think it’s an important long term goal.
I’m under the impression that agencies are less dependent on the ebb and flow of public opinion than individual policy ideas. However, they would certainly still need some public support. On the other hand, having an agency for catastrophic risk prevention might give the issue legitimacy and actually make it more popular.