I think the breadth and interdisciplinary nature of x-risks are the best arguments for a dedicated agency with a mandate to consider any plausible catastrophic risks. It’s too easy to overlook risks without a natural “home” in a particular department right now.
You’d have to get a ton of people with minimal overlap in professional interests/skills to work together in such an agency. And depending on the ‘pet projects’ of the people at the highest levels, you might get a disproportionate focus on one particular risk (similar to what you see in EA right now—it might be difficult to retain biologists working on antimicrobial resistance or gene drive safety). Then the politics of funding within such an agency would be another minefield entirely—“oh, why does the bioterrorism division get x% more funding than the nuclear nonproliferation division?”.
How do you figure that x-risks are interdisciplinary in nature? AI safety has little crossover with, say, bioterrorism. Speaking to what I personally know, even within disciplines, antimicrobial resistance research has very little to do with outbreak detection/epidemic control, even if researchers are interested in both. You already have agencies that work on each of these problems, so what’s the point in creating a new one? It seems like unnecessary bureaucracy to have to figure out what specifically falls under X-risk Agency’s domain vs DARPA vs the CDC vs DHS vs USDA vs NASA.
I think governments ARE interested in catastrophic risk; they’re just not approaching it in the same ways as EA. They don’t see threats to humanity’s existence as all falling under one umbrella—they are separate issues that warrant separate approaches. Deciding which things are more or less deserving of funding is a political game motivated by different base assumptions. For one, the government is probably more likely to be concerned about the risk of a deadly pandemic than about AI security, because the government will naturally weight current humans as more valuable than future humans. If the government assigns weights that the EA community doesn’t agree with, how are you going to handle that? Wouldn’t the relative levels of funding that these causes currently get already reflect that?
Also, what about x-risks that are associated with the government to begin with?
It sounds like your main concerns are creating needless bureaucracy and moving researchers/civil servants from areas where they have a natural fit (e.g. pandemic research in the Department of Health) to an interdisciplinary group where they can’t easily draw on relevant expertise and might be unhappy over different funding levels.
The part of that I’m most concerned about is moving people from a relevant organisation to a less relevant organisation. It does make sense for pandemic preparation to be under health.
The part of the current system that I’m most concerned about is identification of new risks. In the policy world, things don’t get done unless there’s a clear person responsible for them. If there’s no one responsible for thinking about “What else could go wrong?” no one will be thinking about it. Alternatively, if people are only responsible for thinking “What could go wrong?” in their own departments (Health, Defense, etc) it could be easy to miss a risk that falls outside of the current structure of government departments. Even if an outside expert spots a significant risk (think AI risk), if there’s not a clear department responsible (in the UK: Business, Enterprise, and Innovation? Digital, Culture, Media, and Sport?) then nothing will happen. If we have a clear place to go every time a concern comes up, where concerns can be assessed against each other and prioritised, we would be better at dealing with risks.
In the US, maybe Homeland Security fills this role? Canada has a Minister of Public Safety and Emergency Preparedness, so that’s quite clear. The UK doesn’t have anything as clear-cut, and that can be a problem when it comes to advocacy.
About your other points:
- I don’t like needless bureaucracy either, but it seems like bureaucracy is a major part of getting your issue on the agenda. It might be necessary, even if it doesn’t seem incredibly efficient.
- I actually think it would be really good to compare government funding for different catastrophic scenarios. I doubt it’s based on anything so rational as weighting current people more highly than future people. :)
- On government risks: hopefully if it’s explicitly someone’s job to consider risks, they can put forward policy ideas to mitigate those risks. I want those risks to be easy to notice and make policy around.
“Even if an outside expert spots a significant risk (think AI risk), if there’s not a clear department responsible (in the UK: Business, Enterprise, and Innovation? Digital, Culture, Media, and Sport?) then nothing will happen.”—I disagree with this point. You yourself pointed out the environmental movement. You can get x-risks onto the agenda without creating an entire branch of the government for it. Also, having x-risks identified by experts in the field who can talk about them amongst each other without immediately assuming “this is an x-risk” is useful and leads to better dialogue.
EA people aren’t the only rational people around. Funding decisions aren’t made entirely arbitrarily. Also, weighting current humans as more valuable than future humans is something that governments will naturally tend to do, because governments exist to take care of people who are currently alive. Additionally, governments don’t care about humanity so much as their own people. No government on earth is dedicated to preserving humanity; each is concerned with the people living within its borders.
Nuclear proliferation is a real threat, and for a government agency to address it and suggest policy could potentially undermine other goals that the government may have. A sensible policy to mitigate the threat of nuclear war would be for the US to disarm, but this obviously goes against its goal of retaining its status as a military superpower.
—
Government agencies typically have very well-defined roles or domains. X-risk crosses way too many domains for a single agency to cover effectively. X-risk think tanks are great because they have the ability and the freedom to sprawl across many different domains as needed.
But let’s say we get this x-risk agency. In practice, how do you expect this to work out?
It seems to me that you’re mostly concerned about identifying new risks and having a place for people to go if they’ve identified risks. So, let’s say you get this agency, and they work on it for a year, and don’t come up with any new risks. Have they been doing their job? How do we determine whether this agency should continue to get funding or not? With regard to your point about identifying new risks: if I’m working in Health or Defense, I’m not even going to be thinking about risks associated with AI. Identifying new risks takes a level of specific expertise and knowledge. I’m a biologist—I’m never going to point out new x-risks that are not associated with biology. I simply don’t have the expertise or understanding to think about AI security, just as an AI security person doesn’t have the knowledge base to determine exactly how much of a risk antimicrobial resistance is. To this point, if you hire specific people to think about “what could go wrong?”, they’re almost certain to be biased toward looking for risks in the areas they already know best. You can see this in EA—lots of computer science people in EA, and all we talk about is AI security, even though other risks pose much bigger near-term threats.
If the goal isn’t to identify new risks, but to develop and manage research programs, that’s pretty much already done in other agencies. It doesn’t make sense for X-risk Agency to fund something that a health agency is also funding. If the idea is to have in-house experts work on it, that doesn’t make sense either, and can be problematic. Even within some x-risk areas, experts disagree on what the biggest problems are—in antimicrobial resistance, for example, lots of people cite agricultural use as a big driver, but plenty of other people think that use in hospitals is a much bigger problem. If you only have a set budget, how do you decide what to tackle first when the experts are split?
Given all that, even within agencies, it’s super hard to get people to care about the one thing your office is working on. Just because you have an agency working on something doesn’t mean it’s 1) effective or 2) useful. As a student, I briefly worked with an office in DHHS, and half of our work was just trying to get other people in the same department, presumably working on the same types of problems, to care about our particular issue. In fact, I can absolutely see an x-risk agency being more of a threat than a boon to x-risk research—“oh, X-risk Agency works on that; that’s not my problem, go to them.”