“Even if an outside expert spots a significant risk (think AI risk), if there’s not a clear department responsible (in the UK: Business, Enterprise, and Innovation? Digital, Culture, Media, and Sport?) then nothing will happen.”—I disagree with this point. You yourself pointed out the environmental movement: you can get x-risks onto the agenda without creating an entire branch of government for them. Also, having x-risks identified by experts in the field, who can discuss them amongst themselves without immediately jumping to “this is an x-risk,” is useful and leads to better dialogue.
EA people aren’t the only rational people around, and funding decisions aren’t made entirely arbitrarily. Also, weighting current humans as more valuable than future humans is something governments will naturally tend to do, because governments exist to take care of people who are currently alive. Additionally, governments don’t care about humanity so much as their own people. No government on earth is dedicated to preserving humanity; each is concerned with the people living within its borders.
Nuclear proliferation is a real threat, but for a government agency to address it and suggest policy could undermine other goals the government holds. A sensible policy to mitigate the threat of nuclear war would be for the US to disarm, but that obviously runs against its goal of retaining its status as a military superpower.
—
Government agencies typically have very well-defined roles or domains. X-risk crosses far too many domains for a single agency to cover effectively. X-risk think tanks are great precisely because they have the ability and the freedom to sprawl across many different domains as needed.
But let’s say we get this x-risk agency. In practice, how do you expect this to work out?
It seems to me that you’re mostly concerned with identifying new risks and having a place for people to go once they’ve identified one. So, say you get this agency, they work for a year, and they don’t come up with any new risks. Have they been doing their job? How do we decide whether the agency should continue to get funding? On identifying new risks: if I’m working in Health or Defense, I’m not even going to be thinking about risks associated with AI. Identifying new risks takes a level of specific expertise and knowledge. I’m a biologist—I’m never going to point out new x-risks that aren’t associated with biology. I simply don’t have the expertise or understanding to think about AI security, just as an AI security person doesn’t have the knowledge base to determine exactly how much of a risk antimicrobial resistance is. To this point, if you hire specific people to think about “what could go wrong?”, they’re almost certain to be biased toward looking for risks in the areas they already know best. You can see this in EA—lots of computer science people, and all we talk about is AI security, even though other risks pose much bigger near-term threats.
If the goal isn’t to identify new risks but to develop and manage research programs, that’s pretty much already done in other agencies. It doesn’t make sense for an Xrisk Agency to fund something a health agency is also funding. If the idea is to have in-house experts work on it, that doesn’t make sense either, and it can be problematic: even within a single x-risk area, experts disagree on what the biggest drivers are. In antimicrobial resistance, for example, lots of people cite agricultural use as a big driver, while plenty of others think use in hospitals is a much bigger problem. If you only have a set budget, how do you decide what to tackle first when the experts are split?
Given all that, even within agencies, it’s super hard to get people to care about the one thing your office is working on. Just because you have an agency working on something doesn’t mean it’s 1) effective or 2) useful. As a student, I briefly worked with an office in DHHS, and half of our work was just trying to get other people in the same department, presumably working on the same types of problems, to care about our particular issue. In fact, I can absolutely see an x-risk agency being more of a threat than a boon to x-risk research—“oh, Xrisk Agency works on that; that’s not my problem, go to them.”