My current view is that the near- and medium-term overlap between AI risk and biorisk is nearly 100%. Bioweapons and biotechnology seem like the only path by which an AI could drive humanity to extinction with any decent chance of working in the short or medium term.
I recently did a deep dive into molecular nanotech (one alternative method that has been proposed), and I think the technology is definitely at least 60 years away, possibly a century or more away, possibly not even possible. Even with the speedups in research from AGI, I think our enemy would be foolish to pursue this path instead of working on bioweapons, a technology that already exists and has been proven devastating in effect. (Note that I do not believe in intelligence explosions or other “godlike AI” hypotheses.)
As someone who does lots of biorisk work, I disagree that this is the only likely catastrophic risk, as I note in my response, but I even more strongly disagree that this is actually a direct extinction risk—designed diseases that kill everyone in the face of humans actively trying to stop them aren’t obviously possible, much less findable by near-human or human-level AI systems.
Of course, combined with systemic fragility, intentional disinformation, and other attack modes enabled by AI, it seems plausible that a determined adversary with tremendous resources could create an extinction event—though it’s unclear any such group exists. But that doesn’t route only through biorisk, and even robustly solving biorisk probably wouldn’t eliminate this risk given other vulnerabilities and determined adversaries. (So it’s a good thing no large and undetected group of people or major government with implausibly good opsec wants to spend a decade and billions of dollars to commit omnicide, while no one notices or does anything to stop them.)
Oh wow, that is quite a drastic overlap! Do you by any chance know of any writing on the topic that has convinced you, e.g. why nuclear+AI is not something to worry about?
I would be open to persuasion on nuclear risk, but it seems like a difficult plan to me. There are only a few nations with sufficient arsenals to trigger a nuclear exchange, and they all require human beings to launch the nukes. I would be interested if someone could make the case for AI+nuclear, though.
I am no expert in this, but I can think of an AI directly convincing people with launch access, or using deepfakes to impersonate their commander, and probably many other scenarios. What if new nukes are built with AI assisting in the hardware design, and the AI sneaks in a cyber backdoor it can later use to take control of the nuke?
There are meant to be a lot of procedures in place to ensure that an order to launch nukes is genuine, and to ensure that a nuke can’t be launched without the direct cooperation of the head of state and the military establishment. Convincing one person wouldn’t be enough, unless that one person was the president, and even then, the order may be disobeyed if it comes off as insane. As for the last part, if you get “control of a nuke” while it’s sitting in a bunker somewhere, all you can do is blow up the bunker, which doesn’t do anything.
The most likely scenario seems to be some sort of Stanislav Petrov scenario, where you falsely convince people that a first strike is occurring and that they need to respond immediately. Or that there are massive security holes somewhere that can be exploited.