I strongly agree with this, contra titotal. To explain why, I’ll note that there are several disjunctive places where this risk plays out.
First, near-human AGI systems or narrow AI could be misused by sophisticated actors to enhance their ability to create bioweapons. This might increase that risk significantly, but there are few such actors, and lots of security safeguards. Bio is hard, and near-human-level AI isn’t a magic bullet for making it easy. Narrow AI that accelerates the ability to create bioweapons also accelerates a lot of defensive technologies, and it seems very, very implausible that something an order of magnitude worse than natural diseases would be found. That’s not low risk, but it’s nothing like half the total risk.
Second, misuse or misalignment of human-level AI systems creating Bostromian speed superintelligences or collective superintelligences creates huge risks, but these aren’t specific to biological catastrophes, and they don’t seem dominant; humanity is vulnerable in so many ways that patching one route seems irrelevant. And third, this is true to a far greater extent for misaligned ASI.
I’m interested in what other paths of attack you think could be more successful than deploying bioweapons (and attacking the survivors).
Or are you saying that only a massively scaled up superintelligence could pull off extinction, and that if such a thing is impossible, then so is near-term AI x-risk?
In the near term, misuse via bio doesn’t pose existential risks, because synthetic bio is fundamentally harder than people seem to assume. Making a bioweapon is very hard, making one significantly worse than what previous natural diseases and bioweapons were capable of is even harder, and the critical path isn’t addressed by most of the capabilities that the narrow AI I expect to be possible before AGI could plausibly provide.
After that, I think the risk from powerful systems is disjunctive: any of a large number of different things could allow a malign actor to take over, given the effectively unlimited resources that a collective or speed superintelligence enabled by relatively cheap AGI could amass. I don’t know exactly how scaled up it needs to be to pose that risk, and perhaps it’s far away, but if we’re facing a misaligned ASI that wants to kill us, the specific method isn’t really a limiting factor.