In the near term, misuse via bio doesn’t pose existential risks, because synthetic biology is fundamentally harder than people seem to assume. Making a bioweapon is very hard, making one significantly worse than what natural diseases and past bioweapons were capable of is even harder, and the critical path isn’t addressed by most of the capabilities that the narrow AI I expect before AGI could plausibly provide.
After that, I think the risk from powerful systems is disjunctive: any of a large number of different things could allow a malign actor to take over, given the effectively unlimited resources that a collective or speed superintelligence enabled by relatively cheap AGI could amass. I don’t know exactly how scaled up a system needs to be to pose that risk, and perhaps it’s far away, but if we’re facing a misaligned ASI that wants to kill us, the specific method isn’t really a limiting factor.