[A]dvanced nanotechnology might allow states to manufacture weapons that pose an (even) greater probability of existential catastrophe than the highly risky weapons that are currently accessible (such as currently accessible nuclear and biological weapons). I’d place a “grey goo” scenario, where self-replicating nanoscale machines turn the entire world into copies of themselves, into this category.
I have a few uninformed doubts about how likely this is (although you never claimed it was especially likely):
There are already millions of different types of self-replicating nanoscale machines out there (biological organisms), and none of them seem to have gotten close to turning the entire world into copies of themselves. So it seems pretty hard to make “gray goo,” even with APM.
These machines might be more capable than biological ones if they were very intelligent, but that scenario seems to already be covered by the “APM accelerates TAI” scenario.
On the other hand, maybe APM is much more dangerous than evolution because operators could do more than just local optimization.
Niche point: How much is that argument undermined by anthropic considerations? I suspect not very, because:
I’m pointing out that we don’t see near-catastrophe, rather than that we don’t see total catastrophe.
Our actions arguably matter much more if we haven’t gotten lucky.
As armchair ecology, there seem to be non-luck reasons why there hasn’t been biological “gray goo” (“green goo”?) (although, admittedly, manufactured machines might be able to get around these):
There’s a tradeoff between versatility and specialization—it’s hard to be most successful in all niches.
There’s competition, e.g., if a population is very large, predators multiply.
Organisms seem unable to have both explosive population growth and fast motion, since organisms that rely on eating other organisms for energy run out of food if their population explodes, while organisms that rely on the sun for energy can’t move quickly.
The offense-defense balance might not be so bad: as suggested by the point about biological predators, APM might create strong defensive capabilities, e.g., the capability to quickly identify dangerously replicating machines and then create targeted/specialized countermeasures.
Being really good at replicating within human bodies (naively) seems much easier than being good at replicating in any environment. But the former worry is ~bioweapons, which are already covered by another risk scenario you mention.
Developing or using “gray goo” might not be very strategically appealing.
Assuming it could be made, it’d be very self-destructive (and/or maybe could be retaliated against), so using it would be a terrible idea, and it’d be hard to make credible threats with it. In other words, it might not be a type-2a vulnerability (“safe first strike”) after all.
It might still be a “safe first strike” vulnerability if there were great narrow-scope countermeasures that just one side could develop in advance and no secure second-strike capabilities, in which case these weapons would pose more local risk but less humanity-wide risk. (Or maybe more humanity-wide risk if first strike were just somewhat safe?)
Thanks for posting this!