The main thing is that the clean distinction between attackers and defenders in the theory of the offense-defense balance does not exist in practice. All attackers are also defenders and vice-versa.
I notice that this doesn’t seem to apply to the scenario/conversation you started this post with. If a crazy person wants to destroy the world with an AI-created bioweapon, he’s not also a defender.
Another scenario I worry about is AIs enabling value lock-in: value locked-in AIs/humans/groups would then have an offensive advantage in manipulating the values of others (i.e., those not yet willing to lock in their own values) while not having to play defense themselves.
If a crazy person wants to destroy the world with an AI-created bioweapon
Or, more concretely, nuclear weapons. Leaving aside regular full-scale nuclear war (which is censored from the graph for obvious reasons), this sort of graph will never show you something like Edward Teller’s “backyard bomb”, or a salted bomb. (Or any of the many other nuclear weapon concepts which never got developed, or were curtailed very early in deployment like neutron bombs, for historically-contingent reasons.)
There is, as far as I am aware, no serious scientific doubt that they are technically feasible: that multi-gigaton bombs could be built, or that salted bombs in relatively small quantities would render the earth uninhabitable to a substantial degree, and for what are modest expenditures as a percentage of GDP. It is just that there is no practical use for these weapons by normal, non-insane people. There is no use in setting an entire continent on fire, or in the long-term radioactive poisoning of the same earth on which you presumably intend to live afterwards.
But you would be greatly mistaken if you concluded from historical data that these were impossible because there is nothing in the observed distribution anywhere close to those fatality rates.
(You can’t even make an argument from an Outside View of the sort that ‘there have been billions of humans and none have done this yet’, because nuclear bombs are still so historically new, and only a few nuclear powers were even in a position to consider whether to pursue these weapons or not—you don’t have k = billions, you have k < 10, maybe. And the fact that several of those pursued weapons like neutron bombs as far as they did, and that we know about so many concepts, is not encouraging.)
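The gap between k = billions and k < 10 can be made concrete with a quick back-of-the-envelope calculation. Here is a minimal sketch using Laplace's rule of succession (a uniform prior on the per-actor probability of pursuing such a weapon); the actor counts are illustrative assumptions, not historical claims:

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior mean of the event probability after observing
    `successes` events in `trials` opportunities, under a uniform prior
    (Laplace's rule of succession)."""
    return (successes + 1) / (trials + 2)

# Hypothetical: if billions of actors had each been in a position to
# build such a weapon and none had, the estimated chance that the next
# one does would be vanishingly small:
p_billions = rule_of_succession(0, 2_000_000_000)

# But only a handful of nuclear powers were ever actually in that
# position, so the same zero-event record is far weaker evidence:
p_few = rule_of_succession(0, 10)

print(f"k = 2e9 actors, 0 events -> p ~ {p_billions:.2e}")
print(f"k = 10 actors,  0 events -> p ~ {p_few:.2f}")
```

With only ten trials, a clean track record still leaves roughly an 8% posterior chance for the next opportunity, thousands of times higher than the billions-of-humans version of the argument would suggest.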