This seems like an important claim, but I could also quite plausibly see misaligned AIs destabilising a world where an aligned AI exists. What reasons do we have to think that an aligned AI would be able to very consistently (e.g. 99.9%+ of the time) ward off attacks from misaligned AIs deployed by bad actors? Given all the uncertainty around these scenarios, I think the extinction risk per century from this alone could be in the 1-5% range and create a nontrivial discount rate.
Today there is room for an intelligence explosion and explosive reproduction of AGI/robots (the Solar System can support trillions of AGIs for every human alive today). If aligned AGI undergoes such an intelligence explosion and reproduction, there is no longer free energy for rogue AGI to grow explosively. A single rogue AGI introduced to such a society would be vastly outnumbered and would lack special advantages, while superabundant AGI law enforcement would be well positioned to detect or prevent such an introduction in any case.
Already today, states have reasonably strong monopolies on the use of force. If all military equipment (and the AI/robotic infrastructure that supports it and most of the economy) is trustworthy (e.g. can be relied on not to engage in a military coup, and to comply with and enforce international treaties via AIs verified by all states, etc.), then there could be trillions of aligned AGIs per human, plenty to block violent crime or WMD terrorism.
For war between states, that’s point #7. States can make binding treaties to renounce WMD war or protect human rights or the like, enforced by AGI systems jointly constructed/inspected by the parties.
One possibility would be that these misaligned AIs are quickly defeated or contained, and future ones are also severely resource-constrained by the aligned AI, which has a large resource advantage. So there are possible worlds where there aren’t really any powerful misaligned AIs (nearby), and those worlds have vast futures.