Really like this post!
I’m wondering whether human-level AI and robotics will significantly decrease civilisation’s susceptibility to catastrophic setbacks.
AI systems and robots can’t be destroyed by pandemics. They don’t depend on agriculture—just mining and some form of energy production. And a very small number of systems could hold tacit expertise for ~all domains.
Seems like this might reduce the risk by a lot, such that the 10% numbers you’re quoting are too high. E.g. you’re assigning 10% to a bio-driven setback. But I’d have thought that would have to happen before we get human-level robotics?
I think my biggest uncertainty about this is:
If there were a catastrophic setback of this kind, and civilisation tried hard to save and maintain the weights of superintelligent AI (which they presumably would), how likely are they to succeed?
My hunch is that they very likely could succeed. E.g. in the first couple of decades they’d have continued access to superintelligent AI advice (and maybe robotics) from pre-existing hardware. They could use that to bootstrap to longer periods of time, e.g. by saving the weights on hard drives rather than SSDs, and later transferring them to a more secure, long-lasting format. They could then figure out the minimal-effort version of compute maintenance and/or production needed to keep running some superintelligences indefinitely.