I think my biggest uncertainty about this is:
If there were a catastrophic setback of this kind, and civilisation tried hard to save and maintain the weights of superintelligent AI (which they presumably would), how likely would they be to succeed?
My hunch is that they very likely could succeed. In the first couple of decades they’d have continued access to superintelligent AI advice (and maybe robotics) from pre-existing hardware, and they could use that to bootstrap to longer timescales: for example, saving the weights on hard drives rather than SSDs (which can lose data within a few years if left unpowered), and later transferring them to a more secure, long-lasting format. They could then figure out the minimal-effort version of compute maintenance and/or production needed to keep running some superintelligences indefinitely.
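To make the storage step concrete, here’s a minimal sketch (in Python, with hypothetical directory names) of the kind of checksummed, redundant archiving this would involve: record a hash for every weight shard, keep copies on independent drives, and periodically verify each copy so any damaged shard can be re-cloned from a surviving good one before it too degrades.

```python
import hashlib
import json
from pathlib import Path

CHUNK = 1 << 20  # read in 1 MiB chunks to keep memory use flat

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so its integrity can be checked later."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(CHUNK):
            h.update(block)
    return h.hexdigest()

def write_manifest(shard_dir: Path, manifest: Path) -> None:
    """Record a checksum for every weight shard in the primary archive."""
    digests = {p.name: sha256_of(p) for p in sorted(shard_dir.glob("*.bin"))}
    manifest.write_text(json.dumps(digests, indent=2))

def verify_copy(copy_dir: Path, manifest: Path) -> list[str]:
    """Return the names of shards that are missing or corrupted in a copy."""
    expected = json.loads(manifest.read_text())
    bad = []
    for name, digest in expected.items():
        p = copy_dir / name
        if not p.exists() or sha256_of(p) != digest:
            bad.append(name)
    return bad

if __name__ == "__main__":
    # Hypothetical layout: one primary archive plus independent copies
    # on separate drives; a copy that fails verification gets re-cloned.
    primary = Path("archive/primary")
    manifest = Path("archive/manifest.json")
    write_manifest(primary, manifest)
    for copy in [Path("archive/copy_a"), Path("archive/copy_b")]:
        damaged = verify_copy(copy, manifest)
        if damaged:
            print(f"{copy}: re-copy {len(damaged)} shard(s): {damaged}")
```

The point of the periodic verification pass is that on timescales of decades the failure mode isn’t one dramatic loss but slow, silent bit rot across media, so the archive survives only if corruption is detected and repaired faster than it accumulates.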