I think a common assumption is that if you can survive superintelligent AI, then the AI can figure out how to provide existential safety. So all you need to do is survive AI.
(“Surviving AI” means not just aligning AI, but also ensuring fair governance—making sure AI doesn’t enable a permanent dictatorship or whatever.)
(FWIW I think this assumption is probably correct.)