No one is saying p(doom) is 100%, but there is good reason to think that it is 50% or more, i.e. that the default outcome of AGI is doom. It doesn’t default to everything somehow being OK: to alignment solving itself, or to the alignment work done so far (or by 2030) being enough if we get a foom tomorrow (or by 2030). I’ve not seen any compelling argument to that effect.
Thanks for the links. I think a lot of the problem with the proposed solutions is that they don’t scale to ASI, and aren’t watertight. Even 99.999999% per-action alignment, in the limit of an ASI performing billions of actions a minute, still means everyone is dead after a little while. RLHF’d GPT-4 is only safe because it is weak.
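To make that concrete, here is a minimal back-of-the-envelope sketch. The 1e-8 per-action failure probability (i.e. "99.999999% alignment"), the 1e9 actions-per-minute rate, and the independence assumption are all illustrative assumptions, not figures for any particular system:

```python
import math

# Illustrative assumptions, not measurements of any real system
p_fail = 1e-8          # per-action probability of a catastrophic/misaligned action
actions_per_min = 1e9  # assumed action rate of a strong ASI

# Expected number of catastrophic actions per minute
expected_per_min = p_fail * actions_per_min
print(f"Expected catastrophic actions per minute: {expected_per_min:.1f}")  # -> 10.0

# Probability of surviving one hour with zero catastrophic actions,
# treating actions as independent trials
n_actions = actions_per_min * 60
p_survive_hour = math.exp(n_actions * math.log1p(-p_fail))
print(f"P(no catastrophic action in 1 hour): {p_survive_hour:.3e}")  # vanishingly small
```

Under these assumed numbers, "eight nines" of per-action reliability still yields roughly ten catastrophic actions every minute, which is the point about per-action alignment not scaling.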
Alignment at the level typical of a human toward humanity, or the “common sense” that can be picked up from training data, is still nowhere near sufficient. Uplifting any given human to superintelligence would also leave everyone dead before too long, due to the massive power imbalance, even if only by accident (“whoops, I was just doing some physics experiments; didn’t think that would happen”; “I thought it would be cool if everyone became a post-human hive mind; I thought they’d like it”).
And quite apart from alignment, we still need to eliminate catastrophic risks from misuse (jailbreaks, open-sourced unaligned base model weights) and coordination failure (how to avoid chaos when everyone is wishing for different things from their genies). Those alone are enough to justify shutting it all down now.