I found these visualizations very helpful! I think of AGI as the top of your HLAI section: human-level at all tasks. Life 3.0 claimed that being superhuman at AI coding alone would become super risky via recursive self-improvement (RSI). But it seems to me a system would also need to be roughly human-level at some other tasks, like planning and deception, to be super risky. Still, that could be relatively narrow overall.