I’m a bit confused and wanted to clarify what you mean by AGI vs AAGI: are you of the belief that AGI could be safely controlled (e.g., boxed) but that setting it to “autonomously” pursue the same objectives would be unsafe?
Could you describe what an AGI system might look like in comparison to an AAGI?
Yes, surely inner alignment is needed for an AGI not to (accidentally) become an AAGI by default?