> after a tech company singularity, such as if the tech company develops safe AGI
I think this should be “after AGI”?
Yes, thanks! Fixed.
I’m a bit confused and want to clarify what you mean by AGI vs AAGI: do you believe that AGI could be safely controlled (e.g., boxed), but that setting it to “autonomously” pursue the same objectives would be unsafe?
Could you describe what an AGI system might look like in comparison to an AAGI?
Also, surely inner alignment is needed for an AGI not to (accidentally) become an AAGI by default?
Thank you so much for this extremely important and brilliant post, Andrew! I really appreciate it.
I completely agree that the degree to which autonomous general-capabilities research is outpacing alignment research needs to be reduced (most likely via recruitment and shifts in social opinion), and that this is neglected relative to its importance.
I wrote a post on a related topic recently, and it would be really great to hear what you think! (https://forum.effectivealtruism.org/posts/juhMehg89FrLX9pTj/a-grand-strategy-to-recruit-ai-capabilities-researchers-into)