We are fast running out of time to avoid ASI-induced extinction. How long until a model (intrinsically unaligned, given that alignment remains unsolved) self-exfiltrates and initiates recursive self-improvement? We need a global moratorium on further AGI/ASI development as soon as possible. Please do what you can to help with this—talk to people you know, and to your representatives. Support groups like PauseAI.