Why Not Try Build Safe AGI?
Remmelt · 24 Dec 2022 10:02 UTC

Copy-pasting from my one-on-ones with AI Safety researchers:

- Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) (Remmelt, 19 Dec 2022 12:02 UTC)
- List #1: Why stopping the development of AGI is hard but doable (Remmelt, 24 Dec 2022 9:52 UTC)
- List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well… coordinating as humans with AGI coordinating to be aligned with humans (Remmelt, 24 Dec 2022 9:53 UTC)
- List #3: Why not to assume on prior that AGI-alignment workarounds are available (Remmelt, 24 Dec 2022 9:54 UTC)