I didn’t mean to imply that. I think we very likely need to solve alignment at some point to avoid existential catastrophe (since we need aligned powerful AIs to help us achieve our potential), but I’m not confident that the first misaligned AGI would be enough to cause this level of catastrophe (especially for relatively weak definitions of “AGI”).