Okay, fine. I agree that it’s hard to come up with an x-risk more urgent than AGI. (Though here’s one: digital people being instantiated and made to suffer in large numbers would be an s-risk, and could potentially outweigh the risk of damage done by misaligned AGI over the long term.)