[Question] Imagine AGI killed us all in three years. What would have been our biggest mistakes?

Imagine GPT-8 arrives in 2026, triggers a fast takeoff, and the resulting AGI kills us all.

Now imagine that the AI Safety work we did in the meantime (technical and non-technical) kept growing linearly in resources committed, but it obviously wasn't enough to prevent our extinction.

What were the biggest, most obvious mistakes we made (1) as individuals and (2) as a community?

For example, I spend some of my time working on AI Safety, but only ~10% of it. In this world I probably should have committed my life to it. Maybe I should also have considered more public displays of protest?
