I’ve been thinking that the default existential risk framing might bias EAs to think that the world would eventually end up okay if it weren’t for specific future risks (AI, nuclear war, pandemics). This framing downplays the possibility that things are on a bad trajectory by default because sane and compassionate forces (“pockets of sanity”) aren’t sufficiently in control of history (and perhaps have never been – though some of the successes around early nuclear risk strike me as impressive).
We can view AI risk as an opportunity to attain control over history, because aligned AI, if things go well, could steer history better than we currently do. But how do you get from “not being in control of history” to solving a massive coordination problem (as well as the technical problems around alignment)? Growing and expanding pockets of sanity seems like a top priority.
(Separate point: My intuition is that “pockets of sanity” are pretty black and white. If something isn’t a pocket of sanity, marginal improvements to it will have little effect; it’s better to focus on supporting (or building anew) something where the team, organization, government branch, etc., already has the type of leadership and culture you want to see more of.)
Your impression of the default framing aligns with what I’ve heard from folks! Beyond the benefits of changing humanity’s trajectory, there’s also an argument that we should pursue systems change for the factors driving existential risk in the first place, rather than addressing it only from a research-focused angle. That’s the argument of this article on meta existential risk!