Most suffering-focused EAs I know agree about the facts: there’s a small chance that AI-powered space colonization will create flourishing futures highly optimized for happiness and other forms of moral value, and this small chance of a vast payoff dominates the expected value of the future on many moral views. I think people generally agree that the typical/median future scenario will be much better than the present (for reasons like this one, though there’s much more to say about that), though in absolute terms probably not nearly as good as it could be.
So in my perception, most of the disagreement comes from moral views, not from perceptions of the likelihood or severity of s-risks.