The title of this post is a general claim about the long-term future, and yet nowhere in the post do you mention any x-risks other than AI. Why should we not expect other x-risks to outweigh these AGI considerations, since they may not fit into this framework of extinction, an OK outcome, or a utopian outcome? I am not necessarily convinced that pulling the utopia handle on AGI-related actions (like the four you suggest) has a greater effect on P(utopia) than some set of non-AGI-related interventions.