Random thought: does the idea of an explosive intelligence takeoff assume that alignment is solvable?
If the alignment problem isn’t solvable, then an AGI, in creating ASI, would face the same dilemma as humans: the ASI wouldn’t necessarily share its goals, might disempower it, instrumental convergence, all the usual stuff.
I suppose one counterargument is that the AGI rationally shouldn’t create ASI, for these reasons, but, similar to humans, might do so anyway due to competitive/racing dynamics. Any AGI that doesn’t create ASI will be left behind, etc.
Another one you missed is that the world is getting better over time, so we should expect donation opportunities in the future to be worse.