Ah okay, I didn’t state this, but I’m operating under the definition that superintelligence is inherently uncontrollable, and thus not a tool. For now, AI is being used as a tool, but in order to gain more power, states/corporations will develop it to the point where it has its own agency, as described by Bostrom and others. I don’t see any power-seeking entity reaching a point in its AI’s capabilities where it’s satisfied and stops developing it, since a competitor could continue development and gain a power/capabilities advantage. Moreover, a sufficiently advanced AI would be motivated to improve its own cognitive abilities to further its goals.
It may be possible for states/corporations to align a superintelligence just to themselves, if they can figure out which values to specify and how to home in on them, but the superintelligence would still be acting of its own accord and out of their control in terms of how it accomplishes its goals. This doesn’t seem likely to me if superintelligence is built via automated self-improvement, though, as there are real possibilities of value drift, instrumental goals that broaden its moral scope to include more humans, emergent properties that produce unexpected behavior, or competing superintelligences designed to align with all of humanity. All of these possibilities, except the last, are problems for aligning superintelligence with all of humanity too.
So I said two different things, which made my argument unclear: first, “assuming superintelligence comes aligned with human values,” and then, “AI could lead to major catastrophes, a global totalitarian regime, or human extinction.”
If we knew for sure that AGI was imminent and would eradicate all diseases, then I’d agree with you that it’s worth donating to malaria charities. Right now, though, we don’t know what the outcome will be. So, not knowing the outcome of alignment, do you still choose to donate to malaria charities, or do you allocate that money toward, say, a nonprofit actively working on the alignment problem?
Shameless plug: here’s my idea for a nonprofit that aims to help solve the alignment problem: https://forum.effectivealtruism.org/posts/GGxZhEdxndsyhFnGG/an-international-collaborative-hub-for-advancing-ai-safety?utm_campaign=post_share&utm_source=link