Assuming human-level AGI is expensive to develop and of potential military value, it seems likely that the governments of the USA, and probably other powers such as China, will be strongly involved in its development.
Is it now time to create an official process for international, government-level coordination on AI safety? Is that realistic and desirable?
Why do you think it’s stupid? Sometimes people get tortured horribly, or die slowly of horrible causes. Surely you need some minimum on the positive side to outweigh that?
People can also have a negative quality of life and still keep going and reproducing, for various other reasons.