Assuming human-level AGI is expensive to develop and of potential military value, it seems likely that the government of the USA, and probably other powers such as China, will be strongly involved in its development.
Is it now time to create an official process for international, government-level coordination on AI safety? Is that realistic and desirable?