It would depend on whether the risk from AGI is a one-time risk that goes away once humans figure out alignment, or an ongoing one.
Alignment may be impossible. As a SWE working in AI, I don't know of any plausible method for the kind of alignment discussed here.
Risk mitigation is possible, in that we can stack together serial steps that must all fail before an AGI can escape and do meaningful damage, along with countermeasures (pre-constructed weapons and detection institutions) ready to respond when it does.
But the risk remains nonzero and recurring. Each century there is always some risk that AGIs escape human control, and that risk persists as long as "humans" are significantly stupider and less rational than AGI. I don't know whether "humans augment themselves so much to compete that they are no longer remotely humanlike" counts as the extinction of humanity or not.