Is this question too hard to answer? Also, if we predict that AGI will be banned or at least heavily restricted within a few years, wouldn't this problem become less urgent, and the priority of AI safety be lowered accordingly?