AGI development is already taboo outside of tech circles. Per the September poll by the AIPI, only 12% disagree that “Preventing AI from quickly reaching superhuman capabilities” should be an important AI policy goal. (56% strongly agree, 20% somewhat agree, 8% somewhat disagree, 4% strongly disagree, 12% not sure.) And even though world leaders are themselves influenced by tech circles’ positions, they have been quite clear that they take the risk seriously.
The only reason AGI development hasn’t been halted already is that the general public does not yet know that big tech is both trying to build AGI and actually making real progress towards it.