I’m skeptical about the tractability of making AGI development taboo within the next few years. It seems like this plan would require moderate timelines in order to be viable.
That said, I’m starting to wonder whether we should be trying to gain support for a pause in theory: specifically, getting people to agree that in an ideal world, we would pause AI development where we are now.
That could help open up the Overton window.
AGI development is already taboo outside of tech circles. Per the September poll by the AIPI, only 12% disagree that “Preventing AI from quickly reaching superhuman capabilities” should be an important AI policy goal (56% strongly agree, 20% somewhat agree, 8% somewhat disagree, 4% strongly disagree, 12% not sure). And even though world leaders are themselves influenced by tech circles’ positions, leaders around the world have been quite clear that they take the risk seriously.
The only reason AGI development hasn’t been halted already is that the general public does not yet know that big tech is both trying to build AGI and actually making real progress towards it.
The taboo only really needs to kick in on moderate timelines, so we’re in luck :) On short timelines, only massive data centres and the leading AI labs need to be regulated.