I don’t know what a mass movement to align AGI would look like (so how could I build one?), and I’d expect polarization dynamics to undermine it and make the situation worse (so I wouldn’t want to). It’s possible this explains why one hasn’t happened. The issue is also inherently somewhat incompatible with polarization, since it affects everyone equally, which will limit the airtime it gets.
My experience is that a pretty large proportion of people actually do find the alignment problem immediately concerning when it’s explained well. But then they can’t see anything they can do about it, so they rightly don’t think about it very much. This is probably for the best.
I’d discourage trying to promote the issue (beyond AI research communities) until someone can present a detailed, realistic story as to how making it political could possibly help. (I’ve written up a big part of that story… but it rests on an as-yet-unrealized technical dependency.)