I agree there are some possible attitudes that society could have towards AI development which could put us in a much safer position.
I think that the degree of consensus you’d need for the position that you’re outlining here is practically infeasible, absent some big shift in the basic dynamics. I think that the possible shifts which might get you there are roughly:
1. Scientific ~consensus—people look to scientists for thought leadership on this stuff. Plausibly you could have a scientist-driven moratorium (this still feels like a stretch, but less of a stretch than society changing how it sees AI without scientists leading that shift).
2. Freak-out about everyday implications of AI—sufficiently advanced AI would not just pose unprecedented risks, but also represent a fundamental change in the human condition. This could drive a tide of strong sentiment that doesn't rely on abstract arguments about danger.
3. Much better epistemics and/or coordination—out of reach now, but potentially obtainable with stronger tech.
I think there’s potentially something to each of these. But I think the GDM paper is (in expectation) actively helpful for 1 and probably 3, and doesn’t move the needle much either way on 2.
(My own view is that 3 is the most likely route to succeed. There's some discussion of the pragmatics of this route in AI Tools for Existential Security and AI for AI Safety (both of which also discuss automation of safety research, which is another potential success route), and there are relevant background views on the big-picture strategic situation in the Choice Transition. But I also feel positive about people exploring routes 1 and 2.)
These are in the same category because:
I’m talking about game-changing improvements to our capabilities (mostly via more cognitive labour; not requiring superintelligence)
These are the capacities we need to help everyone recognize the situation we're in and come together to do something about it (and they are partial substitutes: the better everyone's epistemics are, the less need there is for a big coordination lift that has to cover people seeing the world very differently)
I'm not actually making a claim about alignment difficulty—beyond that I do think systems in the vein of today's, and their near successors, look pretty safe.
I think that getting people to pause AI research would be a bigger lift than any nonproliferation treaty we've had in the past (not that such treaties have always been effective!). This isn't just a military tech; it's a massively valuable economic tech. Given the incentives, and the importance of having treaties actually followed, I do think this would be a more difficult challenge than any past nonproliferation work. I don't think that means it's impossible, but I do think it's way more likely if something shifts—hence my 1-3 above.
(Or, if you were asking why I say "out of reach now" in the quoted sentence: it's because I'm literally talking about "much better coordination" as a capability, not about what could or couldn't be achieved with a given level of coordination.)