Do you support efforts calling for a global moratorium on AGI (to allow time for alignment research to catch up / to establish whether alignment of superintelligent AI is even possible)?
I’m definitely interested in seeing these ideas explored, but I want to be careful before getting super into it. My guess is that a global moratorium would not be politically feasible. But pushing for one could still be worth doing even if it is unlikely to happen: it could be a galvanizing ask that brings broader attention to AI safety issues and makes other policy asks seem more reasonable by comparison. I’d like to see more thinking about this.

On the merits of the actual policy, I am unsure whether a moratorium is a good idea. My concern is that it may just produce a larger compute overhang (a growing stock of available compute that post-pause models could suddenly exploit), which could increase the likelihood of discontinuous and hard-to-control AI progress down the line.

Some people in our community have been convinced that an immediate and lengthy AI moratorium is a necessary condition for human survival, but I don’t currently share that assessment.
> I’m definitely interested in seeing these ideas explored, but I want to be careful before getting super into it. My guess is that a global moratorium would not be politically feasible. ...

Good to see that you think the ideas should be explored. I think a global moratorium is becoming more feasible, given the UN Security Council meeting on AI, the UK Summit, the Statement on AI Risk, public campaigns, etc.

> On the merits of the actual policy, I am unsure whether a moratorium is a good idea. My concern is that it may just produce a larger compute overhang ...

Re compute overhang, I don’t think this is a defeater: an overhang is only dangerous if the pause is lifted before alignment is solved. We need the moratorium to be indefinite, and only lifted once there is a global consensus on an alignment solution (and perhaps even a global referendum before pressing go on more powerful foundation models).

> Some people in our community have been convinced that an immediate and lengthy AI moratorium is a necessary condition for human survival, but I don’t currently share that assessment.

This makes sense given your timelines and p(doom) outlined above. But I urge you (and others reading) to reconsider the level of danger we are now in[1].
Or, ahem, to rethink your priorities (sorry).