[Question] What are the top priorities in a slow-takeoff, multipolar world?

For years, the model of AI risk I mostly believed in was fast takeoff, and therefore a unipolar world.[1] That model gave me some concrete ideas about what the EA community should do to make AI go better. Now I am at least half-persuaded of slow-takeoff, multipolar worlds (1, 2), but I have much less idea what to do in that kind of world. So, what should the top priorities be for EA longtermists who want to make AI go well?

Fast-takeoff, unipolar priorities, as I see them (writing quickly):

  • Get the top AI labs concerned about safety

    • This means that if they feel they're close to AGI, they will hopefully be receptive to whatever the state of the art in alignment research is

  • Try to solve the alignment problem in the most rigorous way possible.

    • After all, we only get one shot

  • [Less obvious to me] Try to get governments concerned about safety in case they nationalize AI labs, but without increasing the likelihood of nationalization by shouting about how incredibly powerful AI is going to be.

Multipolar, slow-takeoff worlds:

  • Getting top AI labs concerned about safety seems much harder in the long term, as they become increasingly economically incentivized to ignore it.

  • Trying to solve the alignment problem in the most rigorous way possible seems less necessary, since we presumably get more than one shot. Also, maybe alignment is easier in this world, and therefore less likely to be the thing that fails.

  • Governments might be captured by increasingly powerful private interests, or there might be AI-powered propaganda that does … something to their ability to function.

Broadly, in this world I'm much more worried about race-to-the-bottom dynamics. In Meditations on Moloch terms, instead of AI being the solution to Moloch, Moloch becomes the largest contributor to AI x-risk.

I’m interested in all sorts of comments, including:

  • What should the top priorities be to make AI go well in a slow-takeoff world?

  • Challenging the hypothetical

  • Is there anything wrong with this analysis?


  1. ↩︎

    See Nick Bostrom's Superintelligence.