Great stuff Thomas.

I think a pause on AI progress wouldn’t be very helpful unless used in concert with other effective governance interventions, such as the ones that I have outlined above.
Agree.
A longer pause that lasts until we are confident that we have robust AI safety measures in place that allow for safe deployment would be helpful. I’m currently in favor of building the capacity of the world to create a long pause on AI.
As a result, I’m only excited about versions of a pause that don’t return to “AI progress as usual”, after the pause is over.
Yes, I don’t think anyone is seriously proposing a fixed-expiry pause at this point (FLI’s “6 month” letter was really just a foot-in-the-door / Overton Window move, I think). Pause in my thinking is basically shorthand for “global indefinite pause of frontier AI development, until global consensus is reached on an x-safety solution (including solving the alignment problem, preventing misuse, and ensuring multi-agent coordination); including accepting that this may not be possible such that the pause becomes effectively permanent[1]”.
[1] But fear not: we can still have a good future, including all the nice things; it might just take a bit longer.