I basically grant 2, sort of agree with 1, and drastically disagree with 3 (that timelines will be long).
This makes me a bit of an outlier: while I do have real confidence in the basic story that governments are likely to influence AI a lot, I doubt that governments will try to regulate AI seriously, especially if timelines are short enough.
In retrospect, I agree more with 3: while I still think AI timelines are plausibly very short, post-2030 timelines now seem reasonably plausible to me.
I have become less convinced that takeoff speed, from the perspective of the state, will be slow. This is slightly because entropix reduced my confidence in the view that algorithmic progress won't suddenly go critical and make AI radically better, but more so because I now expect progress to be less flashy and public. Most importantly, I think the gap between consumer AI and the internal AI used at labs like OpenAI will only widen, so I expect the GPT-4-style moments, where the public was wowed by and grew very concerned about AI, largely won't happen again.
So I expect AI governance to have less salience, by the time AIs can automate AI research, than the current AI governance field assumes. Overall, this has reduced my probability of a strong societal response from roughly 80-90% to 45-60%.
Appreciate the updated thoughts!