I am curious how you think about integrating social and political feedback loops into timeline forecasts.
Roughly speaking, assuming that
(a) we remain in the paradigm of relatively predictable progress (in terms of the amount of progress, not specific capabilities) enabled by scaling laws,
(b) we put significant probability on being fairly close to TAI, e.g. within 10 years, and
(c) model progress remains clearly observable by the broader public,
then it seems that social and political factors might account for a large share of the variance in expected timelines (by affecting your (E2)).
E.g. things like
(i) Sam Altman seeking to discontinuously increase chip supply,
(ii) how the next jump in capabilities will be perceived, e.g. if GPT-5 turns out to be a jump of similar size to the one from GPT-3 to GPT-4, what policy and investment responses will this trigger?
(iii) whether there will be a pre-TAI warning shot that leads to a significant shift in the Overton window and more serious regulation.
It seems to me that those dynamics should account for a larger share of the variance the closer we get, so I am curious how you think about this and whether you will include it in your upcoming work on short timelines.
Hi Jack, thanks for your comment! I think you’ve raised some really interesting points here.
I agree that it would be valuable to consider the effect of social and political feedback loops on timelines. This isn’t something I have spent much time thinking about yet—indeed, when discussing forecast models within this article, I focused far more on E1 than I did on E2. But I think that (a) some closer examination of E2 and (b) exploration of the effect of social/political factors on AI scenarios and their underlying strategic parameters (including their timelines!) are both within the scope of what Convergence’s scenario planning work hopes to eventually cover. I’d like to think more about it!
If you have any specific suggestions about how we could approach these issues and explore these dynamics, I’d be really keen to hear them.
I am also just beginning to think about this more, but some initial thoughts:
Path dependency from self-amplifying processes—Thinking about model generations as forks where significant changes in the trajectory become possible (e.g. crowding in a lot more investment, as happened with ChatGPT/GPT-4, but also a shifted Overton window). I think this introduces a dynamic where the extremes of the scenario space become more likely, with social dynamics such as a strong increase in investment or, on the other hand, stricter regulation after a warning shot becoming self-amplifying. As the sums get larger and the public and policymakers pay far more attention, I think the development process will become a lot more contingent (my sense is that you are already thinking about these things at Convergence). A toy sketch after this list illustrates the intuition.
Modeling domestic politics and geopolitics—e.g. the Biden and Trump AI policies would probably look quite different, as would the outlook for race dynamics (essentially all mentions of artificial intelligence in Project 2025, a Heritage-backed attempt to define priorities for an incoming Republican administration, are about science dominance and/or competition with China; there is no discussion of safety at all).
Modeling more direct AI progress > AI politics > AI policy > AI progress feedback loops, i.e. based on what we know from past examples or theory: what kind of labor displacement would one need to see to expect serious backlash? What kind of warning shot would likely lead to serious regulation? And similar questions.
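To make the first point concrete, here is a minimal Monte Carlo sketch of the claim that self-amplifying feedback pushes probability mass toward the extremes of the scenario space. All parameters (the two-year fork cadence, the 1.25x/0.8x pace multipliers, the progress threshold) are made-up illustrations, not estimates.

```python
import random
import statistics

def simulate_arrival_year(feedback: bool, threshold: float = 10.0) -> int:
    """Years until cumulative 'effective progress' crosses a fixed threshold."""
    progress, pace, year = 0.0, 1.0, 0
    while progress < threshold and year < 60:
        year += 1
        # noisy annual progress, scaled by the current pace of the field
        progress += pace * random.lognormvariate(0.0, 0.3)
        if feedback and year % 2 == 0:
            # a "fork": a new model generation lands and the social reaction compounds
            if random.random() < 0.5:
                pace *= 1.25  # visible jump -> investment crowds in
            else:
                pace *= 0.80  # warning shot -> regulation bites
    return year

random.seed(0)
baseline = [simulate_arrival_year(feedback=False) for _ in range(10_000)]
with_fb = [simulate_arrival_year(feedback=True) for _ in range(10_000)]

for name, runs in [("no feedback", baseline), ("self-amplifying feedback", with_fb)]:
    print(f"{name:>25}: median={statistics.median(runs):.1f}y, "
          f"stdev={statistics.pstdev(runs):.1f}y")
```

In this toy setup the median arrival time barely moves, but the spread grows noticeably, i.e. more probability mass ends up at both the short and the long ends, which is the sense in which I mean the extremes become more likely.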
Fascinating stuff!