Hi Jack, thanks for your comment! I think you’ve raised some really interesting points here.
I agree that it would be valuable to consider the effect of social and political feedback loops on timelines. This isn’t something I have spent much time thinking about yet; indeed, when discussing forecast models within this article, I focused far more on E1 than on E2. But I think that (a) some closer examination of E2 and (b) exploration of the effect of social/political factors on AI scenarios and their underlying strategic parameters (including their timelines!) are both within the scope of what Convergence’s scenario planning work hopes to eventually cover. I’d like to think more about it!
If you have any specific suggestions about how we could approach these issues and explore these dynamics, I’d be really keen to hear them.
I am also just beginning to think about this more, but some initial thoughts:
Path dependency from self-amplifying processes: thinking about model generations as forks where significant changes in the trajectory become possible (e.g. crowding in a lot more investment, as has happened with ChatGPT/GPT-4, but also a shifted Overton window, as has also happened). I think this makes the extremes of the scenario space more likely, since social dynamics such as a strong increase in investment or, on the other side, stricter regulation after a warning shot tend to be self-amplifying. As the sums get larger and the public and policymakers pay far more attention, I think the development process will become a lot more contingent (my sense is that you are already thinking about these things at Convergence). A toy sketch of this "extremes become more likely" intuition follows these points.
Modeling domestic and geopolitics: e.g. the Biden and Trump AI policies probably look quite different, as does the outlook for race dynamics (essentially all mentions of artificial intelligence by Project 2025, a Heritage-backed attempt to define priorities for an incoming Republican administration, are about science dominance and/or competition with China; there is no discussion of safety at all).
Modeling more direct AI progress > AI politics > AI policy > AI progress feedback loops: based on what we know from past examples or theory, what kind of labor displacement would one need to see to expect serious backlash? What kind of warning shots would likely lead to serious regulation? And similar questions.
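To make the self-amplifying-dynamics point above a bit more concrete, here is a minimal Monte Carlo sketch (my own toy illustration, not any model from the article or from Convergence). It simulates a purely hypothetical "progress" index where each year's shock is amplified in proportion to how far the run has already drifted from a no-surprise baseline; all parameter values are made up, and the only point is that switching on the feedback term widens the spread of outcomes, i.e. the tails of the scenario space get fatter.

```python
# Toy Monte Carlo sketch of a self-amplifying investment/attention loop.
# Not anyone's actual forecasting model; all parameters are made up for illustration.
import random

def simulate(feedback: float, years: int = 10, runs: int = 10_000) -> list[float]:
    """Cumulative 'AI progress' where each year's shock is amplified (feedback > 0)
    in proportion to how far the run has already drifted from a no-surprise baseline."""
    outcomes = []
    for _ in range(runs):
        progress = 1.0   # realised progress index
        baseline = 1.0   # expected-progress benchmark
        for _ in range(years):
            shock = random.gauss(0.10, 0.05)                  # exogenous yearly progress shock
            amplification = feedback * (progress - baseline)  # crowding-in / backlash term
            progress *= max(1 + shock + amplification, 0.0)   # clamp so progress stays non-negative
            baseline *= 1.10                                  # baseline grows at the mean shock rate
        outcomes.append(progress)
    return outcomes

if __name__ == "__main__":
    random.seed(0)
    for fb in (0.0, 0.5):
        samples = sorted(simulate(feedback=fb))
        p10 = samples[len(samples) // 10]
        p50 = samples[len(samples) // 2]
        p90 = samples[9 * len(samples) // 10]
        print(f"feedback={fb:.1f}: p10={p10:.2f}  median={p50:.2f}  p90={p90:.2f}")
```

With feedback switched on, runs that get ahead of the baseline are pulled further ahead and runs that fall behind are pushed further behind, so the p10/p90 spread widens noticeably relative to the no-feedback case. That is the sense in which self-amplifying social dynamics make the extremes of the scenario space more likely, at least in this toy setup.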