AGI Timelines in Governance: Different Strategies for Different Timeframes

Summary Table

Expectations under pre-2030 timelines:

  • AGI will be built by an organization that’s already trying to build it (85%)

  • Compute will still be centralized at the time AGI is developed (60%)

  • National government policy won’t have strong positive effects (70%)

  • The best strategies will have more variance (75%)

Expectations under post-2030 timelines:

  • Some governments will be in the race (80%)

  • More companies will be in the race (90%)

  • China is more likely to lead than pre-2030 (85%)

  • There will be more compute suppliers[1] (90%)

Comparatively more promising strategies under pre-2030 timelines[2]:

  • Aim to promote a security mindset in the companies currently developing AI (85%)

  • Focus on corporate governance (75%)

  • Target outreach to highly motivated young people and senior researchers (80%)

  • Avoid publicizing AGI risk (60%)

  • Beware of large-scale coordination efforts (80%)

Comparatively more promising strategies under post-2030 timelines[2]:

  • Focus on general community building (90%)

  • Build the AI safety community in China (80%)

  • Coordinate with national governments (65%)

Probability estimates in the “promising strategies” categories should be interpreted as the likelihood that the strategy/consideration is more promising/important under those timelines than under the other.

Introduction

Miles Brundage recently argued that AGI timeline discourse might be overrated. He makes a lot of good points, but I disagree with one thing. Miles says: “I think the correct actions are mostly insensitive to timeline variations.”

Unlike Miles, I think that if timelines differ by more than a couple of years, the choice of actions does depend on those differences[3]. In particular, our approach to governance should be very different depending on whether we think AGI will be developed in ~5-10 years or later. In this post, I list some likely differences between a world in which AGI is developed before ~2030 and one in which it is developed after, and discuss how those differences should affect how we approach AGI governance. I discuss most of the strategies and considerations in relative terms, i.e. I argue why they’re likely to be significantly more crucial under certain timelines than others. I chose these particular strategies and considerations because I believe they are important for AI governance, or at least likely to be effective under one of the two timelines I am considering.

I chose 2030 as a cut-off point because it is easy to remember and it seems to make sense to differentiate between the actions that should be prioritized in the time leading up to 2030 (~5-10 years timelines) and those that should be prioritized after 2030 (15-20 years and beyond timelines). But perhaps a better way to read this post is ‘the sooner you think AGI will be developed, the more likely my points about the pre-2030 AGI world are to be true’, and vice versa.

Epistemic status and reasoning behind publishing this:

  • Not many people seem to have given detailed thought to this issue, even though it strikes me as a major consideration; that is why I wrote this post. I included probability estimates to help you assess my level of certainty for each claim, even though not all claims are easy to verify.

  • If there were no trade-offs to consider, we could implement almost all interventions at the same time. However, in practice, talent is limited and I expect that there will be trade-offs in terms of how resources are allocated among different timelines. I expect to update a significant portion of the claims I have made based on feedback and new information that becomes available in the coming years. This is because there may be considerations that I have missed and AI governance is complex and uncertain. Therefore, my probability estimates have low resilience.

Thanks to Nicole Nohemi, Felicity Reddel, Andrea Miotti, Fabien Roger and Gerard van Smeden for the feedback on this post.

If AGI is developed before 2030, the following is more likely to be true:

AGI will be built by an organization that’s already trying to build it (85%)

Building huge AI models takes a lot of accumulated expertise. The organizations currently working on AI rely on huge internal libraries and repositories of tricks that they’ve built up over time. It’s unlikely that a new organization or actor, starting from scratch, could achieve this within a couple of years[4]. This means that if AGI is developed before 2030, it’s likely to be first developed by one of the (<15) companies that are currently working on it.

In decreasing order of likelihood:

  • Most likely: OpenAI or DeepMind, since their experience will be hard for others to beat within 7 years.

  • Possibly: one of the FAANG companies

  • Least likely: a recently created lab (Adept, Inflection, Keen Technologies, Cohere, Character, etc.)

This is relevant because DeepMind and OpenAI are more concerned about safety than others. They both have alignment teams and their leaders have expressed commitments to safety, whereas (for example) Meta and Amazon seem less interested in safety.

Compute will still be centralized at the time AGI is developed (60%)

The compute supply chain is currently highly concentrated at several points. This is partly because, even though selling compute is lucrative, the machines and fabs needed to produce chips are extremely expensive, so companies must make a massive initial investment just to get started. On top of that, the required initial R&D investments are huge.

This is relevant because we can leverage the compute supply chain for AI governance. For example, we could encourage suppliers to put on-chip safety mechanisms in place. However, this is more likely to work if there are fewer companies in the supply chain.

National government policy won’t have strong[5] positive effects (70%)

Governments are slow, and the policy cycle is long. Advocacy efforts usually take years to bear fruit. First, advocates have to raise awareness and shift public opinion in the right direction, and politicians will only take note if their constituents care. If you think that AGI will be developed by 2030, there is likely not enough time to influence national governments in a way that leads them to take strong measures, so governance interventions that rely on national policy or law are less likely to be useful.

I think this matters particularly for the US because the contribution of the US government seems indispensable for most policies that can significantly impact AGI timelines or AGI governance. However, the US government will likely only get involved in X-risk related topics if there is strong support from the public. Achieving the necessary levels of support would require a significant shift in public opinion. Unfortunately, such major shifts probably take more than 7 years and are highly uncertain processes.

The best strategies will have more variance (75%)

If timelines are short, we should be more willing to tolerate variance[6] since we have much less time to explore the possible strategies and can’t wait for slower, less risky strategies to pan out. Timelines seem pretty crucial to calibrate our risk aversion, especially for funders. I think that this consideration is one of the most important effects of timelines on macro-strategy.

Here are some decisions on which this should have a significant effect:

  • We absolutely want org X to exist. Founders Y and Z seem good but not ideal. Should we fund them?

  • In a world with post-2030 timelines, delaying the creation of a crucial organization for a couple of years to get better founders is most likely the best thing to do. It’s probably not the case with pre-2030 timelines.

  • As an organization, should we wait a few more years before entering the AI governance space, in order to have a better and clearer understanding of what we ought to do?

  • Shorter timelines should decrease the threshold for determining whether an idea is worth implementing.
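As a toy illustration (all the numbers and strategy names below are invented for the sake of the example), the explore/exploit logic behind this can be sketched as a simple expected-value comparison: a strategy that cannot mature before the deadline is worth nothing, however safe it is.

```python
# Toy model (all numbers invented): why shorter timelines favor
# higher-variance strategies. Each strategy has a probability of
# paying off and a number of years before it can pay off at all.
strategies = {
    "safe-but-slow": {"p_success": 0.9, "years_to_mature": 8},
    "risky-but-fast": {"p_success": 0.4, "years_to_mature": 2},
}

def expected_value(strategy, years_remaining):
    # A strategy that cannot mature before the deadline is worth nothing.
    if strategy["years_to_mature"] > years_remaining:
        return 0.0
    return strategy["p_success"]

for deadline in (7, 20):  # roughly "AGI by 2030" vs "AGI well after 2030"
    best = max(strategies, key=lambda name: expected_value(strategies[name], deadline))
    print(f"{deadline} years left -> {best}")
# With 7 years left, the risky-but-fast strategy dominates (the safe one
# never matures in time); with 20 years left, the safe one wins.
```

Real strategy choice of course involves continuous payoffs and uncertainty over the timelines themselves; the point is only that a hard deadline zeroes out the value of slow, low-variance strategies.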

If you think that AGI will be developed before 2030, it would make sense to:

Aim to promote a security mindset in the companies currently developing AI (85%)

Some governance strategies involve pushing for a security mindset among AI developers (using outreach) so that they voluntarily decide to do things that make AGI less dangerous. Any researcher at DeepMind, Google Brain, or OpenAI who starts taking AI risks seriously is a huge win because it:

  • Increases the expected amount of work in alignment

  • Decreases the expected amount of capabilities work

  • Increases the chances that, given their lab develops the first AGI, it will be an aligned one.

If you think AGI will be developed very soon, these strategies are more likely to be promising since there are still relatively few companies aiming to develop AGI. Later, there will be more such companies, increasing coordination difficulty and decreasing the cost-effectiveness of efforts that will target companies individually. For example, you might encourage companies to create an alignment team if they don’t have one or, if they do have one, increase their funding.

Prioritize targeted outreach to highly motivated young people and senior researchers (80%)

If timelines are short, focus outreach efforts on senior researchers[8] or on people who will be able to contribute within the next few years, i.e. highly motivated young people[9]. Community building aimed at undergraduates who want to complete a standard curriculum before working on the problem is likely to have a much lower EV under short timelines[10]. General EA community building would also have much less time to pay off than more targeted AI safety outreach.

Avoid publicizing AGI risk among the general public (60%)

It’s difficult to explain why AI is dangerous without also explaining why it’s powerful. This means that trying to mitigate risk by raising awareness might backfire: you might inadvertently persuade governments to enter the race sooner than they otherwise would. Governments and national defense organizations have a worrying track record of not caring whether a powerful technology is dangerous to develop. If your timelines are short, it might therefore make sense not to publicize AGI risk to the general public, so as not to prompt governments to enter the race. On the other hand, if your timelines are longer, governments will likely become aware of AGI’s power anyway, so it might make more sense to publicize AGI risks, with an emphasis on the risks[11].

Note that this advice holds only when governments don’t know much about AGI. If AGI is already being discussed or is already an important consideration for them, then talking about accident risks is likely a good strategy.

Beware of large-scale coordination efforts (80%)

Large-scale coordination efforts involving many actors usually take a long time to have effects. Therefore, if your governance plan for pre-2030 timelines relies on such a mechanism, you should probably begin implementing it in the next few years, and thus start building now the coalitions you would need to succeed. Favoring actors that move faster might also be good.

Focus on corporate governance (75%)

There will be more AI companies in the future, and governments will also be in the race. This means that corporate governance, i.e. achieving cultural change and coordination among AI companies, is a much less promising strategy under longer timelines than under shorter ones, compared with governance that involves governments. Moreover, some of the top labs’ governance teams are genuinely concerned about AGI risks and seem to be acting to make AGI development as safe as possible. Engaging with these actors and ensuring that they have all the tools and ideas they need to actually cut the right risks therefore seems promising to me.

If AGI is developed after 2030, the following is more likely to be true:

Some governments will be in the race (80%)

National governments are likely to eventually realize that AGI is incredibly powerful and will try to build it. In particular, national defense organizations may try to develop it. If you believe that AGI will be developed after 2030, it is possible that it will be developed by a government, as they may have had time to catch up with the organizations currently working on it by that point.

More companies will be in the race (90%)

If AGI is developed later than 2030, it may be developed by a new company that has not yet started building it. Given the number of companies that started racing in 2022, it seems plausible that in 2030 there will be more than 50 companies in the race.

China is more likely to lead (85%)

Chinese companies and the government are currently lagging in AI development, but they’re making progress quite quickly. I think they’re decently likely to catch up to Western companies eventually (I’d put 35% by 2035). The recent export controls on semiconductors may have made that a lot more difficult, but they’ll probably try harder than ever to develop their own chip supply chain. This seems to be a crucial consideration because Chinese AI developers currently don’t care much about safety, and the safety community doesn’t have much influence in China.

There will be more compute suppliers[12] (90%)

Despite the high barriers to entry, it has become clear in recent years that compute will be hugely important, so there are likely to be more companies at all stages of the compute supply chain in the future. For example, the Chinese government is currently trying to build its own compute supply chain, and startups such as Cerebras are trying to enter the market. This means that compute governance strategies that rely on coordinating with a small number of compute companies will probably be less promising.

If you think that AGI will be developed after 2030, it would make sense to:

Focus on general community building (90%)

The later AI is developed, the more useful it is to do community building now, because many of the results of community building take a while to bear fruit. If a community builder gets an undergraduate computer scientist interested in AI safety, it may be many years before they make their greatest contributions. Great community builders also recruit and/or empower new community builders, who go on to form their own cohorts, which means that a community builder today might be counterfactually responsible for many new AI researchers in 20 years. If you think that AGI won’t be developed for 10 years, building the AGI safety community (or the EA community in general) is probably one of the most effective things for you to do.

Note that community building is promising even on shorter time scales but is particularly exciting under post-2030 timelines (potentially more than anything else).

Build the AI safety community in China (80%)

If your timelines are longer, AGI is more likely to be developed by the Chinese government or a Chinese company. There is currently not a large EA or AI safety community in China. So if you think AGI will be developed after 2030, you should try to build bridges with Chinese ML researchers and AI developers[13]. It’s especially important not to frame AI governance questions adversarially, as ‘US vs China’, since this could make it harder for the US and European safety communities to build alliances with Chinese developers. AI safety may then become politicized as ‘an annoying thing that domineering Americans are trying to impose on us’ rather than seen as common sense.

Coordinate with national governments (65%)

This is a more promising strategy if your timelines are longer, because national governments are then more likely to be both developing AGI themselves and generally interested in AGI policy. One way to gain influence on AGI governance within national governments is to become a civil servant or politician; another is to become a recognized expert in AGI governance in the relevant country.

I am unsure if theories of change that utilize compliance mechanisms will be more or less effective after 2030. The lengthy process of policy development, including setting standards and establishing an audit system that prevents loopholes, suggests that compliance mechanisms may be more effective after 2030. However, the possibility that China may be in a leadership position could mean that compliance mechanisms will rely heavily on the Brussels effect, which is not a very reliable compliance mechanism.

I would say that post-2030 timelines probably favor these theories of change, but not very confidently.

Conclusion

To summarize, whether you have a 5-10 year timeline or a 15-20 year timeline changes the strategic landscape in which we operate and thus changes some of the strategies we should pursue.

Under pre-2030 timelines:

  • National policy matters less (governments are not involved in the race)

  • Corporate governance matters more

  • There are fewer than 15 key labs that are most likely to develop AGI, and they are located in the US and the UK

  • AI safety field building should be very focused on people who can contribute in the next few years (i.e. senior researchers & highly motivated people)

  • China is much less likely to lead the race at any point

  • Compute is centralized, which leaves room for compute governance

I’m looking forward to reading your comments and disagreements on this important topic. I’m also happy to have a call if you want to talk more in-depth about it (https://calendly.com/simeon-campos/).

This post was written collaboratively by Siméon Campos and Amber Dawn Ace. The ideas are Siméon’s; Siméon explained them to Amber, and Amber wrote them up. Then Siméon partly rewrote the post on that basis. We would like to offer this service to other EAs who want to share their as-yet unwritten ideas or expertise.

If you would be interested in working with Amber to write up your ideas, fill out this form.

  1. ^

    This is a prediction about the number of suppliers that represent more than 1% of the market they operate in, not the size of the market or the total production. Some events could lead to some supply chain disruptions that could overall decrease the total production of chips.

  2. ^

    Probability estimates in this category have to be interpreted as the likelihood that this strategy/consideration is more promising/important under timelines X than timelines Y.

  3. ^

    Naturally, if timelines turn out to be longer, the same “couple of years” estimation differences make a smaller difference in what actions would be best.

  4. ^

    Main caveat: recent startups such as Adept.ai and Cohere.ai were founded by team leads or major researchers from leading labs. Thanks to their expertise, they’re fairly likely to reach the state of the art in at least one subfield of deep learning. That said, most of these organizations are quite unlikely to have the compute and money that OpenAI and DeepMind have.

  5. ^

    By strong, I mean measures in the reference class of “Constrain labs to airgap and box their SOTA models while they train them”.

  6. ^

    In the exploration vs exploitation dilemma, you should start exploiting earlier, and thus tolerate a) more downside risk and b) a higher chance of not having found the best option.

  7. ^

    And who want to contribute to solving alignment.

  8. ^

    The most relevant senior researchers are probably those working in top labs and those who are highly regarded in the ML community. Reaching them is much less tractable than reaching young people, but over the next 5 years it’s probably at least 10 times more valuable to have a senior researcher start caring about AI safety than a junior one. Thus, I’d expect this intervention to be highly valuable under short timelines.

  9. ^

    Obviously, how talented the people are matters a lot. I mostly want to underline the fact that for someone to start contributing in the next couple of years, the most important factor is probably motivation.

  10. ^

    Note that under post-2030 timelines, the effect of having a lot more PhD students in AI safety in the next few years is probably quite high, mostly due to cultural effects of “AI safety is legible and is a big thing in academia”.

  11. ^

    One key consideration here is the medium you use to publicize the risks. AI alignment is a very complex problem, so you need to find the media that maximize the complexity you can successfully transmit. Movies seem to be a promising avenue in that respect.

  12. ^

    This is a prediction about the number of suppliers that represent more than 1% of the market they operate in, not the size of the market or the total production. Some events could lead to some supply chain disruptions that could overall decrease the total production of chips.

  13. ^

    Note that it’s recommended to talk to people with experience on the topic if you want to do that.

Crossposted from LessWrong (63 points, 28 comments)