What term to use for AI in different policy contexts?

There is a cluster of terms (frontier AI, AGI, etc.) that are commonly used when talking or writing about AI systems, particularly when discussing how AI could cause existential or other catastrophes. This post gives a quick overview of some of the ones that are most common among people who focus on extreme risks.

I hope that the post will help people who are communicating about AI to choose a term that (a) captures well the types of AI systems that they have in mind, and (b) will not have unnecessarily negative connotations for their particular audience. I particularly have in mind people who are speaking to non-technical and non-specialist audiences, such as people who attempt to improve government policy around AI risk.

I first share my bottom lines on which terms I think are best in different contexts. Most of the post is then an overview of commonly-used terms. In later sections I briefly cover other terms, share relevant thoughts from other people in the field, and link to other sources that are relevant to this question.

I’m writing this post in a personal capacity; it doesn’t necessarily reflect the views of my employer.

Bottom lines

I mainly discuss six terms in this post: “frontier AI”, “advanced AI”, “general-purpose AI”, “AGI”, “TAI”, and “superintelligence”. Different terms will work best in different contexts, and it seems fine to me for the field to continue using several terms.

Here are some bottom lines about which terms seem best to me in different contexts:

  • “Frontier AI” is very helpful if focusing on the most advanced models at a given moment in time. A lot of current AI governance work seems to be in this category. That said, AI governance should not just focus on this subset of models; models behind the frontier could also be dangerous, particularly as AI capabilities advance.

  • As a default, I like “advanced AI” because it is so non-jargon-y and neutral. That said, communicators would generally need to define what they mean by “advanced” when using it. What AI systems count as advanced?

  • “General-purpose AI” and “AGI” both point to systems that can achieve a wide range of tasks.[1] This seems useful for a lot of AI governance work. General-purpose AI might sound less speculative and is less associated with AI developers, in particular OpenAI. This could be helpful or unhelpful depending on the context.

  • I expect that “TAI” and “superintelligence” are typically worse than the other four terms, at least when speaking to non-technical and non-specialist audiences. TAI is jargon-y, and superintelligence sounds to many people like sci-fi.

I discuss these terms in more detail immediately below.

Overview of commonly-used terms

I discuss here six terms that are commonly used by people who focus on extreme risks from AI.[2] I give some examples of these terms being used, and provide “considerations” for each term. Considerations could often be either advantages or disadvantages, depending on the context.

“Frontier AI”, “frontier AI systems”, “frontier models”

Considerations:

  • Implies that the author is only referring to a small number of the most cutting-edge models at a given point in time.[3]

    • A lot of AI governance efforts currently focus on frontier models.

    • That said, non-frontier models are also important from an extreme risk perspective; non-frontier models may become powerful enough to cause a catastrophe, and non-frontier models will presumably be accessible to a larger and more varied set of actors.[4]

  • Connotations around the word “frontier” in general

    • One person said that they associate the term with American expansion/colonialism.

    • There’s evidence of the term “frontier” generally playing well in DC (at least among Democrats), e.g. with the proposed “Endless Frontier Act” and Kennedy’s “New Frontier”.

    • The word “frontier” may have positive connotations relating to discovery, innovation, etc. I assume that this is part of the reason why it is used in the policy initiatives above and in the name of the Frontier Model Forum.

  • Industry seems to have converged on this term, e.g. with the Frontier Model Forum, so people who use it might seem more sympathetic to industry than they would otherwise.

Examples of this term being used:

  • “Frontier Model Forum”, a new industry body that currently consists of Anthropic, Google, Microsoft, and OpenAI.

  • “Frontier AI Regulation: Managing Emerging Risks to Public Safety” (Anderljung et al., 2023).

    • They write that “‘frontier AI’ models [are] highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety”.[5]

    • This implies a different meaning of “frontier” to the one that I describe above. Future models that are powerful by today’s standards but that are not cutting-edge when they are produced would presumably meet the Anderljung et al. definition.

  • “Model evaluation for extreme risks” (Shevlane et al., 2023). The authors focus on models “at the frontier”, though they also discuss “general-purpose AI systems”. See in particular the figure on page 3.

  • The document published by the White House about the July 2023 voluntary lab commitments.

“Highly capable foundation model” may convey a similar meaning and be more easily understandable to policymakers, e.g. because “foundation models” feature extensively in the EU AI Act.

“Advanced AI” or “advanced AI systems”

Considerations:

  • Very non-jargon-y

  • Doesn’t necessarily distinguish between models produced by the leading actors and everyone else.

  • It’s not immediately clear what is meant by “advanced”. I assume that people would generally need to specify how they are using the term.[6]

Examples of this term being used:

Similar considerations apply to the term “powerful AI systems”. Relative to “advanced AI”, “powerful AI” might sound more exciting, attractive, and/or scary. That might induce more race-like behavior, more appetite for risk-reduction measures, or both.

General-purpose AI

Considerations:

  • Some of the same considerations apply here as for AGI (see below). Some differences:

    • This term has a lower “capabilities bar” for inclusion. For example, both GPT-4 and a hypothetical future AGI system could be referred to as “general-purpose AI”.[7]

    • This term probably has fewer sci-fi connotations and is less associated with the AI safety community and with people such as Bostrom and Yudkowsky.

  • Anderljung et al. (2023) write: “We intentionally avoid using the term ‘general-purpose AI’ to avoid confusion with the use of that term in the EU AI Act and other legislation. Frontier AI systems are a related but narrower class of AI systems with general-purpose functionality, but whose capabilities are relatively advanced and novel.”

Examples of this term being used:

  • The current draft text of the EU AI Act: “‘general purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”

  • Various writing from the Future of Life Institute, e.g. Policymaking in the Pause. They define it (p. 6) as “an AI system that can accomplish or be adapted to accomplish a range of distinct tasks, including some for which it was not intentionally and specifically trained.”

  • My impression is that this term is relatively often used by mainstream think tanks.

One could use variants of this term to point to AI that is both general and powerful, e.g. “highly capable general-purpose AI”.

Artificial General Intelligence (AGI)

Considerations:

  • Many people seem to find this term off-putting because it feels like something from science fiction.

  • Some AI developers seem to be (explicitly) aiming for AGI.[8]

  • Some people (I think in particular from the AI ethics community) see the term “AGI” as “marketing hype” from AI developers.

  • Implies that the AI systems in question will be able to do a wide range of tasks.

    • Whether this is good or bad depends largely on the extent to which such systems are really what we want to focus on, whether in general or in a particular conversation/output.[9]

    • Arguably this also makes the term somewhat attention-hazardous; greater awareness or salience of the possibility of AGI systems might motivate additional efforts to build them, potentially increasing racing or reckless development.[10]

Examples of this term being used:

  • OpenAI, “Planning for AGI and beyond”.

    • Defines AGI as “AI systems that are generally smarter than humans”.

  • Towards best practices in AGI safety and governance” (Schuett et al., 2023).

    • “AI systems that achieve or exceed human performance across a wide range of cognitive tasks”

    • Note that this paper was specifically talking about companies that describe themselves as attempting to build AGI.

Transformative Artificial Intelligence (TAI)

Considerations:

  • Jargon-y. That may be unhelpful when communicating to a wide audience, but helpful if one wants to avoid generating hype around AI.

  • A range of types of AI systems could qualify as having “transformative” effects. For example, the term could cover both a single powerful general-purpose system and an outcome involving many narrow systems.

  • Experts may have different understandings of the term.

    • Karnofsky (2016) defines TAI as: “potential future AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution”. That leaves room for interpretation regarding whether only transitions in the form of accelerating economic development count, and how comparability/significance is determined.

    • Cotra (2020) writes: “I think of ‘transformative AI’ as software which causes a tenfold acceleration in the rate of growth of the world economy (assuming that it is used everywhere that it would be economically profitable to use it).” She presents this as an operationalization of Karnofsky’s definition. This seems to downplay the ways in which AI could have transformative effects other than economic growth, such as causing extinction.[11]

Examples of this term being used:

Superintelligence

(I occasionally also see the term “artificial superintelligence” or “ASI”.)

Considerations:

  • I expect that this term sounds weird or like sci-fi to many people, and that it will be polarizing among the AI ethics community, e.g. because it so explicitly focuses on future systems.

  • Emphasizes systems that are (much) smarter than humans, not just comparably smart. This emphasis seems helpful in some contexts, but note that systems could be extremely dangerous even if they are not much smarter than humans.[14]

  • This term is strongly associated with Nick Bostrom (due to his book Superintelligence).

  • The term may now be associated with OpenAI due to the posts listed below.

Examples of this term being used:

  • Superintelligence: Paths, Dangers, Strategies (Bostrom, 2014)

  • “Governance of superintelligence” (OpenAI, 2023)

    • The blogpost implies that the authors mean “AI systems [that] exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations”.

    • See also “Introducing Superalignment”, which is explicitly about OpenAI’s efforts to align superintelligence.

Terms that I’ll mostly leave for now

This section is generally for terms that do not closely match the categories that people who work on reducing existential and catastrophic risks from AI often want to point to.[15] That said, I did not think much about this categorization, and I expect that it would be best to use each of these terms in some contexts.

  • Advanced, Planning, Strategically aware systems (APS). See Carlsmith (2021).

  • Foundation models, e.g. the UK’s “Foundation Model Taskforce”.[16]

  • Generative AI.[17]

  • Generally-capable AI.

  • God-like AI. See e.g. this op-ed from the person who now leads the UK’s Foundation Model Taskforce.[18]

  • High-risk models / high-risk systems.

  • Human-level machine intelligence (HLMI).

  • Large language models (LLMs).[19]

Thoughts from others

When I shared an initial draft of this post, several people left particularly interesting and detailed comments. With those people’s permission, I reproduce some of these comments here.[20]

Ben Garfinkel:

Four terms I use:

  • General-purpose AI (when I want to talk about AI systems that can perform a very broad range of tasks—a category that encompasses both GPT-4 and future things people sometimes call “AGI”)

  • Advanced AI (when I want to reference AI systems significantly more sophisticated than the ones we have now).

  • Frontier AI (when I want to talk about AI systems that are—at a given point in time—comparatively more advanced than pretty much all other AI systems that exist at that time)

  • High-risk systems (It’s sometimes worth having a term that specifically distinguishes systems by the level of risk they pose. This isn’t going to perfectly correspond to any of the above categories).

I mostly don’t like “AGI” because I don’t really know what it means, even though it sounds like it means something distinct and specific—also has some baggage. “TAI” has some baggage and is often used confusingly. (People sometimes talk about “TAI” as though they’re talking about a particular AI system and sometimes talk about it as though they’re talking about a general state of affairs in the world.)

General point: I’m often wary of ways of speaking/​thinking about risks from AI that suggest there’s a discrete and identifiable threshold (e.g. “AGI”) where risks click over from non-catastrophic to catastrophic. So often prefer terms and ways of speaking that don’t give this impression (e.g. “risks from increasingly advanced AI systems” vs. “risks from AGI”).

Person 2:

[Some terms that I use are]

  • Potential future AI systems: to discuss risks from systems that don’t exist yet, and be clear it doesn’t include current systems. Can add other terms like “advanced”, to be more specific. I find it useful to be very clear if I’m talking about systems that don’t exist yet.

  • Advanced AI/​Frontier AI: to discuss systems comparatively more advanced than pretty much all other AI systems that exist at that time.

  • AGI: to discuss risks or policies that may be important for systems that roughly surpass human levels of general intelligence

Person 3:

A near-synonym for “frontier model” [...] is “highly capable foundation model”. Microsoft’s policy piece in May used that term and several variants. Many [policy] audiences are also becoming more familiar with the “foundation model” term in context of the European Parliament’s proposed amendments to the AI Act. I also sometimes use “frontier model”, especially since the announcement of the Frontier Model Forum.

Other helpful sources

(I’m sure this list is very incomplete, and I have not carefully read all of the sources listed here)

Acknowledgements

Thank you to several people from the AI governance community, in particular Michael Aird and the people quoted above, who shared thoughts that informed this post.

  1. ^

    My sense is that people who are focused on extreme risks from AI more commonly use the term “AGI”.

  2. ^

    Various additional terms are used in conversations about AI more broadly and are mentioned at the bottom of the post.

  3. ^

    E.g. GPT-4 is currently cutting-edge, so it would likely qualify as frontier. It would presumably no longer qualify as frontier if a much more powerful GPT-5 were released in the future. The “frontier” can be defined in specific ways, e.g. by requiring that frontier models in a given year were trained using at least a given amount of FLOP. Note, however, that technical definitions like this might be harder for non-technical audiences to understand.

  4. ^

    One could refer to “future frontier AI” if pointing specifically at cutting-edge models in the future.

  5. ^

    See section 2.1 and Appendix A for much more detail on the definition.

  6. ^

    For example: what percentage of existing systems count as “advanced”? What percentage of systems in a few years will count as “advanced”?

  7. ^

    That said, one could be more specific by saying something like “advanced general-purpose AI”.

  8. ^

    See Schuett et al. (2023, p. 3): “By ‘AGI labs’, we mean organizations that have the stated goal of building AGI. This includes OpenAI, Google DeepMind, and Anthropic.” Similarly, Wikipedia says “Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic.” That said, I could not immediately find primary sources where those companies say this.

  9. ^

    In contrast, we could imagine very narrow AI systems (collectively) still having big effects, cf. Drexler (2019).

  10. ^

    “Additional efforts” could mean either that actors that are already somewhat focused on AGI try harder to build these systems, or that new actors become focused on building them. Some thoughtful AI safety people have raised this attention hazard concern in the past. It may be less relevant (on the margin) since attention to AI and AGI has dramatically increased anyway from November 2022 onwards.

  11. ^

    That said, in an interview, Cotra clarified that “transformative” includes models that could be used to cause a 10x acceleration in growth, even if people do not decide to use them in this way, but rather e.g. direct them towards military advantage. (See the transcript here, then search for the paragraph beginning “There may be reasons”.)

  12. ^

    I believe that Open Philanthropy introduced the term, though I have not checked this.

  14. ^

    See some discussion of this in Karnofsky (2022).

  15. ^

    I include a few terms (e.g. “APS”) that are used by people who focus on extreme risks, but that are fairly uncommon.

  16. ^

    For an overview that focuses on this term, see Jones (2023).

  17. ^

    Mainstream policymakers and think tanks seem to use this term often, and it features in the EU AI Act. I think this is sometimes to gesture at AI that is specifically “generative”, e.g. when talking about disinformation concerns from AI-generated images and text. Anecdotally, I think they also use it to gesture at a broader class of powerful or general-purpose systems. (People who work on existential and catastrophic risk would not generally use it in this way.)

  18. ^

    This term may be intuitive and rhetorically powerful. That said, its connotations (e.g. indicating extreme power) may increase some actors’ interest in risky AI acceleration or deployment by “their side”, even if pursuing this increases accident risk.

  19. ^

    Many of the systems that are relevant to AI governance are (currently) LLMs. That said, LLMs feel to me like a specific example of the type of system that one might want to discuss. RL agents are another example in this category.

  20. ^

    Two of the three people had a preference for being quoted anonymously. I consider all three to be very knowledgeable about AI governance and policy.