[Link post] Will we see fast AI Takeoff?
This is a linkpost for https://www.lesswrong.com/posts/pGXR2ynhe5bBCCNqn/takeoff-speeds-and-discontinuities
In this post, we map out cruxes of disagreement relevant to AI takeoff. In particular, this module decomposes the question “will we see fast or slow AI takeoff?” into four cruxes, each of which has at times been the intended meaning when people discuss ‘fast’ or ‘slow’ takeoff. These four cruxes are:
Intelligence Explosion – will a positive feedback loop involving AI capabilities lead these capabilities to grow roughly hyperbolically across a sufficient range, such that they eventually grow incredibly quickly to an incredibly high level (presumably before plateauing as they reach some fundamental limit)? (A worked example of this growth pattern follows the list.)
Discontinuity around HLMI without self-improvement – will there be a rapid jump in AI capabilities from pre-HLMI AI to HLMI (high-level machine intelligence) and/or from HLMI to higher intelligence (for instance, if a hardware overhang allows for rapid capability gain after HLMI)?
Takeoff Speed of the Economy – how fast will the global economy (or the next closest thing, if this concept doesn’t transfer to a post-HLMI world) grow, once HLMI has matured as a technology?
HLMI is Distributed – will AI capabilities in a post-HLMI world be dispersed among many comparably powerful HLMIs? A negative answer to this node indicates that HLMI capabilities will be concentrated in a few powerful systems.
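To make the “roughly hyperbolic” growth in the first crux concrete, here is a minimal worked example (our illustration, not a derivation from the linked post). Suppose a capability level C feeds back into its own growth rate super-linearly, dC/dt = k·C^α with α > 1. Separating variables gives

$$C(t) = \left[\,C_0^{\,1-\alpha} - (\alpha - 1)\,k\,t\,\right]^{\frac{1}{1-\alpha}},$$

which diverges in finite time at t* = C₀^(1−α) / ((α−1)k), rather than merely growing exponentially; in reality, growth would presumably plateau before that point as capabilities approach some fundamental limit. With α = 1, the same equation yields ordinary exponential growth, so the crux is essentially whether the feedback is strong enough to push the effective α above 1 across a wide enough range.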
These cruxes provide a rough way of characterizing different AI takeoff scenarios. While they are not exhaustive, we believe they are a simple way of specifying the range of outcomes which those who have seriously considered AI takeoff find plausible.
Each of these notions of takeoff speed (Intelligence Explosion, Discontinuity around HLMI, Takeoff Speed of the Economy, and HLMI Distribution) depends on the others in various ways. In this post, we describe these relationships with a graphical model, and also describe what assumptions about HLMI progress they depend on.
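As a toy illustration of the kind of model involved, one can encode each crux as a binary node in a directed graph and enumerate the joint assignments; each of the 2^4 = 16 assignments is one coarse takeoff scenario. The parent sets below are hypothetical placeholders for illustration only, not the dependency structure from the linked post:

```python
# Hypothetical sketch: the four takeoff cruxes as binary nodes in a directed
# graph. The parent sets below are illustrative placeholders, NOT the actual
# dependency structure from the linked post.
import itertools

parents = {
    "intelligence_explosion": [],
    "discontinuity_around_hlmi": ["intelligence_explosion"],
    "fast_economic_takeoff": ["intelligence_explosion", "discontinuity_around_hlmi"],
    "hlmi_distributed": ["discontinuity_around_hlmi"],
}

cruxes = list(parents)

# Each joint True/False assignment to the four cruxes picks out one coarse
# takeoff scenario; a graphical model would attach probabilities to these,
# with each node conditioned only on its parents.
for assignment in itertools.product([True, False], repeat=len(cruxes)):
    scenario = dict(zip(cruxes, assignment))
    print(scenario)
```

The point of the graph structure is factorization: rather than eliciting a full joint distribution over all 16 scenarios at once, one only needs each crux’s probability conditional on its parents.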
This post is part of a project in collaboration with David Manheim, Daniel Eth, Aryeh Englander, Issa Rice, Ben Cottier, Jérémy Perret, Ross Gruetzemacher, and Alexis Carlier.
We think three main groups of people would benefit from reading the post:
Those who don’t understand why different smart, knowledgeable people disagree so much on these topics, and would like to understand better
Those who are trying to form their own views on these topics, and are unsure what factors to consider
Those who already have a general understanding of most of the key disagreements, but would like to dig deeper into others
Again, here’s a link to the post: https://www.lesswrong.com/posts/pGXR2ynhe5bBCCNqn/takeoff-speeds-and-discontinuities