AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.

Artificial General Intelligence (AGI) poses an existential risk (x-risk) to all known sentient life. Given the stakes involved (the whole world and future light cone), we should by default treat 10% chance-of-AGI-by timelines as the deadline for adequate preparation (alignment), rather than the 50% (median) chance-of-AGI-by timelines that currently seem to be the default.

We should regard timelines of ≥10% probability of AGI in ≤10 years as crunch time. Given that there is already an increasingly broad consensus around such timelines[1], we should be treating AGI x-risk as an urgent, immediate priority, not something to mull over at leisure as part of a longtermist agenda. Acting as though we have decades to prepare (as median timelines suggest) gambles a huge number of present human lives, to say nothing of the cosmic endowment.

Of course, it is not just the time to AGI that matters; it is also the probability of doom given AGI arriving at that time. A recent survey of people working on AI risk gives a median of 30% for the “level of existential risk” from “AI systems not doing/optimizing what the people deploying them wanted/intended”.[2]
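
As a rough, back-of-the-envelope illustration of how these two numbers combine (not a figure the survey respondents endorsed, since their doom estimate is not conditioned on short timelines; see footnote 2):

P(doom within 10 years) ≈ P(AGI within 10 years) × P(doom|AGI) ≈ 0.10 × 0.30 = 3%

That is on the order of a 1-in-30 chance of existential catastrophe within a decade.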

To borrow from Stuart Russell’s analogy: if there were a 10% chance of aliens landing in the next 10-15 years[3], humanity would be doing a lot more to prepare than we currently are[4]. AGI is akin to an alien species more intelligent than us that is unlikely to share our values.

  1. ^

    Note that Holden Karnofsky’s all-things-considered (and IMO conservative) estimate for the advent of AGI is a >10% chance within (now) 14 years. Anecdotally, the majority of people I’ve spoken to on the current AGISF course put the 10%-chance mark at 10 years or less. Yet most people in EA at large seem to put more emphasis on the 50% estimates, which fall in the 2050-2060 range.

  2. ^

    I originally wrote “...probability of doom given AGI. I think most people in AI Alignment would regard this as >50% given our current state of alignment knowledge and implementation. Correct me if you think this is wrong; it would be interesting to see a recent survey on this”, and was linked to a recent survey!

    Note that there is a mismatch with the framing of my post: the survey implicitly incorporates time to AGI, and the median time-to-AGI estimate amongst those surveyed is presumably significantly later than 10 years. This suggests that P(doom|AGI in 10 years) would be estimated to be higher than the survey’s 30%. It would be good to have a survey asking the following questions:
    1. Year with 10% chance of AGI.
    2. P(doom|AGI in that year).
    (We can operationalise “doom” as Ord’s definition, “the greater part of our potential is gone and very little remains”, although I pretty much think of it as being paperclipped or equivalent, so that ~0 value remains.)

  3. ^

    This is different from the original analogy, which was an email saying: “People of Earth: We will arrive on your planet in 50 years. Get ready.” Instead, say astronomers spotted something that looked like a spacecraft heading in approximately our direction, and estimated there was a 10% chance that it was indeed a spacecraft headed for Earth.

  4. ^

    Although perhaps we wouldn’t. Maybe people would endlessly argue about whether the evidence is strong enough to declare a >10% probability. Or flatly deny it.