AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.
Given the stakes involved (the whole world/future light cone), we should regard timelines of ≥10% probability of AGI in ≤10 years as crunch time, and, given that there is already an increasingly broad consensus around this[1], we should be treating AGI x-risk as an urgent and immediate priority (not something to mull over leisurely as part of a longtermist agenda).
Of course it’s not just time to AGI that is important. It’s also P(doom | AGI, given our current alignment progress). I think most people in AI Alignment would regard this as >50% given our current state of alignment knowledge and implementation[2].
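To make the implied stakes explicit (a purely illustrative back-of-the-envelope, taking the two estimates above at face value): P(doom within 10 years) ≳ P(AGI in ≤10 years) × P(doom | AGI) ≈ 0.10 × 0.5 = 0.05, i.e. roughly a 5% chance of existential catastrophe within a decade, even on these fairly conservative inputs.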
To borrow from Stuart Russell’s analogy: if there were a 10% chance of aliens landing in the next 10-15 years[3], we would be doing a lot more than we are currently doing[4]. AGI is akin to an alien species more intelligent than us that is unlikely to share our values.
[1] Note that Holden Karnofsky’s all-things-considered (and IMO conservative) estimate for the advent of AGI is a >10% chance within (now) 14 years. Anecdotally, the majority of people I’ve spoken to on the current AGISF course put the 10%-chance date at 10 years away or less.
[2] Correct me if you think this is wrong; it would be interesting to see a recent survey on this. Maybe there is more optimism once you factor in further alignment progress expected before the advent of AGI.
[3] This differs from the original analogy, which was an email saying: “People of Earth: We will arrive on your planet in 50 years. Get ready.” Say, instead, that astronomers spotted something that looked like a spacecraft heading in approximately our direction, and estimated there was a 10% chance that it really was a spacecraft headed for Earth.
[4] Although perhaps we wouldn’t. Maybe people would endlessly argue about whether the evidence is strong enough to declare a >10% probability. Or flatly deny it.
I agree with this, and think maybe this should just be a top-level post
Done :)