Superforecasters more than quadruple their forecasts of extinction by 2100 when conditioned on AGI or TAI arriving by 2070.
The data in this table is strange! In the original tournament, superforecasters gave 0.38% for extinction by 2100 (though 0.088% for the RS top quintile), but in this survey it's 0.225%. Why? Also, somehow the first number is reported to three significant digits while the second is just “1%”, which is maximally lacking in significant digits (if you were rounding off, even 0.55% would end up as 1%).
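To make the arithmetic behind the “more than quadruple” claim explicit, here is a quick back-of-the-envelope check in Python (a minimal sketch that simply takes the 0.225% and 1% figures quoted above at face value):

```python
# Back-of-the-envelope check of the figures discussed above; the two inputs
# are taken directly from the table, everything else is illustrative.
unconditional = 0.225      # % chance of extinction by 2100 (this survey)
conditional_on_agi = 1.0   # % chance, conditioned on AGI/TAI by 2070

ratio = conditional_on_agi / unconditional
print(f"conditional / unconditional = {ratio:.2f}x")  # ~4.44x, i.e. "more than quadruple"

# The significant-digits complaint: at zero decimal places, anything from
# roughly 0.5% up to 1.4% collapses to "1%".
for x in (0.55, 1.0, 1.4):
    print(f"{x}% rounds to {round(x)}%")
```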
The implied result is strange! How could participants’ AGI timelines possibly be so long? ACX comments suggest this may be explained by a poor process for classifying people as “superforecasters” and/or “experts”.
I’d strongly like to see three other kinds of outcome analyzed in future tournaments, especially in the context of AI:
Authoritarian takeover: how likely is it that events in the next few decades weaken the US/EU and/or strengthen China (or another dictatorship), eventually leading to world takeover by dictatorship(s)? How likely is it that AGIs either (i) bestow upon a few people or a single person (*cough*Sam Altman) dictatorial powers or (ii) strengthen the power of existing dictators, either in their own country and/or by enabling territorial and/or soft-power expansion?
Dystopia: what’s the chance of some kind of AGI-induced hellscape in which life is worse for most people than today, with little chance of improvement? (This may overlap with other outcomes, of course.)
Permanent loss of control: fully autonomous ASIs (genius-level and smarter) would likely take control of the world, such that humans no longer have influence. If this happens and leads to catastrophe (or utopia, for that matter), then it’s arguably more important to estimate when loss of control occurs than when the catastrophe itself occurs (and in general it seems like “date of the point of no return on the path to X” is more important than “date of X”, though the concept is fuzzier). Besides, I am very skeptical of any human’s ability to predict what will happen after a loss of control event. I’m inclined to think of such an event almost like an event horizon, which is a second reason that forecasting the event itself is more important than forecasting the eventual outcome.