I am a bit worried about a narrative of “the forecasters think x-risk is low” when I know a bunch of excellent forecasters who have much higher AI x-risk probabilities.
For example, Samotsvety (who, afaict, have an excellent forecasting track record on domain-relevant questions) gave some estimates here (on Sep 8, 2022):
A few of the headline aggregate forecasts are:
- 25% chance of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe
- 81% chance of Transformative AI (TAI) by 2100, barring pre-TAI catastrophe
- 32% chance of AGI being developed in the next 20 years
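For anyone unfamiliar with how headline numbers like these get produced: they're aggregates over several individual forecasts. Here's a minimal sketch of one common aggregation method, the geometric mean of odds (not necessarily what Samotsvety used; the individual probabilities below are invented):

```python
import math

def geo_mean_of_odds(probs):
    """Aggregate probabilities via the geometric mean of odds.

    Each p is converted to odds p/(1-p); the geometric mean of the
    odds is taken and converted back to a probability. Assumes
    0 < p < 1 for every input.
    """
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_odds = math.exp(sum(log_odds) / len(log_odds))
    return mean_odds / (1 + mean_odds)

# Invented individual forecasts, purely for illustration:
individual = [0.10, 0.20, 0.30, 0.35, 0.45]
print(f"aggregate: {geo_mean_of_odds(individual):.0%}")
```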
Conversely, the median estimate of all domain-level experts is probably lower than the 3.9% presented here. The sampling of experts is non-random: people who are already concerned about AI risk are more likely to take a voluntary survey. In the sample here, ~40% of experts had attended at least one AI meetup, which is not at all typical of AI experts as a group.
This could also be true of previous surveys, like the 2022 AI Impacts survey, which had a response rate of only 17%. I reckon that if you added in the other 83% of experts, the median estimate would drop by a fair margin.
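As a toy illustration of how strongly non-response could move the median (every number here is an assumption, not the survey's actual data): if respondents skew concerned and non-respondents mostly sit near 1%, the pooled median lands well below the reported one.

```python
import numpy as np

# Toy model of non-response bias. All parameters below are assumptions
# chosen for illustration; this is not the actual survey data.
rng = np.random.default_rng(0)

n_experts = 1000
response_rate = 0.17  # the 2022 AI Impacts survey's response rate

# Assume respondents skew concerned (centered near 5% risk) while
# non-respondents cluster much lower (centered near 1%).
respondents = rng.lognormal(np.log(0.05), 0.8, int(n_experts * response_rate))
others = rng.lognormal(np.log(0.01), 0.8, int(n_experts * (1 - response_rate)))

print(f"median among respondents: {np.median(respondents):.1%}")
print(f"median among all experts: {np.median(np.concatenate([respondents, others])):.1%}")
```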
“when I know a bunch of excellent forecasters...”
Perhaps your sampling techniques are better than Tetlock's, then.
The Samotsvety track record does straightforwardly look better than what I'd expect the median superforecaster's track record to be (which I'd put at roughly the 99th percentile in either the original Tetlock studies or on GJO), especially on AI. Though perhaps Tetlock's team also selected for forecasters better than the median superforecaster? It's unclear to me.
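For what it's worth, track-record comparisons like this ultimately come down to a proper scoring rule. A minimal sketch of comparing two forecasters by mean Brier score, with invented forecasts and resolutions:

```python
def brier(forecasts, outcomes):
    """Mean Brier score: lower is better (0 = perfect, 0.25 = always saying 50%)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented question set: each forecaster's probabilities, plus how
# each question actually resolved (1 = happened, 0 = didn't).
outcomes     = [1,    0,    1,    1,    0]
forecaster_a = [0.80, 0.10, 0.70, 0.90, 0.30]
forecaster_b = [0.60, 0.40, 0.55, 0.70, 0.50]

print(f"A: {brier(forecaster_a, outcomes):.3f}")  # ≈ 0.048 — more accurate here
print(f"B: {brier(forecaster_b, outcomes):.3f}")  # ≈ 0.173
```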