One possible explanation for the disparity is the sampling of participants: 42% of the domain experts had attended EA meetups, whereas only 9% of the superforecasters had (page 9 of the report). This could have caused a systematic shift in opinion.
Another explanation: anchoring bias. The general public shifted their estimates of x-risk by nearly six orders of magnitude, from 5% to 1 in 15 million, when the question was phrased differently (page 29); a quick check of that gap is sketched below. Presumably at least some of this effect would persist for experts as well. Participants were given a list of previous predictions of AI x-risk, most of which were around 5% (page 132). I propose that the domain experts anchored to this value, whereas the superforecasters were more willing to deviate from it.
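As a rough sanity check on the size of that swing (a minimal sketch: the two endpoint values are the report's, the arithmetic is mine):

```python
import math

# Public estimates of AI x-risk under two question phrasings (report, p. 29)
high = 0.05            # 5% under one phrasing
low = 1 / 15_000_000   # 1 in 15 million under the other

# Gap between the two estimates, in orders of magnitude (base 10)
print(math.log10(high / low))  # ~5.9, i.e. just under six orders of magnitude
```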
titotal—thanks for these helpful observations. Both sound plausible!