Executive summary: The author argues that estimates of existential risk vary by many orders of magnitude within and across groups, especially for AI risk, and that existing evidence does not clearly indicate which estimates are more reliable.
Key points:
The author analyzes survey data (especially the XPT) to measure how widely existential risk estimates diverge, without attempting to estimate the true probability.
Within-group disagreement is extremely large, with individuals in the same group differing by up to ~11 orders of magnitude on AI extinction risk (see the sketch after this list for what a spread that size means).
Across groups, median estimates differ substantially (often by factors of 10–200), with superforecasters giving low estimates, domain experts giving higher ones, and AI safety/x-risk communities giving much higher ones (~20–30%).
AI risk estimates tend to be more widely dispersed than nuclear or other risks, and short-term AI forecasts (e.g. by 2030) show greater spread than long-term ones.
Survey methodology and framing can shift estimates by multiple orders of magnitude, especially for the general public, indicating high sensitivity to elicitation methods.
Attempts to validate forecasts using near-term predictive accuracy find no meaningful relationship with long-term x-risk estimates, leaving no clear basis for privileging one group’s judgments over others.
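As a rough illustration of what an ~11-order-of-magnitude spread means, here is a minimal sketch using hypothetical numbers (these are not actual XPT responses): one respondent at 0.0000000001% and another at 10% differ by eleven orders of magnitude.

```python
import math

# Hypothetical estimates, chosen only to illustrate the arithmetic of
# an order-of-magnitude spread; not actual survey responses.
low = 1e-12   # 0.0000000001% expressed as a probability
high = 0.1    # 10% expressed as a probability

# Orders of magnitude between the two estimates: log10 of their ratio.
spread = math.log10(high / low)
print(f"spread ~ {spread:.0f} orders of magnitude")  # -> 11
```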
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.