Executive summary: Given persistent expert disagreement about AI timelines, the author argues that adopting a broad distribution over when transformative AI will arrive—rather than committing to short or long timelines—is the epistemically humble and strategically sound approach, with implications for how individuals and communities should plan their work.
Key points:
The author defines transformative AI as AI powerful enough either to take over the world if misaligned or to double the rate of scientific and technological progress, and uses this threshold to evaluate when timelines matter most for decision-relevant planning.
Expert forecasters disagree substantially on AI timelines, but the author notes that “long timelines have gotten crazy short” (shifting from 30+ years to 10-20 years) while “short timelines” now mean AI arriving within 2-5 years, with both camps updating on evidence.
Even individual experts known as short-timelines advocates, such as Daniel Kokotajlo, maintain broad distributions themselves (Kokotajlo's 80% interval for certain AI capabilities runs from 2027 to after 2050), and the broader expert community shows even greater overlap and uncertainty across forecasts.
The author recommends adopting a broad distribution over timelines rather than a single point estimate, noting that compressing uncertainty into one number obscures the fact that different time horizons (e.g., next presidential term vs. the one after that) represent “very different scenarios” requiring different hedging strategies.
In longer timelines (e.g., 2035 or beyond), the world will look substantially different due to geopolitical changes, technological shifts, possible AI-driven unemployment, and altered public sentiment about AI, which means approaches tailored to today’s world may not work and new possibilities may emerge.
Long-term projects like founding organizations, building movements, writing books, and doing foundational research have high leverage in longer-timeline worlds and should not be ruled out; even though a book project has a 1-in-5 chance of arriving too late given the author's timelines, that still leaves 80% of its expected value intact (see the sketch below), and addressing current neglect in AI safety creates additional value multipliers.
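A minimal sketch of that expected-value point, using the illustrative numbers from the summary rather than any model of the author's: if a slow project pays off fully when it lands in time and nothing otherwise, a 1-in-5 chance of arriving too late still leaves 80% of the expected value.

```python
# Minimal sketch of the expected-value point above. The numbers are
# illustrative, taken from the summary rather than the author's model.
p_too_late = 0.2   # ~1-in-5 chance the book lands after transformative AI
full_value = 1.0   # normalized value of the book if it lands in time

# If the payoff is zero when the project arrives too late and full otherwise,
# expected value scales linearly with the probability of landing in time.
expected_value = (1 - p_too_late) * full_value
print(f"Retained expected value: {expected_value:.0%}")  # -> 80%
```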
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.