A stray observation from reading Scott Alexander’s post on his 2023 forecasting competition:
Scott singles out some forecasters who performed particularly strongly both this year and last year (he notes that being near the very top in a single year is noisy, with a significant role for luck), or who otherwise show strong signals of genuine predictive outperformance. These are:
- Samotsvety
- Metaculus
- possibly Peter Wildeford
- possibly Ezra Karger (Research Director at FRI)
I note that the first three above all have higher AI catastrophic/extinction risk estimates than the average superforecaster. (I include Ezra given his relevance to the topic at hand, but I don't know his personal estimates.)
Obviously, this is a low-n sample, heavily confounded both by community effects and by who happened to catch Scott's eye (plus confirmation bias in my noticing it, insofar as I also have higher risk estimates). But I'd guess there's at least a decent chance that both (a) there are groups and aggregation methods that reliably outperform superforecasters, and (b) these give higher estimates of AI risk.
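To make "aggregation methods" concrete, here is a minimal Python sketch of one common pooling technique: the geometric mean of odds, with optional extremization. This is illustrative only; the function name, the extremization parameter, and the example numbers are my own assumptions, not the specific method Samotsvety or Metaculus actually uses.

```python
import numpy as np

def aggregate_geo_mean_odds(probs, extremize=1.0):
    """Pool individual probability forecasts via the geometric mean of odds.

    probs: probabilities strictly in (0, 1), one per forecaster.
    extremize: exponent applied to the pooled odds; values > 1 push the
        aggregate away from 0.5 (a common post-hoc adjustment).
    """
    probs = np.asarray(probs, dtype=float)
    odds = probs / (1.0 - probs)
    # Geometric mean of odds, then optional extremization.
    pooled_odds = np.exp(np.mean(np.log(odds))) ** extremize
    return pooled_odds / (1.0 + pooled_odds)

# Hypothetical forecasts of the same event from three forecasters.
print(aggregate_geo_mean_odds([0.03, 0.10, 0.20]))                 # ~0.087
print(aggregate_geo_mean_odds([0.03, 0.10, 0.20], extremize=1.5))  # ~0.028
```

The point of pooling odds rather than averaging probabilities directly is that a simple mean of probabilities tends to be pulled toward the middle by outliers, whereas the geometric mean of odds weights order-of-magnitude disagreements more symmetrically.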