The question of how much we should update on AI risk by 2100 based on those results remains open. If the skeptics or the concerned group turn out to be mostly right about what 2030’s AI will be like, should we then trust their risk assessment for 2100 as well, and if so, how much?
I think it is also worth having in mind predictions about non-AI risks. The annual risk of human extinction from nuclear war from 2023 to 2050 estimated by the superforecasters, domain experts, general existential risk experts, and non-domain experts of the XPT is 602 k, 7.23 M, 10.3 M and 4.22 M times mine, respectively. If one believes XPT’s forecasters are overestimating nuclear extinction risk by 6 to 7 orders of magnitude (as I do), it arguably makes sense to put little trust in their predictions about AI extinction risk. I would be curious to know your thoughts on this.
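For concreteness, here is a minimal sketch (in Python, purely for illustration) of how the quoted ratios map onto orders of magnitude of disagreement; the ratios are the ones stated above, and the grouping labels are just the four XPT groups mentioned.

```python
import math

# Ratios of each XPT group's annual nuclear extinction risk (2023-2050)
# to my own estimate, as quoted above.
ratios = {
    "superforecasters": 602e3,
    "domain experts": 7.23e6,
    "general existential risk experts": 10.3e6,
    "non-domain experts": 4.22e6,
}

for group, ratio in ratios.items():
    # log10 of the ratio = number of orders of magnitude by which
    # that group's estimate exceeds mine.
    print(f"{group}: {math.log10(ratio):.1f} orders of magnitude")

# Prints roughly 5.8, 6.9, 7.0 and 6.6, i.e. about 6 to 7 orders of magnitude.
```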
In any case, I am still a fan of the research you presented in this post. Analysing agreements/disagreements in a systematic way seems quite valuable for assessing and decreasing risk.
Thanks for sharing!