Thank you very much for this detailed answer. The points where superforecasters have demonstrably been wrong on AI-related questions are especially interesting and are certainly a relevant argument against updating too far in their direction. Some kind of track record of superforecasters, experts, and public figures making predictions would be extremely valuable. Do you know whether something like this exists?
To push back a bit on the claim that it's hard to find a good reference class and that superforecasters therefore have to rely on vibes: yes, it might be hard, but aren't superforecasters precisely the people with a strong track record of finding good methodologies for making predictions even when it's hard? AI extinction is probably not the only question where forecasting is tricky.
Edit: Just a few days ago, this very relevant post was published: https://forum.effectivealtruism.org/posts/fp5kEpBkhWsGgWu2D/assessing-near-term-accuracy-in-the-existential-risk