Thanks for looking into this so carefully, @Javier Prieto; this looks very interesting! I’m particularly intrigued by the different biases you identify for different categories, and I wondered how much weight you’d put on this being a statistical artefact vs. a real, persistent bias that you would continue to worry about. Concretely, if we waited until, say, a comparable number of AI benchmark progress questions resolved, what would your P(Metaculus is underconfident on AI benchmark progress again) be? (Looking only at the new questions.)
Some minor comments:
> About 70% of the predictions at question close had a positive log score, i.e. they were better than predicting a maximally uncertain uniform distribution over the relevant range (chance level).
I think the author knows what’s going on here, but it may invite misunderstanding. This notion of “being better than predicting a […] uniform distribution” implies that even a perfect forecast of the sum of two independent dice is “better than predicting a uniform distribution” only 2 out of 3 times, i.e. less than 70% of the time! (The probabilities of D_1+D_2 being 2, 3, 4, 10, 11, or 12 are all smaller than 1/11, the uniform probability over the 11 possible outcomes.)
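To spell that out, here’s a minimal sketch of the two-dice calculation (my own illustration, nothing from the post):

```python
# Minimal sketch of the two-dice example: by the "better than uniform" notion,
# even a perfect forecast of the sum of two fair dice only beats the uniform
# baseline when the realised sum is one of the likelier outcomes (5-9).
from fractions import Fraction

# True distribution of the sum of two fair dice
p = {s: Fraction(0) for s in range(2, 13)}
for d1 in range(1, 7):
    for d2 in range(1, 7):
        p[d1 + d2] += Fraction(1, 36)

uniform = Fraction(1, 11)  # uniform probability over the 11 possible sums

# Probability that the perfect forecast's log score beats the uniform's,
# i.e. that the realised sum has true probability greater than 1/11
beats_uniform = sum(prob for prob in p.values() if prob > uniform)
print(beats_uniform)  # 2/3, i.e. less than 70% of the time
```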
> The average log score at question close was 0.701 (Median: 0.868, IQR: [-0.165, 1.502]) compared to an average of 2.17 for all resolved continuous questions on Metaculus.
Given that quite a lot of these AI questions closed over a year before resolution, which is rather atypical for Metaculus, comparing log scores at question close seems a bit unfair. I think time-averaged scores would be more informative. (I reckon they’d produce a quantitatively different, albeit qualitatively similar picture.)
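For concreteness, here’s a rough sketch of the kind of time-averaging I have in mind; this is illustrative only and may differ in detail from how Metaculus actually computes time-averaged scores:

```python
# Rough sketch of a time-averaged log score: weight the relative log score of
# each standing forecast by how long it was in force over the question's
# lifetime, rather than scoring only the final (close-time) forecast.
# Illustrative only, not Metaculus's exact scoring rule.
def time_averaged_log_score(scored_history, open_time, close_time):
    """scored_history: list of (timestamp, score) pairs, sorted by timestamp,
    where `score` is the relative log score the forecast standing from that
    timestamp onwards would receive given the eventual resolution.
    The first timestamp is assumed to equal open_time."""
    segments = scored_history + [(close_time, None)]
    total = sum(s * (t1 - t0) for (t0, s), (t1, _) in zip(segments, segments[1:]))
    return total / (close_time - open_time)

# A question open for 300 days whose forecast improved halfway through gets
# the time-weighted average of both standing forecasts, not just the final one:
print(time_averaged_log_score([(0, 0.2), (150, 1.0)], 0, 300))  # -> 0.6
```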
This also goes back to “Metaculus narrowly beats chance”: We tried to argue why we believe that this isn’t as narrow as others made it out to be (for reasonable definitions of “narrow”) here.
Disclaimer: I work for Metaculus.
Thanks, Peter!
To your questions:
I’m fairly confident (let’s say 80%) that Metaculus has underestimated progress on benchmarks so far. This doesn’t mean it will keep doing so in the future because (i) forecasters may have learned from this experience to be more bullish and/or (ii) AI progress might slow down. I wouldn’t bet on (ii), but I expect (i) has already happened to some extent—it has certainly happened to me!
The other categories have fewer questions and some have special circumstances that make the evidence of bias much weaker in my view. Specifically, the biggest misses in “compute” came from GPU price spikes that can probably be explained by post-COVID supply chain disruptions and increased demand from crypto miners. Both of these factors were transient.
I like your example with the two independent dice. My takeaway is that, if you have access to a prior that’s more informative than a uniform distribution (in this case, “both dice are unbiased so their sum must be a triangular distribution”), then you should compare your performance against that. My assumption when writing this was that a (log-)uniform prior over the relevant range was the best we could do for these questions. This is in line with the fact that Metaculus’s log score on continuous questions is normalized using a (log-)uniform distribution.
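To illustrate with your dice example (my own sketch, not how Metaculus computes anything): relative to a uniform baseline, even a perfect forecast of the two-dice sum only gains a modest expected log score, whereas relative to the triangular prior it gains exactly nothing, because that prior already encodes everything we know about the dice.

```python
# Sketch: expected log score of a perfect two-dice forecast relative to two
# different baselines. Against the uniform baseline the edge is small but
# positive; against the informative triangular prior it is exactly zero.
from fractions import Fraction
from math import log

triangular = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}
uniform_baseline = {s: Fraction(1, 11) for s in range(2, 13)}

def expected_relative_log_score(forecast, baseline):
    """Expected value of log(forecast(x)) - log(baseline(x)) when outcomes x
    are actually drawn from `forecast` (i.e. the forecast is perfect)."""
    return sum(p * (log(p) - log(baseline[s])) for s, p in forecast.items())

print(expected_relative_log_score(triangular, uniform_baseline))  # ~0.13 nats
print(expected_relative_log_score(triangular, triangular))        # 0.0
```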
That’s a good point re: different time horizons. I didn’t bother to check the average time between close and resolution for all questions on the platform, but, assuming it’s <<1 year as you suggest, I agree it’s an important caveat. If you know that number off the top of your head, I’ll add it to the post.