Metaculus is not an aggregator of expert predictions
Metaculus accepts predictions from just anybody, so Metaculus is not an aggregator of expert predictions. It's not even a prediction market.
Yeah, that's a good point. TY! It would be wrong to claim that Metaculus predictions are somehow "expert predictions". We'll change the article to make it clearer that we're not claiming that.
That said, we don't use the term "prediction market". And AFAICT, Metaculus has a favourable track record compared to actual prediction markets. And probably to other mechanisms for aggregating info, too. So I think there's value to referencing them.
So perhaps the following, maybe as a footnote?
While Metaculus allows non-domain-experts, the bulk of their forecasters, to make predictions, Metaculus' track record compares favourably to that of many AI experts and superforecasters. Hence the inclusion on this list.
What do you think?
I think if you don't note in the body of the text, rather than just in a footnote, that just anybody can predict anything on Metaculus, then this will inevitably be misleading to anyone who doesn't already know what Metaculus is, since you imply in the post that it's an aggregator of expert predictions when you claim:
On the whole, experts think human-level AI is likely to arrive in your lifetime.
And then go on to list Metaculus as support for this claim. That implies Metaculus is an aggregator of expert predictions.
Also, you include Metaculus in a long list of expert predictions without noting that it's different from the other items on the list, which reinforces the implication that it's an aggregator of expert predictions.
I think you should also explain what Samotsvety is in the body of the text and what its forecasters' credentials are.
Invoking "experts" and then using the term this loosely feels misleading.
I think it also bears mentioning the strange feature of the 2023 AI Impacts survey where there's a 69-year gap between the AI experts' prediction of "high-level machine intelligence" and "full automation of labour" (50% chance by 2116). This is such an important (and weird, and confusing) fact about the survey that I think it should be mentioned anytime that survey is brought up.
This is especially relevant since you say:
On the whole, experts think human-level AI is likely to arrive in your lifetime.
And if you think human-level AI means full automation of labour rather than high-level machine intelligence, then a 50% chance by 2116 (91 years from now) is not within the current life expectancy of most adults alive today or even most teenagers.
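To make the arithmetic behind this concrete, here's a quick sanity check. The 2047 figure for "high-level machine intelligence" is an assumption inferred from the 69-year gap and the 2116 date stated above, and 2025 is the implied "now"; both are labelled as such in the comments:

```python
# Sanity-check the survey gap discussed above.
# Assumed 50%-chance medians: HLMI by 2047 (inferred from the stated
# 69-year gap), full automation of labour by 2116.
hlmi_year = 2047
full_automation_year = 2116

gap = full_automation_year - hlmi_year
print(gap)  # 69-year gap between the two milestones

comment_year = 2025  # implied by "91 years from now"
years_until_full_automation = full_automation_year - comment_year
print(years_until_full_automation)  # 91 years out
```

So if "human-level AI" is read as full automation of labour, the median forecast lands 91 years from now, well past typical remaining life expectancy for adults today.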