Metaculus is not an aggregator of expert predictions
Yeah, that’s a good point. TY! It would be wrong to claim that Metaculus predictions are somehow “expert predictions”. We’ll change the article to make it clearer that we’re not claiming that.
That said, we don’t use the term “prediction market”. And as far as I can tell, Metaculus has a favourable track record compared to actual prediction markets, and probably compared to other mechanisms for aggregating information too. So I think there’s value in referencing them.
So perhaps the following, maybe as a footnote?
While Metaculus allows non-domain-experts (the bulk of its forecasters) to make predictions, its track record compares favourably to that of many AI experts and superforecasters. Hence its inclusion on this list.
What do you think?
I think if you only note in a footnote, rather than in the body of the text, that just anybody can predict anything on Metaculus, this will inevitably mislead anyone who doesn’t already know what Metaculus is, since the post implies it’s an aggregator of expert predictions when you claim:
On the whole, experts think human-level AI is likely to arrive in your lifetime.
And then go on to list Metaculus as support for this claim, which implies that Metaculus is an aggregator of expert predictions.
Also, you include Metaculus in a long list of expert predictions without noting that it differs from the other items on the list, which reinforces the implication that it’s an aggregator of expert predictions.
I think you should also explain what Samotsvety is in the body of the text and what its forecasters’ credentials are.
Invoking “experts” and then using the term this loosely feels misleading.
I think it also bears mentioning the strange feature of the 2023 AI Impacts survey: there’s a 69-year gap between the AI experts’ predictions for “high-level machine intelligence” (50% chance by 2047) and “full automation of labour” (50% chance by 2116). This is such an important (and weird, and confusing) fact about the survey that I think it should be mentioned any time the survey is brought up.
This is especially relevant since you say:
On the whole, experts think human-level AI is likely to arrive in your lifetime.
And if you take “human-level AI” to mean full automation of labour rather than high-level machine intelligence, then a 50% chance by 2116 (91 years from now) is not within the current life expectancy of most adults alive today, or even most teenagers.
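For what it’s worth, the arithmetic behind the gap and the “91 years” figure can be sanity-checked in a couple of lines (the dates are from the 2023 AI Impacts survey discussion above; treating 2025 as the current year is an assumption for illustration):

```python
hlmi_year = 2047    # 50% chance of "high-level machine intelligence"
faol_year = 2116    # 50% chance of "full automation of labour"
current_year = 2025  # assumed current year for illustration

gap = faol_year - hlmi_year        # gap between the two definitions
years_away = faol_year - current_year  # how far off the FAOL median is

print(gap, years_away)  # prints: 69 91
```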