In my opinion, you should hold onto your initial reaction and should downweight your trust in Metaculus estimates on these questions accordingly.
Basically, I think you're correct to be more optimistic because of government awareness of and action on these issues, and you should be sceptical of the idea that Metaculus predictors had already 'priced in' government action being more helpful/​concerned than the community originally expected.
See here for my thoughts (full post upcoming), Matthew’s recent post on the ‘sleepwalking fallacy’, and I might just perma-link Xuan’s thread against the naïve scaling hypothesis
See also this Metaculus poll for the date a weakly general AI is announced, which has shrunk to Feb 2026 (and is still falling) despite the fact that two of the four resolution criteria contain limits on the size/​scale/​content of the training data that current LLM-based systems cannot meet.[1] Sure, if I provide the LLM with SAT answers in training and then ask it to solve an SAT, it'll get a high score, but that's basically doing research like this. Similar errors in forecasting might be involved with the markets you mention.
Perhaps in Metaculus voters' defence:
They’re taking time to integrate this new information into their estimates
There’s some weird quirk of resolution criteria where they don’t think they can update but are still more optimistic generally
Maybe they actually thought governments would get involved but be unsuccessful before ~the last year or so, and so had already 'priced in' the social reaction
New people have joined the market and are pushing the risk estimates up; before the government actions these more pessimistic forecasters weren't involved in the market, causing some kind of confounding effect
And another criterion is an adversarial Turing test, which RLHF'd LLMs seem hopelessly bound to fail: just ask one to say why it thinks cannibalism is good or what its favourite porn is
Thanks, those are some very good points. I especially liked your point that the recent government reaction may already be baked in, something I had not considered. I have been thinking, very superficially, about whether prediction markets could be used to get some idea of risk-reduction impacts. E.g. if a new organization is announced doing new work, or such organizations achieve a surprising milestone in x-risk reduction, perhaps one could use prediction markets to estimate the amount of risk reduction. I have not had time to investigate this in more detail and may have missed others having already written about this.
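To make the idea above concrete, here is a minimal sketch of the event-study arithmetic it implies: compare a market's implied probability of a bad outcome just before and just after an announcement, and read the difference as the market's estimate of risk reduction. All numbers and the market description are illustrative assumptions, not real Metaculus data, and this ignores real complications (confounding news, thin markets, voter composition changes discussed above).

```python
# Hypothetical sketch: market-implied risk reduction from an announcement.
# Probabilities are made-up illustrative values, not real market data.

def implied_risk_reduction(p_before: float, p_after: float) -> dict:
    """Compare market-implied risk before and after an event."""
    absolute = p_before - p_after      # drop in probability (points)
    relative = absolute / p_before     # fraction of prior risk removed
    return {"absolute": absolute, "relative": relative}

# E.g. a market on some catastrophe question moving from 12% to 9%
# after a new risk-reduction organization is announced:
est = implied_risk_reduction(0.12, 0.09)
print(f"{est['absolute']:.2%} absolute, {est['relative']:.1%} relative")
# prints "3.00% absolute, 25.0% relative"
```

Even this toy version shows why the confounding worries above matter: the before/after difference only measures the announcement's effect if nothing else (including who is trading) changed at the same time.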