Here are a couple of excerpts from relevant comments on the Astral Codex Ten post about the tournament. Judging from the anecdotes, the tournament had some flaws in execution; in particular, the “superforecasters” weren’t all that. But I want to see more context if anyone has it.
From Jacob:
I signed up for this tournament (I think? My emails refer to a Hybrid Forecasting-Persuasion tournament that at the very least shares many authors), was selected, and partially participated. I found this tournament through a reference on ACX and am not an academic, superforecaster, or in any way involved or qualified whatsoever. I got the Stage 1 email on June 15.
From magic9mushroom:
I participated and, AIUI, got counted as a superforecaster, but I’m really not. There was one guy in my group (I don’t know what happened in other groups) who said X-risk can’t happen unless God decides to end the world. And in general the discourse was barely above “normal Internet person” level, and only about a third of us even participated in said discourse. Like I said, I haven’t read the full paper, so there might have been some technique to fix this, but overall I wasn’t impressed.
Replies to those comments mostly concur:
(sclmlw) I’m sorry you didn’t get into the weeds of the tournament. My experience was that most of the best discussions came at later stages of the tournament. [...]
(Replies to magic9mushroom)
(Dogiv) I agree, unfortunately there was a lot of low-effort participation, and a shocking number of really dumb answers, like putting the probability that something will happen by 2030 higher than the probability that it will happen by 2050. In one memorable case, a forecaster answering the “number of future humans who will ever live” question put in a number less than 100. I hope these people were filtered out and not included in the final results, but I don’t know.
Damien and I were in the same group and he wrote it up much better than I could.
FWIW I had AI extinction risk at 22% during the tournament and I would put it significantly higher now (probably in the 30s, though I haven’t built an actual model lately). Seeing the tournament results hardly affects my prediction at all. I think a lot of people in the tournament may have anchored on Ord’s 10% estimate and Joe Carlsmith’s similar prediction, both mentioned in the question documentation, as the “doomer” opinion, and didn’t want to go above it and be even crazier.
(Sergio) I don’t think we were on the same team (based on your AI extinction forecast), but I also encountered several instances of low-effort participation and answers which were as baffling as those you mention at the beginning (or worse). One of my resulting impressions was that the selection process for superforecasters had not been very strict.
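The incoherence Dogiv flags, assigning a higher probability to an event by 2030 than to the same event by 2050, is mechanically checkable: “happens by 2030” implies “happens by 2050”, so cumulative probabilities must be non-decreasing in the horizon year. Here is a minimal sketch of such a check; this is purely illustrative and not the tournament’s actual filtering pipeline (the comments don’t say whether or how such answers were screened out):

```python
# Coherence check for cumulative "by year Y" forecasts: for the same event,
# P(by an earlier year) can never exceed P(by a later year), because the
# earlier outcome implies the later one.

def incoherent_horizons(forecasts: dict[int, float]) -> list[tuple[int, int]]:
    """Return (earlier, later) year pairs where P(by earlier) > P(by later)."""
    years = sorted(forecasts)
    return [
        (y1, y2)
        for y1, y2 in zip(years, years[1:])
        if forecasts[y1] > forecasts[y2]
    ]

# The kind of answer Dogiv describes gets flagged as incoherent:
print(incoherent_horizons({2030: 0.40, 2050: 0.25}))  # -> [(2030, 2050)]
# A coherent set of forecasts passes:
print(incoherent_horizons({2030: 0.10, 2050: 0.25}))  # -> []
```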