In my experience, here’s how these things go ~100% of the time:
Authors make up some numbers, and they include about a dozen caveats about the limitations of their model.
Readers ignore all the caveats and accuse them of claiming to be rigorous, even though they claimed no such thing.
AI 2027 is a great example of this.
Hi Michael. That is fair. On the other hand, what readers ignore depends on how the results are communicated. It would be harder to ignore uncertainty about AI timelines if AI 2027 had been titled, say, AI 2027-2047, and even that would undercommunicate the uncertainty. The difference between the 10th and 90th percentile dates for artificial superintelligence (ASI), as defined below, is more than 100 years for Daniel Kokotajlo and Eli Lifland, the two main forecasters of the AI Futures Model (which superseded AI 2027).