One of the authors responds to the comment you linked to and says he was already aware of the concept of the multiple stages fallacy when writing the paper.
But the point I was making in my comment above is how easy it is for reasonable, informed people to arrive at different intuitions that form the fundamental inputs of a forecasting model like AI 2027. For example, the authors intuit that a given problem will take years, not decades, to solve; someone else could just as easily intuit it will take decades, not years.
The same is true for all the different intuitions the model relies on to get to its thrilling conclusion.
Since the model can only exist by taking many such intuitions as inputs, it is ultimately little more than a restatement of those intuitions, and running them through a model doesn't make them any more correct.
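To make that concrete, here is a toy sketch (not the actual AI 2027 model; the stage names and numbers below are hypothetical placeholders): swap one set of per-stage intuitions for another and the headline timeline moves by an order of magnitude, even though nothing about the model's structure changed.

```python
# Toy illustration only: per-stage duration intuitions feeding a
# "time to AGI" total. Stage names and numbers are made up.

stages = ["long-horizon agency", "research automation", "self-improvement"]

# Intuition A: each stage takes "years, not decades".
years_if_optimistic = [1.5, 1.0, 0.5]

# Intuition B: a skeptic's read of the same stages.
years_if_skeptical = [8.0, 10.0, 6.0]

print("Intuition A total:", sum(years_if_optimistic), "years")  # 3.0 years
print("Intuition B total:", sum(years_if_skeptical), "years")   # 24.0 years
```

The output is entirely determined by which intuitions you feed in, which is the point: the model's conclusion is only as good as those inputs.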
In 2-3 years, when it turns out the prediction of AGI in 2027 is wrong, it probably won’t be because of a math error in the model but rather because the intuitions the model is based on are wrong.
If they were already aware of it, they certainly didn't do anything to address it, given that their conclusion is basically the result of falling for it.
It’s more than just intuitions; it’s grounded in current research and recent progress in (proto) AGI. Validating the opposing intuitions (long timelines) requires bigger leaps of faith, namely assuming that things will suddenly stop working as they have been. Long-timeline intuitions have also been proven wrong consistently over the last few years (e.g. AI repeatedly doing things people predicted were “decades away” just a few years, or even months, before).