Thank you! We agree and [...], so hopefully it’s more informative and not about edge cases of Turing Test passing.
We chose an imperfect definition and indicated to forecasters that they should interpret it not “as is” but “in spirit,” to avoid annoying edge cases.
Fair enough. I think people conceive of AGI too monolithically and don’t sufficiently distinguish between the risk profiles of different trajectories. The distinction between economic impact and x-risk is the most important, but I think it’s also worth forecasting domain-specific capabilities (natural language, robotics, computer vision, etc.). Gesturing toward “the concept we all agree exists but can’t define” is totally fair, but I think the concept you’re gesturing toward breaks down in important ways.