Executive summary: The author argues that AI 2027 repeatedly misrepresents its cited scientific sources, using an example involving iterated distillation and amplification to claim that the book extrapolates far beyond what the underlying research supports.
Key points:
The author says AI 2027 cites a 2017 report on iterated amplification to suggest “self-improvement for general intelligence,” despite the report describing only narrow algorithmic tasks.
The author quotes the report stating that it provides no evidence of applicability to “complex real-world tasks” or “messy real-world decompositions.”
The author notes that the report’s experiments involve five toy algorithmic tasks such as finding distances in a graph, with no claims about broader cognitive abilities.
The author states that AI 2027 extrapolates from math and coding tasks with clear answers to predictions about verifying subjective tasks, without supplying evidence for this extrapolation.
The author argues that the referenced materials repeatedly disclaim any relevance to general intelligence, so AI 2027’s claims are unsupported.
The author says this is one of many instances where AI 2027 uses sources that do not substantiate its predictions, and promises a fuller review.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.