Executive summary: This review argues that If Anyone Builds It, Everyone Dies fails to justify its sweeping claim that building superintelligence with current techniques would kill everyone, offering instead a vague and poorly supported case that does not adequately explain modern AI, engage counterarguments, or substantiate its core alignment argument.
Key points:
The reviewer contends that the book’s stated thesis—“If any company or group… builds an artificial superintelligence… then everyone… will die”—is not properly argued because the authors do not sufficiently explain modern AI systems, scaling risks, or why current safety efforts would fail.
The authors allegedly provide only a brief description of how AI works, assert that advanced systems will develop misaligned preferences, and argue such systems would kill everyone, while offering only vague criticisms of AI safety research.
The reviewer claims the book does not lay a strong foundation for a movement to ban artificial superintelligence because it inadequately explains AI development processes, safety efforts, or key counterarguments.
The reviewer highlights unanswered counterarguments, including why alignment would deteriorate with scale, why reinforcement learning from human feedback would fail, and why misalignment would not be detected before catastrophic capability.
The book’s central claim that AI systems will inevitably develop alien preferences—analogized to evolutionary mismatches like humans’ taste for sugar—is described as under-justified and lacking concrete examples of such misalignment occurring in current systems.
The reviewer concludes that the book does not advance discourse on AI existential risk and instead recommends 80,000 Hours’ “Risks from power-seeking AI systems” as a clearer account.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.