Executive summary: The author reviews early transhumanist arguments that rushing to build friendly AI could prevent nanotech “grey goo” extinction, and concludes—largely by reductio—that expected value reasoning combined with speculative probabilities can be used to justify arbitrarily extreme funding demands without reliable grounding.
Key points:
Eliezer Yudkowsky, Nick Bostrom, Ray Kurzweil, and Ben Goertzel argued that aligned AGI should be developed as quickly as possible to defend against catastrophic nanotechnology risks such as self-replicating “grey goo.”
Yudkowsky made concrete forecasts around 1999–2000, assigning a 70%+ extinction risk from nanotechnology and predicting friendly AI within roughly 5–20 years, contingent on funding.
Bostrom argued that superintelligence is uniquely valuable as a defensive technology because it could shorten the vulnerability window between dangerous nanotech and effective countermeasures.
The post applies expected value reasoning to argue that even astronomically small probabilities of preventing extinction can dominate moral calculations when multiplied by extremely large numbers of potential future lives.
Using GiveWell-style cost-effectiveness estimates, the author shows how this logic can imply spending quadrillions of dollars—or even infinite resources—on rushing friendly AI development (a rough worked version of this arithmetic follows the list).
The author illustrates the implausibility of this reasoning by humorously proposing that, given sufficiently small but nonzero probabilities, funders should rationally support the author’s own friendly AI project.
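A minimal sketch of the expected-value arithmetic being critiqued, in Python. Every number below is a hypothetical placeholder chosen only for illustration, not a figure taken from the post or from GiveWell; the point is that speculative inputs can yield arbitrarily extreme "justified" spending.

```python
# Illustrative sketch of the expected-value reasoning the post critiques.
# All numbers are hypothetical placeholders, not figures from the post or GiveWell.

future_lives = 1e54            # assumed count of potential future lives (speculative)
p_prevent_extinction = 1e-30   # assumed tiny probability a given project averts extinction
cost_per_life_saved = 5_000    # rough GiveWell-style benchmark cost per life saved, in USD

# Expected lives saved by the speculative intervention.
expected_lives_saved = future_lives * p_prevent_extinction

# Spending "justified" if each expected life is valued at the benchmark cost.
implied_spending = expected_lives_saved * cost_per_life_saved

print(f"Expected lives saved: {expected_lives_saved:.3g}")
print(f"Implied 'justified' spending: ${implied_spending:.3g}")
# With these placeholders the implied figure is ~$5e27, far beyond quadrillions:
# the reductio is that arbitrary probabilities produce arbitrarily extreme demands.
```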
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Humourously?!!