Yudkowsky and Soares’ Book Is Empty

In If Anyone Builds It, Everyone Dies, authors Eliezer Yudkowsky and Nate Soares claim machines will become all-knowing and all-powerful, but they present no evidence and never discuss how computers might become intelligent.
In a book presented as factual or scientific (or at least science-adjacent), the authors might have addressed the principles on which AI operates, considered what it might be capable of based on those principles, and asked whether we would label its potential behaviour “intelligent”.
But they don’t attempt any such assessment. They just insist AI will be smart without making a case.
In the book’s more substantial sections, they do point to instances of AI being put to use to illustrate its capacities: its success at predicting protein structures, at chess, and at producing text. But they leave obvious questions unanswered. What is AI doing in such instances, and are those tasks analogous to wider problems?
An author aiming at a more comprehensive analysis might, for example, point out that the jobs AI does well at involve finding correlations in data: It tallies up the number of times certain items have appeared near each other in a dataset (e.g. the items might be words, the dataset might be all the text on the internet) and then reproduces the highest frequency combinations. While people might be able to do that sort of tallying up with a pen and paper for small datasets, of say a few dozen items, a computer can do it relatively quickly with a dataset of a trillion items.
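The tallying described above can be made concrete. The following is my own minimal sketch (not anything from the book): count how often word pairs co-occur in a toy corpus, then “generate” text by reproducing the highest-frequency continuation seen in the data.

```python
from collections import Counter

# Toy corpus standing in for "all the text on the internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally how often each word appears immediately after each other word.
pair_counts = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word):
    """Reproduce the highest-frequency continuation seen after `word`."""
    candidates = {nxt: n for (w, nxt), n in pair_counts.items() if w == word}
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_next("the"))  # "cat": it follows "the" most often here
```

With a few dozen items this is pen-and-paper work; the only thing a large model changes is that the dataset has a trillion items and the “pairs” are richer, so a machine does the counting.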
But the fact that a machine has far greater computing, or tallying, capacity than humans does not necessarily make it “intelligent”. There is no evidence that, for example, “intelligent” acts such as innovation are instances of correlation discovery or reproduction (which is what AI does).
What exactly computers can do and whether their operations might make them capable of things like having original thoughts—as in e.g. coming up with hypotheses for scientific research—are the sorts of matters I think it would be helpful to address in a book on computer intelligence. Especially a book in which it’s repeatedly asserted machines certainly will be smart soon.
Yudkowsky and Soares present no such analyses of machine capacities.
They do wonder:
“We can already observe AIs today that are superhuman in a variety of narrow domains — modern chess AIs, for example, are superhuman in the domain of chess. It’s natural to then ask what will happen when we build AIs that are superhuman at the tasks of scientific discovery, technological development, social manipulation, or strategic planning. And it’s natural to ask what will happen when we build AIs that outperform humans in all domains.” (ifanyonebuildsit.com/1/is-intelligence-a-meaningful-concept; an online supplement referenced in the book, p12)
So while AI is successful in “narrow domains”, the authors presume without discussion that such domains (chess, protein structure prediction, text production, etc.), which involve specific datasets and goals and in which correlation discovery is productive, are analogous to domains that have no clear datasets, methods, or goals (among other concerns), such as scientific discovery.
But leaving such essential matters unaddressed, Yudkowsky and Soares repeatedly insist AI will be smart (and soon):
“Superintelligent AI will predictably be developed at some point.” (p5)
“Ten years is not a lot of time to prepare for the dawn of machine superintelligence, even if we’re lucky enough to have that long.” (p204)
This belief in the coming intelligence of computers is odd considering they also believe humans do not know what “intelligence” is:
“humanity’s … state of knowledge about the workings of intelligence”, they say, is “dismal” (p207)
“This collection of challenges would look terrifying even if we understood the laws of intelligence; even if we understood how the heck these AIs worked… We don’t know.” (p176)
And also computers are not intelligent right now:
“the general reasoning abilities of o1 [advanced AI] are not up to human standards. … the big breakthroughs are produced by human researchers, not AIs (yet). … o1 is less intelligent than even the humans who don’t make big scientific breakthroughs. … Although o1 knows and remembers more than any single human, it is still in some important sense ‘shallow’ compared to a human twelve-year-old.” (p23)
So, they say, computers are not smart now, we don’t know what intelligence is, we don’t know how to make computers smart, but they certainly will be intelligent soon.
The authors go back and forth between thinking machines do auspicious things and thinking they are dumb, and between claiming humans don’t know what intelligence is and offering definitions of intelligence.
At one of the points where they feel they do have some grasp of intelligence, they offer a definition (it’s “predicting” and “steering”) that isn’t very satisfying. Gauging from their vague explanations, it seems that Yudkowsky and Soares’ idea of “intelligence”, in one part of the book (e.g. p20; ifanyonebuildsit.com/1/more-on-intelligence-as-prediction-and-steering), is that it consists more or less of the ability to do the sorts of things that computers do. This is a tautology committed by many people in the AI world who claim computers will be smart.
If your idea of intelligence is “the ability to do what computers do” then, true, “computers are intelligent”—that means: “computers can do what computers can do.”
The concept is flat. There’s no discussion of substantial problems of “intelligence”, but—according to Yudkowsky and Soares in other parts of the book—the lack of discussion of the problems doesn’t matter. To get to artificial intelligence, we don’t need to know what intelligence is:
“Humanity does not need to understand intelligence, in order to grow machines that are smarter than us.” (p39)
Machines that are smart don’t need to be built; they will be grown (the “growing” being their coming to carry out operations that reproduce correlations they’ve discovered, as opposed to being directly programmed to carry out some operation or other).
At several points in the book, they reiterate that we may not need to have answers to important questions, because the machines themselves might come up with the answers, for example:
“the path to disaster may be shorter, swifter, than the path to humans building superintelligence directly. It may instead go through AI that is smart enough to contribute substantially to building even smarter AI. In such a scenario, there is a possibility and indeed an expectation of a positive feedback cycle called an ‘intelligence explosion’: an AI makes a smarter AI that figures out how to make an even smarter AI, and so on.” (p27)
So there’s no need to come up with any theories of how intelligent machines will be built, because the machines themselves (which we don’t know how to build) will build intelligent machines. You don’t need to consider the engineering principles on which AI operates, attempt to figure out whether it’s capable of “thinking”, and thus come up with arguments to support your claims. The smart computers that Yudkowsky and Soares admit don’t exist will solve the problems.
At other times, they suggest researchers will figure it out. Again there is no talk of the principles of machine operations, or of any potential connection to feats of intelligence; the authors just say “it will be figured out”.
Other arguments include the implication that, because technological progress has been made in the past, machines will be smart.
This failure to address the details of computer intelligence, and the dismissal of the matter with flat claims like “researchers will figure it out”, “AI itself will figure it out”, or “progress has been made in the past”, tarnishes Yudkowsky and Soares’ book.
They hardly discuss anything substantial.
To address one last error: in another section, they repeat a claim popular among some who think AI will be intelligent, namely that we don’t know how it works (and that it’s therefore potentially capable of extraordinary things).
“A modern AI is a giant inscrutable mess of numbers. No humans have managed to look at those numbers and figure out how they’re thinking now, never mind deducing how AI thinking would change if AIs got smarter and started designing new AIs.” (p190)
“Nobody understands how those numbers make these AIs talk.” (p36)
The implication seems to be that you cannot say AI will not be intelligent, because you don’t know what it’s doing.
But the claim “we don’t know what it’s doing” is wrong. We do know what it’s doing. It’s a program written by humans and what it does is written down in the program.
When it’s said that we don’t understand what AI is doing, what is meant is that we are not able to follow the billions of calculations it performs and thus track the correlations it finds. True, it may find correlations that puzzle us because the number of computations is too large to trace. But we know that it is finding correlations.
“Not understanding” AI in the sense meant is not the same as e.g. “not understanding” gravity. Two objects interacting with no contact is a mystery. What AI does is not a mystery. I can’t do billions of calculations and thus follow the correlations it’s finding. But I understand that it’s finding correlations.
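The sense in which “we know what it’s doing” can be illustrated with a toy network. This is my own sketch (the numbers and structure are invented for illustration): every arithmetic step is written down in the program and can be printed; what fails at scale is only our ability to follow billions of such steps.

```python
# A tiny two-layer network: every operation it performs is explicit.
# Scale the weight lists up to billions of entries and nothing becomes
# mysterious in principle; it only becomes impractical to follow.
weights1 = [[0.5, -0.2], [0.1, 0.8]]  # input-to-hidden weights
weights2 = [0.3, -0.7]                # hidden-to-output weights

def run(inputs, trace=False):
    hidden = []
    for j, column in enumerate(zip(*weights1)):
        h = sum(w * x for w, x in zip(column, inputs))
        h = max(0.0, h)  # ReLU nonlinearity
        if trace:
            print(f"hidden[{j}] = {h}")  # every intermediate step is inspectable
        hidden.append(h)
    return sum(w * h for w, h in zip(weights2, hidden))

print(run([1.0, 2.0], trace=True))
```

Nothing here is unknowable in the way gravity once was; the program is the full description of what the machine does.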
But even if you’re not persuaded by that point, the thought “I don’t know what it’s doing but it will be intelligent” is not a very substantial one.
There are several other minor points Yudkowsky and Soares raise, but this article would become too lengthy if I addressed them all, and none of them alter the errors in the book.
To sum up, while Yudkowsky and Soares believe computers are shallower than twelve-year-olds, and think there’s a “missing piece”:
“For all we know, there are a dozen different factors that could serve as the ‘missing piece,’ such that, once an AI lab figures out that last puzzle piece, their AI really starts to take off and separate from the pack, like how humanity separated from the rest of the animals. The critical moments might come at us fast. We don’t necessarily have all that much time to prepare.” (ifanyonebuildsit.com/1/will-ai-cross-critical-thresholds-and-take-off)
Nowhere in the book do they touch on important matters of computer intelligence.
They don’t consider the operations AI carries out or attempt to discuss the potential scope of those operations. They don’t address the fact that there’s no evidence that correlation discovery or reproduction (which is what AI does) could lead to intelligent feats such as innovation. They repeatedly allude to some unknown future answer (“once an AI lab figures out that last puzzle piece”) but never discuss the problems or solutions.
They claim machines will be intelligent, but they present no argument.
.
.
.
Postscript: The book contains many strange claims, but perhaps the oddest passage is when the authors, after having tried to make their case for 185 pages, acknowledge that the scientific community does not take their viewpoints seriously:
“If there aren’t thousands of horrified scientists and engineers leaping up to beg governments to shut down those particular AI labs, it tells you that it’s not just a problem of individuals. It means that whole field of science is in the stage of folk theory and blind optimism.” (p185)
Could it be that the “whole field of science” is wrong? Or could it be that Yudkowsky and Soares’ theories, put forward with no supporting evidence, are not compelling?
.
.
.
Take a look at my other articles:
Ilya Sutskever refuses to answer the Q: How will AGI be built?
Scientific reports are misrepresented in AI 2027
What words mean to computers
.
Twitter/X: x.com/OscarMDavies