This is a summary of the GPI Working Paper “Against the singularity hypothesis” by David Thorstad (published in Philosophical Studies). The summary was written by Riley Harris.
The singularity is a hypothetical future event in which machines rapidly become significantly smarter than humans. The idea is that we might invent an artificial intelligence (AI) system that can improve itself. After a single round of self-improvement, that system would be better equipped to improve itself than before. This process might repeat many times, and each time the AI system would become more capable and better equipped to improve itself even further. At the end of this (perhaps very rapid) process, the AI system could be much smarter than the average human. Philosophers and computer scientists have argued that we should take the possibility of a singularity seriously (Solomonoff 1985, Good 1966, Chalmers 2010, Bostrom 2014, Russell 2019).
It is characteristic of the singularity hypothesis that AI will take at most years, and perhaps only months, to become many times more intelligent than even the most intelligent human.[1] Such extraordinary claims require extraordinary evidence. In the paper “Against the singularity hypothesis”, David Thorstad argues that we do not have enough evidence to justify belief in the singularity hypothesis, and that we should consider it unlikely unless stronger evidence emerges.
Reasons to think the singularity is unlikely
Thorstad is sceptical that machine intelligence can grow quickly enough to justify the singularity hypothesis. He gives several reasons for this.
Low-hanging fruit. New ideas and technological improvements tend to get harder to find over time. For example, consider “Moore’s law”, which is (roughly) the observation that hardware capacity doubles every two years. Between 1971 and 2014, Moore’s law was maintained only through an astronomical increase in the amount of capital and labour invested in semiconductor research (Bloom et al. 2020): according to one leading estimate, research productivity in this area fell eighteen-fold over the period. While some features of future AI systems may allow them to make progress faster than human scientists and engineers, they are still likely to face diminishing returns, because the easiest discoveries will already have been made and only more difficult ideas will be left.
Bottlenecks. AI progress relies on improvements in search, computation, storage and so on (and each of these areas breaks down into many subcomponents). Progress could be held back by any one of them: if even one essential component is difficult to speed up, then overall AI progress will be much slower than we would naively expect. The classic metaphor is the rate at which liquid can leave a bottle, which is limited by the narrow neck near the opening. Likewise, AI systems may run into bottlenecks if any essential component cannot be improved quickly (see Aghion et al. 2019).
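One way to make the bottleneck worry concrete, in the spirit of Aghion et al. (2019) (the notation here is illustrative, not taken from Thorstad’s paper): suppose overall capability $Y$ combines $n$ essential inputs $X_1, \dots, X_n$ with elasticity of substitution $\sigma < 1$,

$$Y = \Big( \sum_{i=1}^{n} \alpha_i X_i^{\frac{\sigma - 1}{\sigma}} \Big)^{\frac{\sigma}{\sigma - 1}}.$$

When the inputs are poor substitutes ($\sigma < 1$), the slowest-growing input comes to dominate the sum, and in the limit $\sigma \to 0$ we get $Y = \min_i X_i$: aggregate capability grows no faster than its slowest essential component, however explosively the other components improve.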
Constraints. Resource and physical constraints may also limit the rate of progress. To take an analogy, Moore’s law becomes harder to maintain because it is expensive, physically difficult and energy-intensive to cram ever more transistors into the same space. We might likewise expect AI progress to slow eventually, as physical and financial constraints pose ever greater barriers.
Sublinear growth. How do improvements in hardware translate into intelligence growth? Thompson and colleagues (2022) find that exponential hardware improvements translate into only linear gains in performance on problems such as chess, Go, protein folding, weather prediction and the modelling of underground oil reservoirs. Over the past 50 years, the number of transistors in our best chips has risen from 3,500 in 1972 to 114 billion in 2022. If intelligence grew linearly with transistor count, computers would have become roughly 33 million times more intelligent over this period. Instead, the evidence suggests that intelligence growth is sublinear in hardware growth.
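As a rough illustration of the gap between these two growth modes (a toy calculation of my own, using the figures quoted above rather than anything from Thorstad or Thompson et al.):

```python
import math

# Transistor counts quoted in the summary above.
transistors_1972 = 3_500
transistors_2022 = 114_000_000_000

hardware_multiplier = transistors_2022 / transistors_1972
print(f"Hardware improved by a factor of about {hardware_multiplier:,.0f}")
# -> roughly 33 million

# If capability scaled linearly with transistor count, systems would now be
# ~33 million times more capable. If instead exponential hardware growth buys
# only linear performance gains (one stylised reading of Thompson et al.'s
# finding), the relevant quantity is the number of hardware doublings:
doublings = math.log2(hardware_multiplier)
print(f"...but that is only about {doublings:.0f} hardware doublings")
# -> roughly 25 'units' of linear improvement
```

On the second reading, fifty years of Moore’s law buys about 25 increments of capability rather than a 33-million-fold increase.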
Arguments for the singularity hypothesis
Two key arguments have been given in favour of the singularity hypothesis. Thorstad analyses them and finds that they are not particularly strong.
Observational argument. Chalmers (2010) argues for the singularity hypothesis based on the proportionality thesis: that increases in intelligence always lead to at least proportionate increases in the ability to design intelligent systems. He supports this only briefly, observing, for example, that a small difference in design capability between Alan Turing and the average human led to a large difference in the ability of the systems they were able to design (the computer versus hardly anything of importance). The main problem with this argument is that it is local rather than global: it gives evidence that there are points at which the proportionality thesis holds, whereas supporting the singularity hypothesis would require the proportionality thesis to hold at every point along the way. In addition, Chalmers conflates design capabilities and intelligence.[2] Overall, Thorstad concludes that Chalmers’s argument fails and that the observational argument does not vindicate the singularity hypothesis.
Optimisation power argument. Bostrom (2014) claims that there will be a large amount of quality-weighted design effort applied to improving artificial systems, which will result in large increases in intelligence. He gives a rich and varied series of examples to support this claim. However, Thorstad finds that many of these examples are just plausible descriptions of artificial intelligence improving rapidly, not evidence that this will happen. Other examples end up being restatements of the singularity hypothesis (for example, that we could be only a single leap of software insight from an intelligence explosion). Thorstad is sceptical that these restatements provide any evidence at all for the singularity hypothesis.
One of the core parts of the argument is initially promising but relies on a misunderstanding. Bostrom claims that roughly constant design effort has historically led to systems doubling their capacity every 18 months. If this were true, then boosting a system’s intelligence could allow it to design a new system with even greater intelligence, with that second boost bigger than the first. This would allow intelligence to increase ever faster. But, as discussed above, it took ever-increasing design effort to sustain this improvement in hardware, and AI systems themselves have progressed much more slowly. Overall, Thorstad remains sceptical that Bostrom has given any strong evidence or argument in favour of the singularity hypothesis.
Implications for longtermism and AI Safety
The singularity hypothesis implies that the world will be rapidly transformed in the future. This idea is used by Bostrom (2012, 2014) and Yudkowsky (2013) to argue that advances in AI could threaten human extinction or permanently and drastically destroy humanity’s potential for future development. Increased scepticism about the singularity hypothesis might naturally lead to increased scepticism about their conclusion: that we should be particularly concerned about existential risk from artificial intelligence. This may also have implications for longtermism, which uses existential risk mitigation (and AI risk mitigation in particular) as a central example of a longtermist intervention, at least insofar as that concern is driven by something like the above argument from Bostrom and Yudkowsky.
References
Philippe Aghion, Benjamin Jones, and Charles Jones (2019). Artificial intelligence and economic growth. In The economics of artificial intelligence: An agenda, pages 237–282. Edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb. University of Chicago Press.
Nicholas Bloom, Charles Jones, John Van Reenen, and Michael Webb (2020). Are ideas getting harder to find? American Economic Review 110, pages 1104–44.
Nick Bostrom (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines 22, pages 71–85.
Nick Bostrom (2014). Superintelligence. Oxford University Press.
David Chalmers (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies 17 (9–10), pages 7–65.
I.J. Good (1966). Speculations concerning the first ultraintelligent machine. Advances in Computers 6, pages 31–88.
Stuart Russell (2019). Human compatible: Artificial intelligence and the problem of control. Viking.
Ray Solomonoff (1985). The time scale of artificial intelligence: Reflections on social effects. Human Systems Management 5, pages 149–53.
Neil Thompson, Shuning Ge, and Gabriel Manso (2022). The importance of (exponentially more) computing power. arXiv preprint.
Eliezer Yudkowsky (2013). Intelligence explosion microeconomics. Machine Intelligence Research Institute Technical Report 2013-1.
[1] In particular, Chalmers (2010) claims that future AI systems might be as far beyond the most intelligent human as the most intelligent human is beyond a mouse. Bostrom (2014) claims this process could happen in a matter of months or even minutes.
[2] Some of Turing’s contemporaries were likely more intelligent than him, yet they did not design the first computer.
I’ve only read the summary, but my quick sense is that Thorstad is conflating two different versions of the singularity thesis (fast takeoff vs slow but still hyperbolic takeoff), and that these arguments fail to engage with the relevant considerations.
In particular, Erdil and Besiroglu (2023) show how hyperbolic growth (and thus a “singularity”, though I dislike that terminology) can arise even when there are strong diminishing returns to innovation and sublinear growth with respect to innovation.
The paper does not claim that diminishing returns are a decisive obstacle to the singularity hypothesis (of course they aren’t: just strengthen your growth assumptions proportionally). It lists diminishing returns as one of five reasons to be skeptical of the singularity hypothesis, then asks for a strong argument in favor of the singularity hypothesis to overcome them. It reviews leading arguments for the singularity hypothesis and argues they aren’t strong enough to do the trick.
My point is that our best models of economic growth and innovation (such as the semi-endogenous growth model in the tradition of Paul Romer’s Nobel-prize-winning work) straightforwardly predict hyperbolic growth under the assumptions that AI can substitute for most economically useful tasks and that AI labor is accumulable (in the technical sense that you can translate economic output into more AI workers). This is even though these models assume strong diminishing returns to innovation, in the vein of “ideas are getting harder to find”.
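A minimal simulation of that mechanism, to show the shape of the claim (this is my own toy sketch, not code or parameter values from Erdil and Besiroglu (2023); the function name and all numbers are arbitrary):

```python
# Toy semi-endogenous growth model with accumulable (AI) labour.
#
#   ideas:   dA/dt = delta * A**phi * L**lam   (phi < 1: ideas get harder to find)
#   output:  Y     = A * L
#   labour:  dL/dt = s * Y                     (output can be reinvested in AI workers)

def times_to_each_tenfold(phi=0.5, lam=1.0, delta=0.02, s=0.1, dt=0.001, max_t=200.0):
    """Euler-integrate the model; return the times at which output passes 10, 100, ..."""
    A, L, t = 1.0, 1.0, 0.0
    next_threshold, crossings = 10.0, []
    while t < max_t and len(crossings) < 10:
        Y = A * L
        while Y >= next_threshold and len(crossings) < 10:
            crossings.append(t)
            next_threshold *= 10.0
        A += delta * A**phi * L**lam * dt
        L += s * Y * dt
        t += dt
    return crossings

if __name__ == "__main__":
    previous = 0.0
    for k, t in enumerate(times_to_each_tenfold(), start=1):
        print(f"output first exceeds 10^{k:<2} at year {t:7.2f} "
              f"(gap since previous threshold: {t - previous:6.2f} years)")
        previous = t
```

The gaps between successive tenfold increases in output shrink over time, i.e. growth accelerates even though phi < 1 builds in “ideas are getting harder to find”. If you instead hold L fixed rather than reinvesting output into it, the same equations give steadily declining growth, which is the low-hanging-fruit regime the paper describes; so the crux is whether AI labour really is accumulable.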
Furthermore, even if you weaken the assumptions of these models (for example, assuming that AI won’t participate in scientific innovation, or that not every task can be automated), you can still get pretty intense accelerated growth (up to x10 greater than today’s frontier economies).
Accelerating growth has been the norm for most of human history, and growth rates of 10%/year or greater have been historically observed in, e.g. 2000s China, so I don’t think this is an unreasonable prediction to hold.
This is a paper about the technological singularity, not the economic singularity.
The economic singularity is an active area of discussion among leading economists. It is generally regarded as a fringe view, but one that many are willing to take seriously enough to discuss. Professional economists are capable of carrying this discussion out on their own, and I am happy to leave them to it. If your institute would like to contribute to this discussion, I would advise you to publish your work in a leading economics journal and to present your work at reputable economics departments and conferences. This would probably involve employing researchers with PhDs in economics and appointments in economics departments to conduct the relevant research. If you do not want to move the scholarly literature, you should defer to the literature, which is at present quite skeptical of most theses under the heading of the economic singularity.
It is important not to equivocate between accelerating growth (a claim about the rate of change of growth rates over time) and accelerated growth (a claim about growth rates having jumped to a higher level). The claim about “pretty intense accelerated growth (up to x10 greater than today’s frontier economies)” is a claim of the second sort, whereas the singularity hypothesis is a claim of the first sort.
It is also important not to treat exponential growth (which grows at a constant relative rate) as accelerating growth. The claim about 10% annual growth rates is a claim about exponential growth, which is not accelerating growth.
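To make the distinction concrete (my own gloss on the terms, not wording from the paper): exponential growth

$$x(t) = x_0 e^{gt}, \qquad \frac{\dot{x}}{x} = g,$$

has a constant relative growth rate, however large $g$ is. Accelerating growth means $\dot{x}/x$ itself rises over time; the limiting case is hyperbolic growth,

$$x(t) = \frac{C}{T - t}, \qquad \frac{\dot{x}}{x} = \frac{1}{T - t},$$

whose growth rate increases without bound as $t \to T$ (a finite-time “singularity”). A 10% annual growth rate fixes $g$ in the first expression; it says nothing about the second.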
It is deeply misleading to suggest that accelerating economic growth “has been the norm for most of human history”. The fastest growth we have ever seen over a sustained period is exponential growth, and that is constant, not accelerating. Most historical growth rates were far slower than economic growth today. I think you might mean that we have transitioned over time from slower to faster growth modes. That is not to say that any of these growth modes have been types of accelerating growth.
I don’t see the distinction here. William Nordhaus used the term “economic singularity” in the same sense as the technological singularity. Economists generally believe that technological innovation is the main cause of long-term economic growth, making these two topics inherently interconnected.
From my understanding of historical growth rate estimates, this is wrong. (As in, it is not “deeply misleading”.)
To me, this sounds very similar to “economic growth has accelerated over time”. And it sounds like this has happened over a long total period of time.
Maybe you think growth has come in sharply discrete phases (this seems unlikely to me, as the dominant driver is likely to have been population growth and a growing capacity for technological development (e.g. reduced malnutrition)). Or maybe you think that it is key that the change in the rate of growth has historically been slow in sidereal time.
I’m aware of various people considering trying to argue with economists about explosive growth (e.g. about the conclusions of this report).
In particular, the probability of explosive growth if you condition on human-level machine intelligence. More precisely, something like human-level machine intelligence and human-level robotic bodies, where the machine intelligence requires 10^14 FLOP per human-equivalent second (e.g. 1/10 of an H100), can run 5x faster than humans using current hardware, and the robotic bodies cost $20,000 (on today’s manufacturing base).
From my understanding they didn’t ever end up trying to do this.
Personally, I argued against this being a good use of time:
- It seems unlikely to me that the economists would actually take these ideas seriously; their actual crux is more like “this is crazy, so I reject the premise”.
- It doesn’t seem likely that the economists’ perspective would be very enlightening for us (e.g. I don’t expect they would have many useful contributions).
- I don’t think persuading arbitrary economists seems that useful from a credibility/influence perspective.
So I think the main question here is whether this is a good use of time.
I think it’s probably better to start by trying to talk with economists rather than trying to write a paper.
Looking at this paper now, I’m not convinced that Erdil and Besiroglu offer a good counterargument. Let me try to explain why and see if you disagree.
Their claim is about economic growth. It seems that they are exploring considerations for and against the claim that future AI systems will accelerate economic growth by an order of magnitude or more. But even if this were true, it doesn’t seem like it would result in a significant chance of extinction.
The main reason for believing the claim about economic growth doesn’t apply to stronger versions of the singularity hypothesis. As far as I can tell, the main reason to believe that this economic growth will happen is that AI might be able to automate most or all of the work done by human workers today. However, further argument is needed to claim that AI will also be smart enough to overpower us.
They give additional considerations against the singularity hypothesis. While the strongest arguments in favour of rapid economic growth don’t apply to the stronger singularity hypothesis, I think they present arguments against it which do apply. The most interesting ones for me were that regulation could slow down AI development, that we may not deploy powerful AI systems due to concerns about alignment, and that previous seemingly revolutionary technologies like computers, electricity, cars and aeroplanes arguably didn’t lead to large accelerations in economic growth.
I suspect a lot of the disagreement here is about whether the singularity hypothesis is along the lines of:
1. AI becomes capable enough to do many or most economically useful tasks.
2. AI becomes capable enough to directly manipulate and overpower all humans, regardless of our efforts to resist and steer the future in directions that are good for us.
I think of the singularity hypothesis as being along the lines of “growth will accelerate a lot”. I might operationalize this as predicting that the economy will grow by more than a factor of 10 within a decade. (This threshold is deliberately chosen to be pretty tame by singularity predictions, but pretty wild by regular predictions.)
I think this is pretty clearly stronger than your 1 but weaker than your 2. (It might be close to predicting that AI systems become much smarter than humans who lack access to computers or AI tools, but this is compatible with humans remaining easily and robustly in control.)
I think this growth-centred hypothesis is important and deserves a name, and “singularity” is a particularly good name for it. Your 1 and 2 also seem like they could use names, but I think they’re easier to describe with alternate names, like “mass automation of labour” or “existential risk from misaligned AI”.
FYI for interested readers: a different summary of this paper was previously posted on the forum by Nicholas Kruus. There is a bit of discussion of the paper in the comments there.
Perhaps it’s missing from the summary, but there is trivially a much stronger argument that doesn’t seem addressed here.
Humans must be pretty close to the stupidest possible things that could design something smarter than themselves.
This is especially true in the domain of scientific R&D, where we have even our minimal level of capability only because intelligence turns out to generalize from, e.g., basic tool-use and social modeling to other things.
We know that we can pretty reliably create systems that are superhuman in various domains once we figure out a proper training regime for those domains: AlphaZero is vastly superhuman at chess, Go, etc., and GPT-3 is superhuman at next-token prediction (to say nothing of GPT-4 or subsequent systems).
The nature of intelligent search processes is to route around bottlenecks. The argument re: bottlenecks proves too much, and doesn’t even seem to stand up historically. Why did bottlenecks fail to stymie superhuman capabilities in the domains where we’ve achieved them?
Humanity, today, could[1] embark on a moderately expensive project to enable wide-scale genomic selection for intelligence, which within a single generation would probably produce a substantial number of humans smarter than any who’ve ever lived. Humans are not exactly advantaged in their ability to iterate here, compared to AI.
The general shape of Thorstad’s argument doesn’t really make it clear what sort of counterargument he would admit as valid. Like, yes, humans have not (yet) kicked off any process of obvious, rapid, recursive self-improvement. That is indeed evidence that it might take humans a few decades after they invent computing technology to do so. What evidence, short of us stumbling into the situation under discussion, would be convincing?
(Social and political bottlenecks do exist, but the technology is pretty straightforward.)
This seems false to me.
I think it’s helpful to consider the interaction of compute, algorithms, data, and financial/human capital.
I’m not sure that many people think that “search” or “storage” are important levers for computing.
I guess RAM isn’t implausible as a bottleneck, but I probably wouldn’t decouple that from chip progress more broadly.
Compute: progress seems driven by many small improvements rather than one large change. There are many ideas that might work when designing chips, manufacturing equipment, etc., and progress in general seems to be fairly steady, regular, and distributed.
Algorithms: again, the story I tend to hear from people inside the labs, as well as on various podcasts and Twitter, is that many ideas might work, and it’s mostly a case of testing them empirically in a compute-efficient manner. Performance gains can also come from multiple, approximately independent places, e.g. performance engineering, better implementations of components, architectural improvements, and hyperparameter search.
Data: I think data is a more plausible bottleneck; it seems to me that either synthetic generation works or it doesn’t.
That said, my main issue is that you shouldn’t consider any of these factors as “independent bottlenecks.” If there isn’t enough data, you can try to develop more data-efficient algorithms or dedicate more compute to producing more data. If you’re struggling to make progress on algorithms, you can just keep scaling up, throwing more data and compute at the problem, etc.
I do think bottlenecks may exist, and identifying them is an important step in determining how to forecast, regulate, and manage AI progress. But I don’t think interactions between AI progress inputs should be used as an argument against a fast take-off, or against approximately monotonically increasing rates of AI progress up to extremely powerful AI.
Executive summary: The singularity hypothesis, which posits that AI will rapidly become much smarter than humans, is unlikely given the lack of strong evidence and the presence of factors that could slow AI progress.
Key points:
- The singularity hypothesis suggests AI could become significantly smarter than humans in a short timeframe through recursive self-improvement.
- Factors like diminishing returns, bottlenecks, resource constraints, and sublinear intelligence growth relative to hardware improvements make the singularity less likely.
- Key arguments for the singularity, the observational argument and the optimization power argument, are not particularly strong upon analysis.
- Increased skepticism of the singularity hypothesis may reduce concern about existential risk from AI and impact longtermist priorities.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.