Summary: Against the Singularity Hypothesis (David Thorstad)

This post summarizes “Against the Singularity Hypothesis,” a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three-part series on this paper.

Introduction

The effective altruism community has allocated substantial resources to catastrophic risks from AI, partly motivated by the singularity hypothesis about AI’s rapid advancement. While many[1] AI experts and philosophers have defended the singularity hypothesis, Thorstad argues the case for it is surprisingly thin.

Thorstad describes the singularity hypothesis in (roughly) the following three parts:[2]

  1. Self-Improvement: Artificial agents will become able to increase their own level of general intelligence.

  2. Intelligence Explosion: For a sustained period, their general intelligence will grow at an accelerating rate, creating exponential or hyperbolic growth that causes them to quickly surpass human intelligence by orders of magnitude.

  3. Singularity: This will produce a discontinuity in human history, after which humanity’s fate—living in a digital form, extinct, or powerless—depends largely on our interactions with artificial agents.

Growth

Thorstad offers five reasons to doubt the intelligence growth rate proposed by the singularity hypothesis.

  1. Extraordinary claims require extraordinary evidence: The claim that exponential or hyperbolic growth will occur for a prolonged period[3] is an extraordinary one, requiring many excellent reasons to suspect it’s correct. Until this high burden of evidence is met, it’s appropriate to place very low credence on the singularity hypothesis.

  2. Good ideas become harder to find: Generating new ideas becomes increasingly difficult as the low-hanging fruit is picked. For example, spending on drug and agricultural research has seen rapidly diminishing returns.[4] AI will likely be no exception, as hardware improvement (e.g., Moore’s law) is slowing. Even if research productivity diminishes only slightly per cycle, the effect compounds substantially over many cycles of self-improvement.[5] (See the toy calculation after this list.)

  3. Bottlenecks: No algorithm can run quicker than its slowest component, so, unless every component can be sped up at once, bottlenecks may arise. Even a single bottleneck would halt an intelligence explosion, and we should expect them to emerge because…

    1. There is limited room for improvement in certain processes (e.g., search algorithms)

    2. There are physical resource constraints (we shouldn’t expect supply chains’ output to increase a thousandfold or more very quickly)

  4. Physical constraints: Regardless of path, improving AI will eventually face intractable limitations from resource constraints and laws of physics, likely slowing intelligence growth. Consider Moore’s law’s demise:

    1. Circuits’ energy requirements have massively increased—increasing costs and overheating.[6]

    2. Capital is drying up, as semiconductor plant prices have skyrocketed.[7]

    3. Our best transistors are now only about ten atoms across, making manufacturing increasingly difficult and soon subject to quantum uncertainty.[8]

  5. Sublinearity: Underlying technological capabilities[9] have been improving rapidly, so if intelligence grew in proportion to them, simply continuing current trends would yield exponential intelligence growth. But intelligence grows sublinearly in these capabilities, not proportionally.

    1. Consider almost any performance metric plausibly correlated with intelligence—e.g., chess, Go, protein folding, weather and oil-reserve prediction. Historically, exponential increases in computing power have yielded merely linear gains on these metrics.[10] And if these performance metrics are misleading, proponents of the singularity hypothesis have provided no alternatives that show consistent exponential improvement.[11]

    2. Or consider Moore’s law: In the last 50 years, circuits’ transistor counts increased 33-millionfold, so if intelligence grew in proportion to hardware capacity, computers should now be 33 million times more intelligent than they were 50 years ago.[12]
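
To make the compounding worry (point 2) and the sublinearity worry (point 5) concrete, here is a toy calculation of my own rather than Thorstad’s; the 5% per-cycle productivity loss and the logarithmic performance model are purely illustrative assumptions.

```python
import math

# Illustrative assumption (not from the paper): each self-improvement cycle is
# 5% less productive than the last. Small per-cycle losses compound quickly.
relative_productivity = 0.95 ** 50
print(f"Research productivity after 50 cycles: {relative_productivity:.2f}")  # ~0.08

# Illustrative assumption: performance grows with the logarithm of hardware
# capacity. A 33-million-fold increase in transistor counts then corresponds to
# only ~25 doublings' worth of roughly linear performance gains.
doublings = math.log2(33_000_000)
print(f"Doublings in a 33-million-fold increase: {doublings:.1f}")  # ~25.0
```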

The Observational Argument

Chalmers (p. 20) argues against diminishing growth rates with what Thorstad calls the observational argument:

  1. Small differences in design capacities (e.g., Turing vs. an average human)

  2. Lead to large differences in resulting designs (e.g., computers vs. nothing important)

Thorstad has two objections to the observational argument:

  1. It’s local, not global: It relies on one observation (Turing), which is not evidence for a claim of sustained growth rates, because it merely samples a single point on a curve. It also considers growth rates in computing’s infancy—a period before low-hanging fruit is plucked, resources dry up, or bottlenecks arise.

  2. Intelligence ≠ design capacity: We can’t rule out that Turing’s peers were more intelligent than he was but simply less capable of design. So this example doesn’t show that increases in intelligence (rather than design capacity) bring proportional increases in the capacity to design intelligent systems. To maintain that it does, we’d have to explain why people more intelligent than Turing lacked his capacity to design such systems.

Recalcitrance and optimization power

Bostrom’s argument for the intelligence explosion relies on two claims (a rough sketch of how the two quantities combine follows the definitions):

  1. Optimization power will be high. Optimization power is the quality-weighted design effort toward improving artificial systems.

  2. Recalcitrance will be low. Recalcitrance is the amount of optimization power needed to increase intelligence by one unit at the current margin.
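
Bostrom’s informal formulation in Superintelligence is, roughly, that the rate of change in intelligence equals optimization power divided by recalcitrance. Written out in notation of my own choosing:

$$\frac{dI}{dt} \;=\; \frac{\text{optimization power}}{\text{recalcitrance}} \;=\; \frac{O(t)}{R(t)}$$

If a system’s own intelligence feeds back into design effort, so that $O(t)$ grows with $I(t)$ while $R(t)$ stays flat or falls, then $dI/dt$ grows with $I$ and intelligence increases exponentially or faster. Thorstad’s objections below target precisely these assumptions about $O$ and $R$.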

Thorstad divides Bostrom’s case[13] into three categories.

Plausible but over-interpreted scenarios

Bostrom’s first scenario: suppose the first human-level AI is an emulation of a human brain. We’d likely face high recalcitrance on the way to this emulation, but recalcitrance may drop afterward. Thorstad grants that recalcitrance would likely drop after such a breakthrough, but argues we have no reason to expect the drop to be sustained for long enough to produce an intelligence explosion.

Bostrom’s second scenario envisions large increases in the data available to artificial agents, bringing increased intelligence. Thorstad finds this plausible but insufficient to suddenly create superintelligence: these data are already part of humanity’s collective knowledge, and that knowledge has only gotten us so far.

Restating the core hope

Bostrom offers two more reasons for low recalcitrance:

  1. A clever software insight may produce superintelligence in a single leap.

  2. Artificial agents could improve themselves via rapid software insights once they reach a certain level of domain-general reasoning ability.

Thorstad argues both of these claims are antecedently implausible, so they stand in need of supporting evidence that Bostrom has yet to provide.

Misinterpreting history

Bostrom’s account of the intelligence explosion has two assumptions:

  1. Optimization power increases linearly in artificial systems’ intelligence.

  2. Recalcitrance decreases rapidly. Bostrom justifies this assumption by appeal to historical improvement rates from Moore’s law and software advances, which suggest agents’ intelligence has been doubling every 18 months. That pace, he claims, “entails recalcitrance declining as the inverse of system power” (Bostrom, p. 76); a short reconstruction of this step follows below. Thorstad finds the assumption implausible: the 18-month doubling time is inconsistent with the historically sublinear growth of intelligence from hardware improvements.[14] The last fifty years have seen diminishing intelligence returns from hardware improvements, suggesting recalcitrance has been increasing, not rapidly decreasing.
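
To reconstruct the step behind that quotation (my arithmetic, not Thorstad’s, and it assumes for illustration that optimization power stays roughly constant over the period): if intelligence doubles every $T = 18$ months, then

$$I(t) = I_0 \, 2^{t/T} \quad\Rightarrow\quad \frac{dI}{dt} = \frac{\ln 2}{T}\, I(t),$$

and since $\frac{dI}{dt} = O/R$, a roughly constant $O$ forces $R \propto 1/I(t)$: recalcitrance must fall in inverse proportion to the system’s power (here, its intelligence). Thorstad’s reply is that the historical record points the other way: hardware inputs have grown exponentially while intelligence-linked performance has grown only linearly, which on this model means recalcitrance has been rising.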

Philosophical Implications

  1. Uploading: The singularity hypothesis motivates much discussion of mind uploading. Its unlikelihood suggests postponing judgment on uploading’s difficult questions until we better understand its basic science, logistics, and philosophy.

  2. AI risk: The singularity hypothesis underpins the Bostrom-Yudkowsky argument for AI being an existential threat. Given the unlikelihood of the singularity hypothesis, we should reduce our concern about existential risk from AI, insofar as it’s driven by the Bostrom-Yudkowsky argument or similar considerations.

  3. Longtermism: Doubting the singularity hypothesis may provide empirical evidence against longtermism, since it lowers many people’s estimates of this century’s hinginess/perilousness and of the probability of existential risk during it.

Conclusion / Brief Summary

The singularity hypothesis posits sustained, accelerating growth in AI’s general intelligence driven by recurring self-improvement. Thorstad argues against this rapid, sustained growth rate:

  1. It’s an extraordinary claim that requires commensurately extraordinary evidence. Our credence should begin very low.

  2. Ideas for improvement will become harder to find as systems’ intelligence grows. Research has historically seen diminishing returns.

  3. Like most other growth processes, intelligence growth will likely be stalled by bottlenecks, such as limited room for improvement.

  4. Resource constraints and fundamental laws of physics will hinder intelligence growth. The end of Moore’s law is a good example.

  5. Intelligence grows sublinearly with improvements in underlying quantities such as memory and computation speed, meaning rapid intelligence growth may require infeasibly fast growth in these quantities.

Thorstad objects to two key philosophical arguments for the singularity hypothesis. He argues…

  1. Chalmers’ argument relies on a single, unrepresentative observation (Turing), meaning it applies locally, not globally. It also conflates intelligence with design capacity.

  2. Bostrom’s argument relies on…

    1. Plausible scenarios that are over-interpreted to support his argument more strongly than they reasonably can

    2. Implausible claims without evidence

    3. Misinterpreted historical trends

Thorstad believes doubting the singularity hypothesis gives us reason to:

  1. Postpone judgment on mind uploading until we better understand the basics.

  2. Reduce our estimates of existential risk from AI, insofar as they’re motivated by the Bostrom-Yudkowsky argument or similar considerations.

  3. Confront empirical evidence against longtermism, as doubting the singularity hypothesis reduces this century’s expected existential risk and hinginess/perilousness.

For more, see the paper itself or Thorstad’s blog, Reflective Altruism, which has a three-part series on this paper.

  1. ^

    See David Chalmers (2010; 2012), Nick Bostrom (2014), I.J. Good (1966), Ray Solomonoff (1985), and Stuart Russell (2019).

  2. ^

    This description is largely based on arguments by David Chalmers, Nick Bostrom, and I.J. Good.

  3. ^

    Under Chalmers’ account, the growth rate must be sustained at least until machines exceed humans in intelligence by as much as humans exceed mice. Under Richard Loosemore and Ben Goertzel’s account, it must last at least until machines become 2-3 orders of magnitude more generally intelligent than humans.

  4. ^

    “The number of FDA-approved drugs per billion dollars of inflation-adjusted research expenditure decreased from over forty drugs per billion in the 1950s to less than one drug per billion in the 2000s (Scannell et al. 2012). And in the twenty years from 1971 to 1991, inflation-adjusted agricultural research expenditures in developed nations rose by over sixty percent, yet growth in crop yields per acre dropped by fifteen percent (Alston et al. 2000).”

  5. ^

    And many cycles of self-improvement are likely necessary for the orders of magnitude increase in intelligence proposed by the singularity hypothesis.

  6. ^
  7. ^
  8. ^
  9. ^

    Thorstad uses the term “underlying quantities” to refer to quantities such as processing speed, memory, and search depth.

  10. ^
  11. ^

    Additionally, if the slow pace of performance increase arises from diminishing research productivity, the problem is merely relocated, not solved.

  12. ^

    “An immediate reaction to that claim is that it is implausible. Perhaps more carefully, if advocates of the singularity hypothesis want to make such claims, they need to do two things. First, they need to clarify the relevant notion of intelligence on which it makes sense to speak of an intelligence increase on this scale having occurred. And second, they need to explain how the relevant notion of intelligence can do the work that their view demands. For example, they need to explain why we should expect increases in intelligence to lead to proportional increases in the ability to design intelligent agents (Section 4) and why we should attribute impressive and godlike powers to agents several orders of magnitude more intelligent than the average human (Section 6).”

  13. ^

    “To the best of my knowledge, this section surveys every detailed suggestion from Chapter 4 of Superintelligence in support of low recalcitrance and high optimization power.”

  14. ^

    See bullet point number 5 in the “Growth” section of this summary or Section 3.5 of Thorstad’s “Against the Singularity Hypothesis.”