I’ve only read the summary, but my quick sense is that Thorstad is conflating two different versions of the singularity thesis (fast takeoff vs slow but still hyperbolic takeoff), and that these arguments fail to engage with the relevant considerations.
In particular, Erdil and Besiroglu (2023) show how hyperbolic growth (and thus a “singularity”, though I dislike that terminology) can arise even when there are strong diminishing returns to innovation and growth is sublinear with respect to innovation.
The paper does not claim that diminishing returns are a decisive obstacle to the singularity hypothesis (of course they aren’t: just strengthen your growth assumptions proportionally). It lists diminishing returns as one of five reasons to be skeptical of the singularity hypothesis, then asks for a strong argument in favor of the singularity hypothesis to overcome them. It reviews leading arguments for the singularity hypothesis and argues they aren’t strong enough to do the trick.
My point is that our best models of economic growth and innovation (such as semi-endogenous growth models, which build on the endogenous growth theory that Paul Romer won the Nobel prize for) straightforwardly predict hyperbolic growth under the assumptions that AI can substitute for most economically useful tasks and that AI labor is accumulable (in the technical sense that you can translate economic output into more AI workers). This is even though these models assume strong diminishing returns to innovation, in the vein of “ideas are getting harder to find”.
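To make this concrete, here is a minimal sketch of the mechanism, using toy functional forms of my own choosing (not the exact model from Erdil and Besiroglu):

```latex
% Ideas A, accumulable AI labor L, output Y = A L.
\[
\dot{A} = \delta\, A^{\phi} L^{\lambda}
\quad (\phi < 1 \text{: ideas getting harder to find},\ \lambda \le 1),
\qquad
\dot{L} = s\, Y = s\, A L .
\]
% Try a finite-time blowup ansatz A ~ (T - t)^(-alpha), L ~ (T - t)^(-beta).
% Matching exponents in the labor equation forces alpha = 1, and the idea
% equation then gives beta = (2 - phi)/lambda > 0. So a hyperbolic
% (finite-time) solution exists for any phi < 1 and any lambda in (0, 1];
% the reinvestment loop dL/dt ~ A L, not strong returns to research, is
% what produces the singularity.
```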
Furthermore, even if you weaken the assumptions of these models (for example assuming that AI won’t participate in scientific innovation, or that not every task can be automated), you can still get pretty intense accelerated growth (up to x10 greater than today’s frontier economies).
Accelerating growth has been the norm for most of human history, and growth rates of 10%/year or greater have been observed historically, e.g. in 2000s China, so I don’t think this is an unreasonable prediction to hold.
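To see the mechanism numerically, here is a toy integration of the model sketched above, with made-up parameter values; the bounded-growth cases with partial automation replace the labor-accumulation equation and aren’t attempted here:

```python
# Toy semi-endogenous model with accumulable AI labor (illustrative only):
#   ideas:  dA/dt = delta * A**phi * L**lam   (phi < 1: ideas harder to find)
#   labor:  dL/dt = s * Y, with output Y = A * L (output buys more AI workers)
delta, phi, lam, s = 1.0, 0.25, 0.75, 0.1   # made-up parameter values
A, L, t, dt = 1.0, 1.0, 0.0, 1e-4

samples = []
while t < 50.0 and A * L < 1e6:             # stop once output has exploded
    dA = delta * A**phi * L**lam
    dL = s * A * L
    samples.append((t, dA / A + dL / L))    # growth rate of output Y = A * L
    A += dA * dt
    L += dL * dt
    t += dt

for ti, g in samples[:: max(1, len(samples) // 8)]:
    print(f"t = {ti:6.2f}   growth rate of Y ~ {g:8.3f}")
# The growth rate rises without bound as t approaches a finite horizon:
# hyperbolic growth despite strong diminishing returns (phi < 1, lam < 1).
```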
This is a paper about the technological singularity, not the economic singularity.
The economic singularity is an active area of discussion among leading economists. It is generally regarded as a fringe view, but one that many are willing to take seriously enough to discuss. Professional economists are capable of carrying this discussion out on their own, and I am happy to leave them to it. If your institute would like to contribute to this discussion, I would advise you to publish your work in a leading economics journal and to present your work at reputable economics departments and conferences. This would probably involve employing researchers with PhDs in economics and appointments in economics departments to conduct the relevant research. If you do not want to move the scholarly literature, you should defer to the literature, which is at present quite skeptical of most theses under the heading of the economic singularity.
It is important not to equivocate between accelerating growth (a claim about the rate of change of growth rates over time) and accelerated growth (a claim about growth rates having jumped to a higher level). The claim about “pretty intense accelerated growth (up to x10 greater than today’s frontier economies)” is a claim of the second sort, whereas the singularity hypothesis is a claim of the first sort.
It is also important not to treat exponential growth (which grows at a constant relative rate) as accelerating growth. The claim about 10% annual growth rates is a claim about exponential growth, which is not accelerating growth.
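In symbols (my own notation, sketching the distinctions rather than quoting the paper’s definitions):

```latex
% Three claims that should not be run together, for output Y(t) with
% growth rate g(t) = \dot{Y}/Y:
\begin{align*}
&\text{accelerated growth:}  && g \text{ has jumped to a higher (constant) level;}\\
&\text{exponential growth:}  && Y(t) = Y_0 e^{g t}, \text{ with } g \text{ constant, so not accelerating;}\\
&\text{accelerating growth:} && \dot{g}(t) > 0, \text{ e.g. hyperbolic } Y(t) \propto (T - t)^{-\alpha},\\
&                            && \text{where } g(t) = \alpha/(T - t) \to \infty \text{ as } t \to T.
\end{align*}
```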
It is deeply misleading to suggest that accelerating economic growth “has been the norm for most of human history”. The fastest growth we have ever seen over a sustained period is exponential growth, and that is constant, not accelerating. Most historical growth rates were far slower than economic growth today. I think you might mean that we have transitioned over time from slower to faster growth modes. That is not to say that any of these growth modes have been types of accelerating growth.
This is a paper about the technological singularity, not the economic singularity.
I don’t see the distinction here. William Nordhaus used the term “economic singularity” in the same sense as the technological singularity. Economists generally believe that technological innovation is the main cause of long-term economic growth, making these two topics inherently interconnected.
It is deeply misleading to suggest that accelerating economic growth “has been the norm for most of human history”.
From my understanding of historical growth rate estimates this is wrong. (As in, it is not “deeply misleading”.)
Most historical growth rates were far slower than economic growth today. I think you might mean that we have transitioned over time from slower to faster growth modes.
To me, this sounds very similar to “economic growth has accelerated over time”. And it sounds like this has happened over a long total period of time.
Maybe you think it has been very discrete, with distinct phases (this seems unlikely to me, as the dominant driver is likely population growth and a better ability to develop technology, e.g. through reduced malnutrition). Or maybe you think it is key that the change in the rate of growth has historically been slow in sidereal time.
If your institute would like to contribute to this discussion, I would advise you to publish your work in a leading economics journal and to present your work at reputable economics departments and conferences.
I’m aware of various people considering trying to argue with economists about explosive growth (e.g. about the conclusions of this report).
In particular, the probability of explosive growth if you condition on human-level machine intelligence. More precisely, something like human-level machine intelligence and human-level robotic bodies, where the machine intelligence requires 10^14 FLOP per human-equivalent second (e.g. 1/10 of an H100), can run 5x faster than humans using current hardware, and the robotic bodies cost $20,000 (on today’s manufacturing base).
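As a rough sanity check of those hardware numbers (the 1e15 FLOP/s figure for an H100 is my assumption, roughly dense-FP16 peak throughput; the 1e14 figure is the premise above):

```python
# Back-of-the-envelope check of the hardware premises above.
H100_FLOPS = 1e15            # FLOP/s; assumed, roughly dense-FP16 peak throughput
HUMAN_EQUIV_FLOPS = 1e14     # FLOP per human-equivalent second (the premise)

fraction = HUMAN_EQUIV_FLOPS / H100_FLOPS
print(f"H100s per real-time human equivalent: {fraction:.2f}")  # -> 0.10, i.e. ~1/10

max_speedup = H100_FLOPS / HUMAN_EQUIV_FLOPS
print(f"max serial speedup on one H100: {max_speedup:.0f}x")    # -> 10x
# The "5x faster than humans" figure sits within this budget, leaving
# slack for utilization well below peak throughput.
```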
From my understanding they didn’t ever end up trying to do this.
Personally, I argued against this being a good use of time:
It seems unlikely to me that the economists would actually take these ideas seriously; their actual crux is more like “this is crazy, so I reject the premise”.
It doesn’t seem likely that the economists’ perspective would be very enlightening for us (e.g. I don’t expect they would have many useful contributions).
I don’t think it would be that useful to persuade arbitrary economists, from a credibility/influence perspective.
So, I think the main question here is a question of whether this is a good use of time.
I think it’s probably better to start by trying to talk with economists rather than trying to write a paper.
Looking at this paper now, I’m not convinced that Erdil and Besiroglu offer a good counterargument. Let me try to explain why and see if you disagree.
Their claim is about economic growth. It seems that they are exploring considerations for and against the claim that future AI systems will accelerate economic growth by an order of magnitude or more. But even if this were true, it doesn’t seem like it would result in a significant chance of extinction.
The main reason for believing the claim about economic growth doesn’t apply to stronger versions of the singularity hypothesis. As far as I can tell, the main reason to believe that this economic growth will happen is that AI might be able to automate most or all of the work done by human workers today. However, further argument is needed to claim that AI will also be smart enough to overpower us.
They give additional considerations against the singularity hypothesis. While the strongest arguments in favour of rapid economic growth don’t apply to the stronger singularity hypothesis, I think they present counterarguments which do apply. The most interesting ones for me were that regulation could slow down AI development, that we may not deploy powerful AI systems due to concerns about alignment, and that previous seemingly revolutionary technologies like computers, electricity, cars and aeroplanes arguably didn’t lead to large accelerations in economic growth.
I suspect a lot of the disagreement here is about whether the singularity hypothesis is along the lines of:
1. AI becomes capable enough to do lots or most economically useful tasks.
2. AI becomes capable enough to directly manipulate and overpower all humans, regardless of our efforts to resist and steer the future in a direction that is good for us.
I think of the singularity hypothesis as being along the lines of “growth will accelerate a lot”. I might operationalize this as predicting that the economy will grow by more than a factor of 10 within a decade. (This threshold is deliberately chosen to be pretty tame by singularity predictions, but pretty wild by regular predictions.)
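For scale: this operationalization implies an average growth rate of roughly 26% per year, well above the ~10% mentioned for 2000s China upthread, but tame next to fast-takeoff scenarios:

```latex
% Average annual growth rate implied by "more than 10x in a decade":
\[
(1+g)^{10} = 10
\quad\Longrightarrow\quad
g = 10^{1/10} - 1 \approx 0.259 ,
\]
% i.e. roughly 26\%/year sustained for ten years.
```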
I think this is pretty clearly stronger than your 1 but weaker than your 2. (It might be close to predicting that AI systems become much smarter than humans who lack access to computers or AI tools, but this is compatible with humans remaining easily and robustly in control.)
I think this growth-centred hypothesis is important and deserves a name, and “singularity” is a particularly good name for it. Your 1 and 2 also seem like they could use names, but I think they’re easier to describe with alternate names, like “mass automation of labour” or “existential risk from misaligned AI”.