I support people poking at the foundations of these arguments. And I especially appreciated the discussion of bottlenecks, which I think is an important topic and often brushed aside in these discussions.
That said, I found that this didn’t really speak to the reasons I find most compelling in favour of something like the singularity hypothesis. Thorstad says in the second blog post:
> If each doubling of intelligence is harder to bring about than the last, then even if all AI research is eventually done by recursively self-improving AI systems, the pace of doubling will steadily slow.
I think this is wrong. (Though the paper itself avoids making the same mistake.) There are lots of coherent models where the effective research output of the AI systems is growing faster than the difficulty of increasing intelligence, leading to accelerating improvements despite each doubling of intelligence getting harder than the last. These are closely analogous to the models which can (depending on some parameter choices) produce a singularity in economic growth by assuming endogenous technological growth.
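To make this concrete, here is a toy model of my own construction (not from Thorstad's paper or any particular source): let research output grow as I**a and the difficulty of the next doubling grow as I**b, so the doubling rate is d(log2 I)/dt = I**a / I**b. Every doubling then costs more effort than the last (for b > 0), yet whenever a > b the doubling times shrink anyway. The parameter values below are illustrative only.

```python
def doubling_times(a=1.0, b=0.5, dt=1e-4, n_doublings=8):
    """Return the duration of each successive doubling of I.

    Toy model: d(log2 I)/dt = I**a / I**b. With b > 0 each doubling
    requires more effort, but with a > b output outpaces difficulty
    and the doublings nonetheless arrive faster and faster.
    """
    log2_I = 0.0          # start at I = 1
    t = 0.0
    times = []            # absolute time at which each doubling completes
    next_level = 1.0      # next target value of log2(I)
    while len(times) < n_doublings:
        I = 2.0 ** log2_I
        log2_I += (I**a / I**b) * dt   # doubling rate = output / difficulty
        t += dt
        if log2_I >= next_level:
            times.append(t)
            next_level += 1.0
    # convert absolute times to per-doubling durations
    return [times[0]] + [times[i] - times[i - 1] for i in range(1, len(times))]
```

Running this with the defaults, each successive doubling takes less time than the one before it, even though the "difficulty" term I**b rises throughout; with a < b the same code produces the slowdown Thorstad describes.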
In general I agree with Thorstad that the notion of “intelligence” is not pinned down enough to build tight arguments on it. But I think that he goes too far in inferring that the arguments aren’t there. Rather I think that the strongest versions of the arguments don’t directly route through an analysis of intelligence, but through something more like the economic analysis. If further investments in AI research drive the price-per-unit-of-researcher-year-equivalent down fast enough, this could lead to hyperbolic increases in the amount of effective research progress, and this could in turn lead to rapid increases in intelligence—however one measures that. I agree that this isn’t enough to establish that things will be “orders of magnitude smarter than humans”, but for practical purposes the upshot that “there will be orders of magnitude more effective intellectual labour from AI than from humans” does a great deal of work.
On the argument that extraordinary claims require extraordinary evidence, I’d have been interested to see Thorstad’s takes on the analyses which suggest that long-term historical growth rates are hyperbolic, e.g. Roodman (2020). I think of that as one of the more robust long-term patterns in world history. The hypothesis which says “this pattern will approximately continue” doesn’t feel to me to be extraordinary. You might say “ah, but that doesn’t imply a singularity in intelligence”, and I would agree—but I think that if you condition on this kind of future hyperbolic growth in the economy, the hypothesis that there will be a very large accompanying increase in intelligence (however that’s measured) also seems kind of boring rather than extraordinary.
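To spell out what “hyperbolic” means here: the simplest form of the pattern is superexponential growth dY/dt = g·Y^(1+s) with s > 0, which, unlike the exponential case s = 0, diverges at a finite time. A minimal sketch (my own illustration; the parameters are invented, not Roodman's fitted values):

```python
# Hyperbolic growth dY/dt = g * Y**(1+s). Separating variables gives the
# closed-form solution
#   Y(t) = Y0 * (1 - s * g * Y0**s * t) ** (-1/s),
# which blows up at the finite time t* = 1 / (s * g * Y0**s).

def Y(t, Y0, g, s):
    """Closed-form solution of dY/dt = g * Y**(1+s), valid for t < t*."""
    return Y0 * (1.0 - s * g * (Y0**s) * t) ** (-1.0 / s)

def singularity_time(Y0, g, s):
    """Finite time at which the hyperbolic solution diverges."""
    return 1.0 / (s * g * Y0**s)
```

For instance, with Y0 = 1, g = 1, s = 1 the solution reduces to Y(t) = 1/(1 − t), diverging at t* = 1; as s → 0 the divergence time recedes to infinity and growth becomes ordinary exponential. “This pattern approximately continues” is then the claim that s stays meaningfully above zero.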