Daniel, you provide good evidence that we will experience a period of SIE. Still, I think we can make a second argument that this period of SIE will come to an end. Perhaps it even points towards a second way to assess the consequences of SIE.
My notion of asymptotic performance is easiest to see on a much simpler problem. Consider the task of doing parallel multiplication in silicon. Over the years we have definitely improved multiplication performance in speed and chip area (for a fixed lithography tech level). If the speed of human innovation had somehow been proportional to the current multiplication speed, then we would have seen a period of SIE for chip multipliers. Still, as our designs approached the (unknown) asymptotic limit of multiplication performance, this explosion would have leveled off again.
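To make the multiplier analogy concrete, here is a minimal toy model (my own sketch, with made-up constants `p0`, `p_star`, and `k`, not anything from the report): if the rate of improvement is proportional to current performance but damped near an assumed asymptote `p_star`, the trajectory is logistic, with an explosive early phase that levels off.

```python
# Toy model (illustrative only): improvement rate is proportional to current
# performance P, damped as designs approach an assumed asymptotic limit P_star.
#   dP/dt = k * P * (1 - P / P_star)

def simulate(p0=1.0, p_star=1000.0, k=0.5, dt=0.1, steps=200):
    """Forward-Euler integration of the toy ODE; returns the trajectory."""
    traj = [p0]
    p = p0
    for _ in range(steps):
        p += k * p * (1.0 - p / p_star) * dt
        traj.append(p)
    return traj

traj = simulate()
for step in (0, 50, 100, 150, 200):
    print(f"step {step:3d}: performance ~ {traj[step]:7.1f}")
```

Early on the damping term is negligible and growth is effectively exponential; the "explosion" ends only because the trajectory nears `p_star`.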
In the same way, if we fix the task of running an AI agent capable of ASARA and fix the hardware, then there must exist an asymptotically best design that is theoretically possible. From this it follows that the period of SIE must stop as designs approach this asymptote.
This raises an interesting secondary question: how many multiples exist between our first ASARA system and the asymptotically best one? If that gap is 10x, it implies one profile for SIE; if it is 10,000x, a very different one (roughly log2(10) ≈ 3.3 doublings of headroom versus log2(10,000) ≈ 13.3). In the end it might be this multiple, rather than the velocity of SIE, that has greater sway over its societal outcome.

Thoughts on this?

--Dan
We agree there is some limit. We discuss this in the report (from footnote 26):
Physical limits come into play for a couple of reasons. First, the hardware stock introduces limits on how fast improvements can be made to software. For instance, signals can only travel so fast within the hardware, and software improvements cannot occur faster than they can be implemented in the hardware. Second, given a fixed stock of physical hardware, there is an (incredibly large, yet still technically) finite number of distinct algorithms that could be run on the hardware. The finite number of possible algorithms sets a fundamental limit on how intelligent an AI system on the hardware could be. As these physical limits are approached, the rate of software improvement (and also r) must decrease. It’s also possible that other limits exist well below these, or that r will decrease well before these limits are approached for other reasons.
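For what it's worth, one way to make the footnote's finiteness point precise, under my own idealization that the fixed hardware's program and state fit in N bits:

```latex
% Idealization (mine, not the report's): the fixed hardware stores
% program + state in N bits. Then the set A of distinct algorithms
% it can run is finite:
\[
  |A| \le 2^{N},
\]
% so any capability metric f : A -> R attains a finite maximum,
% i.e. an asymptotically best design exists:
\[
  \exists\, a^{*} \in A : \quad f(a^{*}) = \max_{a \in A} f(a) < \infty .
\]
```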
Determining how high this limit is above the first ASARA systems is a very difficult question. That said, we think there are reasons to suspect the limit is far above the first ASARA systems:
There isn’t a good reason to expect this limit to be only slightly above the first ASARA systems, which may be imagined as approximately substituting for human workers within relevant cognitive domains. Humans are presumably not the most intelligent lifeform possible, but simply the first lifeform on Earth intelligent enough to engage in activities like science and engineering. The human range for cognitive attributes is wide, and humans continue to gain from an expanding population and specialization, as well as various cultural developments, indicating no fundamental limit in sight. In addition, ASARA will most likely be trained with orders of magnitude more computational power than estimates of how many “computations” the human brain uses over a human’s development into adulthood, suggesting there’s significant room for efficiency improvements in training ASARA systems to match human learning.
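A back-of-envelope version of that last claim (every constant below is my own assumption, using commonly cited order-of-magnitude ballparks rather than figures from the report):

```python
# Rough comparison of training compute vs. lifetime human-brain compute.
# All constants are assumptions (order-of-magnitude ballparks), not data
# from the report.

BRAIN_FLOP_PER_S = 1e15                        # assumed brain compute rate
SECONDS_TO_ADULTHOOD = 18 * 365 * 24 * 3600    # ~18 years, ~5.7e8 seconds
TRAINING_RUN_FLOP = 1e26                       # assumed frontier-scale training run

lifetime = BRAIN_FLOP_PER_S * SECONDS_TO_ADULTHOOD
print(f"lifetime brain compute  ~ {lifetime:.1e} FLOP")                  # ~5.7e23
print(f"training run / lifetime ~ {TRAINING_RUN_FLOP / lifetime:.0f}x")  # ~176x
```

On these assumed numbers the training run exceeds the lifetime figure by a couple of orders of magnitude, which is the "significant room for efficiency improvements" the paragraph gestures at.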