I feel this claim is disconnected from the definition of the singularity given in the paper:
The singularity hypothesis begins with the supposition that artificial agents will gain the ability to improve their own intelligence. From there, it is claimed that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion in which artificial agents quickly become orders of magnitude more intelligent than their human creators. The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents
Further in the paper you write:
The singularity hypothesis posits a sustained period of accelerating growth in the general intelligence of artificial agents.
[Emphasis mine]. I can’t see any reference for either the original definition or the later addition of “sustained”.
Ah—that comes from the discontinuity claim. If you have accelerating growth that isn’t sustained for very long, you get something like population growth from 1800-2000, where the end result is impressive but hardly a discontinuity comparable to crossing the event horizon of a black hole.
(The only way to get around the assumption of sustained growth would be to posit one or a few discontinuous leaps towards superintelligence. But that’s harder to defend, and it abandons what was classically taken to ground the singularity hypothesis, namely the appeal to recursive self-improvement.)
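A rough numerical sketch of that contrast (a toy model; the initial doubling time, the speed-up factor, and the doubling counts are my own illustrative assumptions, not figures from the paper):

```python
# Toy model of sustained vs. unsustained accelerating growth.
# All numbers here are illustrative assumptions, not figures from the paper.

def years_for_doublings(n_doublings, first_doubling_years=10.0, speedup=0.8):
    """Total years needed for n successive doublings of capability when each
    doubling takes `speedup` times as long as the one before (a crude stand-in
    for recursive self-improvement making each further gain come faster)."""
    total, dt = 0.0, first_doubling_years
    for _ in range(n_doublings):
        total += dt
        dt *= speedup
    return total

# Unsustained comparison point: world population went from roughly 1 billion
# (1800) to roughly 6 billion (2000), i.e. fewer than 3 doublings in 200 years.
print(years_for_doublings(3))   # ~24 years in this toy model

# Sustained acceleration: 30 doublings (about a billion-fold increase) still
# fit inside the same window, because the doubling times shrink geometrically
# (the series converges to 50 years).
print(years_for_doublings(30))  # ~50 years
```

On this toy model the unsustained case yields only a single-digit multiple over two centuries, while sustained acceleration packs a billion-fold increase into a few decades; that is the sense in which only sustained accelerating growth delivers a genuine discontinuity.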
The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents
The discontinuity is a result of humans no longer being the smartest agents in the world, and no longer being in control of our own fate. After this point, we’ve crossed an event horizon beyond which the outcome is almost entirely unforeseeable.
If you have accelerating growth that isn’t sustained for very long, you get something like population growth from 1800-2000
If, after surpassing humans, intelligence “grows” exponentially for another 200 years, do you not think we’ve passed an event horizon? I certainly do!
If not, using the metric of single-agent intelligence (i.e. not the sum of intelligence across a group of agents), what point on an exponential growth curve that intersects human-level intelligence would you define as crossing the event horizon?
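For concreteness, a quick back-of-the-envelope calculation of what 200 further years of plain exponential growth would mean (the growth rates are arbitrary choices, just to show the orders of magnitude involved):

```python
# Back-of-the-envelope: how far above human level does plain exponential
# growth get in 200 years? Growth rates are arbitrary illustrative choices.
for annual_growth in (0.03, 0.10, 0.30):
    multiple = (1 + annual_growth) ** 200
    print(f"{annual_growth:.0%} per year for 200 years -> ~{multiple:.2e}x human level")
```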