Ah—that comes from the discontinuity claim. If you have accelerating growth that isn’t sustained for very long, you get something like population growth from 1800-2000, where the end result is impressive but hardly a discontinuity comparable to crossing the event horizon of a black hole.
(The only way to get around the assumption of sustained growth would be to posit one or a few discontinuous leaps towards superintelligence. But that's harder to defend, and it abandons what was classically taken to ground the singularity hypothesis, namely the appeal to recursive self-improvement.)
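To make the contrast concrete, here is a minimal numerical sketch, assuming a roughly population-like growth rate of about 0.9% per year for the unsustained case and a capability that doubles every two years for the sustained case. Both numbers are illustrative assumptions, not figures from the exchange.

```python
# Minimal sketch: accelerating-but-unsustained growth vs. sustained doubling.
# All rates and figures below are illustrative assumptions.

def unsustained_growth(t_years, rate=0.009):
    """Population-like compounding at ~0.9%/year (roughly the 1800-2000 average)."""
    return (1 + rate) ** t_years

def sustained_exponential(t_years, doubling_time=2.0):
    """Capability that keeps doubling every `doubling_time` years, indefinitely."""
    return 2 ** (t_years / doubling_time)

# World population went from roughly 1 billion (1800) to about 6 billion (2000):
# a ~6x multiplier over two centuries, impressive but not a discontinuity.
print(f"unsustained, 200 years: ~{unsustained_growth(200):.1f}x")

# Sustained doubling for just 40 years already gives a ~million-fold increase,
# which is the kind of curve the singularity hypothesis relies on.
print(f"sustained, 40 years: ~{sustained_exponential(40):,.0f}x")
```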
The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents.
The discontinuity is a result of humans no longer being the smartest agents in the world, and no longer being in control of our own fate. After this point, we've crossed an event horizon beyond which the outcome is almost entirely unforeseeable.
If you have accelerating growth that isn’t sustained for very long, you get something like population growth from 1800-2000
If, after surpassing humans, intelligence “grows” exponentially for another 200 years, do you not think we’ve passed an event horizon? I certainly do!
If not, using the metric of single-agent intelligence (i.e. not the sum of intelligence in a group of agents), what point on an exponential growth curve that intersects human-level intelligence would you define as crossing the event horizon?
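For a sense of the magnitudes behind this question, a small sketch, assuming purely for illustration that single-agent intelligence crosses human level at t = 0 and then doubles every two years:

```python
# Sketch of the magnitudes behind the question above. The parity point and the
# two-year doubling time are illustrative assumptions, not claims from the thread.

def capability(t_years, doubling_time=2.0):
    """Single-agent intelligence, in multiples of human level, t years after parity."""
    return 2 ** (t_years / doubling_time)

for t in (10, 50, 200):
    print(f"{t:>3} years after parity: ~{capability(t):.3g}x human level")

# 200 years of sustained doubling lands around 1.3e30x human level; the question
# is at what point along such a curve the "event horizon" is crossed, and whether
# growth like this would in fact be sustained.
```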