One important datapoint is that progress on AI, as measured by various metrics, has usually been discontinuous the first time a capability appears, but growth then decelerates into continuous improvement. So that’s a point in favor of gradualism.
That’s a good point! Although I guess one reply you could make is that we shouldn’t expect paradigm shifts themselves to slow down, and indeed I think most of Yudkowsky’s probability mass is on something like “there is a paradigm shift in AI which rapidly unlocks the capabilities for general intelligence”, rather than e.g. continuous scaling from current systems.
Yeah, that might be a crux of mine. Although in that case he should have longer timelines, because paradigm shifts take quite a while to arrive, and aren’t nearly so fast that we could get such a conceptual breakthrough within 10 years. In fact it could take centuries to reach the new paradigm.
If we require entirely new paradigms or concepts to get AGI, then we can basically close up the field of AI and declare safety achieved.