Directly relevant quotes from the articles for easier reference:
Paul Christiano:
“This story seems consistent with the historical record. Things are usually preceded by worse versions, even in cases where there are weak reasons to expect a discontinuous jump. The best counterexample is probably nuclear weapons. But in that case there were several very strong reasons for discontinuity: physics has an inherent gap between chemical and nuclear energy density, nuclear chain reactions require a large minimum scale, and the dynamics of war are very sensitive to energy density.”
“I’m not aware of many historical examples of this phenomenon (and no really good examples)—to the extent that there have been “key insights” needed to make something important work, the first version of the insight has almost always either been discovered long before it was needed, or discovered in a preliminary and weak version which is then iteratively improved over a long time period. ”
“Over the course of training, ML systems typically go quite quickly from “really lame” to “really awesome”—over the timescale of days, not months or years.
But the training curve seems almost irrelevant to takeoff speeds. The question is: how much better is your AGI than the AGI that you were able to train 6 months ago?”
AI Impacts:
“Discontinuities larger than around ten years of past progress in one advance seem to be rare in technological progress on natural and desirable metrics. We have verified around five examples, and know of several other likely cases, though have not completed this investigation. ”
“Supposing that AlphaZero did represent discontinuity on playing multiple games using the same system, there remains a question of whether that is a metric of sufficient interest to anyone that effort has been put into it. We have not investigated this.
Whether or not this case represents a large discontinuity, if it is the only one among recent progress on a large number of fronts, it is not clear that this raises the expectation of discontinuities in AI very much, and in particular does not seem to suggest discontinuity should be expected in any other specific place.”
“We have not investigated the claims this argument is premised on, or examined other AI progress especially closely for discontinuities.”
Thanks for these links; this is very useful material!