From the post we don’t get information about the acceleration rate of AI capabilities, but about its impact on the economy. This argument is therefore against slow takeoff with large economic consequences, not against slow takeoff without much economic consequence.

So updating from it towards a discontinuous takeoff doesn’t seem right. You should be updating from slow takeoff with economic consequences towards slow takeoff without economic consequences.

Does that make sense?

Paul Christiano operationalizes slow/soft takeoff as:

There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)

At some point there will be incredibly powerful AI systems. They will have many consequences, but one simple consequence is that world output will grow much more quickly. I think this is a good barometer for other transformative effects, including large military advantages.

I believe that before we have incredibly powerful AI, we will have AI which is merely very powerful. This won’t be enough to create 100% GDP growth, but it will be enough to lead to (say) 50% GDP growth. I think the likely gap between these events is years rather than months or decades.

In particular, this means that incredibly powerful AI will emerge in a world where crazy stuff is already happening (and probably everyone is already freaking out). If true, I think it’s an important fact about the strategic situation.
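For concreteness, here is a rough sketch of the arithmetic behind those doubling intervals (purely illustrative; the specific annual rates are back-of-the-envelope numbers, not figures from Paul’s post):

```python
# Rough arithmetic for the doubling-interval operationalization (illustrative only).
# An economy that doubles over `years` years grows at an annual rate of
# 2**(1/years) - 1, assuming smooth compounding over the interval.

def annual_growth_for_doubling(years: float) -> float:
    """Annual growth rate implied by doubling total output over `years` years."""
    return 2 ** (1 / years) - 1

for years in (8, 4, 2, 1):
    rate = annual_growth_for_doubling(years)
    print(f"doubling in {years} year(s)  ~=  {rate:.0%} annual growth")

# Expected output (approximately):
#   doubling in 8 year(s)  ~=  9% annual growth
#   doubling in 4 year(s)  ~=  19% annual growth
#   doubling in 2 year(s)  ~=  41% annual growth
#   doubling in 1 year(s)  ~=  100% annual growth
```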
Though there are other takeoff-ish questions that are worth discussing, yeah.
Thanks for this clarification! I guess the “capability increase over time around and after reaching human level” is more important than the “GDP increase over time” for assessing how hard alignment is. That’s probably why I assumed takeoff meant the former. Now I wonder if there is a term for “capability increase over time around and after reaching human level”...
I guess I don’t understand how slow takeoff can happen without economic consequences.
Like takeoff (in capabilities progress) may still be slow, but the impact of AI is more likely to be discontinuous in that case.
I was probably insufficiently clear on that point.
Yes. In a slow takeoff scenario where we have AI that can double GDP in 4 years, I don’t think regulations will stand in the way. Some countries will adopt the new technologies, and other countries will follow when they realize they are falling behind. NIMBYs and excessive regulations are a problem for economic growth when GDP is growing by 2% or 3%, but they probably won’t matter much if GDP is growing by 20% or 30%.
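To give a feel for the size of that gap (again just a rough sketch, with the 2–30% figures taken from the comment above rather than from any data):

```python
# How long output takes to double at a given constant annual growth rate,
# using the rates mentioned in the comment above (illustrative arithmetic only).
import math

def years_to_double(annual_rate: float) -> float:
    """Years needed for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

for rate in (0.02, 0.03, 0.20, 0.30):
    print(f"{rate:.0%} growth  ->  doubling every {years_to_double(rate):.1f} years")

# Expected output (approximately):
#   2% growth  ->  doubling every 35.0 years
#   3% growth  ->  doubling every 23.4 years
#   20% growth  ->  doubling every 3.8 years
#   30% growth  ->  doubling every 2.6 years
```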