My guess is that a success might look more like:

1. We use software and early AI to become more wise/intelligent.
2. That wisdom/intelligence helps people realize how to make a good/safe plan for AGI. Maybe this means building it very slowly; maybe it means delaying it indefinitely.
Or maybe, if we discover how to use “primitive” AI well enough, we decide we never need AGI at all.
(This is an immediate reaction, not something I have ever thought about in detail)