This is lovely, thank you!
My main concern is that it adopts the same coarse-grained stance as much other writing in the area, conflating all kinds of algorithmic progress into a single scalar ‘quality of the algorithms’.
You do moderately well here, noting that the most direct interpretation of your model concerns speed, or runtime compute efficiency, with ‘copies that can be run’ as the immediate downstream consequence (and discussing in a footnote the relationship to ‘intelligence’[1] and the distinction between ‘inference’ and training compute).
I worry that many readers don’t track those (important!) distinctions and tend to conflate these concepts. For what it’s worth, having drawn these distinctions myself, I have come to the tentative conclusion that a speed/compute-efficiency explosion is plausible (though not guaranteed), whereas an ‘intelligence’ explosion in software alone is less likely, except as a downstream effect of running faster (an effect which might be nontrivial, if pouring more effective compute into training and runtime yields meaningful gains).
Of course, ‘intelligence’ is itself highly multi-dimensional! I think the most important factor in takeoff discussions like these is ‘sample efficiency’, since it generalises well and feeds into most downstream applications of more generic ‘intelligence’ resources. This is relevant to R&D because sample efficiency determines how quickly you can accrue research taste, which in turn sets the stable level of your exploration quality. Domain knowledge and taste are obviously less generalisable, and harder to acquire in silico alone.