Thanks! :) I find Grace’s paper a little bit unsatisfying. From the outside, fields like SAT solving, factoring, scheduling, and linear optimization seem only weakly analogous to the field of developing general thinking capabilities. It seems to me that the former is about hundreds of researchers going very deep into very specific problems and optimizing a ton to produce slightly more elegant and optimal solutions, whereas the latter is more about smart and creative “pioneers” having new insights into how to frame the problem correctly and finding relatively simple new architectures that make a lot of progress.
What would be more informative for me?
by the above logic, maybe I would focus more on the progress of younger fields within computer science
also maybe there is a way to measure how “random” practitioners perceive the field to be; maybe just asking them how surprised they are by recent breakthroughs is a solid measure of how many other potential breakthroughs are still out there
also I’d be interested in solidifying my very rough impression that breakthroughs like transformers or GANs are relatively simple algorithms in comparison with breakthroughs in other areas of computer science
evolution’s algorithmic progress would maybe also be informative to me, i.e. roughly how much trial and error was invested in making specific jumps
e.g. I’m reading Pearl’s Book of Why, and he makes the tentative claim that counterfactual reasoning appeared at some point, with the first sign of it we can point to being the lion-man figurine from roughly 40,000 years ago
though of course evolution did not aim at general intelligence, so saying “evolution took hundreds of millions of years to develop an AGI” seems disanalogous in this context
how big a fraction of human cognition do we actually need for TAI? e.g. we might save about an order of magnitude by ditching vision and focusing on language?
Sherry et al. have a more exhaustive working paper about algorithmic progress in a wide variety of fields.