Thanks, super interesting! In my very premature thinking, the question of algorithmic progress is the most load-bearing one. My background is in cognitive science and my broad impression is that
human cognition is not *that* crazy complex,
such that I wouldn’t be surprised at all if one of the broad architectural ideas about human cognition I’ve seen floating around could afford “significant” steps towards proper AGI
e.g. how Bayesian inference and Reinforcement Learning may be realized in the predictive coding framework was impressive to me, for example as fleshed out by Steve Byrnes on LessWrong (see the first sketch at the end of this comment)
or e.g. rough sketches of different systems fulfilling specific functions, such as the further breakdown of System 2 in Stanovich’s Rationality and the Reflective Mind
when thinking about how many “significant” steps or insights we still need until AGI, I’d guess something on the order of fewer than ten
(I’ve heard the idea of “insight-based forecasting” from a Joscha Bach interview)
those insights might not be extremely expensive to find and, once had, might be cheap-ish to implement
e.g. the GAN story maybe fits this: GANs are not crazy complicated and not crazy hard to implement, but they are very powerful (see the second sketch at the end of this comment)
This all feels pretty freewheeling so far. Would be really interested in further thoughts or reading recommendations on algorithmic progress.
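Two toy sketches to make the claims above slightly more concrete. First, for the predictive coding point: a minimal sketch, assuming the standard one-variable Gaussian textbook setup, of how descending precision-weighted prediction errors recovers the exact Bayesian posterior mean. Everything here is an illustrative toy with made-up numbers, not a claim about how Byrnes or anyone else formalizes it.

```python
# Toy predictive-coding sketch (one-variable Gaussian case): gradient descent
# on precision-weighted prediction errors converges to the Bayesian posterior
# mean. All numbers are arbitrary illustrations, not a model of the brain.

mu_prior, var_prior = 3.0, 1.0     # prior belief about a latent cause v
x_obs, var_obs = 5.0, 0.5          # noisy observation assumed generated from v

# Exact Bayesian posterior mean for this Gaussian-Gaussian case, for comparison.
posterior_mean = (mu_prior / var_prior + x_obs / var_obs) / (1 / var_prior + 1 / var_obs)

# Predictive coding: iteratively update the estimate of v by descending the
# prediction-error objective (prior error plus sensory error).
v = mu_prior
lr = 0.05
for _ in range(2000):
    eps_prior = (v - mu_prior) / var_prior   # error w.r.t. the prior prediction
    eps_obs = (x_obs - v) / var_obs          # error w.r.t. the observation
    v += lr * (eps_obs - eps_prior)          # move to reduce total prediction error

print(f"predictive-coding estimate: {v:.3f}, exact posterior mean: {posterior_mean:.3f}")
```

Second, for the GAN point: a minimal adversarial training loop in PyTorch on a hypothetical toy data distribution, with arbitrary hyperparameters. The point is only that the core adversarial idea fits in a couple of dozen lines, not that this is how the original paper set things up.

```python
# Minimal GAN training sketch (PyTorch) on toy 2-D data.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 128

# Generator maps noise to fake samples; discriminator outputs a realness logit.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # toy "real" data
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator: push real scores toward 1, fake scores toward 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```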
My approach to thinking about algorithmic progress has been to try to extrapolate the rate of past progress forward; I rely on two sources for this, a paper by Katja Grace and a paper by Danny Hernandez and Tom Brown. One question I’d think about when forming a view on this is whether arguments like the ones you make should lead you to expect algorithmic progress to be significantly faster than the trendline, or whether those considerations are already “priced in” to the existing trendline.
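To make “extrapolating the trendline” concrete, here is a rough back-of-envelope in Python rather than the papers’ actual methodology. Hernandez and Brown’s headline figure, as I recall it, is roughly a 44x reduction between 2012 and 2019 in the compute needed to reach AlexNet-level performance; turning that into a halving time and naively projecting it forward looks like this (all numbers illustrative, and the open question is precisely whether the trend continues).

```python
# Illustrative trendline extrapolation; the 44x-over-7-years figure is the
# headline number I recall from Hernandez & Brown, used here only as an example.
import math

factor, years = 44, 7                      # recalled headline figures, treat as illustrative
halving_time = years / math.log2(factor)   # roughly 1.3 years, i.e. about 16 months
print(f"implied halving time: {halving_time:.2f} years")

# Naive extrapolation: how much cheaper does the same capability get by 2030
# if the trend simply continues?
years_ahead = 2030 - 2019
projected_gain = 2 ** (years_ahead / halving_time)
print(f"projected further efficiency gain by 2030: ~{projected_gain:.0f}x")
```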
And yes, thanks, the point about thinking with trendlines in mind is really good.
Maybe these two developments could be relevant:
a bigger number of recent ML/CogSci/computational neuroscience graduates who grew up academically during a period of noticeable AI progress, and with much more widespread aspirations to build AGI than the previous generation had
related to my question about non-academic open-source projects: if a certain level of compute is necessary before new algorithms can solve interesting general-reasoning gridworld problems, then reaching that level might unlock a lot of work in the coming years
Thanks! :) I find Grace’s paper a little bit unsatisfying. From the outside, fields like SAT solving, factoring, scheduling, and linear optimization seem only weakly analogous to the field of developing general thinking capabilities. It seems to me that the former is about hundreds of researchers going very deep into very specific problems and optimizing a ton to produce slightly more elegant and optimal solutions, whereas the latter is more about smart and creative “pioneers” having new insights into how to frame the problem correctly and finding relatively simple new architectures that make a lot of progress.
What would be more informative for me?
by the above logic, maybe I would focus more on the progress of younger fields within computer science
also, maybe there is a way to measure how “random” practitioners perceive the field to be; simply asking them how surprised they are by recent breakthroughs might be a solid measure of how many other potential breakthroughs are still out there
also, I’d be interested in solidifying my very rough impression that breakthroughs like transformers or GANs are relatively simple algorithms compared with breakthroughs in other areas of computer science
evolution’s algorithmic progress would maybe also be informative to me, i.e. roughly how much trial and error was invested to make specific jumps
e.g. I’m reading Pearl’s The Book of Why, and he makes the tentative claim that counterfactual reasoning appeared at some point, and that the first evidence we have of it is the Lion-man figurine from roughly 40,000 years ago
though of course evolution did not aim at general intelligence, so saying “evolution took hundreds of millions of years to develop AGI” in this context seems disanalogous
how big a fraction of human cognition do we actually need for TAI? E.g. could we save about an order of magnitude by ditching vision and focusing on language?
Sherry et al. have a more exhaustive working paper about algorithmic progress in a wide variety of fields.