I don’t claim much knowledge of machine learning, AI, or math, but I’m quite uncertain about the technical section of the paper, “Can 2020 algorithms scale to TAI?”
One major issue is that, in places in her paper, the author expresses doubt that “2020 algorithms” can serve as the basis of computation for this exercise, yet the technical section only deals with feedforward neural nets.
Leaving out the other architectures seems like a real omission.
If you compare feedforward neural nets to RNNs/LSTMs on sequence tasks like text generation, it’s really clear there is a universe of difference. I think there are many situations where you can’t get similar functionality out of a feedforward DNN (or get it to converge at all) even with far more compute/parameters, while a plain RNN/LSTM works fine, and those are pretty basic models today.
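As a minimal structural sketch of that difference (my own illustration in PyTorch, not anything from the paper or the report): a feedforward net has to commit to a fixed-size input window, while an LSTM carries a hidden state through a sequence of arbitrary length with the same parameters.

```python
# Illustrative sketch only: a feedforward net needs a fixed-size context
# window, while an LSTM threads a hidden state through a sequence of
# arbitrary length using the same parameters.
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden_dim, window = 1000, 32, 64, 8

# Feedforward next-token predictor: locked to the last `window` tokens.
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(window * emb_dim, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, vocab_size),
)

# Recurrent next-token predictor: handles any sequence length.
embed = nn.Embedding(vocab_size, emb_dim)
lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
head = nn.Linear(hidden_dim, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 20))   # a 20-token sequence
out, _ = lstm(embed(tokens))                     # state carried across all 20 steps
lstm_logits = head(out[:, -1])                   # next-token prediction from the final state

mlp_logits = mlp(embed(tokens[:, -window:]))     # the MLP only ever sees the last 8 tokens
```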
Hi Charles, thanks for all the comments! I’ll reply to this one first since it seems like the biggest crux. I completely agree with you that feedforward NNs != RNN/LSTM… and that I haven’t given a crisp argument that the latter can’t scale to TAI. But I don’t think I claim to in the piece! All I wanted to do here was to (1) push back against the claim that the UAT for feedforward networks provides positive evidence that DL->TAI, and (2) give an example of a strategy that could be used to argue in a more principled way that other architectures won’t scale up to certain capabilities, if one is able to derive effective theories for them as was done for MLPs by Roberts et al. (I think it would be really interesting to show this for other architectures and I’d like to think more about it in the future.)
Is the UAT mentioned anywhere in the bio anchors report as a reason for thinking DL will scale to TAI? I didn’t find any mentions of it quickly ctrl-fing in any of the 4 parts or the appendices.
Yes, it’s mentioned on page 19 of part 4 (as point 1, and my main concern is with point 2b).
Ah, thanks for the pointer
Nested inside the above issue is another problem: the author seems to use “proof-like” rhetoric in her arguments when the proof isn’t actually there, and what is needed instead are broader illustrations that could generalize and build intuition.
Some of the statements don’t resemble how mathematical argumentation is normally used in disciplines like machine learning or economics.
To explain: the author begins with an excellent point, that it’s bizarre, and basically statistically impossible, for a feedforward network to learn to do certain things from limited training, even though the actual execution inside the model would be simple.
One example is that it can’t learn the mechanics of addition for numbers larger than any it has seen computed in training.
Basically, even the largest, best-trained feedforward DNN trained with backprop will never add 99+1 correctly if it was only trained on smaller sums like 12+17 that never reach 100. With backprop, the network literally has to see sums with a hundreds digit in order to build the machinery for producing one. This is despite the fact that it would be simple, for a vast DNN, to “mechanically” have the capability to perform true logical addition.
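To make that concrete, here is a minimal sketch of the kind of experiment I have in mind (my own toy illustration, not from the paper): train an MLP on two-digit sums that stay below 100, with the sum’s hundreds/tens/ones digits as classification targets, then ask it for 99+1. The hundreds digit “1” never appears as a training label, so there is no gradient signal that would ever teach the network to emit it.

```python
# Toy illustration (assumed setup: digit-classification framing, small MLP):
# train on sums below 100, then test on 99 + 1.
import torch
import torch.nn as nn

torch.manual_seed(0)

# All pairs (a, b) with a + b < 100 -- the hundreds digit of every label is 0.
pairs = [(a, b) for a in range(100) for b in range(100) if a + b < 100]
x = torch.tensor(pairs, dtype=torch.float32) / 99.0
y = torch.tensor([[(a + b) // 100, (a + b) // 10 % 10, (a + b) % 10] for a, b in pairs])

# Small MLP predicting three digits (10 classes each).
model = nn.Sequential(
    nn.Linear(2, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 30),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(2000):
    logits = model(x).view(-1, 3, 10)
    loss = sum(loss_fn(logits[:, i, :], y[:, i]) for i in range(3))
    opt.zero_grad()
    loss.backward()
    opt.step()

test = torch.tensor([[99.0, 1.0]]) / 99.0
digits = model(test).view(3, 10).argmax(dim=-1)
print(digits.tolist())  # hundreds digit comes out 0, not 1: that label never appeared in training
```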
Starting from the above point, I think the author wants to suggest that, just as this particular functionality is impossible to obtain, similar constraints limit what feedforward networks can do (and that these ideas should carry over to deep learning, or “2020 technology”, for the biological anchors).
However, everything sort of changes here with what the author says next. It’s not clear what is being claimed, or what is being built on from the earlier point.
What computations are foreclosed, or what can’t be achieved, in feedforward nets?
While the author shows that addition with n+1 digits can’t be learned by training only on numbers with n digits (and certainly many other training-to-outcome paths are blocked in the same way), why would this rule out capability in general, and why would it stop other, perhaps very sophisticated, training strategies/simulations from producing models that could be dangerous?
The author says the “upshot is that the class of solutions searched over by feedforward networks in practice seems to be (approximately) the space of linear models with all possible features” and “this is a big step up from earlier ML algorithms where one has to hand-engineer the features”.
But that seems to allow general transformations of the features. If so, that is incredibly powerful, and it doesn’t seem to constrain the functionality of these feedforward networks?
And why would logic that relies on a technical proof (which I’m guessing rests on a “topological-like” argument requiring the smooth structure of feedforward neural nets) apply even to RNNs or LSTMs, or to transformers?
Regarding the questions about feedforward networks, a really short answer is that regression is a very limited form of inference-time computation that e.g. rules out using memory. (Of course, as you point out, this doesn’t apply to other 2020 algorithms beyond MLPs.) Sorry about the lack of clarity—I didn’t want to take up too much space in this piece going into the details of the linked papers, but hopefully I’ll be able to do a better job explaining it in a review of those papers that I’ll post on LW/AF next week.
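Very roughly, and only as a sketch of the standard linearized picture rather than the full story in the linked papers: near its initialization $\theta_0$, a wide feedforward network behaves approximately like its first-order expansion in the parameters,

$$
f_\theta(x) \;\approx\; f_{\theta_0}(x) + \nabla_\theta f_{\theta_0}(x)^\top (\theta - \theta_0),
$$

i.e. a linear model over the fixed feature map $\phi(x) = \nabla_\theta f_{\theta_0}(x)$. Training then gives a kernel-regression-style predictor,

$$
f(x) \;\approx\; \sum_i \alpha_i\, K(x, x_i), \qquad K(x, x') = \phi(x)^\top \phi(x'),
$$

which is evaluated in a single fixed-depth pass over the input with no persistent state; that is the sense in which regression-style inference rules out memory.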
(I also want to reply to your top-level comments about the evolutionary anchor, but am a bit short on time to do it right now (since for those questions I don’t have cached technical answers and will have to remind myself about the context). But I’ll definitely get to it next week.)
Thanks for the responses; they give a lot more useful context.
If it frees up your time, I don’t think you need to write that reply unless you specifically want to. It seems reasonable to treat the point about “evolutionary anchors” as a larger disagreement about the premise, one that isn’t fully in scope for the post. That difference, and the way it was phrased, is also more disagreeable/overbearing to answer, so it’s less worthy of a response.
Thanks for writing up your ideas.