My understanding of their claim is more something like:
There will never be an AI system that is generally intelligent in the way a human is. Put differently, we can never have synoptic models of the behaviour of human beings, where a synoptic model is one that can be used to engineer the system or to emulate its behaviour.
This is because, in order for deep neural networks to work, they need to be trained on data with the same variance as the data to which the trained algorithm will be applied, i.e. training data which is representative. General human intelligence is a complex system which doesn’t have such a distribution, and there are mathematical limits on our ability to predict the behaviour of a system that lacks one. Therefore, whilst we can have gradually more satisfactory simple models of human behaviour (as ChatGPT is for written language), they will never reach the same level as humans.
To put it simply: we can’t create AGI by training algorithms on datasets, because human intelligence does not have a representative dataset.
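The distribution point above can be illustrated with a toy sketch (my own example using NumPy, not anything from Barry): a simple model fit on one range of data can degrade badly when applied outside that range, which is the failure mode the argument worries about.

```python
# Toy illustration of distribution shift: a degree-1 polynomial is fit
# to y = x^2 on x in [0, 1], then evaluated both on representative data
# (same range) and on unrepresentative data (a range it never saw).
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = x_train ** 2  # the "true" system, unknown to the model

coeffs = np.polyfit(x_train, y_train, deg=1)  # fit a linear model

def mse(x):
    """Mean squared error of the linear model against y = x^2."""
    pred = np.polyval(coeffs, x)
    return float(np.mean((pred - x ** 2) ** 2))

in_dist_err = mse(rng.uniform(0.0, 1.0, 200))   # representative data
out_dist_err = mse(rng.uniform(5.0, 6.0, 200))  # unrepresentative data
print(in_dist_err, out_dist_err)
```

The in-distribution error stays small while the out-of-distribution error is orders of magnitude larger, even though nothing about the model changed; only the data it was asked about did.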
As such, I think your response that “It is possible to create new things without fundamentally understanding how they work internally” misses the mark. The claim is not that “we can’t understand how to model complex systems, therefore they can’t be adequately modelled”. It’s more something like “there are fundamental limits to the possibility of adequately emulating complex systems (including humans), regardless of how well we understand them”.
My personal take is that I’m unsure how important it is to be able to accurately model human intelligence. Perhaps modelling some approximation of human intelligence (in the way that ChatGPT is approximately good enough as a written chatbot) is sufficient to stimulate the creation of something that more closely approximates intelligence, and so on, in the same way that ChatGPT can answer PhD-level questions that the majority of humans cannot.
Note: My understanding of Barry’s argument is limited to this lecture (https://www.youtube.com/watch?v=GV1Ma2ehpxo) and this article (http://www.hunfi.hu/nyiri/AI/BS_paper.pdf).
Thanks, Tristan, for your comment, and apologies for my late reply.
I appreciated your well-thought-out points, in particular that “there are fundamental limits to the possibility of adequately emulating complex systems (including humans), regardless of how well we understand them”.
This may indeed be the case. To be frank, the best I can say is that I honestly don’t know, haha.
As for your other point, that “I’m unsure how important it is to be able to accurately model human intelligence. Perhaps modelling some approximation of human intelligence (in the way that ChatGPT is approximately good enough as a written chatbot) is sufficient.”
I think I’m in agreement with you here. It doesn’t really matter what flavor the intelligence happens to take; what really matters is whether or not it is powerful enough to outperform human intelligence. And ChatGPT and other AI models have definitely shown they can outperform humans in a variety of fields.
Admittedly, since writing this, my interests have moved away from AI safety and towards other areas, such as Buddhism and mindfulness. Maybe I’ll find myself back in AI safety in the future, who knows haha.
Anyways, thanks again for your thoughtful comment! :)