Shah thinks there are several things that could change his beliefs, including:
If he learned that evolution actually baked a lot into humans (‘nativism’), he would lengthen his estimate of how long it will be until AGI.
Tooby and Cosmides are big advocates for the “massive modularity” view—a huge amount of human cognition takes place in specialized, task-tailored modules rather than on one big, domain-general “computer”. Common examples of these sorts of modules are:
Chomsky’s universal grammar: There’s not enough language data for children to learn languages in the absence of inductive biases.
Social exchange: People perform much better at the Wason selection task when the domain is social exchange rather than fully abstract.
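To make the abstract version concrete, here is a toy sketch (the specific card faces and the rule “if a card has a vowel on one side, it has an even number on the other” are the standard textbook setup, not taken from this discussion). The logically correct answer is to flip only the cards that could falsify the rule, i.e. the visible vowel and the visible odd number; most people miss the odd-number card in the abstract framing, but do much better when the same rule is framed as social exchange (e.g. checking who might be breaking a drinking-age rule).

```python
# Toy sketch of the abstract Wason selection task (standard textbook setup, assumed here).
# Rule: "If a card has a vowel on one side, then it has an even number on the other."
# Only cards that could falsify the rule need flipping: visible vowels (the back might
# be odd) and visible odd numbers (the back might be a vowel).

VOWELS = set("AEIOU")

def is_vowel(face: str) -> bool:
    return face in VOWELS

def is_odd_number(face: str) -> bool:
    return face.isdigit() and int(face) % 2 == 1

def cards_to_flip(visible_faces):
    """Return the cards that must be checked to test the rule."""
    return [face for face in visible_faces if is_vowel(face) or is_odd_number(face)]

print(cards_to_flip(["E", "K", "4", "7"]))  # -> ['E', '7']
```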
Unfortunately, I don’t know of any review collecting and examining evidence for the massive modularity view.
(Not sure how much of this Shah already knows.)
Not much, sadly. I don’t actually intend to learn about it in the near future, because I don’t think timelines are particularly decision-relevant to me (though they are to others, especially funders). Thanks for the links!
Tooby and Cosmides are big advocates for the “massive modularity” view—a huge amount of human cognition takes place in specialized, task-tailored modules rather than on one big, domain-general “computer”.
On my view, babies would learn a huge amount about the structure of the world simply by interacting with it (pushing over an object can in principle teach you a lot about objects, causality, intuitive physics, etc.), and this leads to general patterns that we later call “inductive biases” for more complex tasks. For example, hierarchy is a very useful way to understand basically any environment we are ever in; perhaps babies develop a sense of “hierarchy” which then gets applied to language, explaining how children learn languages so fast.
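As a toy illustration of how hierarchical structure could fall out of raw data rather than being built in (a byte-pair-encoding-style chunker is just one illustrative mechanism here, not a claim about what babies actually do), the sketch below repeatedly merges the most frequent adjacent pair of symbols, so each learned chunk is composed of earlier chunks and the resulting vocabulary is hierarchical.

```python
# Toy sketch: learning hierarchical "chunks" from a raw sequence by repeatedly
# merging the most frequent adjacent pair of symbols (byte-pair-encoding style).
# Each merged chunk is built out of previously learned chunks, so the learned
# vocabulary forms a hierarchy even though nothing hierarchical was built in.
from collections import Counter

def learn_chunks(sequence, num_merges=3):
    seq = list(sequence)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merged = a + b
        merges.append((a, b, merged))
        # Rewrite the sequence using the newly learned chunk.
        new_seq, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                new_seq.append(merged)
                i += 2
            else:
                new_seq.append(seq[i])
                i += 1
        seq = new_seq
    return merges, seq

merges, seq = learn_chunks("abcabcabxabc")
print(merges)  # e.g. [('a', 'b', 'ab'), ('ab', 'c', 'abc'), ...]
print(seq)
```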
From the Wikipedia page you linked, challenges to a “rationality”-based view:
1. Evolutionary theories using the idea of numerous domain-specific adaptations have produced testable predictions that have been empirically confirmed; the theory of domain-general rational thought has produced no such predictions or confirmations.
I wish they said what these predictions were. I’m not going to chase down this reference.
2. The rapidity of responses such as jealousy due to infidelity indicates a domain-specific dedicated module rather than a general, deliberate, rational calculation of consequences.
This is a good point; emotions probably aren’t learned, for the most part. I’m not sure what’s going on there.
3. Reactions may occur instinctively (consistent with innate knowledge) even if a person has not learned such knowledge.
I agree that reflexes are “built-in” and not learned; reflexes are also pretty different from e.g. language. Obviously not everything our bodies do is “learned”: reflexes, breathing, digestion, etc. all fall into the “built-in” category. I don’t think this says much about what leads humans to be good at chess, language, plumbing, soccer, gardening, etc., which is what I’m more interested in.
It seems likely to me that you might need the equivalent of reflexes, breathing, digestion, etc. if you want to design a fully autonomous agent that learns without any human support whatsoever, but we will probably instead design an agent that (initially) depends on us to keep the electricity flowing, to fix any wiring issues, to keep up the Internet connection, etc. (In contrast, human parents can’t ensure that the child keeps breathing, so you need an automatic, built-in system for that.)
perhaps babies develop a sense of “hierarchy” which then gets applied to language, explaining how children learn languages so fast.
Though if we are to believe this paper at face value (I haven’t evaluated it), babies start learning in the womb. (The paper claims that the biases depend on which language is spoken around the pregnant mother, which suggests that they are learned rather than “built-in”.)
Chomsky’s universal grammar: There’s not enough language data for children to learn languages in the absence of inductive biases.
I think there’s more recent work in computational linguistics that challenges this. Unfortunately I can’t summarize it since I only took an overview course a long time ago. I’ve been wondering whether I should read up on language evolution at some point. Mostly because it seems really interesting, but also because it’s a field I haven’t seen being discussed in EA circles, and it seems potentially useful to have this background when it comes to evaluating/interpreting AI milestones and so on. In any case, if someone understands computational linguistics, language evolution and how it relates to the nativism debate, I’d be extremely interested in a summary!