The community of people most focused on keeping up the drumbeat of near-term AGI predictions seems insular, intolerant of disagreement and of intellectual or social non-conformity (relative to the group’s norms), and closed off to even reasonable, relatively gentle criticism (whether or not they pay lip service to listening to criticism or perform open-mindedness). It doesn’t feel like a scientific community. It feels more like a niche subculture: a group of people saying ever-smaller numbers to each other (10 years, 5 years, 3 years, 2 years), hyping each other up (with excitement or with anxiety), and constantly reinforcing each other’s ideas. It doesn’t seem like an intellectually healthy community.
My impression is that a lot of people who believe in short AGI timelines (e.g. AGI by January 1, 2030) and who believe in some strong version of the scaling hypothesis (e.g. LLMs will scale to AGI with relatively minor fundamental changes but with greatly increased training compute, inference compute, and/or training data) are in an echo chamber where they just reinforce each other’s ideas all the time.
What might look like vigorous disagreement is often, when you zoom out, people with broadly similar views arguing around the margins (e.g. AGI in 3 years vs. 7 years; minimal vs. modest non-scaling innovations on top of LLMs).
If people stop to briefly consider what a well-informed critic like Yann LeCun has to say about the topic, it’s usually to make fun of him and move on.
It will seem more obvious that you’re right if the people you choose to listen to are the people who broadly agree with you, and if you meet well-informed disagreement from people like Yann LeCun or François Chollet with dismissal, ridicule, or hostility. This is a recipe for overconfidence. Taken to an extreme, it can lead people down a path where they end up deeply misguided.