This principle is especially true for relatively unexplored fields with relatively few people working on them. If there are only, say, 10 people working on some sub-field of AI, then it's quite likely that all of them are missing something important. This is especially true once you factor in groupthink and homogeneity: if all 10 of them are mathematicians, it would be completely unsurprising if they shared flawed assumptions about other fields like computer science or biology.
Everyone being wrong about some core assumption is actually fairly common, provided the assumption in question is untested and the set of "everyone" is not that large. This is one of the reasons I am unbothered by holding substantially different opinions on AI risk from the majority here.