Sam—good points. I would add:
There’s deference (adopting views of people & groups we respect as experts), and then there’s anti-deference (rejecting views of people & groups who are arguably experts in some domain, but whose views contradict the dominant AI safety/EA narrative—e.g. Steven Pinker, Gary Marcus, others skeptical of AI X risk and/or speed of AI development).
Anti-deference can also be somewhat irrational, tribal, and conformist: if Gary Marcus argues that deep learning systems have cognitive architectures that can’t possibly support AGI, and nobody in AI safety research takes him seriously, we might react to his skepticism by updating even harder toward thinking that AGI is arriving sooner than we would otherwise have predicted.
Anti-deference can also take a more generalized form: ignoring whole fields of study that haven’t been well connected to the AI safety/EA in-group, but that have potentially informative things to say about AGI timelines. These could include mainstream cognitive science, evolutionary psychology, intelligence research, software engineering, electrical engineering, history of technology, national security and intelligence, corporate intellectual property law, etc.