Thanks for writing this post; it's a very succinct way to put it (I've struggled to formulate and raise this question with the AI community).
My personal opinion is that AI research, like many other fields, relies on the simplest workable definitions of concepts that lie outside the field. This is not a problem in itself; we can't all hold PhDs in every field (and doing so would not solve the problem anyway). However, my view is that there are many instances where AI research and findings rest on axioms, or yield results, that require specific interpretations of concepts (intelligence, agency, human psychology, neuroscience, etc.), interpretations that are speculative or at least far from the average reading. This is not helped by the fact that many of these terms lack consensus even within their own fields. When presenting AI ideas and pitches, many overlook the nuance required to formulate and explain AI research given such assumptions. This is especially important for AI work that is no longer theoretical or purely about solving algorithmic or learning problems, but extends into other scientific fields and broader society (e.g. AI safety).