Related to what others (e.g., harfe) have already commented, it seems a sad truth that many domain experts reason poorly as soon as you step slightly outside the prevalent framings of their domain. For instance, someone may have a good track record of improving current-day ML systems but little interest in forecasting anything several years into the future. Or they may not be thinking about questions like whether particular trends break around the time ML systems become situationally aware of being in “training” (because we’re far away from this, it has never happened thus far). If domain experts had a burning desire to connect their expertise to “What’s important for the course of the future of humanity?” and to get things right, to get to the truth, they’d already be participating more in EA discourse. That isn’t to say that everyone with an interest in these topics would endorse the conclusions prevalent within EA – but at least they’d be familiar with those conclusions and the arguments behind them. The fact that they’re only domain experts, and not also existing contributors to EA discourse, is often evidence that on some level they lack interest in the questions at hand. (In practice, this often manifests as them saying stupid things when they get dragged into a discussion, but the more fundamental reason is that they massively underestimate the depth behind EA thinking, simply because it’s outside their wheelhouse and because they lack a burning desire to think through big-picture questions.)
FWIW, I think Open Phil has often commissioned domain experts to review their reports. (They probably tried to select experts who are interested enough in EA thinking to engage with it carefully, which creates a selection effect that you could argue introduces a bias. But the counterargument is that it’s no use commissioning experts who you expect will misrepresent your work when they review it. So you want to select experts who’ve previously engaged intelligently with shorter versions of the arguments – and that sadly disqualifies a significant portion of narrow-domain experts.)