Good post—I’m glad to see discussion of this topic. Here’s an alternative methodology that takes a more “black box” approach:
Accomplishments—If someone is able to do something that others find difficult, this is evidence of expertise. Examples: If someone wins a chess tournament, this is evidence of chess expertise. If someone makes correct economic forecasts, this is evidence of economics expertise. (Notably, writing a popular book may mostly indicate expertise in writing popular books—I’ve heard of credentialed people whose popular books field insiders said misrepresented the field.) Surprisingly, track records of meaningful accomplishments are often ignored in judging expertise.
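To make “evidence” concrete: one way to cash this out is a Bayesian update on the accomplishment. A minimal sketch, with every number invented for illustration:

```python
# Toy Bayesian update: how much does winning a tournament shift our
# belief that someone is a strong expert? All numbers are invented.
prior_expert = 0.05      # assumed fraction of entrants who are strong experts
p_win_if_expert = 0.30   # assumed chance a strong expert wins
p_win_if_not = 0.02      # assumed chance a non-expert wins

# Bayes' rule: P(expert | win) = P(win | expert) * P(expert) / P(win)
p_win = p_win_if_expert * prior_expert + p_win_if_not * (1 - prior_expert)
posterior = p_win_if_expert * prior_expert / p_win
print(f"P(expert | won tournament) = {posterior:.2f}")  # ~0.44
```

The point is just that an accomplishment is strong evidence exactly when non-experts rarely manage it.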
Ability × Time Studying Quality Sources—Given the existence of general intelligence (one of the better-replicated findings in psychology?), and other factors predicting general effectiveness, expertise at any intellectual task is evidence of the ability to acquire expertise at other intellectual tasks, given study time. Say I’ve worked with both Person A and Person B on a software development team, and I’m more impressed by the software Person A writes than the software Person B does (see Accomplishments). If I know that both Person A and Person B spent a year getting a master’s degree in the same math subfield, and they have a disagreement about some aspect of that subfield, I’m more inclined to trust Person A than Person B.
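Read literally, the heading is a product, so a toy scoring rule might look like the sketch below; the 0-to-1 scales and the specific numbers are my own invention, not something from the comment above:

```python
# Toy "Ability x Time Studying Quality Sources" score. Scales and
# numbers are invented; the point is only that demonstrated ability
# and study time multiply rather than substitute for each other.
def expertise_evidence(ability: float, years_studied: float,
                       source_quality: float) -> float:
    """ability and source_quality in [0, 1]; years_studied >= 0."""
    return ability * years_studied * source_quality

# Person A impressed me more as a developer; both studied the same
# subfield for one year from comparable sources.
person_a = expertise_evidence(ability=0.9, years_studied=1.0, source_quality=0.8)
person_b = expertise_evidence(ability=0.6, years_studied=1.0, source_quality=0.8)
print(person_a > person_b)  # True: equal study time, so ability breaks the tie
```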
Recommendations/Transitive Judgements of Expertise—Once you’ve established someone as an expert, you can use their judgements on expertise as evidence about who else is an expert. This can be done recursively to expand your body of recognized experts. For example, Physicist A was part of the team that developed nuclear bombs (see Accomplishments). Physicist A is a professor at The University of X, and Physicist B graduated with a PhD from The University of X. Physicist A was on Physicist B’s doctoral committee and approved Physicist B’s PhD. Physicist C scored high on their GREs and landed a spot studying under Physicist B, eventually obtaining their doctorate (see Ability × Time Studying Quality Sources). Thus the development of the nuclear bomb, plus this chain of recommendations, causes me to believe that Physicist C is a physics expert.
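The recursive expansion here is essentially a graph traversal: seed a recommendation graph with accomplishment-verified experts and propagate trust outward with a per-hop discount. A minimal sketch, where the graph, names, and decay factor are all hypothetical:

```python
from collections import deque

# Transitive expertise judgements as trust propagation. An edge
# means "X vouched for Y" (doctoral committee, supervision, etc.).
recommends = {
    "Physicist A": ["Physicist B"],  # A approved B's PhD
    "Physicist B": ["Physicist C"],  # B supervised C's doctorate
    "Physicist C": [],
}

def propagate_trust(seeds: dict[str, float], decay: float = 0.8) -> dict[str, float]:
    """Breadth-first propagation; each hop weakens the evidence by `decay`."""
    trust = dict(seeds)
    queue = deque(seeds)
    while queue:
        person = queue.popleft()
        for student in recommends.get(person, []):
            score = trust[person] * decay
            if score > trust.get(student, 0.0):
                trust[student] = score
                queue.append(student)
    return trust

# Physicist A earned trust directly, via the bomb project (Accomplishments).
print(propagate_trust({"Physicist A": 1.0}))  # B ~0.8, C ~0.64
```

The decay factor encodes the intuition that each link in the chain can leak error, so a long chain of recommendations is weaker evidence than a direct accomplishment.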
This suggests a heuristic for determining the reliability of degrees in different academic fields: Check to see whether the field has tangible external accomplishments. The fact that physicists managed to invent nuclear bombs suggests to me that physicists have “real expertise”. I don’t know of a comparable achievement on the part of evolutionary psychologists, so I’m less sure evolutionary psychologists have “real expertise”. That said, if they seem like intelligent people who have spent a long time thinking carefully about the topic, working from accurate & representative data, I will probably listen to them anyway (see Ability × Time Studying Quality Sources).
Recommendations become less trustworthy if you suspect the person making the recommendation is dishonest or has a conflict of interest. But this applies to listening to expert advice in general: There’s always the risk that a bona fide expert will lead you astray because they don’t like you and want to see you fail, they are more concerned with appearing socially desirable than with telling the truth, or they are just having an off day. Most academically certified experts are certified by a group of people, so at that point you also have to consider bad departmental incentives and other groupthink failures.
I do think university degrees have decent predictive power in distinguishing expertise—universities are incentivized to correctly certify experts in order to maintain their brand, and universities often flaunt the accomplishments of their faculty & graduates (e.g. “We have X Nobel Prize winners on the faculty”) in order to build that brand.
More links:
http://lesswrong.com/lw/9xs/feed_the_spinoff_heuristic/ - inverting this idea: to find someone who has expertise, try to figure out who would have an incentive to make themselves an expert?
http://lesswrong.com/lw/4ba/some_heuristics_for_evaluating_the_soundness_of/
http://lesswrong.com/lw/28i/what_is_bunk/
http://lesswrong.com/lw/eck/how_to_tell_apart_science_from_pseudoscience_in_a/
Eliezer offers some thoughts about identifying correct contrarians in this essay.
“Check to see whether the field has tangible external accomplishments.”
This is a good one. I think you can decently hone your expertise assessment by taking an outside view which incorporates the base rate of strong expertise among average practitioners in the field, as well as the variance. (Say that five times fast.) For example (a toy version in code follows this list):
Forecasters: very low base rate, high variance
Doctors: high base rate, low-medium variance
Normal car repairpeople: medium base rate, low-medium variance (In this case, there is a more salient and practical ceiling to expertise. While a boxer might continuously improve her ability to box until she wins all possible matches (a really high ceiling), a repairperson can’t make a car dramatically “more repaired” than others can. Though I suppose she might improve her speed at the process.)
Users of forks, people who walk, people who can recognize faces: high base rate, low variance
Mealsquares founders: enormously high base rate, extremely low variance =)
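Here is a minimal version of that outside view in code, with every mean and variance invented: treat a random practitioner’s skill as a draw from a field-specific distribution, and read the base rate off as the probability of clearing a “strong expert” threshold.

```python
from statistics import NormalDist

# Toy outside view: skill of a random practitioner ~ Normal(mean, sd).
# All means, standard deviations, and the threshold are invented.
fields = {
    # field: (mean skill, standard deviation)
    "forecasters": (0.2, 0.30),       # low base rate, high variance
    "doctors": (0.7, 0.15),           # high base rate, low-medium variance
    "car repairpeople": (0.5, 0.15),  # medium base rate, low-medium variance
    "fork users": (0.9, 0.05),        # high base rate, low variance
}

EXPERT_THRESHOLD = 0.8  # arbitrary cutoff for "strong expert"

for field, (mean, sd) in fields.items():
    p_strong = 1 - NormalDist(mean, sd).cdf(EXPERT_THRESHOLD)
    print(f"{field}: P(random practitioner is strong) ~ {p_strong:.2f}")
```

On this toy model the base rate sets the mean and the variance sets how much vetting an individual is worth: high-variance fields like forecasting are exactly where checking track records pays off most.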
I’m cross-posting this excerpt from Thinking, Fast and Slow that’s relevant to the question of whether expertise is even possible in a given field. It seems in some cases you are better off using a statistical model.
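For context, the “statistical model” Kahneman has in mind (following Meehl’s clinical-versus-statistical-prediction literature) is often nothing fancier than a weighted sum of a few observable cues. A hedged sketch, with the cues, weights, and loan example invented for illustration:

```python
# Meehl-style linear prediction rule: score a case as a weighted sum
# of a few standardized cues instead of holistic expert judgement.
# The cues, weights, and the loan example are all hypothetical.
def linear_score(cues: dict[str, float], weights: dict[str, float]) -> float:
    return sum(weights[name] * value for name, value in cues.items())

weights = {"payment_history": 1.0, "debt_ratio": -1.0, "income_stability": 1.0}
applicant = {"payment_history": 0.6, "debt_ratio": 0.9, "income_stability": 0.4}

print(f"score = {linear_score(applicant, weights):+.2f}")  # +0.10
```

Part of Kahneman’s point is that even crude equal-weight rules like this one are hard for unaided expert judgement to beat in noisy, low-feedback domains.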