Why and how to assess expertise

After my previous hippy-dippy post, I figured I’d balance the scale with one that is more content-heavy. The following post is about a skill that will be important for many effective altruists to learn: expertise assessment. The thoughts below come from over a hundred hours of work: designing EA Outreach’s hiring process, interviewing lots of job candidates, reviewing 1000+ EA Global applications, and creating the Pareto Fellowship evaluation pipeline.

Why learn how to assess expertise?

Improving the world will involve drawing upon many knowledge and skill domains. Barring development of the skill-learning chair from The Matrix, your capacity is too limited to master all of these domains.

Thus, you must rely upon experts.

However, there are challenges to expert identification. Firstly, there are many who masquerade as experts. This masquerading may be deliberate—e.g., probably most fortune tellers—or due to poor self-evaluation—e.g., many college sophomores. Secondly, there are many experts who are not fully aware of their expertise. This will often be true for “intuitive” experts, e.g., experts at charisma or experts at implicitly modeling other people. Thirdly, there are entire domains in which the base rate of true expertise in the domain’s phenomena, even amongst highly credentialed people, is quite low. From both my personal experience in a number of labs and from coming across lots of findings like this one, I can tell you that cognitive science is one such field. The same will be true in many other social sciences, financial forecasting, consulting, etc. This means that someone’s job title or the letters “PhD” in someone’s email signature are often (usually?) not sufficient markers of expertise. (This is probably the most common trap for those seeking expert consultation.)

Thus, you must learn how to assess expertise.

Here are some circumstances when having expertise assessment tools is incredibly useful:

  • You are asking someone for important information (e.g., when Open Philanthropy Project consults policy experts for its US Policy investigations).

  • You are figuring out where to donate (e.g., you want to see whether a given organization actually has the ability to do the world-improvement activities it intends to do).

  • You are evaluating a potential new hire (e.g., you want to see whether a candidate for a marketing position is in fact good at marketing).

The paradox of expertise assessment

A sociologist friend of mine told me a story about a Chinese emperor named Qin Shi Huang. As the story goes, Qin Shi Huang sought to live forever. So, he recruited a multitude of advisors to find the hidden secret to immortality. Unfortunately, the men he recruited had no ability to assess which magicians or alchemists might be experts in immortality elixirs. Furthermore, the emperor had no ability to assess which advisors might be experts in assessing experts in immortality elixirs. (Note even further that there are no true experts in the relevant domains here. This may remind you of fields such as technological forecasting.) In the end, as these things tended to go, a lot of advisors got executed.

This has a lot in common with Meno’s paradox. Here is Meno’s paradox rephrased as a paradox of expertise assessment:

  1. If you are not an expert in domain D, it is impossible to independently assess whether someone is an expert in D.

  2. You are not an expert in D.

  3. Therefore, it’s impossible for you to independently assess whether someone is an expert in D.

This argument is probably unsound. In particular, premise 1 is false. Here is a counter-argument (with a formal sketch after it):

  1. If there exist DGM—domain-general markers that tell you whether someone may be an expert—then it is possible to independently assess whether someone is an expert in D, even if you are not an expert in D.

  2. There exist DGM.

  3. Therefore, it is possible to independently assess whether someone is an expert in D, even if you are not an expert in D.
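
This counter-argument is a simple modus ponens, so its validity is mechanical; all of the weight rests on premise 2, i.e., on whether domain-general markers actually exist. As a minimal formal sketch (the proposition names are my own labels, not anything from the post), the argument type-checks in Lean:

```lean
-- Minimal sketch: the counter-argument is modus ponens.
-- `DGMExist` and `canAssess` are illustrative proposition names.
theorem counterArgument (DGMExist canAssess : Prop)
    (premise1 : DGMExist → canAssess)  -- if DGM exist, independent assessment is possible
    (premise2 : DGMExist)              -- DGM exist
    : canAssess :=                     -- therefore, independent assessment is possible
  premise1 premise2
```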

Below, I will describe some domain-general markers for assessing expertise across domains.

I’ve combined them into a couple of “back-pocket” methods—ones you can use to evaluate people you meet at conferences or dinners, or people you deliberately seek out. They are definitely not exhaustive, but they should cover most cases. I’ve listed the first back-pocket method below. The second back-pocket method I’d rather not spread widely, since it is much more game-able; if you’d be interested in it, feel free to email me at tyler@centreforeffectivealtruism.org. I also have a much more heavy-duty spreadsheet-based method if you have a particularly high-stakes expertise assessment task (e.g., you’re choosing which global poverty team to join for the next several years, or which animal welfare organization to give an enormous amount of money to).

Necessary conditions for expertise: the P-I-F-T method

The claim: each of the four criteria below is a necessary (but, importantly, not sufficient) condition for expertise. You may find them pretty obvious, but ask yourself: do I explicitly assess for these things while engaging experts? If not, you may run the risk of, e.g., hiring the wrong people or acting on faulty information.

To gain fluency in assessing each of these conditions, I recommend the following process:

  1. Write down a list of examples where it is critical for you to trust either your own or someone else’s knowledge or skills. These are examples where either you are the expert or you are counting on an expert.

  2. Think of ways in which you can apply the criteria below to the expert.

Furthermore, each of the examples below will test your intuitions about expertise through questions at the end. Please try to answer each question before looking at the answer.

P: Processing of relevant information

Is the person performing detailed mental operations upon relevant data, beyond the ones which non-experts perform? Is this the sort of processing which would plausibly yield expertise in the relevant domain?

Examples

{You want to invite a philosophy expert to speak at your conference.}

Brad and Will encounter an argument. Brad, the analytical philosophy novice, assesses whether it feels intuitively true. Will, the analytical philosophy expert, not only assesses whether the argument feels intuitively true, but also assesses its conceptual clarity and hidden premises. Who do you think has the better marker of expertise? Why?

--

Brad lacks a robust process; Will does not. Invite Will.

--

{You want to learn persuasion.}

Steve and Hilary must persuade Rochelle the French bureaucrat to expedite their visa process. Both Steve and Hilary note that Rochelle is wearing delightful chartreuse earrings. Steve, the persuasion novice, runs the mental process of noting that Rochelle is a person, and that people generally respond well to friendliness. Hilary, the persuasion expert, notes that Rochelle’s chartreuse earrings are the only flamboyant clothing items that anyone in the office is wearing. Based on a large sample size of processed experience now stored in her System 1, she guesses that the chartreuse earrings could signify that Rochelle wants to distinguish herself from her fellow bureaucrats—to show that she is not just another bureaucrat. Based on this, Hilary gambles on saying things which show that she appreciates Rochelle’s uniqueness—e.g., “You’re the friendliest-looking embassy employee I’ve ever met!” Who do you think has the better marker of expertise? Why?

--

Steve lacks a robust process; Hilary does not. Learn from Hilary.

--

{You are a foundation program officer deciding who to give a grant to.}

Martha and Leanne are both published neuroscientists who study the lateral geniculate nucleus. You have already read their grant proposals, and they seem to be of comparable quality. Even though you are not an expert in geniculate nuclei, let alone lateral ones, you must choose one person to give a grant to. Thus, you must decide who is the potentially more revolutionary scientist. Both Martha and Leanne seem to have broad knowledge of most published work on the topic. They appear to be equally intelligent. However, Leanne takes the novel approach of gem-mining dynamical models in physics and epidemiology that seem to describe similar phenomena in the lateral geniculate nucleus. She also spends a lot of time devising thought experiments and free-associating around tricky questions. These methods seem a bit unusual, but in your grant-making experience, you’ve found that—all else equal—scientists who use unusual methods tend to produce more innovative work. Who do you think has the better marker of expertise? Why?

--

You bet on Leanne over Martha. In this case, both Martha and Leanne probably have robust processes. However, Martha lacks a process that yields revolutionary expertise; Leanne likely has one. You probably made the right bet.

--

Ways of assessing

I. Ask questions which will reveal the mental processes of experts, such as:

a. “Before you fix a computer, what’s your general diagnostic process like?” (For a computer repair specialist)

b. “The constructs of confidence and self-efficacy seem very similar. How do you tell the difference between them?” (For an expert in social psychology)

c. “How does the quality of Richard Dawkins’s work compare to that of other evolutionary biologists? Do you have any critiques?” (For an expert in evolutionary biology)

d. “Let’s say I want to stage a magic trick in an extremely crowded room. How would I do that? What about in a room with very loud music?” (For an expert party magician)

II. Find out whether they’ve been part of a job, program, or mentorship that would have equipped them with special mental processes.

I: Interaction with relevant information sources

Is the person regularly interfacing with relevant data? Is this the sort of data that an expert would plausibly engage? Note that merely encountering relevant sources is not sufficient. The expert needs to have paid attention to these sources, as the first example will illustrate.

Examples

{You want to hire a graphic designer.} Bob and Maria are walking down a city street. Bob, a graphic design novice, pays no attention to the signs and advertisements along the side of the street, even though they are within his field of vision. Maria, an expert, pays full attention to these things. She notes the lack of spatial alignment amongst elements in the dry cleaner’s sign. As she passes a Louis Vuitton ad at the bus stop, she ogles the beautiful ball serifs of the Bauer Bodoni bold italic typeface (incidentally, my favorite font!). Color schemes, geometry, and visual flows all jump out at her as objects unto themselves. Who do you think has the better marker of expertise? Why?

--

Bob encounters relevant data, but does not engage it; Maria both encounters and engages relevant data. Hire Maria.

--

{You want to learn how to fundraise.}

Cassandra is a quantitative finance expert but a novice at fundraising. Jake is an expert at fundraising. Jake is constantly immersing himself in fundraising case studies, talking to other experts, and meeting with funders. Cassandra, on the other hand, interfaces with sources like mathematical models of markets. Who do you think has the better marker of expertise? Why?

--

Cassandra does not interface with relevant enough information sources; Jake does. Consult Jake.

--

{You want to improve the effectiveness of your team.}

Brian is a Princeton academic who claims to be an expert in team effectiveness. The evidence: he has analyzed 1,000 small family businesses and has been published multiple times in Science. Miranda does not claim to be an expert in team effectiveness, but several people have suggested that she might be. The evidence: she is the rare type of venture capitalist who founded a successful startup, ran a large company, and now sits on nonprofit boards and invests in companies of all sizes (with a winning track record of doing so). Who do you think has the better marker of expertise? Why?

--

Unless your organization is a small family business, Brian has probably not interfaced with relevant information sources. Miranda, on the other hand, has engaged a wide variety of organizations. There is a good chance that her ideas about team effectiveness are higher quality, since she will likely have abstracted organization-general lessons from a more diverse sample. This is a more difficult case than the other two, but if faced with a decision between the two, I would consult Miranda instead of Brian.

--

Ways of assessing

I. Ask questions which will reveal what sorts of information they engage, such as:

a. “Tell me about individual cases in your management experience.” (For a manager you might hire)

b. “What sorts of things do you pay attention to when you’re at an event?” (For an event director)

c. “Roughly how many pieces do you edit in an average month?” (For an editor)

d. “Which papers would you recommend reading to understand the cutting edge in hyperbolic geometry?” (For an expert in hyperbolic geometry)

II. Find out whether they’ve been part of a job, program, or mentorship that would have given them strong samples of relevant information.

III. See how fluently they can generate examples of phenomena in the domain. The more examples they can generate, the better.

F: Feedback with relevant metrics

Does the person have (or have they had) feedback loops that help them accurately calibrate whether they are increasing their expertise or making accurate judgements?

In domains where reality does not give good feedback, they need to have a set of well-honed heuristics or proxy feedback methods to correct their output if the result is going to be reliably good (this goes for, e.g., philosophy, sociology, long-term prediction). In domains where reality can give good feedback, they don’t necessarily need well-honed heuristics or proxy feedback methods (e.g., massage, auto repair, swordfighting, etc.). All else equal, superior feedback loops have the following attributes (idealized versions below; a toy scoring sketch follows the list):

  • Speed (you learn about discrepancies between current and desired output quickly after taking an action, so you can course-correct)

  • Frequency (the feedback loop happens frequently, giving you more samples to calibrate on)

  • Validity (the feedback loop is helping you get closer to the output you actually care about)

  • Reliability (the feedback loop consistently returns similar discrepancies in response to you taking similar actions)

  • Detail (the feedback loop gives you a large amount of information about the difference between current and desired output)

  • Saliency (the feedback loop delivers attentionally or motivationally salient feedback)
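
To make comparisons concrete, here is a toy sketch of how you might score a feedback loop on these six attributes. The 1-5 scale, the equal weighting, and the example ratings are my own illustrative assumptions, not a validated instrument:

```python
# Toy sketch: score a feedback loop on the six attributes above.
# The 1-5 scale and the equal weighting are illustrative assumptions,
# not a validated instrument.

ATTRIBUTES = ["speed", "frequency", "validity",
              "reliability", "detail", "saliency"]

def feedback_loop_score(ratings: dict) -> float:
    """Average a 1-5 rating across the six attributes."""
    missing = [a for a in ATTRIBUTES if a not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    return sum(ratings[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

# Hypothetical example: a marketer whose A/B tests return results overnight
# (fast, frequent, detailed) but track clicks rather than sales (weak validity).
marketer = {"speed": 5, "frequency": 5, "validity": 2,
            "reliability": 4, "detail": 4, "saliency": 3}
print(round(feedback_loop_score(marketer), 2))  # 3.83
```

The point is less the number than the forced check: a loop that scores high on speed and frequency but low on validity (like Todd’s Pomodoro counts later in this post) can still mislead.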

Examples

{You want to predict technology timelines.} Julie and Kate both claim to be experts in technological forecasting. When you ask Julie how she calibrates her predictions, she replies, “Mainly, I just have a sense for these sorts of things. But I also do things like monitor Google Trends, read lots of articles on technology, and ask lots of people what they think will happen. I’ve been doing this for 20 years.” She then points to a number of successful predictions she’s made. When you ask the same question of Kate, she replies, “Well, in the short term, it’s been shown that linear models of technological progress are the best, so I tend to use those to calibrate on the timespan of 1-3 years. If I make longer-term predictions, I try to tell as many stories as possible for how those predictions may be false. Then I try to make careful arguments that rule out these stories. Furthermore, I always check whether my predictions diverge substantially from those of other technological forecasters. If they do, I try to figure out why. I’ve also identified a number of technological forecasters who have consistently good track records, and I study their methods, evidence, and predictions carefully. Finally, whenever one of my predictions turns out to be false, I spend about a week figuring out whether there is any general principle to be learned to guard against being wrong in the future.” Who do you think has the better marker of expertise? Why?

--

Technological forecasting is a domain in which reality doesn’t provide strong feedback, so you need proxy feedback. Julie does not have good proxy feedback methods, while Kate has relatively decent ones. Barring special information about Julie, Kate’s predictions are likely to be more reliable, all else equal.

--

{You want to choose a piano teacher.}

Both Ned and Megan are piano teachers. Of the two, Ned is the much better pianist, having won many awards and played at Carnegie Hall many times. You ask both Ned and Megan how they can tell whether their teaching is working for a given student. Ned replies that he simply looks at the outcomes: if a student practices under him for several years, they become much better. “Basically, I show them how to play scales and pieces well, and then I check in about once every other week to make sure they are practicing the drills I showed them.” Megan replies with a detailed set of ways she notes a student’s rate of progress and adjusts her teaching accordingly. “For example, I know whether a student has ‘chunked’ a given chord through the following method: I stand behind the piano, quickly turn around a piece of paper with a chord on it, and time how many milliseconds it takes for the student to react and play the chord. Also, each week I ask them to honestly report on whether the chord still feels like a series of notes or more like ‘one note.’ The latter indicates that the chord has become a ‘gestalt’ in the student’s mind. Another example: whenever a student makes an error while playing a piece, I mark the corresponding area in the sheet music. Eventually, I can tell what types of errors a student generally makes by analyzing the darkest areas on various pieces—the places with the most pen marks.” Megan continues to tell you similar examples. Who do you think has the better marker of expertise? Why?

--

In this case, while Ned may be the better pianist, he may not be the better expert at teaching piano. He seems to lack relevant feedback loops that would tell him whether his teaching is succeeding. While he notes that his students improve over time, he is not entertaining the possibility that they might have improved over time even without his intervention.

--

{You want to hire a manager.}

Both Todd and Greg have applied for a manager position at your organization. You ask each of them about their process for monitoring the rate at which their teams are making progress on goals. Todd: “I have everyone on a system where I can monitor the number of Pomodoros each person is completing. If certain team members are lagging behind in their number of Pomodoros, I give them a pep talk, after which the number tends to go back up.” Greg: “I have each team member set daily subgoals. Then I look at two things: (a) whether these subgoals tend to align with the broader goals and (b) whether they are achieving the subgoals they set for themselves. If a team member is lagging behind on (a) or (b), I give them a pep talk, after which they tend to perform better.”

--

In this case, both Todd and Greg have decent feedback loops. However, Todd’s feedback loop is more likely to fall victim to Goodhart’s law. In other words, though his method might be high in reliability, the measure—Pomodoro-maximization—might accidentally become the target, even though the intended target is goal completion. Greg’s feedback loop is higher in validity, in that it more tightly measures the target he actually cares about.

--

Ways of assessing

I. Ask questions which will reveal the details of their feedback loops (and whether they have them), such as:

a. “Let’s say I’m already a proficient coder, but I want to learn how to code at the level of a master. What sorts of problems might I practice on to move from proficiency to mastery? Are there any textbooks I should read?” (For a software engineer)

b. “In what ways do people typically stumble when they try to improve at data analysis?” (For a data analyst)

c. “How do you tell whether a marketing campaign is working?” (For a professional marketer)

d. “Can you tell me a bit about how you learn?”

II. Find out whether they’ve been part of a job, program, or mentorship that would have given them strong feedback loops.

III. Sometimes, people with tacit expertise will not be able to articulate their feedback loops. Analyze whether reality provides robust feedback in their domain. For example, a bike-rider might not be able to describe the feedback loops through which they learned bike-riding. However, reality automatically provides feedback in the domain by causing novice bike-riders to fall over, until they accumulate enough procedural knowledge to balance on two wheels.

T: Time spent on the above

This one is the most straightforward of all the necessary conditions for expertise. (Thus, I won’t go into much detail.) Simply: an expert needs to have spent enough time processing and interacting with the relevant data, with robust feedback loops.

Ask: Has this expert put a plausibly sufficient amount of time into learning or using the skill in order to gain expertise?

For some skills, like using a spoon, there is a short latency between beginnerhood and expertise. For others, like having well-calibrated political views, there is quite a long latency. Accordingly, you can probably trust the average claim about spoon-use and should be suspicious of the average claim about politics.


There you have it: the PIFT method for assessing basic conditions for expertise. Here is what the underlying model looks like:

[Figure: the PIFT model]

One easy way to remember the method: “If the person claims to be an expert and is not, say, ‘pift!’”
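
And if you want something concrete to jot down right after a conversation, below is a minimal sketch of PIFT as a pass/fail screen. The question wording and code structure are my own illustration of the four necessary conditions, not the heavier spreadsheet-based method mentioned earlier:

```python
# Minimal sketch: PIFT as a pass/fail screen. Each condition is necessary
# but not sufficient, so one "no" rules a candidate out, while four "yes"
# answers only mean the person *may* be an expert.

PIFT_QUESTIONS = {
    "P": "Do they process relevant data in ways non-experts don't?",
    "I": "Do they regularly interact with (and attend to) relevant sources?",
    "F": "Do they have feedback loops (real or proxy) with relevant metrics?",
    "T": "Have they spent plausibly sufficient time on the above?",
}

def pift_screen(answers: dict) -> str:
    failed = [c for c in PIFT_QUESTIONS if not answers.get(c, False)]
    if failed:
        return "Pift! Failed necessary condition(s): " + ", ".join(failed)
    return "Passed all four necessary conditions (still not sufficient!)"

# Hypothetical example: a credentialed academic with no feedback loops.
print(pift_screen({"P": True, "I": True, "F": False, "T": True}))
# -> Pift! Failed necessary condition(s): F
```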

If it seems that I have misidentified or failed to identify a necessary condition for expertise, please let me know!