Comparative advantage in the talent market

The concept of comparative advantage is well known within the Effective Altruism community. For donations, it is reasonably well understood and implemented: think of donor lotteries, or donation trading across countries to take better advantage of tax exemptions.

In this post I’m outlining how the idea of comparative advantage can be applied to the talent market.

The first part will cover some general implications of differences between people and how talent should be allocated accordingly. In the second part I will argue that EAs should prioritise personal fit when deciding what to work on, even if this means working in a cause area that they don’t consider a top priority. Finally, I’ll consider some common objections.

How people differ in the talent market

In the talent market, people differ along many more dimensions than in the donation market. They have different skills, different levels of experience, different preferences for hours worked, geographical location and pay (and different flexibility with regard to those), different levels of risk aversion in terms of career capital, and different cause area preferences.

Let’s look at differences in comparative advantage in skill. Imagine two people are interested in ending factory farming. One of them has a biology degree and a lot of experience as an anti-factory-farming activist, while the other has a history degree and only a bit of experience as an activist. Even though the biologist is probably the better activist of the two, by the principle of comparative advantage it is still best for her to go into meat replacement research and for the less experienced activist to go into advocacy.
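To make the principle concrete, here is a minimal sketch with invented productivity numbers (purely illustrative; nothing in them comes from this post): even when one person is better at both roles, total output is highest when each takes the role where their relative edge is largest.

```python
# Toy model of comparative advantage: two people, two roles.
# Productivity numbers are hypothetical, chosen only for illustration.
productivity = {
    "biologist": {"research": 10, "advocacy": 8},  # better at both roles
    "historian": {"research": 2,  "advocacy": 4},
}

def best_assignment(productivity):
    """Try both ways of pairing people with roles; keep the higher total."""
    p1, p2 = productivity.keys()
    option_a = productivity[p1]["research"] + productivity[p2]["advocacy"]
    option_b = productivity[p1]["advocacy"] + productivity[p2]["research"]
    if option_a >= option_b:
        return {p1: "research", p2: "advocacy"}, option_a
    return {p1: "advocacy", p2: "research"}, option_b

assignment, total = best_assignment(productivity)
print(assignment, total)  # biologist researches, historian advocates: 10 + 4 = 14
```

The biologist’s 10-vs-8 edge in research is larger than her 8-vs-4 edge in advocacy, so the assignment above beats the alternative (8 + 2 = 10) despite her being the stronger advocate.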

However, this argument depends on how scarce talent in advocacy and in meat replacement research is, relative to each other. If the world had an excess of people capable of doing good meat replacement research (which it does not) but a shortage of anti-factory-farming activists, our activism-experienced biologist should go into advocacy too.

In general, when we think about comparative advantage and how to allocate talent, a good heuristic is to ask: which traits is the talent market short of? If you have one of those traits, maybe you should go in and fill the gap.

For example, we are currently short of operations talent. The post by 80,000 Hours mentions that even if you’re as good at research as other EAs, you should still consider taking on an operations role, given our current lack of EAs in operations.

Also, we currently lack people working on biorisk, so even if you consider AI Safety a more important cause area than biorisk, maybe you should go into biorisk, assuming you have an appropriate skill set.

It also seems likely that we don’t have enough people willing to start big new projects which are likely to fail. If you’re unusually risk-neutral, don’t mind working long hours and can deal with the prospect of likely failure, you should consider taking on one of these projects, even if you think you’d be just as good as other EAs at research or earning to give.

Something to keep in mind here is which reference class we are using to think about people’s comparative advantages. Since we want to allocate talent across the EA community, the reference class that usually makes most sense is the EA community itself. This is less true if people outside the EA community are filling roles the EA community thinks should be filled, i.e. if it is possible to replace EAs with non-EAs. An example would be development economists working at large international organisations like the UN. Given that the world already has a decent number of them, there is less need to fill those roles with EAs.

However, we can also err by using too narrow a reference class. People, including EAs, are prone to comparing themselves to the people they spend the most time with (or the people who look most impressive on Facebook). This is a problem because people tend to cluster with people who are most like them. So when they should be thinking about their comparative advantage within the EA community, they might accidentally think of their comparative advantage among their EA friends instead.

If all your EA friends are into starting new EA projects just like you, but you think they’re much better at it than you are, your comparative advantage across the whole of EA might still be to start new EA projects. This is especially true given the lack of people able and willing to start good new projects.

I think EAs using inconsistent reference classes to judge comparative advantage in the talent market is a common error, and we should try harder to avoid it.

Some considerations for comparative advantage in the talent market are already well known and implemented. People are well aware that it makes more sense for someone early in their career to make a major career switch than for someone who is already experienced in a particular field. This is a message that 80,000 Hours has communicated for a long time. It is common sense anyway.

However, beyond EAs simply not thinking enough about comparative advantage, there are some strategies for allocating talent better which aren’t implemented enough. Some people who would be a good fit for high-impact jobs outside the corporate sector are put off by their low pay. If you have no other obligations and don’t mind a frugal lifestyle, these positions are a relatively better fit for you. But if that isn’t your situation, you would otherwise be a good fit, and negotiating for higher pay with your prospective employer fails, then one option is to try to find a donor to supplement your income. (This is not only a nice theory; I know of cases of this happening.)

Cooperating in the talent market across cause areas

There’s another argument about allocating talent which I think is severely underappreciated: people should be willing to work in cause areas which aren’t their top pick, or even ones they don’t find compelling, if according to their personal fit a role in those cause areas is their comparative advantage within the EA community. Our talent would be allocated much better, and we would thus increase our impact as a community.

Consider the argument on a small scale, with Allison and Bettina trying to make career decisions:

Allison considers animal suffering the most important cause area. She’s familiar with the arguments outlining the danger of the development of AGI, but she is not that convinced. Allison’s main areas of competence are her machine learning PhD and her policy background. Given her experience, she could be a good fit for AI technical safety research or AI policy.

Bettina, on the other hand, is trained in economics and has been a farm animal activist in the past. However, she has become a lot less convinced that animal suffering is the most important cause area and now thinks AI is vastly more important.

Allison might well do fine working on abolishing factory farming, and Bettina might well find an acceptable position in the AI field. But Allison would probably do much better working on AI, and Bettina would do much better working on abolishing factory farming. If they cooperate with each other and switch places, their combined impact will be much higher, regardless of how valuable work on AI Safety or abolishing factory farming really is.
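The “regardless of how valuable” point can be made precise with a toy calculation. The fit scores below are invented purely for illustration: because the swap produces more in both cause areas at once, it wins under any relative weighting of the two causes.

```python
# Hypothetical "fit" scores: units of progress each person produces per year
# in each cause area. All numbers are invented for illustration only.
fit = {
    "Allison": {"AI": 10, "animals": 3},  # ML PhD + policy background
    "Bettina": {"AI": 2,  "animals": 8},  # economics + activism experience
}

# Without trading: each works in the cause area she personally prefers.
no_trade = {"AI": fit["Bettina"]["AI"], "animals": fit["Allison"]["animals"]}

# With trading: each works where her comparative advantage lies.
trade = {"AI": fit["Allison"]["AI"], "animals": fit["Bettina"]["animals"]}

# The trade produces more in *both* cause areas, so it is better no matter
# what relative value you assign to AI work vs. animal work.
for cause in ("AI", "animals"):
    assert trade[cause] > no_trade[cause]

print(no_trade)  # {'AI': 2, 'animals': 3}
print(trade)     # {'AI': 10, 'animals': 8}
```

A trade that improves output in every cause simultaneously is a Pareto improvement, which is why the conclusion doesn’t depend on settling the cause prioritisation debate first.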

The same principle extends to the whole of the EA community. Our impact as a community will be higher if we are willing to allocate people according to their comparative advantages across the whole of EA (‘talent trading’), not just within individual causes.

There are two main counterarguments I’ll consider, which are the ones I’ve heard most often. One is that people wouldn’t be motivated enough to excel in their job if they don’t believe it is the best thing they can do according to their own values. The other is that ‘people just don’t do that’. I think this one actually has more merit than people realise.

Most notably though, I have not yet encountered much disagreement on theoretical grounds.

‘People wouldn’t be motivated enough.’

It does seem true that people need to be motivated to do well in their job. However, I’m less convinced that people need to believe their job is the best thing they can personally do according to their own values in order to have that motivation. Many people in the Effective Altruism community have switched cause areas at least once, so their motivation must be somewhat malleable.

Personally, I’m not very motivated to work on animal suffering, considering all the human suffering and extinction risk there is. I don’t think this is unfixable though. Watching videos of animals in factory farms would likely do the trick. I’ve also found working in this area more compelling since listening to the 80,000 Hours podcast episode with Lewis Bollard. He presented abolishing factory farming as a more intellectually interesting problem than I had previously considered it.

However, I think it’s rare that lack of motivation is people’s true rejection. If it were, I’d expect to see many more people talking about how we could ‘hack’ our motivation better.

In case lack of motivation does turn out to be the main reason people don’t do enough talent trading across cause areas, I think there are more actions we could take to deal with it.

‘People don’t do that.’

The argument in favour of talent trading across cause areas requires people to actually cooperate. The reason the Effective Altruism community doesn’t cooperate enough in its talent market might well be that we’re stuck in a defecting Nash equilibrium: people in the EA community know that others don’t go into cause areas they don’t fancy, so they aren’t willing to do it either. There are potential solutions to this: setting up a better norm and facilitating explicit trades.

We can set up a better norm by changing the social rewards and expectations. It is admirable when someone works in a cause area that isn’t their top pick, and if people observe others cooperating, they will cooperate too. If you are doing direct work in a cause area that isn’t your top pick, you might want to consider becoming more public about this fact. There is a fair number of people who don’t work in their top-pick cause area, or who even work in cause areas they are much less convinced of than their peers are, but currently they don’t advertise this fact.

At the very least, as a community we should be able to broaden the range of cause areas people are willing to work in, even if not everyone will be willing to work in cause areas they’re pretty unconvinced of.

One way to get to a better norm is to facilitate explicit talent trades, akin to donation trades. To set up donation trades, people ask others for connections, either in their local EA network or online, or they contact CEA to get matched with major donors.

We can do the same for trading talent. People thinking about working in another cause area can ask around to find out whether someone else is considering switching into the cause area they themselves prefer. However, trading places in this scenario brings major practical challenges, so it is likely not viable in most cases.

A more easily implementable solution is to search for a donor willing to offset a cause area switch, i.e. to make a donation to the cause area the talent will be leaving.

There might also be good arguments against the concept of talent trading across cause areas, on theoretical or practical grounds, that I haven’t listed here. A cynical interpretation of why people aren’t willing enough to cooperate across cause areas is that people consider their cause area a ‘tribe’ they want to signal allegiance to, and only want to appear smart and dedicated to the people within that cause area.

All that said, people’s motivations, talent and values are correlated, so there’s a limit on how far the theoretical argument in favour of working in other cause areas applies.

Which arguments against cooperating in the talent market across cause areas can you think of? Do you think people are considering their comparative advantages in the talent market enough, whether within or across cause areas? And what other dimensions with practical implications, which I haven’t listed, can people differ on in the talent market?

Summary: If we want to allocate talent within the EA community well, we need to consider people’s comparative advantages across various dimensions, especially those that have a major impact on their personal fit. People should be more willing to work in cause areas that don’t match their cause preferences if they have a big comparative advantage in personal fit there.


Special thanks go to Jacob Hilton, who reviewed a draft of this post.