Comparative advantage in the talent market

The concept of comparative advantage is well known within the Effective Altruism community. For donations, it is reasonably well understood and implemented: think of donor lotteries, or donation trading across countries to take better advantage of tax exemptions.

In this post I’m outlining how the idea of comparative advantage can be applied to the talent market.

The first part covers some general implications of differences between people and how talent should be allocated accordingly. In the second part I will argue that EAs should prioritise personal fit when deciding what to work on, even if this means working in a cause area they don’t consider a top priority. Finally, I’ll consider some common objections.

How people differ in the talent market

In the talent market, there are differences along many more dimensions than in the donation market. People have different skills, different levels of experience, different preferences for hours worked, geographical location and pay (and different degrees of flexibility on each), different levels of risk aversion with regard to career capital, and different preferences for cause areas.

Let’s look at differences in comparative advantage in skill. Imagine two people interested in ending factory farming. One has a biology degree and a lot of experience as an anti-factory-farming activist, while the other has a history degree and only a bit of experience as an activist. Plausibly the first person is better at both meat replacement research and advocacy, but her edge is largest in research. By the principle of comparative advantage, it is therefore still best for the experienced biologist-activist to go into meat replacement research and the less experienced activist to go into advocacy.

However, this argument depends on how scarce talent in advocacy and in meat replacement research is, relative to each other. If the world had an excess of people capable of doing good meat replacement research (which it does not) but a shortage of anti-factory-farming activists, our activism-experienced biologist should go into advocacy instead.
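To make the arithmetic concrete, here is a minimal sketch in Python. The productivity numbers are made up purely for illustration: even though the biologist is better at both roles, total impact is highest when each person takes the role where her relative edge is greatest.

```python
from itertools import permutations

# Hypothetical impact figures (units per year): illustrative assumptions only.
# The biologist has an absolute advantage in BOTH roles, but her edge is
# largest in research, so research is her comparative advantage.
people = {
    "experienced biologist": {"research": 10, "advocacy": 6},
    "newer activist":        {"research": 2,  "advocacy": 4},
}

# Compare total impact under each one-person-per-role assignment.
for roles in permutations(["research", "advocacy"]):
    total = sum(skills[role] for skills, role in zip(people.values(), roles))
    print(dict(zip(people, roles)), "-> total impact:", total)

# Prints:
# {'experienced biologist': 'research', 'newer activist': 'advocacy'} -> total impact: 14
# {'experienced biologist': 'advocacy', 'newer activist': 'research'} -> total impact: 8
```

The scarcity point corresponds to changing the numbers: if good researchers were abundant, the counterfactual value of the biologist doing research would shrink, and the best assignment could flip.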

In general, when we think about comparative advantage and how to allocate talent, this is a good heuristic to use: which traits are we short of in the talent market? If you have one of those traits, maybe you should go in and fill the gap.

For example, we are currently short of operations talent. A post by 80,000 Hours notes that even if you’re only as good at research as other EAs, you should still consider taking on an operations role, given our current lack of EAs in operations.

We also currently lack people working on biorisk, so even if you consider AI Safety a more important cause area than biorisk, maybe you should go into biorisk, assuming you have an appropriate skill set.

It also seems likely that we don’t have enough people willing to start big new projects that are likely to fail. If you’re unusually risk-neutral, don’t mind working long hours and can deal with the prospect of likely failure, you should consider taking one on, even if you think you’d be just as good as other EAs at research or earning to give.

Something to keep in mind here is which reference class we are using to think about people’s comparative advantages. Since we want to allocate talent across the EA community, the reference class that usually makes most sense is the EA community itself. This is less true where people outside the EA community can fill roles the community thinks should be filled, i.e. where it is possible to replace EAs with non-EAs. An example would be development economists working at large international organisations like the UN. Given that the world already has a decent number of them, there is less need to fill those roles with EAs.

However, we can also err by using too narrow a reference class. People, including EAs, are prone to comparing themselves to the people they spend the most time with (or the people who look most impressive on Facebook). This is a problem because people tend to cluster with others who are most like them. So when they should be thinking about their comparative advantage within the EA community, they might accidentally think of their comparative advantage among their EA friends instead.

If all your EA friends are into starting new EA projects just like you, but you think they’re much better at it than you are, your comparative advantage across the whole of EA might still be to start new EA projects. This is especially true given the lack of people able and willing to start good new projects.

I think EAs using inconsistent reference classes to judge comparative advantage in the talent market is a common error, and one we should try harder to avoid.

Some considerations around comparative advantage in the talent market are already well known and acted on. People are well aware that it makes more sense for someone early in their career to make a major career switch than for someone already experienced in a particular field. This is a message 80,000 Hours has communicated for a long time, and it is common sense anyway.

However, beyond EAs simply not thinking enough about comparative advantage, there are some strategies for allocating talent better that aren’t implemented enough. Some people who would be a good fit for high-impact jobs outside the corporate sector are put off by the low pay. If you have no other obligations and don’t mind a frugal lifestyle, these positions are a relatively better fit for you. But if that doesn’t describe you, you would otherwise be a good fit, and negotiating for higher pay with your prospective employer fails, then one option is to try to find a donor to supplement your income. (This is not just a nice theory; I know of cases where this has happened.)

Cooperating in the talent market across cause areas

There’s another argument about allocating talent which I think is severely underappreciated: people should be willing to work in cause areas which aren’t their top pick, or even ones they don’t find compelling, if a role in those cause areas is, on personal fit, their comparative advantage within the EA community. If they did, our talent would be allocated much better, and we would thus increase our impact as a community.

Consider the argument on a small scale, with Allison and Bettina trying to make career decisions:

Allison considers animal suffering the most important cause area. She’s familiar with the arguments outlining the danger of the development of AGI, but she is not that convinced. Allison’s main areas of competence are her machine learning PhD and her policy background. Given her experience, she could be a good fit for AI technical safety research or AI policy.

Bettina, on the other hand, is trained in economics and has been a farm animal activist in the past. However, she’s now a lot less convinced that animal suffering is the most important cause area, and thinks AI is vastly more important.

Allison might do fine working on abolishing factory farming, and Bettina might find an acceptable position in the AI field. But Allison would probably do much better working on AI, and Bettina much better working on abolishing factory farming. If they cooperate with each other and switch places, their combined impact will be much higher, regardless of how valuable work on AI Safety or on abolishing factory farming really is.
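A minimal sketch of this logic, with fit multipliers and cause weights invented for illustration: so long as each person is substantially more productive in the other’s current cause area, switching raises combined impact under any positive weighting of the two causes.

```python
# Hypothetical fit multipliers: illustrative assumptions only.
fit = {
    "Allison": {"AI": 1.0, "animals": 0.3},  # ML PhD plus policy background
    "Bettina": {"AI": 0.3, "animals": 1.0},  # economics plus activism experience
}

def total_impact(assignment, cause_value):
    """Combined impact of a {person: cause} assignment under given cause weights."""
    return sum(fit[person][cause] * cause_value[cause]
               for person, cause in assignment.items())

no_trade = {"Allison": "animals", "Bettina": "AI"}       # each follows her own cause ranking
trade    = {"Allison": "AI",      "Bettina": "animals"}  # each follows personal fit

# The trade wins for any positive values placed on the two causes:
for weights in ({"AI": 10, "animals": 1}, {"AI": 1, "animals": 1}, {"AI": 1, "animals": 10}):
    print(weights, total_impact(no_trade, weights), total_impact(trade, weights))
# With weights {'AI': 10, 'animals': 1}: no_trade = 0.3*1 + 0.3*10 = 3.3,
# while trade = 1.0*10 + 1.0*1 = 11.0, and similarly for the other weightings.
```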

The same principle extends to the whole of the EA community. Our impact as a community will be higher if we are willing to allocate people according to their comparative advantages across the whole of EA (‘talent trading’), not just within individual causes.

There are two main counterarguments I’ll consider, which are the ones I’ve most often heard argued. One is that people wouldn’t be motivated enough to excel in their job if they didn’t believe it was the best thing they could do by their own values. The other is that ‘people just don’t do that’. I think this one actually has more merit than people realise.

Most notably though, I have not yet encountered much disagreement on theoretical grounds.

‘People wouldn’t be motivated enough.’

It does seem true that people need to be motivated to do well in their job. However, I’m less convinced that people need to believe their job is the best thing they can personally do by their own values in order to have that motivation. Many people in the Effective Altruism community have switched cause areas at least once, so their motivation must be somewhat malleable.

Personally, I’m not very motivated to work on animal suffering, considering all the human suffering and extinction risk there is. I don’t think this is unfixable though: watching videos of animals in factory farms would likely do the trick. I’ve also found working in this area more compelling since listening to the 80,000 Hours podcast with Lewis Bollard, who presented abolishing factory farming as a more intellectually interesting problem than I had previously considered it to be.

However, I think lack of motivation is rarely people’s true rejection. If it were, I’d expect to see many more people talking about how we could better ‘hack’ our motivation.

In case lack of motivation does turn out to be the main reason people don’t do enough talent trading across cause areas, I think there are more actions we could take to deal with it.

‘People don’t do that.’

The argument in favour of talent trading across cause areas requires people to actually cooperate. The reason the Effective Altruism community doesn’t cooperate enough in its talent market may well be that we’re stuck in a defecting Nash equilibrium: people in the EA community know that others don’t go into cause areas they don’t fancy, so they aren’t willing to do so either. There are potential solutions to this: setting up a better norm, and facilitating explicit trades.
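To illustrate the game-theoretic point, here is a toy coordination game with invented payoffs (a stag-hunt-style framing, chosen just for illustration): mutual defection is self-reinforcing, but mutual cooperation is also stable and pays everyone more, which is why a norm change can move us between equilibria.

```python
# Toy coordination game with hypothetical payoffs.
# "cooperate" = work where your comparative advantage lies, trusting others
#               to do the same; "defect" = stick to your own top-pick cause area.
# payoffs[my_move][their_move] = my payoff
payoffs = {
    "cooperate": {"cooperate": 3, "defect": 0},  # trade realised / trade falls through
    "defect":    {"cooperate": 2, "defect": 1},  # free-ride / status quo
}

def best_response(their_move):
    return max(payoffs, key=lambda my_move: payoffs[my_move][their_move])

print(best_response("defect"))     # 'defect': if nobody else trades, you shouldn't either,
                                   # so (defect, defect) is a Nash equilibrium
print(best_response("cooperate"))  # 'cooperate': mutual cooperation is also an equilibrium,
                                   # and it pays 3 instead of 1; a better norm gets us there
```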

We can set up a better norm by changing the social rewards and expectations. It is admirable when someone works in a cause area that isn’t their top pick, and if people observe others cooperating, they will be more willing to cooperate too. If you are doing direct work in a cause area that isn’t your top pick, you might want to consider becoming more public about this fact. There are a fair number of people who don’t work in their top-pick cause area, or who even work in cause areas they are much less convinced of than their peers are, but currently they don’t advertise this fact.

At the very least, as a community we should be able to broaden the range of cause areas people are willing to work in, even if not everyone will be willing to work in cause areas they’re pretty unconvinced of.

One way to get to a better norm is to facilitate explicit talent trades, akin to donation trades. To set up donation trades, people ask others for connections, either in their local EA network or online, or they contact CEA to be matched with major donors.

We can do the same for trading talent: people thinking about working in another cause area can ask around to find out whether someone is considering a switch in the opposite direction. However, literally trading places brings major practical challenges, so it is likely not viable in most cases.

A more easily implementable solution is to search for a donor willing to offset a cause-area switch, i.e. to make a donation to the cause area the person will be leaving.

There might also be good arguments against talent trading across cause areas, on theoretical or practical grounds, that I haven’t listed here. A cynical interpretation of why people aren’t willing enough to cooperate across cause areas might be that people consider their cause area the ‘tribe’ they want to signal allegiance to, and only want to appear smart and dedicated to the people within that tribe.

All that said, people’s motivations, talents and values are correlated, so there is a limit to how far the theoretical argument in favour of working in other cause areas applies.

Which arguments against cooperating in the talent market across cause areas can you think of? Do you think people are considering their comparative advantages in the talent market enough, whether within or across cause areas? And which practically relevant dimensions on which people differ in the talent market have I not listed?

Summary: If we want to allocate our talent in the EA community well, we need to consider people’s comparative advantages across various dimensions, especially the ones that have a major impact on their personal fit. People should be more willing to work in cause areas that don’t match their cause area preferences if they have a big comparative advantage in personal fit there.


Special thanks go to Jacob Hilton, who reviewed a draft of this post.