Replaceability with differing priorities
Summary: A naive understanding of replaceability might lead us to think too highly of earning to give. A more complete understanding might lead us away from it, as described here and here. An even more complete understanding, based on the possibility of differences in priorities, might lead us back again. In particular,
If you earn to give, or take a position an organization wouldn't otherwise have filled, you can ensure you contribute counterfactually to your own priorities. But in a position that would otherwise have been filled by someone about as suitable as you, you may be mostly contributing counterfactually to the priorities of the people displaced by you taking that position, and those priorities may differ substantially from your own.
Highly competitive cause-neutral positions may have lower impact according to your prioritization than previously thought, if your priorities differ significantly from the average of those applying to the positions or of the EA community as a whole.
Naive replaceability
Naively, if you're considering working in position P at charity X, or earning to give in position Q at company Y, you might compare the following two impact differences and choose the position for which the difference is greater:
1. (Your impact in position P at charity X) − (A's impact in position P at charity X)
2. (Your impact in position Q, earning to give, at company Y) − (B's impact in position Q at company Y)
where A and B are the people who’d be in those positions if you weren’t. The counterfactual you’re comparing both 1. and 2. to is one in which you do nothing.
If position P is highly competitive, you might expect 1. to be small (assuming you’d get the position), while 2. looks big as long as the charities you donate to are funding-constrained.
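To make this concrete, here's a minimal sketch of the naive comparison in Python; all numbers are hypothetical, in arbitrary impact units, and chosen only for illustration:

```python
# Minimal sketch of the naive comparison, with hypothetical numbers in
# arbitrary "impact units".

your_impact_at_charity = 100   # you in position P at charity X
a_impact_at_charity = 95       # A, the runner-up, in position P instead

your_donation_impact = 40      # impact of your donations, earning to give at company Y
b_impact_at_company = 5        # B's impact in position Q at company Y

diff_direct_work = your_impact_at_charity - a_impact_at_charity      # difference 1.
diff_earning_to_give = your_donation_impact - b_impact_at_company    # difference 2.

best = "direct work" if diff_direct_work > diff_earning_to_give else "earning to give"
print(diff_direct_work, diff_earning_to_give, best)  # 5 35 earning to give
```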
But this can’t always be right, since otherwise all EAs should earn to give, and we wouldn’t know where to give, because there would be no one focused on prioritization and cost-effectiveness analysis! (Or we’d be relying on the research of people working full-time earning to give.)
A more complete picture
There are a few things missing in the above analysis. Sometimes, because of a talent constraint, the position would be otherwise unfilled, and we should have:
1. = (Your impact in position P at charity X) − (no one in position P at charity X)
It's also possible that you being in position P at charity X will bring more people to it, or to similar charities, overall.
Furthermore, we haven't considered the further displacements that would happen if you take position P at charity X. If you displace A, they might displace someone else somewhere else:
(A in position R) − (C in position R), where R is the position A takes instead of P
We should add this term to 1., and there can be further similar terms for displacements farther down the chain:
+ (C in position S − D in position S) + (D in position T − E in position T) + …
I’ll call this sequence of displacements a displacement chain (or replacement chain).
One of these displacements could push an altruist into earning to give or into an extra, otherwise unfilled position, as mentioned above, and you might expect this to be the biggest term in the sum. However, positions tend to become less important further along the chain (since they're less desirable), so the more terms that come before this one, the smaller I'd expect it to be, and the largest term may often come earlier.
Similarly, we get more terms for 2., but they're smaller, since people working in industry are usually much less altruistic. So, after adding the extra terms to 1. and 2. as appropriate, it looks like their values should be closer.
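As a rough sketch of this fuller accounting (continuing the hypothetical numbers from the earlier sketch, all made up for illustration), each option's value becomes a sum over its displacement chain:

```python
# Sketch of the fuller accounting, with hypothetical numbers. Each displacement
# term is (the displacer's impact in the new position) minus (the impact of
# whoever they push out of it), in the same arbitrary units as above.

# Chain started by you taking position P at charity X:
direct_term = 100 - 95        # you minus A in position P
displacement_terms = [
    60 - 55,                  # A minus C in position R
    50 - 48,                  # C minus D in position S
    30 - 0,                   # D pushed into an otherwise unfilled position (or earning to give)
]
value_direct_work = direct_term + sum(displacement_terms)

# Chain started by earning to give at company Y; the people displaced in
# industry are usually much less altruistic, so their terms add little:
value_earning_to_give = (40 - 5) + 1

print(value_direct_work, value_earning_to_give)  # 42 36, much closer than the naive 5 vs 35
```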
Differing priorities along the chain
However, the further along the displacement chain, the less altruistic you should expect the people to be and, crucially, the less you should expect them to prioritize the same causes as you. This is regression toward the mean. If you get position P at charity X, and the person A you displace ends up focusing on a cause you hardly care about at all (you might have had overlapping but nonidentical priorities), you should expect the next term we added to 1. to be pretty small after discounting based on your prioritization. There might be a lot of overlap in prioritization at this first step, though, so this might not usually be a significant concern here. On the other hand, this term might not be big in the first place if A is displaced into a position at another organization that would have been filled anyway.
Furthermore, it seems like you should apply discounting for differences in cause prioritization to all the other terms, and more of it the further along the chain you go, although the multiplying factor need not tend towards 0, since if the chain stays within the EA community, there's still significant overlap with your priorities. For example, if you and 10% of the EA community prioritize a given cause above all others, then you might expect the multiplying factor based on differing priorities to be at least 0.1 (= 10%). If the chain is long enough, you might expect the factor to be around 0.1 for the term that should have been the biggest, possibly an altruist pushed into earning to give or into an otherwise unfilled position.
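As a rough numerical illustration (reusing the hypothetical chain terms from the sketch above, with made-up overlap factors), the discounting might look like this:

```python
# Sketch of prioritization discounting along the chain. Each term is multiplied
# by your expected priority overlap with the person whose counterfactual
# contribution it represents; the overlap shrinks toward a community-wide
# baseline (0.1 here, i.e. 10% of EAs sharing your top cause).

terms = [5, 5, 2, 30]             # the hypothetical chain terms from above
overlaps = [1.0, 0.7, 0.3, 0.1]   # assumed overlap factors, shrinking along the chain

discounted_value = sum(t * o for t, o in zip(terms, overlaps))
print(discounted_value)  # 12.1 -- the big late term (30) contributes only 3.0
```

On these assumed numbers, the nominally biggest term contributes little after discounting, which is what the next paragraph builds on.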
What I get from this is that, all else equal, you should prefer chains where the big terms (before prioritization discounting) come earlier, since you discount earlier terms less in expectation, especially if your priorities differ significantly from those of the other applicants for the alternatives, or from the EA community on average. This favours earning to give (more than before this analysis of differing priorities, though not absolutely) and "extra positions" that wouldn't be filled without you, because most of the impact is in the first term, and, to a lesser extent, positions that overlap a lot with earning to give, e.g. possibly quantitative/tech, management and operations roles.
So, if you're thinking of applying for a position for which you expect the other applicants to have very different priorities from you (aside from the position's own cause), and your priorities differ significantly from the EA average, it might actually be better to let them have it, because you can then better ensure you contribute counterfactually to your own priorities, rather than to theirs or to the priorities of someone else down the chain.
Aside on honesty
It might even make sense, in theory, to apply to EA organizations working on causes you support the least, so that you can displace the EA who would have otherwise worked there towards causes you support more. I have two objections:
1. This is probably not the most effective thing you could do, even if it worked.
2. I don’t think you could do this without lying to or otherwise misleading the org about your concern for their cause, and given the risks to your own reputation and to coordination and trust in the EA community as a whole, no one should do this.
Cause-neutral charities
The kinds of charity work I suspect this argument counts most strongly against are cause-neutral ones, like meta EA, facilitating donations to and raising funds for EA charities cause-neutrally, cause prioritization research and community building. But these kinds of roles are basically public goods for the EA community, since each cause area benefits from them, and public goods tend to be underproduced if left to markets.
RC Forward, which allows Canadians to get tax credits for donating to EA charities, recently started taking fees for this service, and this makes perfect sense for a cause-neutral org like it! Unfortunately, I don't think there's any similar solution for cause-neutral roles at EA charities. We may be asking people to use their careers to counterfactually benefit causes they don't prioritize. Off the top of my head, this only really makes sense to me for EAs
1. whose cause prioritization is similar to that of the community as a whole, or who are happy to defer to it,
2. who prioritize “cause-neutral causes” or meta-EA, like cause prioritization or community building,
3. who are looking to build career capital at EA orgs, or
4. who are a much better fit than the candidate they displace (or for whom someone else along the displacement chain is a much better fit than the person they displace, or than no one if the chain ends at an otherwise unfilled position, all according to their own cause prioritization).
This is probably not exhaustive.
Maybe we have enough EAs like this that it’s not a problem? There doesn’t seem to be any shortage of applicants to cause-neutral EA orgs.