Regarding asking EAs to do work for which they are overqualified and that non-EAs could do, I wonder whether financial incentives come into play here.
As a general rule, charitable organizations pay their employees below-market salaries and expect that the psychological value employees get from working for an organization they are aligned with ("warm fuzzies," to save space) covers the difference. Although some might disagree, I think this is a good practice in many roles and up to a certain point: you often want to select, at least partly, for how much a candidate gets warm fuzzies from working for your organization versus just doing it for the paycheck.
To the extent an organization's general pay strategy is, say, 70% of market rate (expecting the other 30% in warm fuzzies), that isn't going to be competitive for people who don't place significant value on the warm fuzzies.
Imagine you have three types of jobs in the world—private-sector, Save the Puppies (StP), and opera. Alice really likes puppies but only mildly likes opera, so she values StP fuzzies but minimally values opera fuzzies. She would be equally happy with a private-sector job, a 30% haircut to receive StP fuzzies, or a 5% haircut to receive opera fuzzies. Bob has similar preferences except that he values opera fuzzies and only mildly values StP fuzzies. Claire places only mild value on all fuzzies.
Suppose StP has a job opening that needs someone with an 80K level of qualifications/experience. Alice is a more qualified candidate (private-sector market rate = 100K) than Bob or Claire (whose rate = 80K). However, she is actually cheaper for StP (will work for 70K, her 100K rate less the 30% haircut) than Bob or Claire (will work for 76K, their 80K rate less the 5% mild-fuzzies haircut). Thus, there is a natural incentive to hire Alice for work she is overqualified for. Plus, demonstrated alignment to StP's mission probably has some value for the organization, especially if it is smaller and finds it inefficient to separate out tasks for which alignment is important.
That is, of course, merely a model. But EA, both by its nature and its recruiting strategy, generates a population that is highly qualified/capable, so the Alice/Bob/Claire hypothetical is more likely to arise in EA than at StP. Since liking puppies is fairly evenly distributed across ability levels, StP can probably find someone at the 80K level who is aligned with StP and will work for 56K (70% of their 80K market rate).
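The arithmetic behind the hypothetical can be made explicit. Below is a minimal sketch (the candidate names, rates, and discounts come from the example above; the `reservation_wage` helper is my own illustrative construction, not anything from the original comment):

```python
# Each candidate's reservation wage at StP: their private-sector market
# rate, reduced by the "warm fuzzies" haircut they accept for StP's mission.
candidates = {
    # name: (market_rate, StP_fuzzies_haircut)
    "Alice":  (100_000, 0.30),  # strongly values StP fuzzies
    "Bob":    (80_000,  0.05),  # only mildly values StP fuzzies
    "Claire": (80_000,  0.05),  # places only mild value on any fuzzies
}

def reservation_wage(market_rate: float, haircut: float) -> float:
    """Lowest salary the candidate will accept at StP."""
    return market_rate * (1 - haircut)

# Alice asks ~70K; Bob and Claire each ask ~76K, so the most
# qualified candidate is also the cheapest hire.
cheapest = min(candidates, key=lambda n: reservation_wage(*candidates[n]))
for name, (rate, haircut) in candidates.items():
    print(f"{name}: asks {reservation_wage(rate, haircut):,.0f}")
print(f"Cheapest hire: {cheapest}")
```

The same helper also reproduces the closing point: an aligned candidate at the 80K level taking the full 30% haircut would work for `reservation_wage(80_000, 0.30)`, i.e. 56K.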