I worry that EA jobs are too good a deal, e.g. better benefits, better salary, and more impact, when just one or two of those would be enough to motivate someone into the job.
Imagine if Google [edit: in its early high-growth phase] said something like this—our company is so impactful that we ought to pay a salary far below the industry standard, to avoid making our job offers “too good”. Clearly this is wrong. Yes, there is an effect in this direction. But if you stoop down to nonprofit salaries, you will lose more from being unable to recruit selfish talent, than you would lose from overpaying the altruistic talent.
Note also that if the talent is truly as fully altruistic as they would have to be for your logic to work out, they could negotiate their salary down, or donate it on, so the cost of overpaying them should be quite small indeed.
Agree with your conclusion, but I don’t see the Google analogy. Google doesn’t expect its employees to be prosocially or impact motivated. And what is good decision logic for maximising Google’s profits might be terrible logic for an EA org to follow, e.g. unpredictable product rollouts to confuse the competition, or trying to lock in markets and systems.
Sorry, I was picturing an early-stage Google that could expect their staff to be at least a bit altruistic. They had a giant ratio of users to staff, such that each staff member genuinely would have an enormous positive impact, and growth and impact were aligned at least somewhat.