It concerns me that entry-level positions in EA are now being advertised at what would be CEO-level salaries at other nonprofits.
I share these concerns, especially given the prevailing discrepancy in salaries across cause areas (e.g. ACE only just raised their ED salary to ~$94k, whereas several mid-level positions at Redwood are listed at $100k–$170k), which I imagine is likely to become more dramatic as money pours specifically into meta and longtermist work. My concern here is that cause impartial EAs are still likely to go for the higher salary, which could lead to an imbalance in a talent-constrained landscape.
My concern here is that cause impartial EAs are still likely to go for the higher salary, which could lead to an imbalance in a talent-constrained landscape.
I have the opposite intuition. I think it’s good to pay for impact, and not just (e.g.) moral merit or job difficulty. It’s good for our movement if prices carry signals about how much the work is valued by the rest of the movement. If anything I’d be much more worried about nonprofit prices being divorced from impact/results (I’ve tried to write a post about this like five times and gave up each time).
I think I agree with much of the spirit of this, but it is fairly unclear to me that organizations simply trying to pay market rates, to the extent that is possible, would actually produce this. I don’t think funding is distributed across priorities according to the values of the movement as a whole (or even via some better conception of priorities where more engaged people are weighted more heavily), and different areas of the movement have different philosophies around compensation, so other factors seem to be warping how funding is distributed. It seems really unclear to me whether EA salaries currently carry signals about impact, as opposed to mostly telling us something about the funding overhang and the relative ease of securing funding in various spaces (which I think is somewhat uncorrelated with impact). To the extent that salaries do seem correlated with impact (which may be happening, but I’m uncertain), I’m not sure the reason is that the EA job market is pricing in impact.
I’m pretty pro compensation going up in the EA space (at least to some extent across the board, and definitely in certain areas), but my biggest worry is that it might make it much harder to start new groups — the amount of seed funding a new organization needs to get going when salary expectations are much higher (even in a well-funded area) seems like a bigger barrier to overcome, even just psychologically, for entrepreneurial people who want to build something.
Though I also think a big factor here is that lots of longtermist/AI orgs are competing with tech companies for talent, while other organizations are competing with non-EA employers that pay less than tech companies, so some salary stratification is naturally going to happen.
I agree, pricing in impact seems reasonable. But do you think this is currently happening? If so, by what mechanism? I think the discrepancies between Redwood and ACE salaries are much more likely explained by norms at the respective orgs and by funding constraints than by some explicit pricing of impact.
I agree the system is far from perfect and we still have a lot of room to grow. Broadly, I think donors (albeit imperfectly) give more money to places they expect to have higher impact, and an org prioritizes (albeit imperfectly) higher staffing costs if it thinks staffing on the margin is relatively more important to its marginal success.
I think we’re far from that idealistic position now but we can and are slowly moving towards it.
Some people in EA will target scalability and some will target cost-effectiveness. In some cause areas scalability will matter more and in some cost-effectiveness will matter more; e.g. longtermists seem more focused on the scalability of new projects than those working on global health. Where scalability matters more, there is more incentive for higher salaries ("oh, we can pay twice as much and get 105% of the benefit – great"). As such I expect there to be an imbalance in salaries between cause areas.
This has certainly been my experience: the people funding me to do longtermist work give the feedback that they don’t care about cost-effectiveness (if I can scale a bit quicker at higher cost, I should do so), while the people paying me to do neartermist work have a more frugal approach to spending resources.
As such my intuition aligns with Rockwell that:
My concern here is that cause impartial EAs are still likely to go for the higher salary, which could lead to an imbalance in a talent-constrained landscape.
On the other hand, some EA folk might actively go for lower salaries, either with the view that they will likely have a higher counterfactual impact in roles that pay less, or driven by some sense that self-sacrifice is important.
As an aside, I wonder if you (and OP) mean a different thing by “cause impartial” than I do. I interpret “cause impartial” as “I will do whatever actions maximize impact (subject to personal constraints), regardless of cause area.” Whereas I think some people take it to mean a more freeform approach to cause selection, more like “Oh, I don’t care what job I do as long as it has the ‘EA’ stamp” (maybe/probably I’m strawmanning here).
I think that’s a fair point. I normally mean the former (the impact-maximising one), but in this context I was probably reading it more like the way the OP used it (the “EA stamp” one). Good to clarify what was meant here, sorry for any confusion.
I feel like this isn’t the definition being used here, but when I use “cause neutral” I mean what you describe, and when I use “cause impartial” I mean something like an intervention that will lift all boats, without respect to cause.
I think there’s a problem with this. Salary as a signal for perceived impact may be good, but I’m worried other incentives may creep in. Say you’re paying higher salaries for AI safety work, and now another billionaire becomes convinced of the AI safety case and starts funding you too. You could use that money for additional impact, or you could just pay your current or future employees more. And I don’t see a real way to discern the two.
Thanks! That’s helpful.