Thanks for writing this. I share some of this uneasiness—I think there are reputational risks to EA here, for example by sponsoring people to work in the Bahamas. I’m not saying there isn’t a potential justification for this but the optics of it are really pretty bad.
This also extends to some lazy ‘taking money from internet billionaires’ tropes. I’m not sure how much we should consider bad faith criticisms like this if we believe we’re doing the right thing, but it’s an easy hit piece (and has already been done, e.g. a video attacking someone from the EA community who is running for Congress over being part-funded by Sam Bankman-Fried; I’m deliberately not linking to it here because it’s garbage).
Finally, I worry about wage inflation in EA. EA already mostly pays at the generous end of nonprofit salaries, and some of the massive EA orgs pay private-sector-level wages (reasonably, in my view—if you’re managing $600m/year at GiveWell, it’s not unreasonable to be well-paid for that). I’ve spent most of my career arguing that people shouldn’t have to sacrifice a comfortable life if they want to do altruistic work—but it concerns me that entry-level positions in EA are now being advertised at what would be CEO-level salaries at other nonprofits. There is a good chance, I think, that EA ends up paying professional staff significantly more to do exactly the same work to exactly the same standard as before, which is a substantive problem; and there is again a real reputational risk here.
Suppose there were no existing nonprofit sector, or that everyone who worked there was an unpaid volunteer, so that the only comparison was to the private sector. Do you think the optimal level of compensation would differ significantly in this world?
In general I’m skeptical that the existence of a poorly paid, fairly dysfunctional group of organisations should inspire us to copy them, rather than the much larger group of very effective orgs who do practice competitive, merit-based compensation.
I was struck by this argument, so I ran a Twitter poll. Non-profits are different from for-profits in that their aim isn’t profit. But putting that aside, what should effective altruist charities be more similar to: for-profits or non-EA non-profits?
71% of the responding EAs thought that EA charities should be more like for-profits.
59% of the responding non-EAs took the same view. (Though this group was small.)
Obviously this poll isn’t representative, so it should be taken with a grain of salt.
A quick note about the use of “bad faith criticisms” — I don’t think it’s the case that every argument against “taking money from internet billionaires” is bad faith, where bad faith is defined as falsely presenting one’s motives, consciously using poor evidence or reasoning, or some other intentional duplicitousness.
It seems perfectly possible for one to coherently and in good faith argue that EA should not take money from billionaires. Perhaps in practice you find such high-quality good faith takes lacking, but in any case I think it’s important not to categorically dismiss them as bad faith.
Agreed. I wasn’t clear in the original post but I particularly had in mind this one attack ad, which is intellectually bad faith.
I share the concerns about wage inflation, especially given the prevailing discrepancy in salaries across cause areas (e.g. ACE only just elevated their ED salary to ~$94k, whereas several mid-level positions at Redwood are listed at $100k–$170k), which I imagine is likely to become more dramatic with money pouring specifically into meta and longtermist work. My concern here is that cause-impartial EAs are still likely to go for the higher salary, which could lead to an imbalance in a talent-constrained landscape.
I have the opposite intuition. I think it’s good to pay for impact, and not just (e.g.) moral merit or job difficulty. It’s good for our movement if prices carry signals about how much the work is valued by the rest of the movement. If anything I’d be much more worried about nonprofit prices being divorced from impact/results (I’ve tried to write a post about this like five times and gave up each time).
I think I agree with much of the spirit of this, but it is fairly unclear to me that organizations simply trying to pay market rates, to the extent that is possible, would produce that outcome. I don’t think funding is distributed across priorities according to the values of the movement as a whole (or even via some better conception of priorities where more engaged people are weighted more heavily), and different areas in the movement have different philosophies around compensation, so other factors seem to be warping how funding is distributed. It is really unclear to me whether EA salaries currently carry signals about impact, as opposed to mostly telling us something about the funding overhang and the relative ease of securing funding in various spaces (which I think is somewhat uncorrelated with impact). To the extent that salaries do seem correlated with impact (which is possibly happening, but I’m uncertain), I’m not sure the reason is that the EA job market is pricing in impact.
I’m pretty pro compensation going up in the EA space (at least to some extent across the board, and definitely in certain areas), but my biggest worry is that it might make it much harder to start new groups: the amount of seed funding a new organization needs to get going when salary expectations are much higher (even in a well-funded area) seems like a bigger barrier to overcome, even just psychologically, for entrepreneurial people who want to build something.
Though also I think a big thing happening here is that lots of longtermism/AI orgs are competing with tech companies for talent, while other organizations are competing with non-EA businesses that pay less than tech companies, so the salary stratification is just naturally going to happen.
I agree that pricing in impact seems reasonable. But do you think this is currently happening? If so, by what mechanism? I think the discrepancies between Redwood and ACE salaries are much more likely explained by norms at the respective orgs and funding constraints rather than by some explicit pricing of impact.
I agree the system is far from perfect and we still have a lot of room to grow. Broadly, I think donors (albeit imperfectly) give more money to places they expect to have higher impact, and an org prioritizes (albeit imperfectly) higher staffing costs if it thinks staffing on the margin is relatively more important to its marginal success.
I think we’re far from that idealized position now, but we can move towards it and are slowly doing so.
Sometimes people in EA will target scalability and some will target cost-effectiveness. In some cause areas scalability will matter more and in some cost-effectiveness will matter more; e.g. longtermists seem more focused on the scalability of new projects than those working on global health. Where scalability matters more there is more incentive for higher salaries (“oh, we can pay twice as much and get 105% of the benefit – great”). As such I expect there to be an imbalance in salaries between cause areas.
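To make that incentive concrete, here is a toy numerical sketch (the salary and benefit figures are invented purely for illustration, not taken from any real org) of how a cost-effectiveness-focused funder and a scalability-focused funder can look at the same “pay twice as much for 105% of the benefit” offer and reach opposite conclusions:

```python
# Toy sketch (all numbers invented) of the trade-off described above.
baseline = {"cost": 100_000, "benefit": 1.00}   # current salary level
premium  = {"cost": 200_000, "benefit": 1.05}   # double the pay, 5% more output

def benefit_per_dollar(option):
    """Cost-effectiveness: benefit produced per dollar spent."""
    return option["benefit"] / option["cost"]

for name, option in [("baseline", baseline), ("premium", premium)]:
    print(f"{name}: benefit per dollar = {benefit_per_dollar(option):.2e}, "
          f"total benefit = {option['benefit']:.2f}")

# A cost-effectiveness-focused funder (money is the binding constraint)
# prefers the baseline, which buys roughly twice the benefit per dollar.
# A scalability-focused funder (talent or speed is the constraint and money
# is plentiful) prefers the premium, which simply produces more in total.
```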
This has certainly been my experience: the people funding me to do longtermist work give the feedback that they don’t care about cost-effectiveness (if I can scale a bit quicker at higher cost, I should do so), while the people paying me to do neartermist work take a more frugal approach to spending resources.
As such my intuition aligns with Rockwell’s concern that cause-impartial EAs are still likely to go for the higher salary, which could lead to an imbalance in a talent-constrained landscape.
On the other hand, some EA folk might actively go for lower salaries, either with the view that they will likely have a higher counterfactual impact in roles that pay less, or driven by some sense that self-sacrifice is important.
As an aside, I wonder if you (and OP) mean a different thing by “cause impartial” than I do. I interpret “cause impartial” as “I will take whatever actions maximize impact (subject to personal constraints), regardless of cause area,” whereas I think some people take it to mean a more freeform approach to cause selection, more like “Oh, I don’t care what job I do as long as it has the ‘EA’ stamp” (maybe/probably I’m strawmanning here).
I think that’s a fair point. I normally mean the former (the impact-maximising one), but here I was probably reading it in the way the OP used it (the ‘EA stamp’ one). Good to clarify what was meant here; sorry for any confusion.
I feel like this isn’t the definition being used here, but when I use “cause neutral” I mean what you describe, and when I use “cause impartial” I mean something like an intervention that will lift all boats without respect to cause.
Thanks! That’s helpful.
I think there’s a problem with this. Salary as a signal for perceived impact may be good, but I’m worried some other incentives may creep in. Say you’re paying a higher amount for AI safety work, and now another billionaire becomes convinced of the AI safety case and starts funding you too. You could use that money for actual impact, or you could just pay your current or future employees more. And I don’t see a real way to discern the two.
I also share these worries.
A further and related worry I have resulting from this is that it is anti-competitive: it makes critiquing the EA movement that much harder. If you’re an ‘insider’ or ‘market leader’ EA organisation, you (by definition) have the ability to pay these very high salaries. This means you can, to some degree, ‘price out’ challenger organisations with new ideas.
These are very standard economic worries but, for some reason, I don’t hear the EA landscape being described in these terms.
[own views etc]
I think the ‘econ analysis of the EA labour market’ has been explored fairly well—I highly recommend this treatment by Jon Behar. I also find myself (and others) commonly banging the drum in comment threads for why it is beneficial to pay more, or why particular proposals not to do so (or to pay EA employees less) are not good ones.
Notably, ‘standard economic worries’ point in the opposite direction here. On the standard econ-101 view, “Org X struggles as competitor Org Y can pay higher salaries”, or “Cause ~neutral people migrate to ‘hot’ cause area C, attracted by higher pay” are desirable features, rather than bugs, of competition. Donors/‘consumers’ demand more of Y’s product than X’s (or more of C generally), and the price signal of higher salaries acts to attract labour to better satisfy this demand (both from reallocation within the ‘field’, and by incentivizing outsiders to join in). In aggregate, both workers and donors expect to benefit from the new status quo.
In contrast, trying to intervene in the market to make life easier for those losing out in this competition is archetypally (and undesirably) anti-competitive. The usual suggestion (implied here, but expressly stated elsewhere) is unilateral or mutual agreement between orgs to pay their employees less, or to refrain from paying them more. The usual econ-101 story is that this is a bad idea: although it can anoint a beneficiary (i.e. those who run and donate to Org X, who feel less heat from Org Y potentially poaching their staff), it makes the market more inefficient overall and harms/exploits employees (such activity often draws the ire of antitrust regulators). To cash out explicitly who can expect to lose out:
Employees at Org X, who lose the option of migrating to more lucrative employment.
Employees at Org Y, who lose out by getting paid less than they otherwise would.
(probably) Org Y, who by artificially suppressing salary can expect a supply shortfall versus a preferable market equilibrium (as they value marginal labour more than the artificially suppressed price).
Donors to Org Y, who (typically) prefer their donations to lead to more Org Y activity, rather than being partially siphoned off in an opaque transfer subsidy to Org X. Even donors who would want to ‘buy’ more of both Org Y and Org X could do so more efficiently by splitting their donations.
Also, on the econ-101 story, orgs can’t unfairly screw each other by outspending each other on costs. If a challenger can’t compete with an incumbent on salary, its problem really is that it can’t convince donors to give it more money (despite its relatively discounted labour), which implies that donors agree with the incumbent, not the challenger, about which is the better use of marginal scarce resources.
Naturally, there are corner cases where this breaks down (e.g. if labour supply were basically inelastic, upping pay would just waste money), but none of these seem likely. Likewise, it is unclear how efficient the ‘EA labour market’ is, but if it is inefficient and distorted, the standard econ reflex would be hesitant that it could be improved by adding more distortions and inefficiencies. Also, as being rich does not mean being right, economic competition could distort competition in the marketplace of ideas. But even if the market results are not synonymous with the balance of reason, they are probably not orthogonal to it either. If animal-welfare-leaning Alice goes to Redwood over ACE, it also implies she’s not persuaded that the intrinsic merit of ACE is so much greater as to warrant a large altruistic donation from her in the form of salary sacrifice; if Mike the mega-donor splashes the cash on AI policy but is miserly about mental health, this suggests he thinks the former is more promising than the latter. Even if the economic weighting (wealth) were completely random, this would noisily approximate equal-weight voting on the merits; I’d guess it weakly correlates with epistemic accuracy.
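As a rough illustration of the supply-shortfall and elasticity points above, here is a toy sketch; the supply curve, salaries, and elasticity values are all assumptions made up for illustration, not estimates of the actual EA labour market:

```python
# Toy sketch (all numbers and the supply curve are invented) of how an
# artificially capped salary leaves an org with fewer candidates, and of the
# corner case where nearly inelastic supply makes higher pay mostly wasted.

def labour_supplied(salary, elasticity, base_salary=80_000, base_supply=10):
    """Constant-elasticity supply curve: candidates willing to take the job."""
    return base_supply * (salary / base_salary) ** elasticity

market_salary = 120_000   # what competing orgs pay
capped_salary = 80_000    # a hypothetical agreed-upon cap

for elasticity in (1.0, 0.1):   # responsive vs. nearly inelastic supply
    at_cap = labour_supplied(capped_salary, elasticity)
    at_market = labour_supplied(market_salary, elasticity)
    print(f"elasticity={elasticity}: {at_cap:.1f} candidates at the cap "
          f"vs {at_market:.1f} at the market salary")

# With responsive supply (elasticity 1.0) the cap leaves the org with about a
# third fewer candidates than it would attract at the market salary; with
# nearly inelastic supply (0.1) the gap almost vanishes, which is the corner
# case where raising pay mostly just spends more money for the same people.
```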
So I think Org (or cause) X, if it is on the wrong side of these dynamics, should basically either get better or accept the consequences of remaining worse, e.g.:
Try and persuade donors it is underappreciated on the direct merits.
Or try and persuade them they should have a hypothecated exploratory budget for areas which do not currently, but might in future, have attractive direct merits (and that Org X would satisfy these criteria).
Alternatively, accept their budget constraints mean they hire fewer staff at market rates.
Or accept they can’t compete on salary and try to get more staff on lower salaries, but accept this strategy will result in a more constrained recruitment pool (e.g. only staff highly committed to the cause, or those without the skill sets to be hired by Org Y and company).
Appealing for subsidies of various types seems unlikely to work (as although they are in Org X’s interest, they aren’t really in anyone else’s), and it is probably negative-EV from most idealized ‘ecosystem-wide’ perspectives.
[also speaking in a personal capacity, etc.]
Hello Greg.
I suspect we’re speaking at cross-purposes and doing different ‘econ 101’ analyses. If the EA world were one of perfect competition (lots of buyers and sellers, competition of products, ease of entry and exit, buyers have full information, equal market share) I’d be inclined to agree with you. In that case, I would effectively be arguing for less competitive organisations to get subsidies.
That is not, however, the world I observe. Suppose I describe a market along the following lines. One or two firms consume over 90% of the goods whilst also being sellers of goods. There are only a handful of other sellers. The existing firms coordinate their activities with each other, including the sellers mostly agreeing not to directly compete over products. Access to the market and to information about the available goods is controlled by the existing players. Some participants fear (rightly or wrongly) that criticising the existing players or the structure of the market will result in them being blacklisted.
Does such a market seem problematically uncompetitive? Would we expect there to be non-trivial barriers to entry for new firms seeking to compete on particular goods? Does this description bear any similarity to the EA world? Unfortunately, I fear the answer to all three of the questions is yes.
So, to draw it back to the original point, for the market incumbents to offer very high salaries to staff is one way in which such firms might use their market power to ‘price out’ the competition. Of course, if one happened to think it would be bad, all things considered, for that competition to succeed, then one might not mind this state of affairs.
Just to understand your argument—is the reputational risk your only concern, or do you also have other concerns? E.g. are you saying it’s intrinsically wrong to pay such wages?
“There is a good chance, I think, that EA ends up paying professional staff significantly more to do exactly the same work to exactly the same standard as before, which is a substantive problem;”
At least in this hypothetical example it would seem naively ineffective (not taking into account things like signaling value) to pay people more salary for the same output. (And fwiw I think qualities like employee wellbeing are part of “output”, but it is unclear how directly salary helps in that area.)
Right, though I guess many would argue that by paying more you increase the output—either by attracting more productive staff or by increasing the productivity of existing staff (by allowing them to, e.g. make purchases that buy time). (My view is probably that the first of those effects is the stronger one.)
But of course the relative strength of these different considerations is tricky to work out. There is no doubt some point beyond which it is ineffective to pay staff more.