To be clear: I don’t think suppressing pay is merely a suboptimal way to foster a strong culture. I think driving salaries low is sign-negative for this goal.
I agree with you that “if you have low/suppressed pay, you harm your recruitment”. I think we disagree on how prevalent the antecedent is: I think the 80k stat you cite elsewhere is out of date—although some orgs are still paying in a fairly flat band around ‘entry-level graduate salary’, I think others now pay more (whether enough to match market isn’t clear, but the shortfall seems less stark than it used to be).
The latter seems substantially better than the former by my lights (well, substituting ‘across the board’ for ‘let the market set prices’.)
The standard econ-101 story for this is (in caricature) that markets tend to efficiently allocate scarce resources, and you generally make things worse overall if you try to meddle in them (although you can anoint particular beneficiaries).
The mix of strategies to soft-suppress salaries below market rate (i.e. short of frank collusion/oligopsony) will probably be worse than not doing so. The usual predictions are a labour-supply shortfall, with the most able applicants preferentially selecting themselves out (if I know I’ll only realistically get $X at an EA org but can get $1.5X in the private sector, that’s a disincentive, and one that costs the EA org if they value my marginal labour at more than $X), and probably misallocation issues (bidding up wages gives a mechanism for the highest performers to match into the highest-performing orgs).
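To put the self-selection point in symbols (notation mine, not from any particular model):

```latex
% Notation (mine): w_EA = the suppressed EA offer, w_alt = the candidate's
% outside option, M = the value the org places on their marginal labour.
% The candidate walks whenever w_EA < w_alt (e.g. $X < $1.5X), and each
% such lost hire costs the org
\[
M - w_{\text{alt}} \;>\; 0
\qquad \text{whenever} \qquad M \;>\; w_{\text{alt}} \;>\; w_{\text{EA}},
\]
% i.e. the org forgoes every mutually beneficial hire whose marginal
% product exceeds the market wage but whose market wage exceeds the
% suppressed offer.
```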
It’s also worth stressing that the “Have a maximum, but ask the applicant to make the first suggestion; don’t disclose wages; discourage employees from sharing their salary with other employees” playbook isn’t an EA innovation—these are pretty standard employer-side tactics in salary negotiation, and they conspire to undermine the employee’s bargaining position. Canny employees confronted with ‘how much do you need?’ may play along with the charade (“I need $10k more for leisure and holidays which are—promise!—strictly necessary to ensure my peak productivity!”) or roll their eyes at the conceit (“So I need $X in the sense that you need to offer $X or I won’t take the job”).
‘Paying by necessity’ probably gets into legal trouble in various jurisdictions. Paying Alice more than (similarly situated) Bob because (e.g.) she has kids is unlikely to fly. (More generally, the perverse incentives of taking ‘pay by necessity’ at face value are left as an exercise for the reader.)
Heavily obfuscating compensation disadvantages poorer, less experienced, or less willing negotiators. I seem to remember data suggesting there are demographic trends in these factors—insofar as there are, obfuscation seems likely to lead to unjust bias in compensation.
Typical sentiment is that employees rather than employers are the weaker party, at greater risk of exploitative or coercive practice. I don’t understand why in EA contexts we are eager to endorse approaches that systematically benefit the latter at the expense of the former.
Not trying to push down the ceiling doesn’t mean you have to elevate the floor. People can still offer their services ‘at a discount’ if they want to: although it would still be a bad idea, one could always pay at market and hope employees give their ‘unnecessary’ money back.
I’m a big fan of having some separation between personal and professional life (and I think a lot of EA errs in eliding the two too much). Insofar as these aren’t identical—insofar as “Greg the human” isn’t “Greg, the agent of his EA employer’s will”—the interests of (EA) employer and (EA) employee won’t perfectly converge: my holiday to Rome or whatever isn’t a ‘business expense’; the most marginal activity of my org isn’t likely to be the thing I consider the best use of my funds. Better to accept this (and strike mutually beneficial deals) than to pretend otherwise.
[Obvious conflicts of interest given I work for an EA org—that said, I have argued similar points before that was the case]
I’m also extremely sceptical of (in caricature) ‘if people aren’t willing to take a pay-cut, how do we know they really care?’ reasoning—as you say, one doesn’t see many for-profit companies use the strategy of ‘We need people who believe in our mission, so we’re going to offer market −20% to get a stronger staff cohort’. In addition to the points made (explicitly) in the OP on this, there’s an adverse selection worry: low salaries may filter for dedication, but also for lower performers without better ‘exit options’.
(Although I endorse it anyway, I have related ‘EA exceptionalism’ worries about the emphasis on mission alignment etc. Many non-profits (and most for profits) don’t or can’t rely on being staffed with people who passionately invest in their brand, and yet can be very successful.)
That said, my impression is the EA community is generally learning this lesson. Although benchmarks are hard, most orgs that can afford to now offer competitive(ish) compensation. It is worth noting the reverse argument: if lots of EA roles are highly oversubscribed, this doesn’t seem a reason to raise salaries for at least these roles—it might suggest EA orgs can afford to drop them(!)
A lot has been written trying to explain why EA orgs (including ones with a lot of resources) say they struggle to find the right people, whilst a lot of EA people say they really struggle to find work at an EA org. What may explain this mismatch is that the EA community can ‘supply’ lots of generally able and motivated people, whilst EA orgs’ demand skews towards those with particular specialised skills. Thus jobs looking for able generalists have lots of applicants, yet positions with a more specialised ‘person spec’ attract few or zero appointable candidates.
This doesn’t give a clear ‘upshot’ for setting compensation: orgs that set a premium on chasing the tail of their best generalist applicants may find increasing salary still pays dividends even when they already have more than enough appointable candidates to choose from, whilst the supply of specialised people might depend sufficiently on non-monetary considerations to be effectively inelastic.
My overall impression agrees with the OP. It’s probably more economically efficient to set compensation at or around market rate than to approximate this with a mix of laggy and hard-to-reallocate transfers of underpaid labour. Insofar as less resource-rich orgs cannot afford to do this, they are fortunate that there are a lot of able people who are also willing to make de facto donations of their earning power to them. Yet this should be recognised as suboptimal, rather than being lionised as a virtue.
The latter. EA shouldn’t fund most research, but whether it is confirmatory or not is irrelevant. Psychedelics shouldn’t make the cut if we expect (as I argue above) a lot of failure to replicate and regression, and the true effect to be unexceptional in the context of existing mental health treatment.
It does, but although that’s enough to make it worthwhile on the margin of existing medical research, that is not enough to make it a priority for the EA community.
0) I don’t know what the bar for calling something a ‘cause area’ or ‘EA interest’ should be, but I think this bar sits above (e.g.) ‘promising new drug treatment for bipolar disorder’, even though such a treatment is unequivocally a good thing. Wherever exactly this bar falls (I don’t think it needs to be ‘as promising as global health’), I don’t think psychedelics meet it.
1) My scepticism about the mental health benefits of psychedelics mainly relies on second-order causes for concern, namely:
1.1) There’s some weak wisdom of nature prior that blasting one of your neurotransmitter pathways for a short period is unlikely to be helpful. This objection is pretty weak, given existing psychiatric drugs are similarly crude (although one of their advantages by the lights of this consideration is they generally didn’t come to human attention by previous recreational use).
1.2) I get more sceptical as the number of (fairly independent) ‘upsides’ of a proposed intervention increases. The OP notes psychedelics could help with anxiety and depression and OCD and addiction and PTSD, which looks remarkably wide-ranging and raises suspicion of a ‘cure looking for a disease’. (That they are often mooted as having still other benefits for people without mental health issues, such as improving creativity and empathy, deepens my suspicion.) Likewise, a cause that is proposed to be promising on long-termism and on its negation pings suspicious convergence worries.
1.3) (Owed to Scott Alexander’s recent post). The psychedelic literature mainly comprises small studies generally conducted by ‘true believers’ in psychedelics and often (but not always) on self-selected and motivated participants. This seems well within the territory of scientific work vulnerable to replication crises.
1.4) Thus my impression is that although I wouldn’t be shocked if psychedelics are somewhat beneficial, I’d expect them to regress at least down to the efficacies observed in existing psychopharmacology, probably worse, and plausibly to zero. Adding to the armamentarium of therapy for mental illness is (in expectation) worthwhile, but not enough for a big slice of EA attention: psychedelics being a promising candidate for further exploration relies on ‘neartermism’ and (conditional on this) the belief that mental health is similarly promising to standard global health interventions on NTDs etc.
2) On the ‘longtermism’ side of the argument, I agree it would be good—and good enough to be an important ‘cause’ - if there were ways of further enhancing human capital. (I bracket here the proposed mental health benefits, as my scepticism above applies even more strongly to the case that psychedelics are promising based on their benefits to EA community members’ mental health alone).
My impression is most of the story for ‘how do some people perform so well?’ will be a mix of traits/‘unmodifiable’ factors (e.g. intelligence, personality dispositions, propitious upbringing); very boring advice (e.g. ‘Sleep enough’, ‘exercise regularly’); and happenstance/good fortune. I’d guess there will be some residual variance left on the table after these have taken the lion’s share, and these scraps would be important to take. Yet I suspect a lot of this will be pretty idiographic/reducible to boring advice (e.g. anecdotally, novelists have their own peculiar habits for writing: IIRC Nabokov used index cards, Pullman has a writing shed, Gaiman a ‘novel writing pen’ - maybe ‘having a ritual for dedicated work’ matters, but which one is a matter of taste).
The evidence for psychedelic ‘enhancement’ is even thinner than psychedelic therapy, and labours under a more adverse prior. I agree the case for psychedelics here is comparable to CFAR/Paradigm/rationality training, but I would rule both out, not in.
3) I agree with agdfoster that psychedelics have reputational costs. This ‘bad rap’ looks unfair to me (notwithstanding the above, I’m confident that an ‘MDMA habit’ is much better for you than an alcohol, smoking, extreme sports, or social media one, none of which attract similar opprobrium), but it is decision-relevant all the same. If the upside was big enough, these costs would be worth paying, but I don’t think they are.
Good post. Some further considerations on the total view side of things (mostly culled from a very old working paper I have here where I suggest life extension may be bad—but N.B. besides its age and a few errors, my overall view is now tentatively pro rather than tentatively con).
0. LEV or not seems to be a distraction. The population ethics concerns don’t really change much either way if the offer on the table is LEV or merely ‘L’ (e.g. there’s a new drug which guarantees lifespan to 150 but no more).
1. As the contours of your argument imply, I think the core ethical issue on totalist-y lights would be whether there is a ‘packaging constraint’ on how one should allocate available lifetime to persons (e.g. is one 800-year life better than ten 80-year lives, or vice versa?), versus a broad cloud of empirical considerations and second-order effects (although I think these probably dominate the calculus).
2. I don’t buy the story that life extension can be a free lunch. If it is better to ‘package’ lifespan into 80-year chunks rather than millennia-sized chunks, then whether or not we pursue life extension will have great impact across the future, so any initial ‘free benefit’ will probably be outweighed by ongoing misallocation thereafter. (I suppose the story could be ‘LEV, even if bad, is inevitable, and doing it sooner at least gets a bigger free lunch’—but it seems in such a world there are bigger-scale problems to target.)
3. On pure aggregation, the key seems to be whether lifespan has accelerating or diminishing marginal returns. As you say, intuitive surveys by time-tradeoff give conflicting recommendations: most would be averse to gambles like “Would you rather a 5% chance of 2000 years (and a 95% chance of dying right now), or keep your current life expectancy?”, yet we’d also be averse to ‘Logan’s Run’ (or Logan’s sprint) cases of splitting 80-year lives into 16 5-year lives (or, indeed, millions of 2-minute ones).
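To make the tension explicit, a quick expected-value check (the ~40 years of remaining life expectancy is my illustrative assumption):

```latex
% Expected lifespan under the gamble vs. the status quo; the ~40 years
% of remaining life expectancy is an illustrative assumption.
\[
\mathbb{E}[\text{gamble}] = 0.05 \times 2000 + 0.95 \times 0 = 100 \text{ yrs}
\;>\; 40 \text{ yrs} \approx \mathbb{E}[\text{status quo}].
\]
% Refusing the gamble despite its higher expected lifespan suggests the
% value of life-years is concave (diminishing returns); aversion to
% 'Logan's Run' splitting suggests the opposite. Hence the conflict.
```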
3.1 One natural reply to defuse ‘Logan’s Run’-type reductios is to suggest the case is confounded by human development. One might say our childhood and adolescence are in part an investment to enjoy the greater goods of adulthood. So perhaps lifespan has accelerating returns up to a point commensurate with this, but not over the interval from 20-ish to infinity (so if the returns diminish, there will usually be a break-even point where the ‘investment cost’ is matched by the diminishing-returns loss, making the ideal tiling of lives across time not ‘as long as possible’).
(We should probably be pretty surprised if the morally ‘optimal’ lifespan just so happened to match our actual lifespan, which emerged from a mix of contingent biological facts. Of course, it could be the ‘optimal’ lifespan is shorter, not longer, than the one we can typically expect.)
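A toy formalisation of 3.1’s break-even point (the model and notation are mine, offered as a sketch rather than anything from the OP):

```latex
% Toy model: v(T) = total value of a single life of length T; a fixed
% stock B of life-years is 'tiled' into B/T lives of equal length.
% Total value across the population is (B/T) * v(T), so the ideal
% packaging maximises value per life-year:
\[
T^{*} \;=\; \arg\max_{T} \; \frac{v(T)}{T}.
\]
% If v(T) is convex up to ~20 years (the developmental 'investment') and
% concave thereafter, v(T)/T peaks at a finite T*: the break-even point
% where the investment cost is offset by the diminishing-returns loss,
% rather than at 'as long as possible'.
```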
3.2 There’s a natural consideration for diminishing returns in the idea that people may prioritise the best things to do with their life first, so extending their lives gives them the opportunity (borrowing a bit from Bernard Williams) to engage in further projects which, although good, are not as good as those they prioritised earlier. So packaging into smaller chunks lets the population over time complete more of the ‘most valuable’ projects.
3.3 On the other hand, there’s a murkier issue about whether a much longer life ‘unlocks’ opportunities which are better than those shorter lives can access. In the same way ‘living each day as your last’, when taken literally, is terrible advice (many things people want to do take much longer than a day to accomplish), perhaps (say) observing changes over cosmological or geological timescales offers much greater experiences than anything one can access in decades. This looks fairly speculative/weak to me.
What seems more persuasive on the ‘increasing marginal returns’ side is the idea of positive interaction terms between experience moments. Some good things could be even better if they resonate with other previous moments, and so a longer prior life seems to provide further opportunity for this (e.g. insofar as ‘watching the grandchildren grow up’ is joyful, a longer life better ensures this occurs, among many other examples).
4 Egalitarianism, ‘justice-y’ considerations, or prioritarianism will generally push towards packaging in shorter blocks rather than longer ones (of these, the view which best gets around tricky different-number cases is prioritarianism). Insofar as you are sympathetic to these views, they will seem to push against life extension.
4.1: I’m pretty sympathetic to Parfitian/deflationary accounts of personal identity, which would take the wind out of the sails of this line of argument (as there isn’t much remaining sense of a given person being better or worse off than another, nor of an index to which there’s a ‘you’ that accrues person-moments which may have diminishing returns). Such a view also takes the wind out of the sails of a pro-life-extension case (as we should be relatively indifferent to whether future moments are linked to our present ones or otherwise), although there might be second-order considerations (beyond those mentioned above: if most experience-moments simply prefer to be linked up to more future ones, this is a pro tanto consideration in favour).
5 It seems the second-order impacts are best distinguished from the ‘pure axiological’ issue above. It could be that very long lives are an imperfect allocation, but still best all-things-considered if (for example) they allow people to develop much greater skill and ability and (say) produce works of even greater artistic genius. A challenge to disentangling this is that plausible scenarios offering (radical) life extension likely involve other radical changes to the human condition: maybe we can also enhance ourselves in various ways too (and maybe these aren’t separable, so the moral cost we pay for improperly long lives is a price worth paying for the other benefits).
5.1 If we separate these and imagine some naive ‘eternal (or extended) youth’ scenario (e.g. people essentially as they are now, with a period of morbidity similar to what we’d expect, but their period of excellent health extended by a long time), I’d agree this leans positive. Beyond skill-building benefits, I’d speculate longer lives would probably prompt less short-sightedness in policy and decision-making.
My impression agrees with Issa’s: in EA, psychedelic use seems to go along with a cluster of bad epistemic practice (e.g. pseudoscience, neurobabble, ‘enlightenment’, obscurantism).
This trend is a weak one, with many exceptions; I also don’t know about direction of causation. Yet this is enough to make me recommend that taking psychedelics to ‘make one a better EA’ is very ill-advised.
Although private industry and EA organisations may have different incentives, a lot of law for the former will apply to the latter. Per Khorton, demanding the right to publish successful applicants’ CVs would probably be illegal in many places, and some ‘coordination’ between EA orgs (e.g. a draft system) seems likely to run afoul of competition law.
The lowest-hanging fruit here (which seems like a good idea) is to give measures of applicant:place ratios for calibration purposes.
Independent of legal worries, one probably doesn’t need to look at resumes to gauge the applicant pool—most orgs have team pages, so one can look at bios.
More extensive feedback to unsuccessful applicants is good, but it is easier said than done, as explained by Kelsey Piper here.
I don’t think EA employers are ‘accountable to the community’ for how onerous their hiring process is, provided they make reasonable efforts to inform potential applicants before they apply. If they’ve done this, then I’d default to leaving it to market participants to make decisions in their own best interest.
‘Getting experience in North Korea’ is perhaps one of the worst things you can do if you want to work as a diplomat (or in government more broadly).
Taking US diplomats in particular (although this generalises well to other government roles, and to other countries): people in these roles—ditto ~half the federal government—require a security clearance. Going on your own initiative to a hostile foreign power (circumventing State Department attempts to prevent US citizens going without its express dispensation, due to safety concerns, whilst you are at it) concisely demonstrates you are a giant security risk.
This impression gets little better (and plausibly even worse) if the explanation you offer for your visit is a (probably misguided) attempt to conduct tacit economic warfare against the NK government.
I don’t see that as surprising/concerning. Suppose someone approaches you with (e.g.) “Several people have expressed concerns about your behaviour—they swore us to secrecy about the details, but they seemed serious and credible to us (so much so we intend to take these actions).”
It looks pretty reasonable, if you trust their judgement, to apologise for this even if you lack precise knowledge of what the events in question are.
(Aside: I think having a mechanism which can work in confidence between the relevant parties is valuable for these sorts of difficult situations, and this can get undermined if lots of people start probing for more information and offering commentary.
This doesn’t mean this should never be discussed: these sorts of mechanisms can go wrong, and should be challenged if they do (I can think of an example where a serious failing would not have come to light if the initial ‘behind closed doors’ decision was respected). Yet this seems better done by people who are directly affected by and know the issue in question.)
Right, I (mis?)took the OP to be arguing “reducing salaries wouldn’t have an effect on labour supply, because it is price inelastic”, instead of “reducing salaries wouldn’t have enough of an effect to qualitatively change oversupply”.
I’d expect a reduction but not a drastic one. Like I’d predict Open Phil’s applicant pool to drop to 500-600 from 800 if they cut starting salary by $10k-$15k.
This roughly cashes out to an income elasticity of labour (/applicant) supply of 1-2 (i.e. you reduce applicant supply by ~20% by reducing income ~10%). Although a crisp comparison is hard to find, figures in the wider labour market are generally <1, so this expectation cuts slightly against the OP, as it suggests EA applicants are more compensation-sensitive than typical.
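Spelling out that arithmetic (the ~$100k baseline salary is my assumption for illustration; the applicant figures are from my comment above):

```latex
% Income elasticity of applicant supply. The ~$100k baseline salary is
% an illustrative assumption; applicant numbers are from the parent comment.
\[
\varepsilon \;=\; \frac{\Delta Q / Q}{\Delta w / w}
\;\approx\; \frac{(800-600)/800}{\$12.5\text{k}/\$100\text{k}}
\;=\; \frac{0.25}{0.125} \;=\; 2.
\]
% A lower baseline salary (or a cut nearer $15k) pushes the denominator
% up and the estimate down towards 1, hence the quoted 1-2 range.
```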
(Obvious CoI/own views, but in my defence I’ve been arguing along these lines long before I had—or expected to have—an EA job.)
I agree ‘EA jobs’ provide substantial non-monetary goods, and that the ‘supply’ of willing applicants will likely outstrip available positions in ‘EA jobs’. Yet that doesn’t mean the ‘supply’ of potential EA employees is (mostly) inelastic to compensation.
In principle, money is handy for all manner of interests one may have, including altruistic ones. Insofar as folks are not purely motivated by altruistic ends (and not so purely that they are indifferent to having more money to give away themselves), you’d expect them to be salary-sensitive. I aver basically everyone in EA is therefore (substantially) salary-sensitive.
In practice, I know of cases (including myself) where compensation played a role in deciding to change job, quit, not apply etc. I also recall on the forum remarks from people running orgs which cannot compensate as generously as others that this hurts recruitment.
So I’m pretty sure if you dropped salaries you would reduce the number of eager applicants (albeit perhaps with greater inelasticity than in many other industries). As (I think) you imply, this would be a bad idea: from the point of view of an org, controlling the overall ‘supply’ of applicants shouldn’t be its priority (rather, it should set salaries as necessary to attract the most cost-effective employees). From the wider community’s point of view, you’d want to address ‘EA underemployment’ in ways other than pushing to distort the labour market.
The inconvenience I had in mind is not in your list, and comprises things in the area of, “Prefer to keep the diet I’m already accustomed to”, “Prefer omnivorous diets on taste etc. grounds to vegan ones”, and so on. I was thinking of an EA who is omnivorous and feels little/no compunction about eating meat (either because they aren’t ‘on board’ with the moral motivation for animal causes in general, or doesn’t find the arguments for veganism persuasive in particular). I think switching to a vegan diet isn’t best described as a minor inconvenience for people like these.
But to be clear, this doesn’t entail any moral obligation whatsoever on the hotel to serve meat—it’s not like they are forcing omnivorous guests to be vegan, just not cooking them free (non-vegan) food. If a vegan offers to let me stay at their house a) for free, b) offers vegan food for free too, c) welcomes me, if I’m not a fan of vegan food, to get my own food to cook at their house whenever I like—which is basically the counterfactual scenario if I weren’t staying with them in the first place—and d) explains all of this before I come, they’ve been supererogatory in accommodating me, and it would be absurd for me to say they’ve fallen short in not serving me free omnivorous food to which they morally object.
Yet insofar as ‘free food’ is a selling point of the hotel, ‘free vegan food’ may not be so enticing to omnivorous guests. Obviously the offer is still generous by itself, leave alone combined with free accommodation, but one could imagine it making a difference on the margin to omnivores (especially if they are cost-sensitive).
Thus there’s a trade-off between these people and vegans who would be put off if the hotel served meat itself (even if vegan options were also provided). It’s plausible to me the best option here (leaving aside any other considerations) is the more ‘vegan-friendly’ policy. But this isn’t because the trade-off is illusory, i.e. because the ‘vegan-friendly’ policy has only minimal/minor costs to omnivores after all.
[Empirically though, this doesn’t seem to amount to all that much given (I understand) the hotel hasn’t been struggling for guests.]
Beyond the ‘silent downvote → anon feedback’ substitution (good, even if ‘public comment’ is better still), there could also be a ‘public comment → anon feedback’ one (less good).
That said, I’m in favour of an anon feedback option. I see karma mostly serving as a barometer of community sentiment (so I’m chary of disincentivising downvotes, as this probably impairs resolution), but it isn’t a good way of providing feedback to the author: a vote carries only a bit or two of information. Text is better—for me, the main reason I don’t ‘explain my downvotes’ is usually time, but occasionally social considerations. An anon option at least removes the latter disincentive.
I think I get the idea:
Suppose (heaven forbid) a close relative has cancer, and there’s a new therapy which fractionally improves survival. The NHS doesn’t provide it on cost-effectiveness grounds. If you look around and see the NHS often provides treatments it previously ruled out when enough public sympathy is aroused, you might be inclined to try the same. If instead you see it is pretty steadfast (“We base our allocation on ethical principles, and only change this when we find we’ve made a mistake in applying them”), you might not be—or you might at least change your strategy to show the decision the NHS has made for your relative is unjust rather than merely unpopular.
None of this requires you to be acting in bad faith, looking for ways of extorting the government—you’re just trying to do everything you can for a loved one (the motivations of pharmaceutical companies that sponsor patient advocacy groups may be less unalloyed). Yet (ideally) the government wants to encourage protest that highlights a policy mistake, and discourage protest against decisions which are right for its population but against the interests of a powerful/photogenic/popular constituency. ‘Caving in’ to the latter type pushes in the wrong direction.
(That said, back in EA-land, I think a lot of things that are ‘PR risks’ for EA look bad because they are bad (e.g. in fact mistaken, morally abhorrent, etc.), and so although PR considerations aren’t sufficient grounds to discourage something, they can further augment concern.)
Related: David Manheim’s writing on network theory and scaling organisations.
A bit of both:
I’d like to see more forecasting skill/literacy ‘in the water’ of the EA community, in the same way statistical literacy is commonplace. A lot of EA is about making the world go better, so a lot of (implicit) forecasting is done when deciding what to do. I’d generally recommend most people consider things like opening a Metaculus account, reading Superforecasting, etc.
This doesn’t mean everyone should be spending (e.g.) 3 hours a day on this, given the usual story about opportunity costs. But I think (per the question topic) there’s also a benefit of a few people highly developing this skill (again, a bit like stats: it’s generally harder to design and conduct statistical analysis than to critique one already done, but you’d want some folks in EA who can do the former).