If we are talking about charity evaluations then reliability can be estimated directly so this is no longer a predictable error.
Hmm. This made me wonder whether the paper’s results depend on the decision-maker being uncertain about which options have been estimated reliably vs. unreliably. It seems possible that the effect could disappear if the reliability of my estimates varies but I know that the variance of my value estimate for option 1 is v_1, the one for option 2 is v_2, etc. (even if the v_i vary a lot). (I don’t have time to check the paper or get clear on this, I’m afraid.)
Is this what you were trying to say here?
Kind of an odd assumption that dependence on luck varies from player to player.
Intuitively, it strikes me as appropriate for some realistic situations. For example, you might try to estimate the performance of people based on quite different kinds or magnitudes of inputs; e.g. one applicant might have a long relevant track record, while for another you might just have a brief work test. Or you might compare the impact of interventions that are backed by very different kinds of evidence—say, an RCT vs. a speculative, qualitative argument.
Maybe there is something I’m missing here about why the assumption is odd, or perhaps even why the examples I gave don’t have the property required in the paper? (The latter would certainly be plausible, as I read the paper a while ago, and even then not very closely.)
I haven’t had time yet to think about your specific claims, but I’m glad to see attention to this issue. Thank you for contributing to what I suspect is an important discussion!
You might be interested in the following paper which essentially shows that under an additional assumption the Optimizer’s Curse not only makes us overestimate the value of the apparent top option but in fact can make us predictably choose the wrong option.
Denrell, J. and Liu, C., 2012. Top performers are not the most impressive when extreme performance indicates unreliability. Proceedings of the National Academy of Sciences, 109(24), pp. 9331–9336.
The crucial assumption, roughly, is that the reliability of our assessments varies sufficiently between options. Intuitively, I’m concerned that this might apply when EAs consider interventions across different cause areas: e.g., our uncertainty about the value of AI safety research is much larger than our uncertainty about the short-term benefits of unconditional cash transfers.
(See also the part on the Optimizer’s Curse and the endnote on Denrell and Liu (2012) in this post by me, though I suspect it won’t teach you anything new.)
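The mechanism, and the question above about known variances, can be sketched with a quick Monte Carlo simulation. All numbers below are illustrative assumptions of mine, not taken from the paper: ten options with true values drawn from a common prior, half estimated reliably and half very noisily. A naive decision-maker picks the highest raw estimate; a decision-maker who knows each option’s noise variance (the v_i) shrinks each estimate toward the prior mean before choosing.

```python
import random

def simulate(n_trials=20000, seed=0):
    """Compare naive vs. variance-aware selection among noisy estimates.

    Illustrative setup: 10 options with true values ~ N(0, 1);
    5 estimated reliably (noise sd 0.2), 5 very noisily (noise sd 3.0).
    Returns the average realized true value under each selection rule.
    """
    rng = random.Random(seed)
    noise_sd = [0.2] * 5 + [3.0] * 5
    prior_var = 1.0
    naive_total = shrunk_total = 0.0
    for _ in range(n_trials):
        true_vals = [rng.gauss(0.0, 1.0) for _ in range(10)]
        estimates = [t + rng.gauss(0.0, s) for t, s in zip(true_vals, noise_sd)]
        # Naive rule: pick the option with the highest raw estimate.
        naive_pick = max(range(10), key=lambda i: estimates[i])
        naive_total += true_vals[naive_pick]
        # Known-variance rule: shrink each estimate toward the prior mean
        # (0 here) in proportion to its noise variance, then pick the max.
        posterior = [e * prior_var / (prior_var + s ** 2)
                     for e, s in zip(estimates, noise_sd)]
        shrunk_pick = max(range(10), key=lambda i: posterior[i])
        shrunk_total += true_vals[shrunk_pick]
    return naive_total / n_trials, shrunk_total / n_trials

naive, shrunk = simulate()
print(f"naive selection: {naive:.2f}, variance-aware selection: {shrunk:.2f}")
```

In this setup the naive rule almost always selects one of the noisily estimated options and realizes a much lower average true value than the variance-aware rule, which is consistent with the intuition that knowing the v_i (and adjusting for them) removes much of the predictable error. Whether this fully matches the paper’s setting I haven’t checked.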
Thank you, this concrete analysis seems really useful for understanding where the perception of skew toward EA organizations might be coming from.
Last year I talked to maybe 10 people over email, Skype, and at EA Global, both about what priority path to focus on, and then about what to do within AI strategy. Based on my own experience last year, your “word of mouth is more skewed toward jobs at EA org than advice in 80K articles” conjecture feels true, though not overwhelmingly so. I also got advice from several people specifically on standard PhD programs, and 80K was helpful in connecting me with some of these people, for which I’m grateful. However, my impression (which might be wrong/distorted) was that especially people who were themselves ‘in the core of the EA community’ (e.g. working at an EA org, as opposed to a PhD student who’s very into EA but living outside of an EA hub) favored me working at EA organizations. It’s interesting that I recall few people saying this explicitly but have a pretty strong sense that this was their view implicitly, which maybe means that this impression is driven by my guess about what is generally approved of within EA rather than by people’s actual views. It could even be a case of pluralistic ignorance (in which case public discussions/posts like this would be particularly useful).
Anyway, here are a few other hypotheses of what might contribute to a skew toward ‘EA jobs’ that’s stronger than what 80K literally recommends:
Number of people who meet the minimal bar for applying: Often, jobs recommended by 80K require specialized knowledge/skills, e.g. programming ability or speaking Chinese. By contrast, EA orgs seem to open a relatively large number of roles where roughly any smart undergraduate can apply.
Convenience: If you’re the kind of person who naturally hears about, say, the Open Phil RA job posting, it’s quite convenient to actually apply there. It costs time, but for many people ‘just time’, as opposed to creativity or learning how to navigate an unfamiliar field or community. For example, I’m a mathematician who was educated in Germany and considered doing a PhD in political science in the US. It felt like I had to find out a large number of small pieces of information someone familiar with the US education system or political science would know naturally. Also, the option just generally seemed more scary and unattractive because it was in ‘unfamiliar terrain’. Relatedly, it was much easier for me to talk to senior staff at EA organizations than it was to talk to, say, a political science professor at a top US university. None of these felt like an impossible bar to overcome, but it definitely seemed to me that they skewed my overall strategy somewhat in favor of the ‘familiar’ EA space. I generally felt a bit that, given that there’s so much attention on career choice in EA, I had surprisingly little support and readily available knowledge after I had decided to broadly “go into AI strategy” (a decision which I feel my general familiarity with EA would have enabled me to reach anyway, and which was indeed my own best guess before I found out that many others agreed with it). NB as I said, 80,000 Hours was definitely somewhat helpful even in this later stage, and it’s not clear to me if you could feasibly have done more (e.g. clearly 80K cannot individually help everyone with my level of commitment and potential to figure out the details of how to execute their career plan). [I also suspect that I find things like figuring out the practicalities of how to get into a PhD program unusually hard/annoying, but more like 90th than 99th percentile.] But maybe there’s something we can collectively do to help correct this bias, e.g.
the suggestion of nurturing strong profession-specific EA networks seems like it would help with enabling EAs to enter that profession as well (as can research by 80K, e.g. your recent page on US AI policy). To the extent that telling most people to work on AI prevents the start of such networks, this seems like a cost to be aware of.
Advice for ‘EA jobs’ is more unequivocal, see this comment.
It probably is, but I don’t think this explanation is rationalizing. I.e. I don’t think this founder effect would provide a good reason to think that this distribution of knowledge and opinions is conducive to reaching the community’s goals.
Hmm, thanks for sharing your impression, I think talking about specific examples is often very useful to spot disagreements and help people learn from each other.
I’ve never lived in the US or otherwise participated in one of these communities, so I can’t tell from first-hand experience. But my loose impression is that there have been substantial disagreements both synchronically and diachronically within those movements; for example, in social justice about trans* issues or sex work, and in conservatism about interventionist vs. isolationist foreign policy, to name but a few examples. Of course, EAs disagree substantially about, say, their favored cause area. But my impression at least is that disagreements within those other movements can be much more acrimonious (jtbc, I think it’s mostly good that we don’t have this in EA), and also that the difference in ‘cultural vibe’ I would get from attending, say, a Black Lives Matter grassroots group meeting vs. a meeting of the Hillary Clinton presidential campaign team is larger than the one between the local EA group in Harvard and the EA Leaders Forum. Do your impressions of these things differ, or were you thinking of other manifestations of conformity?
(Maybe that’s comparing apples to oranges because a much larger proportion of EAs are from privileged backgrounds and in their 20s, and if one ‘controlled’ social justice and conservatism for these demographic factors they’d be closer to EA levels of conformity. OTOH maybe it’s something about EA that contributes to causing this demographic narrowness.)
Also, we have an explanation for the conformity within social justice and conservatism that on some readings might rationalize this conformity—namely Haidt’s moral foundations theory. To put it crudely, given that you’re motivated by fairness and care but not authority etc. maybe it just is rational to hold the ‘liberal’ bundle of views. (I think that’s true only to a limited but still significant extent, and also maybe that the story for why the mistakes reflected by the non-rational parts are so correlated is different from the one for EA in an interesting way.) By contrast, I’m not sure there is a similarly rationalizing explanation for why many EAs agree on both (i) there’s a moral imperative for cost-effectiveness, and (ii) you should one-box in Newcomb’s problem, and for why many know more about cognitive biases than about the leading theories for why the Industrial Revolution started in Europe rather than China.
Thank you, your comment made me realize both that I maybe wasn’t quite aware what meaning and connotations ‘community’ has for native speakers, and maybe that I was implicitly comparing EA against groups that aren’t a community in that sense. I guess it’s also quite unclear to me if I think it’s good for EA to be a community in this sense.
I don’t have relevant data nor have I thought very systematically about this, but my intuition is to strongly agree with basically everything you say.
In particular, I feel that the claim “Having exposure to a diverse range of perspectives and experiences is generally valuable” squares fairly well with my own experience. There just are so many moving parts to how communities and organizations work (how to moderate meetings, how to give feedback, how much hierarchy and structure to have, etc.) that I think it’s fairly hard to even be aware of the full space of options (and impossible to experiment with a non-negligible fraction of it). Having an influx of people with diverse experiences in that respect can massively multiply the amount of information available on these intangible things. This seems particularly valuable for EA to me because I feel that, relative to the community’s size, there’s an unusual amount of conformity on these things within EA, perhaps due to the tight social connections within the community and the outsized influence of certain ‘cultural icons’.
Personally, I feel that I’ve learned a lot of the (both intellectual and interpersonal) skills that are most useful in my work right now outside of EA, and in fact that outside of EA’s core focus (roughly, what are the practical implications of ‘sufficiently consequentialist’ ethics) I’ve learned surprisingly little in EA even after correcting for only having been in the community for a small fraction of my life.
(Perhaps more controversially, I think this also applies to the epistemic rather than the purely cultural or organizational domain: i.e. my claim roughly is that things like phrasing lots of statements in terms of probabilities, having discussions mostly in Google docs vs. in person, the kind of people one circulates drafts to, how often one is forced to face a situation where one has to explain one’s thoughts to people one has never met before, and various small things like that affect the overall epistemic process in messy ways that are hard to track or anticipate other than by actually having experienced how several alternatives play out.)
Related: Julia Galef’s post about ‘Planners vs. Hayekians’. See in particular how she describes the Hayekians’ conclusion, which sounds similar to (though stronger than) your recommendation:
Therefore, the optimal approach to improving the world is for each of us to pursue projects we find interesting or exciting. In the process, we should keep an eye out for ways those projects might yield opportunities to produce a lot of social value — but we shouldn’t aim directly at value-creation.
My impression is that I’ve been disagreeing for a while with many EAs (my sample is skewed toward people working full-time at EA orgs in Oxford and especially Berlin) about how large the ‘Hayekian’ benefits from excellence in ‘conventional’ careers are. That is, how many unanticipated benefits will becoming successful in some field X have? I think I’ve consistently been more optimistic about this than most people I’ve talked to, which is one of several reasons why I’m less excited about ‘EA jobs’ relative to other options than I think many EAs are. My reasoning here seems to broadly agree with yours, and I’m glad to see it spelled out that well.
(Apologies if you’ve linked to that in your post already, I didn’t thoroughly check all links.)
What about feedback that’s anonymous but public? This has some other downsides (e.g. misuse potential) but seems to avoid the first two problems you’ve pointed out.
My initial reaction is to really like the idea of being prompted to give anonymous feedback. I think there probably are also reasons against this, but maybe it’s at least worth thinking about.
(One reason why I like this is that it would be helpful for authors and mitigate problems such as the one expressed by the OP. Another reason is that it might change the patterns of downvotes in ways that are beneficial. For example, I currently almost never downvote something that’s not spam, but quite possibly it wouldn’t be optimal if everyone used downvotes as narrowly [though I’m not sure and feel confused about the proper role of downvotes in general]. At the same time, I often feel like the threshold for explaining my disagreement in a non-anonymous comment would be too high. I anticipate that the opportunity to add anonymous feedback to a downvote would sometimes make me express useful concerns or disagreements I currently don’t express.)
Thanks for sharing, I suspect this might be somewhat common. I’ve speculated about a related cause in another comment.
Of our top rated plan changes, only 25% involve people working at EA orgs
For what it’s worth, given how few EA orgs there are in relation to the number of highly dedicated EAs, and how large the world outside of EA is (e.g. in terms of institutions/orgs that work in important areas or are reasonably good at teaching important skills), 25% actually strikes me as a high figure. Even if the figure is right, there might be good reasons for it being that high: e.g., it’s natural, and doesn’t necessarily reflect any mistake, that 80K knows more about which careers at EA orgs are high-impact, can do a better job at finding people for them, etc. However, I would be surprised if, as the EA movement becomes more mature, the optimal proportion remained as high.
(I didn’t read your comment as explicitly agreeing or disagreeing with anything in the above paragraph, just wanted to share my intuitive reaction.)
Thank you for your comments here, they’ve helped me understand 80K’s current thinking on the issue raised by the OP.
FWIW, without having thought systematically about this, my intuition is to agree. I’d be particularly keen to see:
More explicit models for what trainable skills and experiences are useful for improving the long-term future, or will become so in the future (as new institutions such as CSET are being established).
More actionable advice on how to train these skills.
My gut feeling is that in many places we could do a better job at utilizing skills and experiences people can get pretty reliably in the for-profit world, academia, or from other established ‘institutions’.
I’m aware this is happening to some extent already, e.g. GPI trying to interface with academia or 80K’s guide on US policy. I think both are great!
NB this is different from the idea that there are many other career paths that would be high-impact to stay in indefinitely. I think this is also true, but at least if one has a narrow focus on the long-term future I feel less sure if there are ‘easy wins’ left here.
(An underlying disagreement here might be: Is this feasible, or are we just too bottlenecked by something like what Carrick Flynn has called ‘disentanglement’? Very crudely, I tend to agree that we’re bottlenecked by disentanglement but think that there are still some improvements we can make along the above lines. A more substantive underlying question might be how important domain knowledge and domain-specific skills are for being able to do disentanglement, where my impression is that I place an unusually high value on them whereas other EAs are closer to ‘the most important thing is to hang out with other EAs and absorb the epistemic norms, results, and culture’.)
I didn’t think people consistently recommended EA orgs over other options
Interesting, thank you for this data point. My speculation was partly based on recently having talked to people who told me something like “you’re the first one [or one of very few among many] who doesn’t clearly recommend me to choose <EA org> over <some other good option>“. It’s good to know that this isn’t what always happens.
A speculative thought I just had on one possible reason why some people are overly focussed on EA jobs relative to, e.g., the other options you list here:
Identifying one’s highest-impact career option is quite challenging, and there is no way to easily and conclusively verify a candidate answer.
Therefore (and for other reasons), many people rely a lot on advice provided by 80K and individual EAs they regard as suitable advisors.
At least within the core of the (longtermist) EA community, almost all sources of advice agree that one of the most competitive jobs at an explicitly EA-motivated org usually is among the top options for people who are a good fit.
However, for most alternatives there is significant disagreement among the most trusted sources of advice on whether these alternatives are competitive (in terms of expected impact) with an ‘EA job’, or indeed good ideas at all. For example, someone who I believe many people consult for career advice discouraged me from ‘training up as a cybersecurity expert’ (an option I had brought up and, by my own impression, still consider attractive), at least relative to working at an EA org. Similarly, there are significant disagreements about the value of academic degrees, even in machine learning (and a bunch of hard-to-resolve underlying disagreements, e.g. about how much ML experience is essential/useful for AI safety and strategy).
As a result, people will often be faced with a distribution of views similar to: ‘Everyone agrees working at <EA org> would be great. Many people think a machine learning PhD would be great, one or two even think it’s better for me specifically, but a significant minority thinks it’s useless. One person was excited about cybersecurity, one person was pessimistic, and most said they couldn’t comment on it.’ Perhaps if all of these opinions had been conveyed with maximal reasoning transparency and one was extremely careful about aggregating the opinions this wouldn’t be a problem. But in practice I think this often means that ‘apply to <EA org>’ seems like the top option, at least in terms of psychological pull.
(Another contributing factor to the large number of applications to EA jobs, though perhaps less so to how it affects people, may be that few EA orgs have a very explicit model of the specific skills they require for their most competitive jobs—at least that’s my impression. As a result, they cannot offer reliable guidance people can use to decide whether they’re a good fit apart from applying.)
In a nutshell, I’m worried that people would not find the options you list exciting from their perspective, and instead would perceive not working in one of the 20 most competitive jobs at explicitly EA-motivated employers as some kind of personal shortcoming, hence the frustration.
I think the OP is evidence that this can happen, e.g. because the author reports that
this is the message I felt I was getting from the EA community:
“Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constraint… (20 applications later) … Yeah, when we said that we need people, we meant capable people. Not you. You suck.”
Note that I agree with you that in fact “[t]here are lots of exciting things for new EAs” including the options you’ve listed. However, even given this considered belief of mine, I think I was overly focussed on ‘EA jobs’ in a way that negatively affected my well-being.
Even when I consider that I’m probably unusually susceptible to such psychological effects (though not extremely so; my crude guess would be ‘80th to 99th percentile’), I’d expect some others to be similarly affected even if, like me, they agree about the impact of less competitive options.
Perhaps with “the kind of thing described in the original post” you meant to refer specifically to the issue of ‘people spending a lot of time applying for EA jobs’. Certainly a lot of the information in the OP and in one of my comments was about this. In that case I’d like to clarify that it’s not the time cost itself that’s the main cause of effects (i)-(iii) I described in the parent. In fact, I somewhat regret having contributed to the whole discussion perhaps being focused on time costs by providing more data exclusively about this. The core problem as I see it is how the OP, I, and I believe many others think about and are psychologically affected by the current EA job market and the surrounding messaging. The objective market conditions (e.g. number of applicants per job) contribute to this, as do many aspects of messaging by EA orgs and EAs, as do things that have nothing to do with EA at all (e.g. people’s degree of neuroticism and other personality traits). I don’t have a strong view on which of these contributing factors is the best place to intervene.
I think there are at least two effects where the world loses impact: (i) People in less privileged positions not applying for EA jobs; sometimes one of these would actually have been the best candidate. (ii) More speculatively (in the sense that I can’t point to a specific example, though my prior is this effect is very likely to be non-zero), people in less privileged positions might realize that it’s not possible for them to apply for many of the roles they perceived to be described as highest-impact and this might reduce their EA motivation/dedication in general, and make them feel unwelcome in the community.
I emphatically agree that them taking another potentially impactful job is positive. In fact, as I said in another comment, I wish there was more attention on and support for identifying and promoting such jobs.
One thing that might be worth noting: I was only able to invest that many resources because of things like (i) having had an initial runway of more than $10,000 (a significant fraction of which I basically ‘inherited’ / was given for things like academic excellence that weren’t very effortful for me), (ii) having a good relationship with my sufficiently well-off parents, such that moving back in with them was always a safe backup option, (iii) having access to various other forms of social support (that came with real costs for several underemployed or otherwise struggling people in my network).
I do think current conditions mean that we ‘lose’ more people in less comfortable positions than we otherwise would.
Some related half-baked thoughts:
[Epistemic status: I appreciate that there are people who’ve thought about the EA talent landscape systematically and have access to more comprehensive information, e.g. perhaps some people at 80K or people doing recruiting for major EA orgs. I would therefore place significantly more weight on their impressions. I’m not one of these people. My thoughts are based on (i) having talked 10-100 hours with other EAs about related things over the last year, mostly in a non-focussed way, (ii) having worked full-time for 2 EA organizations (3 if one counts a 6-week internship), (iii) having hired 1-5 people for various projects at the Effective Altruism Foundation, (iv) having spent about 220h on changing my career last year, see another comment. I first heard of EA around October 2015, and have been involved in the community since April 2016. Most of that time I spent in Berlin, then over last summer and since October in Oxford.]
I echo the impression that several people I’ve talked to—including myself—were or are overly focussed on finding a job at a major EA org. This applies both in terms of time spent and number of applications submitted, and in terms of more fuzzy notions such as how much status or success is associated with roles. I’m less sure if I disagreed with these people about the actual impact of ‘EA jobs’ vs. the next best option, but it’s at least plausible to me that (relative to my own impression) some of them overvalue the relative impact of ‘EA jobs’. E.g. my own guess is that a machine learning graduate course is competitive with most ‘EA jobs’ one could do well without such an education. [I think this last belief of mine is somewhat unusual and at least some very thoughtful people in EA disagree with me about this.]
I think several people were in fact too optimistic about getting an ‘EA job’. It’s plausible they could have accessed information (e.g. do a Fermi estimate of how many people will apply for a role) that would have made them more pessimistic, but I’m not sure.
I know at least 2 people who unsuccessfully applied to a large number of ‘EA jobs’. (I’m aware there are many more.) I feel confident that they have several highly impressive relevant skills, e.g. because I’ve seen some of their writing and/or their CVs. I’m aware I don’t know the full distribution of their relevant skills, and that the people who made the hiring decisions are in a much better position to make them than I am. I’m still left with a subjective sense of “wow, these people are really impressive, and I find it surprising that they could not find a job”. This contributes to (i) me feeling more pressure to perform well in my current role, and more doubtful about its counterfactual impact, because I have a visceral sense that ‘the next best candidate would have been about as good as I or better’ / ‘it would in some sense be tragic or unfair if I don’t perform well’ (these aren’t endorsed beliefs, but they still affect me), (ii) me being more reluctant to introduce new people into the EA community because I don’t want them to have frustrating experiences, (iii) me being worried that some of my friends and other community members will have frustrating experiences [which costs attention and life satisfaction but also sometimes time, e.g. when talking with someone about their frustration—as an aside, I’d guess that the burden of emotional labor of the latter kind is disproportionately shouldered by relatively junior women in the community]. (None of these effects are very large. I don’t want to make this sound more dramatic than it is, but overall I think there are non-negligible costs even for someone like me who got one of the competitive jobs.)
I agree that identifying and promoting impactful roles outside of EA orgs may be both helpful for the ‘EA job market’ and impactful independently. I really like that the 80K job board sometimes includes such roles. I wonder if there is a diffusion of responsibility problem where identifying such jobs is no-one’s main goal and therefore doesn’t get done even if it would be valuable. [I also appreciate that this is really hard and costs a lot of time, and what I perceive to be 80K’s strategy on this, i.e. focussing on in-depth exploration of particularly valuable paths such as US AI policy, seems on the right track to me.]
I think communication around this is really hard in general, and something that is particularly tricky for people like me and most EAs that are young and have little experience with similar situations. I also think there are some unavoidable trade-offs between causing frustration and increasing the expected quality of applicants for important roles. I applaud 80K for having listened to concerns around this in the past and having taken steps such as publishing a clarifying article on ‘talent constraints’. I think as a community we can still do better, but I’m optimistic that the relevant actors will be able to do so and certain that they have good intentions. I’ve seen EA leaders have valuable and important conversations around this, but it’s not quite clear to me if anyone in particular ‘owns’ optimizing the EA talent landscape at large, and so again wonder if there is a diffusion of responsibility issue that prevents ‘easy wins’ such as better data/feedback collection from getting done (while also being open to the possibility that ‘optimizing the EA talent landscape’ is too broad or fuzzy for one person to focus on it).