I donated $5800.
FWIW I generally agree with Eli’s reply here. I think maybe EAG should 2x or 3x in size, but I’d lobby for it to not be fully open.
Sorry to hear about your long, very difficult experience. I think part of what happened is that it did in fact get a lot harder to get a job at leading EA-motivated employers in the past couple years, but that wasn’t clear to many EAs (including me, to some extent) until very recently, possibly as recently as this very post. So while it’s good news that the EA community has grown enough that these particular high-impact jobs can attract talent sufficient to make them so competitive, it’s unfortunate that this change wasn’t clearer sooner. Posts like this one help with that, albeit not soon enough to mitigate your own 1.5 years of suffering.
Also, the thing about some people not having runway is true and important, and is a major reason Open Phil pays people to take our remote work tests, and does quite a few things for people who do an in-person RA trial with us (e.g. salary, health benefits, moving costs, severance pay for those not made a subsequent offer). We don’t want to miss out on great people just because they don’t have enough runway/etc. to interact with our process.
FWIW, I found some of your comments about “elite culture” surprising. For context: I grew up in rural Minnesota, then dropped out of a counseling psychology undergrad program at the University of Minnesota, then worked at a 6-person computer repair shop in Glendale, CA. Only in the past few years have I begun to somewhat regularly interact with many people from e.g. top schools and top tech companies. There are aspects of interacting with such “elites” that I’ve had to learn on the fly and to some degree am still not great at, but in my experience the culture in those circles is still pretty different from the culture at major EA-motivated employers, even though many of the staff at EA-motivated employers are now people who e.g. graduated from schools like Oxford or Harvard. For example, it’s not my experience that people at major EA organizations are as effusively positive as many people in non-EA “elite” circles are. In fact, I would’ve described the culture at the EA organizations I interact with the most in sorta opposite terms, in that it’s hard to get them excited about things. E.g. if you tell one of my Open Phil RA colleagues about a new study in Nature on some topic they care about, a pretty common reaction is to shrug and say “Yeah, but who knows if it’s true; most of the time we dig into a top-journal study, it completely falls apart.” Or if you tell people at most EA orgs about a cool-sounding global health or poverty-reduction intervention, they’ll probably say “Could be interesting, but very low chance it’ll end up looking as cost-effective as AMF or even GiveDirectly upon further investigation, so: meh.” Also, EA-motivated employers are generally not as “credentialist,” in my experience, as most “elite” employers (except perhaps for tech companies).
Finally, re: “you never know for sure if it’s not just perfect meritocracy correctly filtering [certain people out].” I can’t speak to your case in particular, but at least w.r.t. Open Phil’s RA recruiting efforts (which I’ve been managing since early 2018), I think I am sure it’s not a perfect meritocracy. We think our application process probably has a high false negative rate (i.e. rejecting people who are actually strong fits, or would be with 3mo of training), and it’s just very difficult to reduce the false negative rate without also greatly increasing the false positive rate. Just to make this more concrete: in our 2018 RA hiring round, if somebody scored really well on our stage-3 work test, we typically thought “Okay, decent chance this person is a good fit,” but when somebody scored medium/low on it, we often threw up our hands and said “No clue if this person is a good fit or not, there are lots of reasons they could’ve scored poorly without actually being a poor fit, I guess we just don’t get to know either way without us and them paying infeasibly huge time costs.” (So why not just improve that aspect of our work tests? We’re trying, e.g. by contracting several “work test testers,” but it’s harder than one might think, at least for such ill-defined “generalist” roles.)
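To make that trade-off concrete, here’s a toy calculation (a sketch with made-up numbers, not actual Open Phil data):

```python
# Toy model of the false-negative / false-positive trade-off in a
# hiring funnel. All numbers below are illustrative assumptions.
n_applicants = 1000
base_rate = 0.10    # assumed: 10% of applicants are strong fits
pass_fit = 0.80     # assumed: the work test passes 80% of strong fits
pass_nonfit = 0.20  # assumed: the work test passes 20% of non-fits

fits = n_applicants * base_rate           # 100
non_fits = n_applicants - fits            # 900

false_negatives = fits * (1 - pass_fit)   # strong fits rejected: 20
false_positives = non_fits * pass_nonfit  # non-fits passed: 180

# Relaxing the bar (say, pass_fit -> 0.95, pass_nonfit -> 0.40)
# recovers only 15 true fits while adding 180 more false positives,
# because non-fits outnumber fits 9:1 at this assumed base rate.
print(false_negatives, false_positives)
```

The specific numbers don’t matter; the point is that when strong fits are rare, loosening the filter recovers a few true fits at the cost of many more false positives.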
FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn’t pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.
At least for me, I don’t think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it’s a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.
That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team has been looking into the topic again as that team has gained more research capacity in the past year or two.
Not sure it’s worth the effort, but I’d find the charts easier to read if you used a wider variety of colors.
Another historical point I’d like to make is that the common narrative about EA’s recent “pivot to longtermism” seems mostly wrong to me, or at least more partial and gradual than it’s often presented to be, because all four leading strands of EA — (1) neartermist human-focused work, mostly in the developing world, (2) animal welfare, (3) the long-term future, and (4) meta — were major themes in the movement since its relatively early days, including at the very first “EA Summit” in 2013 (see here), and IIRC for at least a few years before then.
Oof, 8 weeks of effort to get 0/20 positions is pretty brutal. It’s easy to see how that would feel like your “Hey you!…” paragraph. And while I suspect you’re a bit of an outlier in time spent and positions applied for, I also think you’re pointing at something true about the current situation re: job openings at EA-motivated employers, as evidenced by how many upvotes this post has gotten, some of the comments on this page, and the data I’ve got as a result of managing Open Phil’s 2018 recruitment round of Research Analysts, during which we had to say “no” to tons of applicants with quite impressive resumes.
I’ve been writing up some reflections on that recruiting round, which I hope to share soon. One of my takeaways is something like “The base of talent out there is strong, and Open Phil’s current ability to deploy it is weak.” In that way we might be an extreme opposite of Teach for America, and I suspect many other EA-motivated orgs are as well.
Anyway, I plan to say more on these topics when I share my “reflections” post, but in the meantime I just want to say I’m sorry that you spent so much time applying to EA orgs and got no offers. Setting the time investment aside, it’s also just emotionally difficult to get an “Unfortunately, we’ve decided…” email, let alone receive 20 of them in a row.
A couple other random notes for now:
- A colleague of mine has heard some EAs — perhaps motivated by considerations like those in this post — saying stuff like “maybe I shouldn’t even try to apply because I don’t want to waste orgs’ time.” In case future potential Open Phil applicants end up reading this comment, let it be known that we don’t think it’s a waste of our time to process applications. If we don’t have the staff capacity to process all the applications we receive, we can always just drop a larger fraction of applicants at each stage. But if someone never applies, we have no opportunity at all to figure out how good a fit they might be. Also, what we’re looking for is pretty unclear (especially to potential applicants), and so e.g. some of our recent hires are people who told us they probably wouldn’t have bothered applying if we hadn’t proactively encouraged them to apply. Of course, an applicant could be worried about whether applying is worth their time, and that’s a different matter.
- I think it would’ve been good to mention that some of these organizations pay applicants for some/all of the time they spend on the application process. (Hopefully Open Phil isn’t the only one?)
Yes, we (Open Phil) have funded, and in some cases continue to fund, many non-EA think tanks, including the six you named and also Brookings, National Academies, Niskanen, Peterson Institute, CGD, CSIS, CISAC, CBPP, RAND, CAP, Perry World House, Urban Institute, Economic Policy Institute, Roosevelt Institute, Dezernat Zukunft, Sightline Institute, and probably a few others I’m forgetting.
I don’t know why the original post claimed “it is pretty rare for EAs to fund non-EA think tanks to do things.”
a Christian EA I heard about recently who lives in a van on the campus of the tech company he works for, giving away everything above $3000 per year
Will this person please give an in-depth interview on some podcast? Could be anonymous if desired.
Yeah, bummer, not happy about this.
FWIW I was substantially positively surprised by the amount and quality of the work you put out in 2019, though I didn’t vet any of it in depth. (And prior to 2019 I think I wasn’t aware of Rethink.)
This seems great to me, please do more.
Not sure I follow the part about how the kind of thing described in the original post makes you “more reluctant to introduce new people into the EA community.” There are lots of exciting things for EAs to do besides “apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers,” including “keep doing what you’re doing and engage with EA as an exciting hobby,” “apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren’t at one of a handful of explicitly EA-motivated orgs,” and “earn to give for a while, gaining skills, and then maybe transition to more direct work later.” There are also paths specific to particular priority causes; e.g. for AI strategy & policy I’d be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), or (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also, the worst that happens is that you’ll be in extremely high demand and highly paid).
An important point here is that if you’re considering this move, there’s a decent/good chance you’ll be able to find career transition funding that gives you 3-12mo of runway after you quit your job, time you can spend full-time talking to people, reading lots of stuff, applying to lots of things, etc., so that you don’t have to burn through much or any of your savings while trying to make the transition work.
Huge +1 to this post! A few reflections:
As someone who has led or been involved in many hiring rounds in the last decade, I’d like to affirm most of the points above, e.g.: it’s very hard to predict what you’ll get offers for, you’ll sometimes learn about personal fit and improve your career capital, stated role “requirements” are often actually fairly flexible, etc.
Applicants who get the job, or make it to the final stage, often comment that they’re surprised they got so far: they didn’t think they were a strong fit, but applied anyway because a friend told them they should.
Apply to some roles even if you’re not sure you’d leave your current role anytime soon. Hiring managers often don’t reach out to some of their top prospects for a role because they have limited time and just assume that the prospect probably won’t leave their current role.
If you apply to a role on a whim and then make it past the first stage, you might find that your interest in the role grows, e.g. because the role “feels more real” and you start thinking concretely about what it would be like, and because you’ve gotten a positive signal that the employer thinks you might be a fit.
Just getting your up-to-date information into an employer’s CRM can be valuable. I am constantly trying to help grantees and other contacts fill various open roles, and one of the main things I do is run filters on past Open Phil applicants to identify candidates matching particular criteria. I’ve helped connect several “unsuccessful” Open Phil applicants to other jobs, including e.g. to a think tank role that shortly thereafter led to a very influential role in the White House, and things like that. Of course we also check our lists of past applicants when trying to fill new roles at Open Phil, and in some cases we’ve hired people we previously rejected for the first role they applied to.
That said, it’s helpful to keep applying even if your info is already in a particular employer’s CRM, both to indicate interest in a particular role and because your situation may have changed. I often assume a prospect won’t be interested in a role because, last I heard, they only wanted roles like X and Y, or only in domain Z, or only after they finish their PhD, or whatever. Then sometimes I learn, 9mo later, that they changed their mind about some of that and would have been open to the role I was trying to fill, but by then the hiring round has closed.
To support people in following this post’s advice, employers (including Open Phil?) need to make it even quicker for applicants to submit the initial application materials, perhaps by holding off on collecting some even fairly basic information until an applicant passes the initial screen.
Thanks for sharing your comment about personalized invitations, that’s interesting. At Open Phil, almost all our personalized invitations (even to people we already knew well) were only lightly personalized. But perhaps a noticeable fraction of people misperceived that as “high chance you’ll get the job if you apply,” or something. The Open Phil RA hiring committee is discussing this issue now, so thanks for raising it.
FWIW the EA forum seems subjectively much better to me than it did ~2 years ago, both in platform and in content, and much of that intuitively seems plausibly traceable to specific labor of the EA forum team. Thanks for all your work!
I wish “relative skeptics” about deep learning capability timelines such as Melanie Mitchell and Gary Marcus would move beyond qualitative arguments and try to build models and make quantified predictions about how quickly they expect things to proceed, à la Cotra (2020) or Davidson (2021) or even Kurzweil. As things stand today, I can’t even tell whether Mitchell or Marcus have more or less optimistic timelines than the people who have made quantified predictions, including e.g. authors from top ML conferences.
[EA has] largely moved away from explicit expected value calculations and cost-effectiveness analyses.
How so? I hadn’t gotten this sense. Certainly we still do lots of them internally at Open Phil.
Re: cost-effectiveness analyses always turning up positive, perhaps especially in longtermism. FWIW that hasn’t been my experience. Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it’s nearly as likely to be net-negative as net-positive given our great uncertainty and therefore I end up stuck doing almost entirely “meta” things like creating knowledge and talent pipelines.
FWIW, I wouldn’t say I’m “dumb,” but I dropped out of a University of Minnesota counseling psychology undergrad degree and have spent my entire “EA” career (at MIRI, then Open Phil) working with people who are definitely better-credentialed and very likely smarter than I am. And I see plenty of posts on EA-related forums that require background knowledge or quantitative ability I don’t have, and I mostly just skip those.
Sometimes this makes me insecure, but mostly I’ve been able to just keep repeating to myself something like “Whatever, I’m excited about this idea of helping others as much as possible, I’m able to contribute in various ways despite not being able to understand half of what Paul Christiano says, and other EAs are generally friendly to me.”
A couple things that have been helpful to me: comparative advantage and stoic philosophy.
At some point it would also be cool if there was some kind of regular EA webzine that published only stuff suitable for a general audience, like The Economist or Scientific American but for EA topics.