Thanks for this, really interesting.
It might be useful to include page views of ea.org in future, given that that’s arguably the page that has been most designed to be a good landing page for EA.
AI Impacts’ project on discontinuities in technological progress might have some relevant examples for this: https://aiimpacts.org/cases-of-discontinuous-technological-progress/
Thank you for opening this discussion—this feels like a really important topic. I’ve never been religious, and my parents moved around a lot when I was young. So I didn’t have the experience of growing up in a community, but it has always seemed really appealing to me. One thing I’ve been particularly glad about in being surrounded by EAs is that it’s so accepted that living in group houses is a good idea. My parents’ generation, and even my non-EA friends, tend to feel that it’s weird to live with other adults, particularly when you’re married. But I’ve found living with friends to be immensely supportive and an easier way than usual to forge strong, lasting friendships. At the extreme of this, when I had a late-term stillbirth my housemates cleared all evidence of the baby away before I came home from hospital, made sure that all the friends I wanted to be told knew without me having to talk about it, bought groceries and cooked for me. This kind of community seems immensely valuable, quite apart from it being cheaper to share houses!
If you haven’t come across it yet, you might be interested to go to Secular Solstice gatherings (https://www.lesswrong.com/posts/ERboWueanAyqwKbiQ/boston-solstice-2018). They talk about challenges humanity has overcome and ones we still need to face, and sing songs like these (https://www.lesswrong.com/posts/ERboWueanAyqwKbiQ/boston-solstice-2018). Unfortunately they’re just once a year though!
As far as I’m aware, no grantmaking happened, for the reason in the paragraph before that line—that no charities doing effective work on it were found.
Only tangentially related, but you might be interested in the report on child marriage that Jacob Williamson did a few years ago from an EA point of view. In addition to discussing the harms caused by child marriage, it discusses various possible interventions for tackling it.
This sounds pretty sensible to me. On the other hand, if people are worried about it being harder for those who are already less plugged in to networks to get funding, you might not want an additional dimension on which these harder-to-evaluate grants could lose out compared to easier-to-evaluate ones (where the latter end up having a lower minimum threshold).
It also might create quite a bit of extra overhead for grantmakers to have to decide the opportunity cost case by case, which could reduce the number of grants they can make, or again push towards easier-to-evaluate ones.
I strongly agree with this. EA Funds seemed to have a tough time finding grantmakers who were both qualified and had sufficient time, and I would expect that to be partly because of the harsh online environment previous grantmakers faced. The current team seems to have impressively addressed the worries people had, in terms of donating to smaller and more speculative projects and providing detailed write-ups on them. I imagine that in-depth, harsh attacks on each grant decision will make it still harder to recruit great people for these committees, and mean those serving on them are likely to step down sooner. That’s not to say we shouldn’t be discussing the grants—presumably it’s useful for the committee to hear other people’s views on the grants to get more information about them. But following Ben’s suggestions seems crucial to EA Funds continuing to be a useful way of donating into the future. In addition, to try to engage more in collaborative truthseeking rather than adversarial debate, we might try to:
Focus on constructive information / suggestions for future grants rather than going into depth on what’s wrong with grants already given.
Spend at least as much time describing which grants you think are good and how, so that they can be built on, as on things you disagree with.
You might also want to take longer-run effects into account, as is discussed in this article: http://globalprioritiesproject.org/2014/06/human-and-animal-interventions/
I actually don’t agree that the majority of roles for our first 6 priority paths are ‘within the EA bubble’: my view is that this is only true of ‘working in EA organisations’ and ‘operations management in EA organisations’. As a couple of examples: ‘AI policy research and implementation’ is, as you indicate, something that could be done at places like FHI or CSET. But it might also mean joining a think tank like the Center for a New American Security, the Belfer Center or RAND; or it could mean joining a government department. EA orgs are pretty clearly the minority in both our older and newer articles on AI policy. ‘Global priorities researcher’ in academia could be done at GPI (where I used to work), but could also be done as an independent academic, whether that simply means writing papers on relevant topics, or joining/building a research group like the Institute for Future Studies (https://www.iffs.se/en/) in Stockholm.
One thing that could be going on here is that the roles people in the EA community hear about within a priority path are skewed towards those at EA orgs. The job board is probably better than what people hear about by word of mouth in the community, but it still suffers from the same skew—which we’d like to work towards reducing.
Thanks for all these useful tips Joey. Something I wanted to disagree with you on—the idea that it’s best only to apply for a couple of organisations / jobs. In my experience, most organisations aren’t put off by an applicant also looking into working at a broad range of other places. That makes sense to me for a couple of reasons: there are a huge number of very high impact roles out there and it’s really tough to tell which are the very most high impact; and as an individual it’s hard to know which job you’re going to be best suited for and so it makes sense to apply broadly.
I think the idea that it’s sensible to apply broadly holds both in the sense of applying for many different roles at EA organisations, and in the sense of applying for jobs outside of EA organisations. There are ultimately very few jobs at EA organisations, so it’s unlikely anyone should be exclusively applying for those.
[I work for 80,000 Hours]
Thanks for your thoughts. I’m afraid I won’t be able to address everything, but I wanted to share a few considerations.
There were a few points here I particularly liked:
People should be thinking about the impact they can have in their career over a period of decades, rather than just the next year or so. This seems really useful to highlight, because it’s pretty difficult to keep in mind, particularly early on in your career.
We need to avoid a sense in the community that ‘direct work’ means ‘work in EA organisations’: the vast majority of the most impactful roles in the world are outside EA organisations—whether in government, academia, non-profits or companies.
The paths to these roles are very often going to be long, and involve building up skills, credibility/credentials and a network.
I agree that the phrase ‘skill bottleneck’ might fail to adequately capture resources like credentials and networks, but we think that these forms of career capital are as important as specific skills. However, we think that they are most useful when they are reasonably relevant to a priority path. For example, we think Jason Matheny’s career capital is so valuable largely because his network and credentials were in national security, intelligence, U.S. policy, and emerging technology—areas we think are some of the most relevant to our priority problems. If he had worked at a management consulting firm or in corporate law he would still have acquired generally impressive networks and prestige, but couldn’t have founded CSET.
There are a few things I disagree with:
You seem to be fairly positive about pretty broad capital building (eg working at McKinsey). While we used to recommend working in consulting early in people’s careers, we’ve updated pretty substantially away from that in favour of taking a more directed approach to your career. The idea is to try to find the specific area you think is most suited to you and where you’ll have the most impact, and then to try out roles directly relevant to that. That’s not to say, of course, that it will be clear what type of role you should pursue, but rather that it seems worth thinking about which types of role seem best suited to you, and then trying out things of that type. Often, people who are able to acquire prestigious generalist jobs (like McKinsey) are able to acquire more useful targeted jobs that would be nearly as good a credential. For example, if you think you might be interested in going into policy, it is probably better to take a job at a top think tank (especially if you can do work on a topic that’s relevant to one of our priority problems, such as national security or emerging technology policy) than to do something like management consulting. The former has nearly as much general prestige, but has much more information value to help you decide whether to pursue policy, and will allow you to build up a network, knowledge (including tacit knowledge), and skills which are more relevant to roles in priority areas that you might aim for later in your career. One heuristic we sometimes use to compare the career capital of two opportunities is to ask in which option you’d expect your career to be more advanced in a priority path 5-10 years down the line. It’s sometimes the case that spending years getting broad career capital and then shifting into a relevant area will progress you faster than acquiring more targeted career capital, but in our experience, narrow career capital wins out more often.
I agree that it’s really important for people to find jobs that truly interest them and which they can excel at. Having said that, I’m not that keen on the advice to start your career decision with what most fascinates you. Personally, I haven’t found it obvious what I’ll find interesting until I try it, which makes the advice not that action-guiding. More importantly, in order to help others as much as we can, we really need to both work on the world’s most pressing problems and find which inputs are most needed in order to make progress on them. While this will describe a huge range of roles in a wide variety of areas, it will still be a minority of jobs. That makes me think it’s better to approach career decisions by first thinking through which problems in the world you think most need solving and what the biggest bottlenecks to them being solved are, followed by which of those tasks seem interesting and appealing to you, rather than starting with the question of which jobs seem most interesting and appealing.
I’m a little worried that people will take away the message from your piece that they shouldn’t apply to EA organisations early in their careers, or should turn down a job there if offered one. Like I said—the vast majority of the highest impact roles will be outside EA organisations, and of course there’ll be many people who are better suited to work elsewhere. But it still seems to be the case that organisations like the Open Philanthropy Project and GiveWell are occasionally interested in hiring people 0-2 years out of university. And while there seem to be some people to whom working at EA organisations seems more appealing than it should, there are also many people for whom it seems less appealing or cognitively available than it should. For example, while the people on this forum are likely to be very inclined to apply for jobs at EA organisations, many of the people I talk to in coaching don’t know that much about various EA organisations and why they might be good places to work.
I think the thing to bear in mind is that it’s important not to apply only for jobs at EA organisations. The total number of jobs advertised at EA organisations at any one time is small, and new graduates should expect to apply to tens of jobs before getting one. As long as you’re also applying to jobs that would help you build career capital, the cost of applying to a valuable direct work job is typically fairly small relative to the benefit of learning that you’re already in a position to start making large contributions to a priority area.
Unfortunately, as you say, it seems very difficult to convey accurate impressions—whether about how hard it is to get into various areas, or what kind of skill bottlenecks we currently think there are. I think this is in part due to people having such different starting points. I both come across people who had the impression that it was easy to get into AI safety or EA organisations and then struggled to do so, and people who thought it was so competitive there was no point in them even trying who (when strongly encouraged to do so) ended up excelling. We’re hoping that focusing more on the long-form material like the podcast will help to get a more nuanced picture across for people coming from different starting points.
I’m not sure if this is the kind of place you were thinking of, but the EA Work Club is linked to on the 80,000 Hours job board page (https://80000hours.org/job-board/), at the bottom under ‘Other places to find vacancies’.
Love the good news roundup!
There actually is a listing like this: the EA Work Club (https://eawork.club/). Although it’s aiming at EA community jobs, rather than all possible jobs which could be defined as EA, which is maybe what you were after.
Let’s Fund has recently been set up to try to get funding for neglected and speculative projects in effective altruism. They seem to particularly focus on research. It could be worth reaching out to them about whether your project is the kind they’d be interested in fundraising for.
In case you haven’t come across it yet, the 80,000 Hours job board has a filter for jobs which can be done remotely, which you might find useful.
It’s always great to see interesting new projects like this to improve the EA community! There might also be lessons for the project from EA Ventures, which tried to coordinate between speculative EA projects and funders.
That’s what I meant by ‘though it turns out to be correct’. Sorry for being unclear.
I didn’t downvote the comment, but it did seem a little harsh to me. I can easily imagine being forwarded a draft article, and reading the text the person forwarding wrote, then looking at the draft, without reading the text in the email they were originally sent. (Hence missing text saying the draft was supposed to be confidential.) Assuming that Will read the part saying it was confidential seemed uncharitable to me (though it turns out to be correct). That seemed in surprising contrast to the understanding attitude taken to Julia’s mistake.
Thanks, this seems like a really useful guide!
One thing I find important in conversations, particularly if I’m doing them back to back, is writing down action points (eg people I want to introduce them to) as I go. People sometimes think it’s rude to do this on a phone, so having a notebook with you is probably the best approach.
Something I struggle with is quickly building up enough rapport with a person that they will feel comfortable pushing back on things, and in particular bringing up more socially awkward considerations (eg ‘I’ve heard that effective altruists don’t think it’s particularly impactful to get a job doing x, but I’ve been working towards that goal for years, and hate the idea of never getting to do it’). I’ve found it pretty useful to watch people who are really good at getting on with others meet new people, and to see what they do that makes people feel quickly at ease. Because I know this is a weak spot of mine, I try after some of my 1-1 conversations to think through whether there was anything in particular that went well/badly on this dimension (I waited a while for them to respond after saying y, rather than bulldozing on...; when I pushed back on z I accidentally got into ‘philosophy debate’ mode rather than friendly discussion mode). I also find reading books that get me to think through these kinds of dynamics useful: I’ve found ‘The Charisma Myth’ useful enough to have read it a couple of times, and right now I’m reading ‘Never Split the Difference’. (A lot of these kinds of books sound like they’ll be about getting your own way and persuading people into things they don’t want to do, but they actually spend most of their time on how to make sure you properly hear and understand the person you’re talking to, and help them feel at ease.)