What is the actual evidence (examples) in favor of working at an EA org (EAO) instead of ETG? Options I am considering are becoming a GR, working in AI safety or strategy and policy, and management positions. Relevant examples/sites are much appreciated.
I do not want claims (hiding under “We believe Y is true”). I am really looking for evidence (such as: working as a GR at Open Phil would produce XX impact, so ETG of more than $150k would produce the same XX impact).
I would hope the evidence includes factors for replaceability, donor contribution vs. EA org contribution, etc. Evidence-based links for these factors are also much appreciated.
So far the only example I have is from @Milan Griffes here: https://80000hours.org/2016/08/reflections-from-a-givewell-employee/
Thanks.
OP posted this question, worded slightly differently, on Facebook as well. I answered there, and they asked me to repost here.
[TLDR: I don’t think that anyone can give you the examples relevant for you. You need to find and speak to the relevant people, outside of EA as well. It is very doable, but there are also workarounds if you can’t decide right now. Action creates clarity.]
I think 80K is actually saying it is better for most people to do direct work (including but not limited to neglected roles at EA orgs) than to pursue ETG. So don’t just consider EAO vs. ETG. Direct work most likely will /not/ be at an EA org, but rather within a field important to a top EA cause area.
The preference for roles outside of EA makes sense to me, because, while an EA org is likely to find a few good top candidates they consider value-aligned, acting in the wider world using EA principles is a much more reliable (and even stronger) counterfactual. The counterfactual hire at a non-EA business/org/foundation is unlikely to operate with EA in mind.
This is similar to how earning to give is more of a reliable counterfactual than working at an EA org, in that you are almost certainly adding 100% extra money in the pot—the candidate who would have gotten your high-paying job would almost certainly not have donated to an effective charity.
In the end, though, the path for you depends on a lot. You must consider your personal fit and your EA comparative advantage. It also depends on how you expect your favored cause areas, funding, and EA as a movement to evolve. I recommend brain dumping and drafting out as much as you can regarding those five things (personal fit, comparative advantage, cause areas, funding, and the movement’s evolution) to clarify expectations and cruxes! If you can find cruxes, then you can investigate them with expert interviews. Reach out to individuals, not orgs.
Regarding direct work options, reach out to individuals in roles that you could see yourself in (within or outside of an EA org). Even if you are stuck with half a dozen possible roles, that is narrowed down enough that you can ask people in those roles:
-If they feel they are making an impact
-What other options they were deciding between and why they chose what they did
-Where they think the field will go and what will it need
-If they think you would be a good fit.
Now you can compare ETG to what you learned about direct work. You can interview people earning to give in the field you’d work within and people related to philanthropy in the space you’d be donating to. That could look like:
-Fundraisers for an org you love
-Grantmakers in the space
-Relevant foundation researchers, coordinators, and others
Then see if they expect extra annual donations of A-B to be better/worse than direct work of X, Y, or Z.
If you need to further clarify ETG advantage, you can speak to hiring managers or heads of people at EA or non-EA places you’d be excited to work at. Ask them how much better their best candidate tends to be than their second-best candidate.
On the whole, informational interviews are priceless.
You can find all these people using this method or by asking others if they know someone who is doing a certain role.
Here is a recent forum post on how to prepare for informational interviews (keep in mind you might want to be more formal toward non-EAs). Don’t forget to say thanks and cement the bond afterward. If you can help the person in any small way, you should.
And here are two blurbs from 80k encouraging informational interviews and other types of exploration.
So, long story short, you will need to find those people, examples, and evidence that are relevant to you. I get that it is really not easy… I’m in the middle of it too. But just keep getting things down on paper and things will start to become clearer. Take it bit by bit, and try to get more (not necessarily complete) clarity on one aspect per day.
Also, you don’t have to have your path figured out now. If you can narrow it down to 2-3 options, see what next step you could take that would be relevant to both/all paths. If you are at the exact branching point today, then try out a role for a year in a way that should give you pretty good career capital for the other option(s). Then switch to try out another role in a year’s time if it still is not clear. Most likely, a couple of key cruxes will arise while you work.
Action creates clarity, so don’t worry about getting things perfect for now. You actually just need to learn enough to take your immediate next step with confidence.
Good luck, and feel free to PM me.
I don’t need examples “relevant to me”. I just wanted to know what sort of impact people are making, say at Open Phil, CE, or other EA orgs, considering relevant factors such as replaceability, in fields like GR, AI safety/strategy and policy (AI SSP), and management positions. Sorry that was not clear.
This is a claim that 80,000 Hours makes. Do you have ONE example supporting this claim?
It looks like “career advice” to me. What I am asking seems to be different: evidence for the claim “working at an EA org is better than ETG (for most people)”, based on the situation NOW. That’s all.
I don’t know what you mean by cruxes. I guess you mean things that are stopping you from going further. The QUESTION posted is what I wanted more info about, but sadly NO ONE seems to be able to answer it.
I reached out to several orgs and people regarding the question above, but most of them weren’t able to provide me any useful info such as the amount of dollars moved or replaceability. The only people who answered me are Peter Hurford and Jon Behar. I also wanted to get some info on the fully longtermist projects, but I have zero info.
Maybe I’ll check with Aaron Gertler on how to go about it, or try arranging one-on-ones during the conference in London in a few months.
Sounds good, in case I manage to get an interview. Thanks.
Thanks for outlining how to go about determining the value and comparing different options. I appreciate it. Interviews might be something I need to go after; I shall see how I can do that. Emailing, PMing, and posting on the EA Forum does not seem to help, or I am missing something.
It appears that people don’t have these numbers and are not interested in them. I didn’t get any useful answer from EAF, Open Phil, FHI, or CEA, and didn’t get far with this EXACT SAME QUESTION on replaceability.
Perfect. I hope EAG can help me out with IIs (informational interviews).
Good luck to you. Have you tried reaching out to people? What was the response like? Who responded? What did you ask exactly? (Maybe in PM? I can share more details of my approach and info, if it helps.)
Well, I am 29 and want to know if I should go in the direction of a master’s (to keep ETG open) or upskill in research for EA orgs.
Thank you for the post. Will do.
I mean, it’s kinda intertwined, right? Presumably you are earning to give to fund people to do stuff. So someone needs to do that stuff. That person could be you. Or you could be the one funding. I think it really comes down to comparative advantage / personal fit (how good are you personally at doing vs. earning?) and marginal resources (of the orgs you would donate to or work for, how much do they want a talented person versus more money?).
In short, I think getting general examples of people having a high impact by working in an EA org would be misleading for anyone actually making this kind of career path decision.
How do I do this, Peter? I would think I need to start with estimates of the impact I could have through ETG and through working at an EA org. Based on the outcome, I can choose whether to pursue ETG or to build GR skills.
For example, if it turns out that a $30k donation is enough to match the EA org impact, then I would do an MS and get a job at a FAANG, and $30k would be easy to donate. But if it turns out that working at GiveWell as a GR creates an impact equivalent to $200k, then I would rather spend the next few years doing focused practice on GR skills, as I know $200k in donations is going to be super hard unless I do something like trading (which I can’t). I would like to maximize my impact.
So I am looking for examples that show how people came to the conclusion that it is better to work in research at an EAO rather than to ETG. These examples would include replaceability and other factors, I think.
I don’t want general examples. I would like specific examples of the impact of people in GR and management positions at Open Phil (and the like), AI safety (technical researcher) positions at OpenAI (and the like), etc.
AT WHAT LEVEL OF ETG DO I BECOME INDIFFERENT TO WORKING AT AN EA ORG (specifically GR, AI safety strategy research, and management positions)?
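To make concrete what I mean by an indifference point, here is a minimal sketch with made-up placeholder numbers (none of these figures are evidence; they are exactly the kind of inputs I am asking for):

```python
# Hypothetical indifference-point sketch; every number below is a placeholder, not a claim.
# Direct-work impact is expressed in "equivalent dollars donated to a top charity".

direct_work_impact = 200_000     # claimed annual impact of, e.g., a GR role (placeholder)
replaceability_factor = 0.10     # share of that impact I keep after accounting for the next-best hire (guess)
org_vs_donor_share = 0.40        # share attributed to the org/employee rather than to donors (guess)

# Counterfactual impact attributable to me taking the role:
my_direct_impact = direct_work_impact * replaceability_factor * org_vs_donor_share

# ETG donations are treated as ~100% counterfactual (the replacement hire at a FAANG
# would almost certainly not have donated), so the indifference point is simply:
indifference_donation = my_direct_impact
print(f"Indifferent at roughly ${indifference_donation:,.0f}/year in donations")
```

Whether the answer is $8k or $200k per year depends entirely on those three inputs, which is exactly why I am asking for evidence on them.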
To make a very long story very short, I think you should focus on trying to get a direct work job while still doing what is needed to keep your FAANG* options open. Then apply to direct work jobs and see if you get one. If you don’t, pivot to FAANG.
Also, while doing a FAANG job, you could still aim to build relevant skills for direct work and then switch. This is what I did (except I didn’t work in FAANG specifically).
Also, from what I know, donating $200k/yr while working in FAANG is possible for the top ~10% of engineers after ~5 years.
~
*For those following along who don’t know, FAANG = Facebook, Amazon, Apple, Netflix, Google
Peter please bear with me.
So it looks like you are suggesting that ALL DIRECT WORK (DW) is better any day than FAANG-type work, provided you get a job, EVEN if THE MARKET pool has many strong applicants. Is that correct?
I think I can focus on only one: either keeping FAANG open or pursuing DW opportunities. I am 29, Indian by birth, and working in the Netherlands right now. The common route to a big-bucks FAANG job (hence California) would require $50k in costs and a master’s degree to get into the US. And I probably need to start the master’s in 1-2 years at most if I hope to be at a FAANG in the US (a guess/feeling). So prepping for this from now on would be option 1.
I don’t think I will make it into direct work jobs now, based on what I have seen. I would need to work intensely on that separately as well, depending on the type of job. This would be option 2, provided I know what to focus on. Focusing on options 1 and 2 at the same time will be hard in this case, I think! Thoughts?
Direct work in what? Each seems to need its own separate prep: GR, AI safety technical researcher, management positions.
How do I compare different opportunities? It circles back, I think, to calculations and examples of values.
On the other hand, I could try to COPY YOU:
-Get a data science job in the US (by doing a master’s, maybe?)
-Be REALLY GREAT at something! Have at least a triple Master rank on Kaggle, for example (2-3 years maybe)
-Be involved with the EA community (treasurer, research manager --> no idea how to get there though!)
-Build relevant skills for direct work (not sure what “relevant skills” means)
-And SOMEHOW IT WILL WORK OUT! (possibly because there is a lot of overlap between research and data science?)
Can you give two examples of relevant skills you built for a particular direct work role, and how you built them?
Wow. The Power of ETG at FAANG.
I suppose I’m not directly answering your question, but I think it might be pretty hard to answer well if you want to account for replaceability properly, because many people can end up in different positions as a result of you taking or not taking a job at an EA org, and it wouldn’t be easy to track them. I doubt anyone has tried to. See this and my recent post.
If one hasn’t taken replaceability, or the displacement chain, into account, how do you know it is better to work at an EA org rather than to ETG (for X dollars)?
Milan Griffes reports, with a replaceability factor of 10% (a guess) and 60% (a guess) of the contribution attributed to the donor, that his impact was $244k. Remove the replaceability factor and it becomes $2.4m.
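Spelled out, the arithmetic I mean is just the following (a sketch using the guessed factors above; Milan’s actual inputs may differ):

```python
# Back-calculating the figures quoted above (guessed factors; not Milan's exact numbers).
impact_with_replaceability = 244_000   # reported counterfactual impact, donation-equivalent dollars
replaceability_factor = 0.10           # credit retained after accounting for the next-best hire (guess)

impact_without_replaceability = impact_with_replaceability / replaceability_factor
print(f"${impact_without_replaceability:,.0f}")   # ~$2.44m, the figure quoted above

# The 60% donor attribution is applied upstream of this step, i.e. the ~$2.4m already
# reflects the donor-vs-org split before the replaceability adjustment is taken.
```

The point is that the replaceability factor alone changes the answer by an order of magnitude.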
And the 80,000 Hours article you cited on replaceability seems way off with its suggestions. 80k suggests that “often if you turn down a skilled job, the role simply won’t be filled at all because there’s no suitable substitute available”, whilst the only evidence I can find says completely otherwise: Carrick’s take on AI S&P, Peter representing RC, Open Phil’s hiring round, Jon Behar’s comments, EAF’s hiring round.
As for your post, I saw it as well, and I benefited from the “displacement chain” terminology and calculation. It was very difficult for me to follow the discussion on differences in priorities. In any case, I think we need at least one real example to test a claim.
How are people so confident in saying that working at an EAO is better than doing ETG, especially considering how “full” the talent pool is (Carrick’s take on AI S&P, Peter representing RC, Open Phil’s hiring round, Jon Behar’s comments)?
What is the evidence?
New charities will sometimes be started to create more EA org positions, and they wouldn’t get far if they didn’t have people who were the right fit for them. Rethink Priorities and Charity Entrepreneurship are relatively new (although very funding-constrained, and this might be the bottleneck both for their hiring and for starting new charities like them). Charity Entrepreneurship is starting many more EA orgs with their incubation program (incubated charities here). Maybe it’s worth reaching out to them to see what their applicant pool is like?
I think there are also specific talent bottlenecks, see [1], [2], [3]. Actually, this last one comes from Animal Advocacy Careers, a charity incubated by Charity Entrepreneurship to meet the effective animal advocacy talent bottlenecks.
Btw, I think you have the wrong link for Carrick’s take.
Thanks for mentioning AAC!
Not sure about Rethink Priorities, but a minor correction: the last time I spoke to CE about this, they didn’t see funding as a substantial constraint. They felt more constrained by high-quality applicants to their programme.
Edit: CE are now looking for funding, so are at least partly funding constrained!
Good idea. I will contact them as well to see what the talent pool is like. If they still need “high-quality people”, somehow improving in that direction seems like a good opportunity.
Michael, I have written an article about [1] and [2] here, in my unfinished blogspace: http://agent18.github.io/is-ea-bottlenecked-2.html. I really don’t find evidence for their claims of bottlenecks, or I don’t understand what they are trying to say. For example, GR in GPR is recommended by 80,000 Hours in their high-impact careers post, in the surveys, in the separate problem profiles, etc., yet during Open Phil’s hiring round there were literally hundreds of “good resumes” and “many candidates worthy of positions”, but Open Phil could not absorb all of them.
Peter Hurford can also be seen talking about the lack of a talent constraint in GR (I think).
This I really need to look into. Thanks for that.
Thanks. Corrected it. Sorry about that.
Bottom line
I don’t know how people evaluate which career to choose. Many people redirect me to posts from 80,000 Hours, but I find only claims there. When I ask organizations about value generated and replaceability, I don’t get any info from them. I think people make a guess at best, falling prey to vague words like “career capital”, or possibly focus primarily on what they are good at, or I don’t know.
Anyway… it seems like a dead end to think that I can actually evaluate what I should be doing. Your thoughts?
How did you end up choosing to go to DarwinAI? Why not something else, like GR in GPR or FAANG?
I’d say it was kind of decided for me, since those other options were ruled out at the time. I applied to internships at some EA orgs, but didn’t have any luck. Then, I did a Master’s in computational math. Then, I started working part-time at a machine learning lab at the university while I looked for full-time work. I applied to AI internships at the big tech companies, but didn’t have any luck. I got my job at DarwinAI because I was working for two of its cofounders at the lab. I had no industry experience before that.
I’m currently trying to transition to effective animal advocacy research: reading more research, offering to review research before publication, applying to internships and positions at the orgs, and studying more economics/stats (one of the bottlenecks discussed here), with quantitative finance as my second choice and going back to deep learning in industry as my third. I feel that EA orgs have been a bit weak on causal inference (from observational data), which falls under econometrics/stats.
Your options sound solid. I guess you’re 28 and can thus still get into the rather different field of quantitative finance.
But how did you decide that it is best for you to dedicate your time to AAR? You could be working at GiveWell/Open Phil as a GR, or at OpenAI/MIRI in AI safety research (especially with your CS and math background), or you could be doing ETG at a FAANG. Also, 80,000 Hours nowhere seems to suggest that AAR, of all things, is a “high-impact career”, nor does the EA survey say anything about it. In fact, the survey talks about GR and AI safety.
And did you account for replaceability and other factors? If so, how did you arrive at these numbers?
So you hope to apply causal inference in AAR?
Lastly, I want to thank you from the heart for taking your time and effort to respond to me. Appreciate it, brother.
26, but 2 years isn’t a big difference. :)
So I’m choosing AAR over other causes due to my cause prioritization, which depends on both my ethical views (I’m suffering-focused) and empirical views (I have reservations about longtermist interventions, since there’s little feedback, and I don’t feel confident in any of their predictions and hence cost-effectiveness estimates). 80,000 Hours is very much pushing longtermism now. I’m more open to being convinced about suffering risks, specifically.
I’m leaning against a job consisting almost entirely of programming, since I came to not enjoy it that much, so I don’t think I’d be motivated to work hard enough to make it to $200K/year in income. I like reading and doing research, though, so AI research and quantitative finance might still be good options, even if they involve programming.
I didn’t do any explicit calculations. The considerations I wrote about replaceability in my post and the discussion here have had me thinking that I should take ETG to donate to animal charities more seriously.
I think econometrics is not very replaceable in animal advocacy research now, and it could impact the grants made by OPP and animal welfare funds, as well as ACE’s recommendations.
I’ll try a rough comparison now. I think there’s more than $20 million going around each year in effective animal advocacy, largely from OPP. I could donate ~1% ($200K) of that in ETG if I’m lucky. On the other hand, if I do research for which I’d be hard to replace and that leads to a different prioritization of interventions, I could counterfactually shift a good chunk of that money to (possibly far) more cost-effective opportunities. I’d guess that corporate campaigns alone are taking >20% of EAA’s resources; good intervention research (on corporate campaigns or other interventions) could increase or decrease that considerably. Currently only a few people at Humane League Labs and a few (other) economists (basically studying the effects of reforms in California) have done or are doing this kind of econometrics and causal inference research; maybe around the equivalent of four people are working on this full-time now. So my guess is that another person working on this could counterfactually shift >1% of EAA funding in expectation to opportunities twice as cost-effective. This seems to beat ETG donating $200k/year.
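Here is the same back-of-the-envelope as a sketch, using only the guesses above (none of these numbers are measured):

```python
# Rough ETG vs. hard-to-replace-research comparison; all inputs are guesses from the paragraph above.
eaa_funding_per_year = 20_000_000   # lower bound on yearly EAA funding, largely from OPP
etg_donation = 200_000              # what I might donate per year via ETG if lucky (~1% of the pool)

shift_fraction = 0.01               # fraction of EAA funding my research might redirect (guessed lower bound)
effectiveness_multiplier = 2        # redirected money goes to opportunities ~2x as cost-effective

# Moving $X from a 1x opportunity to a 2x opportunity is worth roughly $X of extra
# baseline-effectiveness donations, so:
research_value = eaa_funding_per_year * shift_fraction * (effectiveness_multiplier - 1)

print(f"ETG:      ~${etg_donation:,.0f}/year")
print(f"Research: ~${research_value:,.0f}/year equivalent")
```

At these point estimates the two come out about equal (~$200k/year each); what tips it toward research for me is that both the total funding pool and the fraction I might shift are lower bounds.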
Happy to help! This was useful for me, too. :)
(Oh, besides economics, I’m also considering grad school in philosophy, perhaps for research on population ethics, suffering-focused views and consciousness.)
From this older article:
Not very good evidence, though, without word directly from GiveWell.
More on threshold hiring here, but no EA-specific examples.
Thanks, Michael. As you said, we would need to confirm it with GiveWell. In 2019 they planned to hire 3-5 new research staff. It looks like they are deliberately limiting GiveWell’s growth relative to the available “talent pool”, as expressed in Open Phil’s hiring round. Also, the priors suggest that GiveWell would like to “grow slowly”: https://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/
So I really doubt we should go by 80k’s claim in this regard.
Note that GiveWell stated longer-term plans to “more than double” the size of their research team by early 2022. I assume that one of their bottlenecks is that they recently chose a new Managing Director who will add more research management capacity, but who doesn’t start until July 2020. I wouldn’t be surprised if hiring scales up after that (though I don’t know for sure that it will).