Thanks! I actually ran through the whole 80k job board a few weeks back, but I like your filters (and am seeing a few new roles already). I’ll give the talk a listen (and the article a read); thanks for sharing!
JeremyR
Thanks for the thoughtful and detailed reply Ben!
I’m not the most risk-seeking, so I think I’ll need to reflect on the trade-off of taking a more indirect route in the hopes of landing an EA role while giving up the “capital” I’m told I have for my first role post-consulting. Will mull over what you’ve shared!
I’ve never taken part in a reading group (outside of seminars and the like in undergrad), and have no plans to do so, and yet I really enjoyed reading this piece! Thoughtfully and clearly laid out, with novel ideas I hadn’t come across before. I’ll be sure to pass it on to friends who take part.
I’m glad Aaron nudged you to write this and that he included it in his digest email!
Agree with Josh’s take on Jews in EA and Effective Tzedakah (though I’d agree strictly speaking the concept of tzedakah is at least broader than charitable giving). I think “Effective Altruism and Judaism” (maybe EAJ?) is my favorite! That said, RE “EA for Jews”—any chance you can ask the folks at EA for Christians how they feel the name has worked out for them?
Yasher Koach—I’m a fan! As an Orthodox Jew myself I’ve been collecting some EA-relevant halakhic/biblical texts on this “source sheet” to eventually get back around to. It needs a lot of fleshing out, not to mention much clearer structure; perhaps this project will be the kick in the pants I’ve needed.
I’m personally still grappling with the same sorts of tensions referenced in Raffi’s post (linked above). Though I think a number of halakhic texts align quite neatly with an EA direction, a very well-known / internalized notion in the Orthodox Jewish world is the concept of aniyei ircha kodmim—the poor of your city come first, i.e., proximity matters—which of course is… less well-aligned with EA thinking.
Given that, I think there’s particular value in shining a light on some of those halakhic sources which emphasize the relative weighting of need and/or imperative to save lives to help foster more critical thinking among Orthodox Jews with regard to their giving, careers, volunteering etc. Hopeful that can be folded into this project!
Vanguard’s website does not state that they can accept cryptocurrency, but I confirmed with a representative that they take donations of cryptocurrency if the value of the contribution is at least $50,000.
Schwab also told me (in Nov 2020) that they only accept cryptocurrency if the contribution is >$50,000, and their vendors charge a 1% fee on Bitcoin and a $3,500 flat fee for Ethereum. I spoke to Fidelity Charitable who told me they had no minimum contribution for cryptocurrency, but I didn’t inquire about fees.
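To make those fee structures concrete, here’s a quick sketch comparing a 1% percentage fee against a $3,500 flat fee at different donation sizes (a simplification of what Schwab quoted me; actual vendor terms may differ and could include other charges):

```python
def percentage_fee(donation, rate=0.01):
    """Fee structure quoted for Bitcoin: a percentage of the contribution."""
    return rate * donation

def flat_fee(donation, amount=3500.0):
    """Fee structure quoted for Ethereum: a flat charge regardless of size."""
    return amount

# At the $50,000 minimum contribution, the two structures diverge sharply:
# 1% of $50,000 is $500, while the flat fee is $3,500 (7% of the donation).
print(percentage_fee(50_000))  # 500.0
print(flat_fee(50_000))        # 3500.0

# The flat fee only matches a 1% fee once the contribution reaches $350,000.
print(percentage_fee(350_000))  # 3500.0
```

In other words, at the minimum contribution size the flat Ethereum fee eats 7% of the gift, which seems worth factoring into which asset (and which donor-advised fund provider) you use.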
Really really impressive write-up; thanks for putting this together and hope it sparks more discussion on lead as well as more of these write-ups!
I’m not sure how to understand this line referring to the International Lead Association. Could you clarify if the expectation is that ILA would be an ally vs. an opponent (or does Pure Earth not yet have a belief either way)?
“Pure Earth believes them to be an ally or an opponent on a campaign to clean up informal lead battery recycling but we have not spoken to ILA ourselves.”
Posting as an individual who is a consultant, not on behalf of my employer
Hi, one such consultant checking in! I had this post open from the moment I saw it in this week’s EA Forum digest, but… I (like many other consultants) work a silly number of hours during the work week so just reading the post in detail now.
I’m a member of, but don’t run, the EACN network and my take is it’s a group of consultants interested in EA with highly varied degrees of familiarity / interest: from “oh, I think I’ve heard of GiveWell?” to “I’m only working here because GiveWell rejected my job application.”
80,000 Hours’ old career survey pointed me toward management consulting ~7-8 years ago (affirming a path I was already planning on following) and it’s the only full-time job I’ve had. I’d be surprised if any of us had ever had an EA client (closest I’m aware of is the Bill & Melinda Gates Foundation), though I’ve unsuccessfully pitched my employer on doing pro-bono work with a top GiveWell charity.
I agree with Niklas that it seems to me it’d make sense for EA groups to start off by hiring existing consultants / consultancies to prove out the use-case and demand before expecting a boutique firm to get off the ground, but… as a matter of practice what I imagine would happen is as follows:
You’d be set up with the global health / social impact / non-profit side of the consultancy (while plenty of us, myself included, do commercial work—and so would never hear about the project)
The “expertise” would come from the more senior members of the consultancy (e.g., Partners), who might know a lot about, say, global health but are less likely to be familiar with EA (both because they’re older and because they’ve built a book of business with the sorts of companies that pay for consulting… which hasn’t been EA)
The “brawn” would come from generalists—which is where there are some folks who are EA-aligned—but who are usually not selected for projects based on their own content expertise
You’d need a ton of consistent demand with a single consultancy to be able to “develop” experts, much less keep up a large enough pool of brawn with EA knowledge to reliably execute this work [which I think cuts in favor of the boutique firm model]. As soon as one project finishes I’m expected to move to the next, so unless something is actively sold and in need of a person at my tenure the very next day, I’ll be moved on to something else for 3-6 months and won’t be pulled off even if a great EA project sells a week later.
All that said, I’d venture to say almost every major corporation and government relies on generalist consultancies to varying degrees, even for fairly technical / specialized work. I think that should at least raise questions on how important EA-familiarity is for the work described above—it may be a narrower slice of work that really demands it than the author of this post imagines. [To be clear, not trying to shill here—I’m too junior to sell work myself—just sharing an “insider” perspective / trying to help re-calibrate priors.]
Posting as an individual who is a consultant, not on behalf of my employer
Let me start off by saying that’s an interesting question, and one I can’t give a highly confident answer to because I don’t know that I’ve ever had a conversation with a colleague about truth qua truth.
That said, my short answer would be: I think many of us care about truth, I think our work can be shaped by factors other than truth-seeking, and I think if the statement of work or client need is explicitly about truth / having the tough conversations, consultants wouldn’t find it especially hard to deliver on that. The only factor particular to consulting that I could see weighing against truth-seeking would be the desire to sell future work to the client… but to me that’s resolved by clients making clear that what the client values is truth, which would keep incentives well-aligned.
My longer answer...
I think most of my colleagues do care about truth, and are willing to take a firm stance on what they believe is right even if it’s a tough message for the client to hear. [Indeed I’ve explicitly heard firm leadership share examples of such behavior… which I think is an indicator that a) it does happen but b) it’s not a given which ties to...]
...I think there’s a recognition that at the end of the day, we have formal signed statements of work regarding what our clients expect us to deliver, and our foremost obligation is to deliver according to that contract (and secondarily, to their satisfaction) rather than to “truth”
If our contracts were structured in a more open-ended manner or explicitly framed around us delivering the truth, I see no reason (other than the aforementioned) why we would do anything other than provide that honest perspective
I wonder to what extent employees of EA organizations feel competing forces against truth (e.g., I need to keep my job, not rock the boat, not say controversial things that could upset donors)—I think you could make a case that consultants are actually better poised to do some of that truth-seeking, e.g., if it’s a true one-off contract
To your 2nd question about >70%:
I don’t think this framing is really putting your original question another way (to sprinkle in some consulting-ese I think “the question behind your question” is something else)
That said, my “safe,” not-super-helpful, and please-don’t-selectively-quote-this-out-of-context answer is less than half the time...
...But that’s because most of the work I (and I’d venture to say, most of us) do isn’t about truth-seeking, so it’s not the sort of thing about which reasonable people of goodwill will have meaningful disagreement. Rather, the work is about further developing a client’s hypothesis, or helping them understand how best to pursue an objective, or helping them execute a process in which they lack expertise [all generally in the service of increasing client profitability]
Should “reduction” in the quote below (my emphasis) read “increase?”
“This is hard to justify intuitively—it implies that we should ignore the near-term costs, and (taken to the extreme) could justify almost any atrocity in the pursuit of a miniscule reduction of long-term value.”
Personally, I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health. Coming up with an actual number is difficult – I certainly don’t think they’re overwhelmingly better.
Not to pick nits but what would you consider “overwhelmingly better?” 1000x? I’d have said 10x so curious to understand how differently we’re calibrated / the scales we think on.
Got it—thanks for taking the time to respond!
On the other hand, taxes are not entirely “money lost”—a good part of government spending goes into causes that you may not be entirely averse to—although it’s hard to tell what a marginal dollar will do, e.g. whether it will be used to cut the taxes of millionaires, or to provide social benefits to the poor.
To your point on marginal impact—governments certainly don’t spend money they take in dollar for dollar, and in fact it seems the correlation between intake and expenditure is quite far from 1:1. US government debt is on the order of trillions of dollars, so while it’s maybe slightly better than flushing your money down the toilet, I’m not sure I’d value it much higher.
New cause area: Traffic congestion
I know the footnotes in this piece don’t currently work :( I pasted my write-up from a Google doc based on this guidance but it seems something broke in my attempt. If anyone here can help me figure out how to get those sorted, that’d be much appreciated!
Relatedly, two upfront notes I’d have liked to add toward the start but couldn’t get to work as footnotes in the editor:
Almost all of the data I used in this piece came from the Texas A&M Transportation Institute’s (TTI) annual Urban Mobility Report, which is not peer-reviewed. It seems to be the only real game in town on the topic of traffic’s scale and effects, and is incredibly thorough. I spoke to David Schrank, one of its co-authors, in drafting this piece and made sure I had a (very) surface-level understanding of TTI’s methodology, but ultimately my findings do hinge largely on their work. This goes without saying, but further review is warranted before meaningful resources are allocated accordingly.
COVID dramatically altered the traffic landscape over the last several years, and is likely to leave a lasting mark. How lasting remains to be seen—TTI’s latest report is based on 2020 data—but when it comes to my analyses I generally rely on pre-COVID (2014-2019) data. It’s worth being explicit that—at its peak—COVID dramatically reduced traffic, and the work from home policies it begot will almost certainly lead to a step-change in traffic moving forward. While in some sense this means low-hanging fruit has already been plucked, COVID has also shifted the “Overton window,” allowing for discussion of opportunities that a few short years back seemed far-fetched
Thanks Adina! Agree it’s an awesome tool; the link was in my draft but I really should have incorporated it!
Taking the tool “one step further” (e.g., trying to size the impact of each intervention in a more standardized manner) is probably one of the most clear-cut (and possibly high-return) next steps a funder could take if they were interested in further pursuing the topic.
Sharing my reflections on the piece here (not directly addressing this particular post but my own reflections I shared with a friend.)
While I agree with lots of points the author makes and think he raises valuable critiques of EA, I don’t find his arguments related to SBF to be especially compelling. My run-through of the perceived problems within EA that the author describes, and my reactions:
1. The dominance of philosophy. I personally find parts of long-termism kooky and I’m not strongly compelled by many of its claims, but the Vox author doesn’t explain how this relates to SBF (or his misdeeds)… it feels more like shoehorning a critique of EA into a piece on SBF?
2. Porous boundaries between billionaires and their giving. So yes, it sounds like SBF was very directly involved in the philanthropy his funds went toward, but I don’t think that caused (much? any?) incremental reputational harm to EA vs. a world where he created the “SBF family foundation” and had other people running the organization.
If I wanted to rescue this argument, maybe I could say SBF’s behavior here is representative of a common trait of his (at FTX and in his charity) – SBF doesn’t even have the dignity to surround himself with yes-men; he insists on doing it all himself! And maybe that’s a red-flag RE cult of personality/genius and/or fraud that EA should have caught on to.
I will say, though, that the FTX Future Fund had a board/team that was fairly star-studded and ran a big re-granting program (i.e., let others make grants with their money). Which is to say I’m not sure how directly involved SBF actually was in the giving. [As an aside, I think it’s fine for billionaires to direct their own giving and am a lot more suspicious of non-profit bloat and organizational incentives than the Vox author is.]
3. Utilitarianism free of guardrails. I agree a lack of guardrails is a problem, but:
a) On utilitarianism’s own account it seems to me you should recognize that if you commit massive fraud you’ll probably get caught and it will all be worthless (+ cause serious reputational harm to utilitarianism), so then committing the fraud is doing utilitarianism wrong. [I don’t think I’m no-true-Scotsman-ing here?]
b) More importantly… the author doesn’t explain how unabashed utilitarianism led to SBF’s actions—it’s sort of vaguely hand-waving and trying to make a point by association vs. actual causal reasoning / proof, in the same vein as the dominance of philosophy point above? I guess the steelman is: SBF wanted to do the most good at any cost, and genuinely thought the best way to do so was to commit fraud (?) A bit tough for me to swallow.
4. Utilitarianism full of hubris. A rare reference to evidence (well, an unconfirmed account, but at least it’s something!) Comparing the St. Petersburg paradox to SBF figuring let’s double-or-nothing our way out of letting Alameda default is an interesting point to make, but SBF’s take on this was so wild as to surprise other EA-ers. So it strikes me as a point in favor of “SBF has absurd viewpoints and his actions reflect that” vs. “EA enabled SBF.” Meanwhile the author moves directly from this anecdote to “This is not, I should say, the first time a consequentialist movement has made this kind of error” (emphasis added). SBF != the movement and I think the consensus EA view is the opposite of SBF’s, so this feels misleading at best.
One EA critique in the piece that resonated with me—and that I’m not sure I’d seen put so succinctly elsewhere—is:
“The philosophy-based contrarian culture means participants are incentivized to produce ‘fucking insane and bad’ ideas, which in turn become what many commentators latch to when trying to grasp what’s distinctive about EA.”
While not about SBF, it’s a point I don’t see us talking about often enough with regard to EA perceptions / reputation and I appreciated the author making it.
TL;DR: I thought it was an interesting and thought-provoking piece with some good critiques of EA, but the author (or—perhaps more likely—editor who wrote the title / sub-headers) bit off more than they could chew in actually connecting EA to SBF’s actions.
Just seeing this, but yes it was a quote from the original piece! FWIW I appreciate your use of “weird” vs. the original author’s more colorful language (though no idea if that’s what your pre-edit comment was in reference to)
Hi all
I’m wondering if folks have suggestions for what EA organizations and / or roles could best leverage the skill set of management consultants? There are quite a few of us interested in EA and it’s a job with relatively high churn (plenty of folks open to opportunities!), but I’m not sure there’s much of a “pipeline” from consulting to EA today.
Back in the day—when I was already planning to enter the industry—an 80,000 Hours quiz result suggested management consulting, and I’ve been doing the job, which I’ve generally enjoyed, for the last 5+ years. I’ve been earning to give, but would like to explore the potential for direct work—just not sure where my experience / skills could best translate.
Here’s my LinkedIn page and I’m happy to share a resume with detailed experience if useful. But, in short, I went to a top US university (no grad degree), jumped to a top management consulting firm, and have worked across most major industries (energy, healthcare, finance, retail, private equity, etc.) across a range of for-profit organizations.
For those at roughly my tenure who leave consulting for the private sector, the most likely next step is “middle management” (e.g., Director roles) in corporate strategy. In terms of concrete skills, I’d say my strengths are in verbal communications, managing varied stakeholders, operating in ambiguous environments / learning quickly, “soft” analysis (i.e., Excel), developing presentations, and coaching --> skills that I think most large corporations value but which aren’t exactly differentiating or suggestive of particular roles within EA. I’m also not sure if there are many EA organizations big enough to have a “middle management” cohort (i.e., supervising teams, but not leading an organization).
I’m especially passionate about helping others think about their own giving and the financial side of maximizing donations / minimizing taxes. If I had my druthers, my ranked preferences within EA would probably be: meta-EA, direct global health / poverty work, and x-risks toward the bottom (uncouth, I know).
Thanks in advance for any thoughts!