Just adding to this: there is the EA Consulting Network (EACN), whose members are all, well, EA-aligned consultants, though I don’t know exactly what competencies most people have.
=> https://www.eac-network.com/
Interesting, thanks, I didn’t know about this. That group’s first newsletter says:
The EACN network consists of 200+ members by now
All major consulting firms represented
BCG & McKinsey launched their own internal EA Slack channels—featuring 70+ consultants each
Those are some pretty compelling numbers, but I’d be a lot more optimistic if they were engaged enough to show up in the comments here. (Maybe — I could imagine they’re engaged with EA ideas in other ways, but now we’re into territory where I’d feel like I’d need to do more vetting.)
Posting as an individual who is a consultant, not on behalf of my employer
Hi, I’m one of the co-organizers of EACN, running the McKinsey EA community and currently co-authoring a forum post about having an impact as a management consultant (to add some nuance and insider perspectives to what 80k is writing on the topic: https://80000hours.org/articles/alternatives-to-consulting/).
First let me voice a +1 to everything Jeremy has said here already—with the possible exception that I know several McKinsey partners are interfacing with the EA movement on particular causes like animal welfare, governance of AI, pandemic preparedness, and climate change. However, I don’t know the exact scope of our client work in any of these fields and haven’t heard of projects for EA orgs (I’ve worked on several of these topics for the McKinsey Global Institute; see e.g. this report: https://www.mckinsey.com/business-functions/sustainability/our-insights/climate-risk-and-response-physical-hazards-and-socioeconomic-impacts?cid=app).
Second, I’m happy to jump on a 30-60 minute call in July/August to discuss if the EACN or some of its members can be helpful in making something like this happen—you can reach me at jakob_graabak[at]mckinsey[dot]com. (Luke, Ozzie, any of the Peters, any others?)
One example of how we could help: for “Talent Loans” I can imagine that we could use the McKinsey EA Community to find the right people in a more efficient way than described above. I of course understand that most EA orgs likely won’t become regular McKinsey clients, but I can try to talk to some of our partners about how we could run 2-3 pilot projects with e.g. Open Phil in a mutually beneficial way. Perhaps that would also work as a proof of demand and would drive more people into this space.
Love the idea of having a call and a pilot project (if this is what is most useful). We might even explore the options for pro bono work in the EACN, as I know that some partners at BCG are looking for strong partnerships in their regions. I imagine that might also be the case for McKinsey, Accenture, Bain, … .
I also agree that almost all consultancies already do EA-aligned work (not to the extent we would like them to, of course) and have expertise in many relevant fields. E.g., my last project was an impact assessment (incl. counterfactual impact etc.) of a 300+M€ government grant, which addressed an EA cause area. At Accenture, BCG, and Capgemini, members of the EACN are actively reaching out to partners to push EA-relevant topics even more. So we have a broad network of contact persons within the EACN and the different firms whom we could reach out to, depending on the need.
Jakob and Jona, what do you think about crowdsourcing/creating something like this for EA-relevant consultants and posting about it on the forum once it’s filled? See my response above to Ozzie for context. Jakob, I’ll send an email as well just to make sure you see it. I hear that people tend to be busy at McKinsey!
There is also an Airtable version of that directory that is more up to date; I’ll update the Google sheet.
Hi Peter,
Thanks for the link—I was not aware of this but have added my name to it.
To your question, I don’t know if it would be helpful. I haven’t tried to do consulting for EA orgs yet, and I know that some who have tried to do this have found it hard because of lack of demand. To the first point in your comment: Maybe a document like this and a forum post could unlock some demand, but I’m not sure. The best way to learn would be to simply test it!
Thanks. I agree! On the point of not knowing about the link, I’ll mention that I think it is all too often the case that useful EA resources remain relatively unknown.
I occasionally find out about a resource only after it would have been very helpful for me. And even when I know resources are out there, I often can’t remember where I found them.
With that in mind, I think we could do better/more awareness-raising for good resources. I think EA Forum posts are good for that because the forum is well indexed and easily searchable, and posts can also be found via a Google search. Hence the suggestion of posting about it on the forum. I’d also recommend mentioning it wherever it is relevant (e.g., as a comment on any new posts by consultants who are new to EA).
Already done.
Posting as an individual who is a consultant, not on behalf of my employer
Hi, one such consultant checking in! I had this post open from the moment I saw it in this week’s EA Forum digest, but… I (like many other consultants) work a silly number of hours during the work week, so I’m only just reading the post in detail now.
I’m a member of, but don’t run, the EACN network and my take is it’s a group of consultants interested in EA with highly varied degrees of familiarity / interest: from “oh, I think I’ve heard of GiveWell?” to “I’m only working here because GiveWell rejected my job application.”
80,000 Hours’ old career survey pointed me toward management consulting ~7-8 years ago (affirming a path I was already planning on following), and it’s the only full-time job I’ve had. I’d be surprised if any of us had ever had an EA client (the closest I’m aware of is the Bill & Melinda Gates Foundation), though I’ve unsuccessfully pitched my employer on doing pro bono work with a top GiveWell charity.
I agree with Niklas that it’d make sense for EA groups to start off by hiring existing consultants / consultancies to prove out the use case and demand before expecting a boutique firm to get off the ground, but… as a matter of practice, what I imagine would happen is as follows:
You’d be set up with the global health / social impact / non-profit side of the consultancy (while plenty of us, myself included, do commercial work—and so would never hear about the project)
The “expertise” would come from the more senior members of the consultancy (e.g., Partners), who might know a lot about, say, global health but are less likely to be familiar with EA (both because they’re older and because they’ve built a book of business with the sorts of companies that pay for consulting… which hasn’t been EA)
The “brawn” would come from generalists—which is where there are some folks who are EA-aligned—but who are usually not selected for projects based on their own content expertise
You’d need a ton of consistent demand with a single consultancy to be able to “develop” experts, much less keep up a large enough pool of brawn with EA knowledge to reliably execute this work [which I think cuts in favor of the boutique firm model]. As soon as one project finishes I’m expected to move to the next, so unless something is actively sold and in need of a person at my tenure level the very next day, I’ll be moved on to something else for 3-6 months and won’t be pulled off even if a great EA project sells a week later.
All that said, I’d venture to say almost every major corporation and government relies on generalist consultancies to varying degrees, even for fairly technical / specialized work. I think that should at least raise questions on how important EA-familiarity is for the work described above—it may be a narrower slice of work that really demands it than the author of this post imagines. [To be clear, not trying to shill here—I’m too junior to sell work myself—just sharing an “insider” perspective / trying to help re-calibrate priors.]
I just want to flag that one sort of “regular” consulting I’d love to see in EA is “really good” management consulting. My read is that many of our management setups (leadership training, leadership practices, board membership administration) are fine but not world-class. As we grow, it’s increasingly important to do a great job here.
My impression is that the majority of “management consultants” wouldn’t be very exciting to us, but if there were some who were somewhat aligned or who think in similarly nerdy ways, they could be highly valuable.
Thanks so much for the comment, and congrats on staying on the 80k-suggested job train for ~8 years! In your experience as a consultant, how much do people in the field care about truth? As opposed to satisfying what customers think they want, solving principal-agent problems within a company, etc.
Put another way, what percentage of the time did consultants in your firm provide results that >70% of senior management in a client company initially disagreed with?
I’ve heard a (perhaps flippant) claim that analysts at even top consulting companies believe that their job is more about justifying client beliefs than about uncovering the correct all-things-considered belief (and have recently observed evidence that is more consistent with this explanation than other nearby ones). So I would like to calibrate expectations here.
Posting as an individual who is a consultant, not on behalf of my employer
Let me start off by saying that’s an interesting question, and one I can’t give a highly confident answer to because I don’t know that I’ve ever had a conversation with a colleague about truth qua truth.
That said, my short answer would be: I think many of us care about truth, I think our work can be shaped by factors other than truth-seeking, and I think if the statement of work or client need is explicitly about truth / having the tough conversations, consultants wouldn’t find it especially hard to deliver on that. The only factor particular to consulting that I could see weighing against truth-seeking would be the desire to sell future work to the client… but to me that’s resolved by clients making clear that what the client values is truth, which would keep incentives well-aligned.
My longer answer...
I think most of my colleagues do care about truth, and are willing to take a firm stance on what they believe is right even if it’s a tough message for the client to hear. [Indeed I’ve explicitly heard firm leadership share examples of such behavior… which I think is an indicator that a) it does happen but b) it’s not a given which ties to...]
...I think there’s a recognition that at the end of the day, we have formal signed statements of work regarding what our clients expect us to deliver, and our foremost obligation is to deliver according to that contract (and secondarily, to their satisfaction) rather than to “truth”
If our contracts were structured in a more open-ended manner or explicitly framed around us delivering the truth, I see no reason (other than the aforementioned) why we would do anything other than provide that honest perspective
I wonder the extent to which employees of EA organizations feel competing forces against truth (e.g., I need to keep my job, not rock the boat, say controversial things that could upset donors) - I think you could make a case that consultants are actually better poised to do some of that truth-seeking e.g., if it’s a true one-off contract
To your 2nd question about >70%:
I don’t think this framing is really putting your original question another way (to sprinkle in some consulting-ese I think “the question behind your question” is something else)
That said, my “safe,” not-super-helpful, and please-don’t-selectively-quote-this-out-of-context answer is less than half the time...
...But that’s because most of the work I (and I’d venture to say, most of us) do isn’t about truth-seeking, so it’s not the sort of thing about which reasonable people of goodwill will have meaningful disagreement. Rather, the work is about further developing a client’s hypothesis, or helping them understand how best to pursue an objective, or helping them execute a process in which they lack expertise [all generally in the service of increasing client profitability]
Thanks for the detailed response!
The only factor particular to consulting that I could see weighing against truth-seeking would be the desire to sell future work to the client… but to me that’s resolved by clients making clear that what the client values is truth, which would keep incentives well-aligned.
Hmm, on reflection maybe the issue isn’t particular to consulting. I think the issue here isn’t that people by default have overwhelming incentives against truth, but that actually seeking truth is such an unusual preference in the vast majority of contexts that the whole idea is almost alien to most people. They hear the same words but don’t internalize what they mean at all.
I’m probably not phrasing this well, but to give a sense of my priors: I guess my impression is that my interactions with approximately every entity that perceives themself as directly doing good outside of EA* is that they are not seeking truth, and this systematically corrupts them in important ways. Non-random examples that come to mind include public health (on covid, vaping, nutrition), bio-ethics, social psychology, development econ, climate change, vegan advocacy, religion, the US Democratic party, and diversity/inclusion. Moreover, these aren’t limited to particular institutions: these problems are instantiated in academia, activist groups, media, regulatory groups, and “mission-oriented” companies. My limited experience with “mission-oriented” consultancies is that they’re not an exception.
I think the situation is plausibly better outside of do-gooders. For example, I sort of believe that theoretical CS has much better publication norms than the listed academic fields, and that finance or poker people are too focused on making money to be doing much grandstanding.**
Similarly, I would be surprised, but not overwhelmingly so, if mission alignment is the issue here and things would be okay/great if we took random McKinsey associates who are used to working in profit-seeking industries with higher standards.
I wonder the extent to which employees of EA organizations feel competing forces against truth (e.g., I need to keep my job, not rock the boat, say controversial things that could upset donors) - I think you could make a case that consultants are actually better poised to do some of that truth-seeking e.g., if it’s a true one-off contract
This seems plausible, yeah, though if it’s a one-off contract I also don’t see a positive incentive to seek truth. (To the extent my hypothesis is correct, what you want is consultants who are motivated only by profit + high professional standards.)
* The natural Gricean implicature of that claim is that I’m saying that EA orgs are an exception. I want to disavow that implication. For context, I think this is plausibly the second or third biggest limitation for my own work.
** Even that’s not necessarily true, to be clear.
FYI in case anybody’s wondering: I was primarily referring to issues Neil and I discuss in this report. It is certainly plausible that I overupdated on n=1, of course.
I come in peace, but I want to flag that this claim will sound breathtakingly arrogant to many people not fully immersed in the EA bubble, and to me:
I’m probably not phrasing this well, but to give a sense of my priors: I guess my impression is that my interactions with approximately every entity that perceives themself as directly doing good outside of EA* is that they are not seeking truth, and this systematically corrupts them in important ways.
Do you mean:
a) They don’t make truth-seeking as high a priority as they should (relative to, say, hands-on work for change)?
b) They try to understand what’s true, but their feeble non-EA efforts go nowhere?
c) They make zero effort to seek the truth? (“Not seeking truth”)
d) They don’t care in the slightest what the truth is?
These are worth distinguishing, at least in communications that might plausibly be read by non-EAs. Someone could read what you wrote and conclude, or at least conclude you believe, that before EA the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible. That would be unfortunate.
Hmm, did you read the asterisk in the quoted comment?
*The natural Gricean implicature of that claim is that I’m saying that EA orgs are an exception. I want to disavow that implication. For context, I think this is plausibly the second or third biggest limitation for my own work.
(No worries if you haven’t, I’m maybe too longwinded and it’s probably unreasonable to expect people to carefully read everything on a forum post with 76 comments!)
If you’ve read it and still believe that I “sound breathtakingly arrogant”, I’d be interested in you clarifying whether “breathtakingly arrogant” means a) what I say is untrue, or b) what I say is true but insufficiently diplomatic.
More broadly, I mostly endorse the current level of care and effort and caveats I put on the forum. (though I want to be more concise, working on it!)
I can certainly make my writing more anodyne and less likely to provoke offense, e.g. by defensive writing and pre-empting all objections I can think of, by sprinkling the article heavily with caveats throughout, by spending 3x as much time on each sentence, or just by having much less public output (the last of which is empirically what most EAs tend to do).
I suspect this will make my public writing worse, however.
I did read it, and I agree it improves the tone of your post (helpfully reduces the strength of its claim). My criticism is partly optical, but I do think you should write what you sincerely think: perhaps not every single thing you think (that’s a tall order alas in our society: “I say 80% of what I think, a hell of a lot more than any politician I know”—Gore Vidal), but sincerely on topics you do choose to opine on.
The main thrusts of my criticism are:
Because of the optical risk, and also just generally because criticizing others merits care, you should have clarified (and still can) which of the significantly different meanings I listed (or others) of “they are not seeking truth” you intended.
If you believe one of the stronger forms, eg “before EA the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible,” then I strongly disagree, and I think this is worth discussing further for both optical and substantive reasons. We would probably get lost in definition hairsplitting at some point, but I believe many, many people (activists, volunteers, missionaries, scientists, philanthropists, community leaders, …) for at least hundreds of years have both been trying hard to make the world a better place and trying hard to be guided by an accurate understanding of reality while doing so. We can certainly argue any one of them got a lot wrong: but that’s about execution, not intent.
This is, again, partly optical and partly substantive: but it’s worth realizing that to a lot of the world who predate EA or have read a lot about the world pre-EA, the quoted claim above is just laughable. I care about EA but I see it as a refinement, a sort of technical advance. Not an amazing invention.
I tried answering your question on the object level a few times, but I notice myself either trying to be conciliatory or defensive, and I don’t think I will endorse either response upon reflection.
All right. Well, I know you’re a good guy, just keep this stuff in mind.
Out of curiosity I ran the following question by our local EA NYC group’s Slack channel and got the following six responses. In hindsight I wish I’d given your wording, not mine, but oh well, maybe it’s better that way. Even if we just reasonably disagree at the object level, this response is worth considering in terms of optics. And this was an EA crowd, we can only guess how the public would react.
Jacob: what do y’all think about the following claim: “before EA the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible”
Jacob: all takes welcome!
A: I think it’s false 😛 as a lot of people are interested in the truth and trying hard to make the world a better place
B: also think it’s false; wasn’t this basically the premise of the enlightenment?
B: Thinking e.g. legal reforms esp. french revolution and prussian state, mexican cientificos, who were comteans
B: might steelman this by specifying the entire world i.e. a globalist outlook
B: even then, modernist projects c. 1920 onwards seemed to have a pretty strong alliance between proper reasoning on best evidence and genuine charitable impulses, even where ineffective or counterproductive
B: and, of course, before all the shit and social dynamics e.g. lysenkoism, marxism had a reasonably good claim at being scientific and materialist in its revolutionary aims
C: I find it plausible that one can be very concerned about what is true without being very good at finding out the truth according to rationalists’ standards. Science and philosophy are hard! (And, in some cases, rationalists probably just have weird standards.)
D: Disagree. Analogy: before evidence-based medicine, physicians were still concerned with what was true and trying to make the world a better place (through medical practice). They just had terrible methodology (e.g., theorizing that led to humors and leeches).
D: Likewise, I think EA is a step up in methodology, but it’s not original in its simultaneous concern for welfare and truth.
E: Sounds crazy hubristic..
F: I think this isn’t right, but not necessarily because I think the intersection is all that common (it might be, I don’t know), but more because EA is small enough that its existence doesn’t provide much evidence of a large change in the number of people in this intersection. It could be that a bunch of them just talk to each other more now
I can see Jacob’s perspective and how Linch’s statement is very strong. For example, in development econ, at just one or two top schools, the set of professors and their post-docs/staff might be larger and more impressive than the entire staff of Rethink Priorities and Open Phil combined. It’s very, very far from PlayPumps. So saying that they are not truth-seeking seems at least somewhat questionable.
At the same time, from another perspective I find reasonable, I can see how academic work can be swayed by incentives and trends and become arcane and wasteful. Separately and additionally, the phrasing Linch used originally reduces the aggressive/pejorative tone for me, certainly viewed through a “LessWrong” sort of culture/norms. I think I understand and have no trouble with this statement, especially since it seems to be a personal avowal:
I’m probably not phrasing this well, but to give a sense of my priors: I guess my impression is that my interactions with approximately every entity that perceives themself as directly doing good outside of EA* is that they are not seeking truth, and this systematically corrupts them in important ways.
Again, I think there are two different perspectives here, and a reasonable person could take up both or either.
I think a crux is the personal meaning of the statement being made.
Unfortunately, in his last response (the one I’m replying to), it now comes across as if Jacob is pursuing a point. This is less useful. For example, looking at the responses, it seems like people are just reacting to “EA is much more truth-seeking than everyone else”, which is generating responses like “Sounds crazy hubristic..”.
Instead, I think Jacob could have ended the discussion at Linch’s comment here, or maybe asked for models and examples to get a “gears-level” sense of Linch’s beliefs (e.g., what’s wrong with development econ, can you explain?).
I don’t think impressing everyone into a rigid scout mentality is required, but it would have been useful here.
“Y” is a strictly stronger claim than “If X, then Y”, but many people get more emotional with “If X, then Y.”
Consider “Most people around 2000 years ago had a lot of superstitions and usually believed wrong things” vs “Before Jesus Christ, people had a lot of superstitions and usually believed wrong things.”
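To spell out the logical point with a toy check (a minimal sketch, assuming “strictly stronger” means “entails”; the variable names are purely illustrative): every world in which Y holds is also a world in which “if X, then Y” holds, so no probability distribution over worlds can make the bare claim more likely than the conditional one.

```python
from itertools import product

# Enumerate all truth assignments for X and Y, and check that whenever Y
# is true, the material conditional "if X then Y" is also true.
for x, y in product([False, True], repeat=2):
    conditional = (not x) or y        # X -> Y
    assert (not y) or conditional     # Y implies (X -> Y)
    print(f"X={x!s:<5} Y={y!s:<5} (X -> Y)={conditional}")

# The worlds where Y holds form a subset of the worlds where (X -> Y) holds,
# so any probability distribution over worlds gives P(Y) <= P(X -> Y):
# the bare claim "Y" is the logically stronger one, even though the
# conditional phrasing tends to provoke the stronger emotional reaction.
```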
In hindsight I wish I’d given your wording, not mine, but oh well
Well, my point wasn’t to prove you wrong. It was to see what people thought about a strong version of what you wrote: I couldn’t tell if that version was what you meant, which is why I asked for clarification. Larks seemed to think that version was plausible anyway.
I probably shouldn’t resurrect this thread. But I was reminded of it by yet another egregious example of bad reasoning in an EA-adjacent industry (maybe made by EAs. I’m not sure). So I’m going to have one last go.
To be clear, my issue with your phrasing isn’t that you used a stronger version of what I wrote, it’s that you used a weaker version of what I wrote, phrased in a misleading way that’s quite manipulative. Consider the following propositions:
A. “political partisans in the US are often irrational and believe false things”
B. “Democrats are often irrational and believe false things.”
I claim that A is a strictly stronger claim than B (in the sense that an ideal Bayesian reasoner will assign lower probability to A than to B), but unless it’s said in an epistemically healthy and socially safe context, B will get people much more angry in non-truth-seeking ways than A.
B is similar to using a phrasing like:
before EA the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible
instead of a more neutral (A-like)
the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, is negligible
Note again that the less emotional phrasing is actually a strictly stronger claim than the more emotional one.
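The underlying fact here is just monotonicity of probability; a generic statement of it (not tied to any particular claim above, and reading the unqualified phrasing as a general claim about all times):

```latex
% If claim A entails claim B, the worlds satisfying A are a subset of the
% worlds satisfying B, so every probability measure P satisfies:
\[
  A \models B \quad\Longrightarrow\quad P(A) \le P(B).
\]
% Here A would be the unqualified phrasing ("the intersection ... is negligible")
% and B the qualified one ("before EA, the intersection ... was negligible"):
% read generally, A entails B, so the calmer phrasing is the stronger claim.
```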
Similarly, your initial question:
Do you mean:
a) They don’t make truth-seeking as high a priority as they should (relative to, say, hands-on work for change)?
b) They try to understand what’s true, but their feeble non-EA efforts go nowhere?
c) They make zero effort to seek the truth? (“Not seeking truth”)
d) They don’t care in the slightest what the truth is?
was very clearly (unintentionally?) optimized to really want me to answer “oh no I just meant a),” (unwritten: since that’s the socially safest thing to answer). Maybe this is unintentional, but this is how it came across to me.
A better person than me would have been able to answer you accurately and directly despite that initial framing, but alas, I was/am not mature enough.
(I’m not optimistic that this will update you since I’m basically saying the same thing 3 times, but occasionally this has worked in the past. I do appreciate your attempts to defuse the situation at a personal level. Also I think it bears mentioning that I don’t think this argument is particularly important, and I don’t really think less of you or your work because of it; I like barely know you).
[T]he intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible.
Seems pretty plausible to me this is true. Both categories are pretty small to start with, and their correlation isn’t super high. Indeed, the fact that you think it would be bad optics to say this seems like evidence that most people are indeed not ‘very concerned’ about what is true.
I do agree with you that client quality and incentives are a serious potential problem here, especially when we consider potential funders other than Open Phil. A potential solution here is for the rest of the EA movement to make it clear that “you are more likely to get future work if you write truthful things, even if they are critical of your direct client/more negative than your client wants or is incentivizing you to write/believe,” but maybe this message/nuance is hard to convey and/or may not initially seem believable to people more used to other field norms.
I have posted about this in the Facebook group to let them know. IMO they have done a great job setting that group up and probably have just been focusing on more practical work than keeping up with the EA forum, which is a full time job!
Good find—but it seems pretty sparsely populated, and most consultants at large firms would be tricky to grab one-at-a-time.
Yeah, my hunch is that “EAs doing consulting for non-EA companies” looks very different from “EAs doing consulting for EA orgs”, but I’d be happy to be wrong.