EA needs consultancies
Problem
EA organizations like Open Phil and CEA could do a lot more if we had access to more analysis and more talent, but for several reasons we can't bring on enough new staff to meet these needs ourselves. For example, our needs change over time, so we can't commit to there being much future work of a particular sort within our organizations.[1] This also contributes to there being far more talented EAs who want to do EA-motivated work than there are open roles at EA organizations.[2]
A partial solution?
In the public and private sectors, one common solution to this problem is consultancies. They can be think tanks like the National Academies or RAND,[3] government contractors like Booz Allen or General Dynamics, generalist consulting firms like McKinsey or Deloitte, niche consultancies like The Asia Group or Putnam Associates, or other types of service providers such as UARCs or FFRDCs.
At the request of their clients, these consultancies (1) produce decision-relevant analyses, (2) run projects (including building new things), (3) provide ongoing services, and (4) temporarily “loan” their staff to their clients to help with a specific project, provide temporary surge capacity, provide specialized expertise that it doesn’t make sense for the client to hire themselves, or fill the ranks of a new administration.[4] (For brevity, I’ll call these “analyses,” “projects,” “ongoing services,” and “talent loans,” and I’ll refer to them collectively as “services.”)
This system works because even though demand for these services can fluctuate rapidly at each individual client, in aggregate across many clients there is a steady demand for the consultancies’ many full-time employees, and there is plenty of useful but less time-sensitive work for them to do between client requests.
Current state of EA consultancies
Some of these services don’t require EA talent, and can thus be provided for EA organizations by non-EA firms, e.g. perhaps accounting firms. But what about analyses and services that require EA talent, e.g. because they benefit from lots of context about the EA community, or because they benefit from habits of reasoning and moral intuitions that are far more common in the EA community than elsewhere?[5]
Rethink Priorities (RP) has demonstrated one consultancy model: producing useful analyses specifically requested by EA organizations like Open Philanthropy across a wide range of topics.[6] If their current typical level of analysis quality can be maintained, I would like to see RP scale as quickly as they can. I would also like to see other EAs experiment with this model.[7]
BERI offers another consultancy model, providing services that are difficult or inefficient for clients to handle themselves through other channels (e.g. university administration channels).
There may be a few other examples, but I think not many.[8]
Current demand for these services
All four models require sufficient EA client demand to be sustainable. Fortunately, my guess is that demand for ≥RP-quality analysis from Open Phil alone (but also from a few other EA organizations I spoke to) will outstrip supply for the foreseeable future, even if RP scales as quickly as they can and several RP clones capable of ≥RP-quality analysis are launched in the next couple years.[9] So, I think more EAs should try to launch RP-style “analysis” consultancies now.
However, for EAs to get the other three consultancy models off the ground, they probably need clearer evidence of sufficiently large and steady aggregate demand for those models from EA organizations. At least at first, this probably means that these models will work best for services that demand relatively “generalist” talent, perhaps corresponding roughly to the “generalist researchers” category, plus some of the “operations” category, in this survey of EA organizational needs.[10] Ongoing services may be a partial exception because in that category, demand from each interested client is relatively stable over time,[11] so one might only need demand from 2-3 EA organizations to justify a full-time role providing that service at an EA consultancy.
Below, I comment on the current demand for each of these three models of EA consultancy. Based on polling other Open Phil staff, I think there is substantial demand for all four types of services from Open Phil alone, but I know less about demand from other EA organizations.[12]
Projects
For example, I wish there was an EA consultancy I could pay to do the market research on how much EA organization demand there is for each of these types of services. :)
Here’s an initial brainstorm of project types for which there might be substantial ongoing demand from EA organizations, perhaps enough for them to be provided by one or more EA consultancies:
- Impact assessment, e.g. trying to estimate the counterfactual impact of a grant made or project run a few years ago, by interviewing 5-20 people, gathering relevant facts, and putting some numbers on the magnitude of relevant changes in outcome variables and counterfactual credit to different actors.
- EA event organization and management
- Statistics / data science assistance
- Web development projects for which EA context and habits are helpful, e.g. for new EA discussion platforms, forecasting/calibration software, or interactive visualizations of core EA ideas.
- Polling, survey research, and online experiments (e.g. via Positly+GuidedTrack) on EA-relevant questions[13]
- Marketing pushes for EA things, i.e. figuring out which marketing tools best fit the thing to be promoted and the intended audience, and then executing
- Run an EA-related RFP, filter the responses, and summarize the strongest submissions for the client to consider funding
- Policy development and advocacy
- Run a fellowship program, filter the applications, summarize the strongest candidates for the client to consider funding, and find and manage the training resources and connection opportunities for the fellows
- Run a training program for staff / contractors / grantees / collaborators, a la a superforecasting workshop but with EA-specific content, and perhaps extending longer than one day
- Design and run a prize program
- Design, test, and iterate a training program, a MOOC, an undergraduate course, a summer school program, or other educational materials
- In general, pilot projects for ideas that, if successful, could perhaps become an ongoing program/organization
- Do 80% of the work for a recruitment round for a full-time role at the client organization
- Help communicate some research (that perhaps can't itself be done by consultants) to non-specialist audiences such as policymakers or the general public
- Other ideas?
I’m not sure how much overall demand there is for such projects to be run by EA consultancies,[14] but there is substantial demand for some of them at Open Phil alone (see footnote[15]).
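As a concrete illustration of the first project type above (impact assessment), the "putting some numbers on it" step might look something like the following toy calculation. The function, parameter names, and figures are hypothetical illustrations only, not an established methodology:

```python
# Toy model of a retrospective grant impact assessment. All names and
# numbers here are hypothetical illustrations, not a real methodology.

def counterfactual_impact(outcome_change, prob_happened_anyway, credit_share):
    """Impact credited to the funder: the observed change in the outcome
    variable, discounted by the chance the change would have happened
    anyway, times the funder's share of credit among the actors involved."""
    return outcome_change * (1 - prob_happened_anyway) * credit_share

# E.g. a grant associated with a 1,000-unit improvement in some outcome
# variable, a 30% chance the work would have happened anyway, and credit
# split evenly with the grantee:
print(counterfactual_impact(1000, 0.30, 0.50))  # 350.0
```

In practice the interviews and fact-gathering are where most of the work lies; the arithmetic itself is simple, but making each input defensible is not.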
Ongoing services
Again, an initial brainstorm:
- Initial vetting stages of job applicants
- On-demand EA life/career coaching[16]
- On-demand EA-aware mental health services[17]
- EA-aware legal services and HR services[18]
- Some kinds of content writing
- Writing support (feedback, copyediting, design)
- Donor services
- A community fund / DAF provider
- Fiscal sponsorship for new projects without their own incorporation (yet)
- Other ideas?
Here again, I don’t have a good sense of how much overall demand there is for such ongoing services from EA consultancies,[19] but there is some demand from Open Phil alone.
Talent loans
I frequently think something like “If I could hire an analytically strong EA to work with me for 2 months on X, I would do it, but I can’t hire anyone with that skill level for just 2 months, and also vetting hundreds of applicants and interviewing ~10 of them just to enable 2 months of work wouldn’t be worth it.”
But if McKinsey-style EA consultancies existed and had a track record for hiring conscientious, analytically strong people, then I could effectively hire such EAs for 2 months at a time (via a contract with the consultancy), with the consultancy already having done >90% of the necessary vetting and training.[20]
Talent loans would often serve a similar purpose as outsourced analyses or projects, and I’d need more experience with all three to have a good sense of when I prefer a talent loan to an outsourced analysis or project. However, my initial guess is that I personally might have a need for two 2-4mo EA talent loans per year on average.
I’m not sure how much demand there is for this from others at Open Phil or other EA organizations.[21] Rethink Priorities has made a small number of talent loans before, to Longview and Effective Giving.
Thoughts on offering these services
There are various books, courses, etc. on how to start and run a successful consulting business. I don’t know how good they are, or how relevant their advice is to EA consultancies, but they might be worth a glance.[22]
Probably any single consultancy should provide only one or a few of the services above, not all of them.
If you want to offer some of these services yourself, you could do a bit of market research on how much demand there is for the specific service(s) you think you can provide, and then start pitching potential clients to contract you for an initial chunk of work. Here are some potential obstacles and ways to address them:
- In this post I’ve mostly been thinking about the need for somewhat-established many-person consultancies, which can develop reputations for good client service and good selection and management of individual consultants. Individual freelance consultants can also be helpful, but they can be less convenient for clients, because then the client needs to put more work into vetting and managing the work of each individual consultant, instead of relying on an external firm for that. To overcome this problem you could try to get a job at an existing EA consultancy like Rethink Priorities, though there are very few such positions today.
- Some EA organizations may not have the budget to experiment with external consultants. But, you could encourage them to include some funding for EA consultant experiments in their next grant proposal.
- Your potential clients probably don’t have much time to try things out with an “unproven” consultant. To overcome this, you could complete some example work of the sort you’d like to provide to clients, make it extremely “legible” to prospective clients (i.e. fast and easy to evaluate for quality and plausible usefulness), and then send it to potential clients. E.g. the reason I gave Rethink Priorities a grant to do more work on moral weight is that Jason Schukraft had previously written several reports on moral weight that I found helpful,[23] and I think he knew Open Phil might find that specific kind of work helpful because it followed very directly from the “open questions” listed in my moral patienthood report, and pursued those questions from a similar perspective/framework.
- You might not have as good a picture of the client’s needs as you think you do. There are lots of very subtle things that can make even high-quality work essentially unusable by the client. The best way to address this is to get a call with the potential client and ask them questions to understand in detail what they need and why, but it might be hard to get their time unless you’ve already done some work that is “close enough” to being useful to the client that they can recognize that the call might be worth the time.
- You might not be as good at providing the service as you think you are. If you’ve addressed the challenges above and you’re still not getting any paid consulting work, that might be an indicator that your potential clients don’t think you’re as good a fit for providing those services as you think you are, in which case you should consider moving on and doing something else with your time and energy. Or perhaps get more experience (e.g. at one of the large generalist non-EA consulting firms) and then try again.
Some of these services could perhaps be offered not by new organizations, but by existing organizations deciding to offer particular services alongside their other work. For example Rethink Priorities could expand the range of services it offers, or 80,000 Hours could offer on-demand career coaching while continuing its other work.
Some additional notes of caution
The consultancy model looks promising to me given what I’ve seen in other industries and the constraints I’ve observed when Open Phil considers or tries to hire more staff. That said, I don’t want to oversell it. In addition to the list of challenges in the previous section, I should say:
One obvious failure mode is that EA consultancies, like many non-EA consultancies, might simply cost a lot but provide little value beyond generic advice, sharp-looking slide decks, and a façade of external justification for something a manager had been planning to do anyway. If this happens then I’d like to think EA client organizations would simply stop commissioning those services.
In general, it can be difficult for consultants to understand the goals and heuristics of their clients in enough detail to know how to “hit the mark,” without all the context that one can acquire as a full-time employee of that client. Perhaps especially in EA, even things that seem like minor details and debatable judgment calls can make the ultimate product effectively useless from the client’s perspective. This might be a fundamental problem that limits the utility of the consulting model, at least within EA, to a pretty small set of services.
Should a talented EA provide services via a consultancy, or do more entrepreneurial work that isn’t specifically requested by EA clients, or do something else? It’s debatable which of these will be more impactful. My guess is that experimentation, personal fit, and career capital development should play major roles when choosing between these options.
I haven’t spent as much time thinking through possible objections and reasons for skepticism about the advice in this post as I sometimes do, for time reasons. I hope that the community will discuss the pros and cons of my advice here in more detail in the comments.
Acknowledgements: I got helpful feedback from several people in the EA community on earlier drafts of this post but unfortunately forgot to ask permission to name them here, except for some people I name and quote or paraphrase in specific footnotes.
Notes
[1] Also, there can be large costs to hiring someone who turns out to not be a strong fit.

[2]

[3] Some think tanks do lots of work that is specifically commissioned by clients (the consultancy model), but more often they produce outputs that weren’t specifically requested (the entrepreneur model), including work that is aimed at affecting the behavior of specific actors in a specific way (e.g. GPI’s work on “patient philanthropy”). EA needs both; this post is focused on the need for consultancies. EA has many “entrepreneurship” organizations, including several funded by Open Phil, and we have benefited from their work. Within longtermism (which I know best), I think of e.g. FHI, MIRI, 80,000 Hours, and GPI.

[4] Working on such projects for a client, or especially being “loaned” to a client for a time, provides both the consultant and the client a strong opportunity to evaluate each other for fit w.r.t. a full-time role with the client, but in a “safe” context in which there is no default expectation of a full-time offer from the client, and the consultant’s job security with the consultancy remains intact.

[5] E.g. reasoning that is calibrated, reasoning-transparent, rigorous but willing to draw from any genre of evidence, focused on maximal counterfactual impact, weirdness-tolerant, and impartial (in the moral sense), all at the same time.

[6] Rethink Priorities has done commissioned work on animal consciousness, animal welfare interventions, lead exposure, charter cities, and agricultural land redistribution (commissioned by Open Phil), and on the EA community itself (commissioned by Center for Effective Altruism and 80,000 Hours). Open Phil has found the work we commissioned to be of sufficiently high quality to be useful to us, though I can’t speak to the quality of their other work. They have also produced work specifically requested by (and in some cases paid for by) The Humane League, Farmed Animal Funders, Mercy for Animals, Animal Equality, and Wild Animal Initiative, but I’m not familiar with that work. And between client-requested projects, they have produced a variety of non-requested analyses that seem generally useful to the EA community.

[7] Why not just use RFPs? I’m more optimistic about the consultancy model because it can more often leverage an existing relationship with an existing organization that is known to have hit some quality threshold for similar-ish projects in the past. In contrast, with RFPs the funder often needs to build a new relationship for every funded project, has much less context on each grantee on average, and grantees are less accountable for performance because they have a lower expectation of future funding from that funder compared to a consultancy that is more fundamentally premised on repeat business with particular clients.

[8] Some EA organizations provide significant services to the EA community, in part due to expressions of interest from other EA organizations, e.g. CEA’s work on community health and community discussion platforms. But that is different from more narrowly scoped services being delivered in a particular way for a particular time period under contract with a specific client. One organization (besides Rethink Priorities) that might qualify as an EA consultancy as I use the term here is The Good Growth Co, but I don’t know much about them yet. Two other possible exceptions are Longview and Founders Pledge, which provide donor services to some people who are perhaps “EA-curious,” though they don’t charge their clients for their services. CLTR may be another example.

[9] If this were feasible to do while maintaining quality, I’d probably want to commission enough ongoing analysis from RP on AI governance research questions alone to sustain >10 FTEs there. (A group like GovAI doesn’t fill this need because they generally don’t do projects requested by clients, and they typically want to produce work optimized for academic publication rather than for informing action at EA organizations.) To be clear, I don’t think it would be ideal for a consultancy such as RP to have just one client commissioning >90% of its work; that would seem to restore some of the dynamics that the consultancy model is meant to avoid.

[10] Generalist researchers and operations were the two categories of talent most commonly demanded by the surveyed organizations.

[11] Almost by definition, that is.

[12] I comment elsewhere throughout this post and its footnotes about Open Phil demand for specific services. Beyond that, I got several comments from other Open Phil staffers about demand across all four types of services, along the lines of “I’m not sure, but I think our demand for these things is kind of a lot” or “I frequently want one or more of these things.” Likewise, Seán Ó hÉigeartaigh of CSER and CFI said he expects there would be significant demand for some of these services at CSER/CFI, and that some others at CSER/CFI reported the same opinion.

[13] Rethink Priorities has done a fair bit of this before, for Open Philanthropy, Center for Effective Altruism, Forethought Foundation, The Humane League, Mercy for Animals, Animal Equality, and the Humane Society of the United States.

[14] Max Dalton of CEA told me he thinks CEA has some demand for such project services.

[15] Examples for which other Open Phil staff members told me they plausibly or probably could have made (or could make) good use of relatively generalist EA consultants for short-term projects include: (1) a recent ballot measure project, (2) surge capacity for work test grading for recruiting rounds, (3) figuring out what it would take for Open Phil to have a significant campus recruiting presence, and various other things.

[16] This is currently available from Lynette Bye and Daniel Kestenholz. 80,000 Hours offers career coaching but not “on demand.”

[17] I’ve heard that this is available from Ewelina Tur and Damon Sasi and perhaps others, but I don’t know anything beyond that.

[18] For example, Open Phil wants legal advice that attempts to quantify the risks of different options rather than giving advice consistent with minimizing all risk, and willingness to quantify risk is common among EAs and rare among non-EA lawyers. (In part for this reason, we recently hired EA lawyer Molly Kovite as our in-house counsel.) Likewise, it would be helpful to have EA-familiar HR consultants who could better understand issues raised by our EA staff members and understand certain things in our work culture that are common in EA and less common elsewhere. I’m not sure that EA-friendly legal or HR services need to be their own firms, though; perhaps they could be provided by a handful of EA-friendly or EA-aware lawyers and HR experts at one or more larger firms that EA organizations can hire whenever the legal advice or HR services they need would especially benefit from EA context, a willingness to quantify the risks of different options, etc.

[19] Max Dalton of CEA told me he thinks CEA has some demand for such ongoing services.

[20] I would still need to vet specific consultants for specific projects to some degree, but I would know they were already selected for strong performance on analytically demanding work tests and (typically) prior projects, as well as being filtered for general conscientiousness, communication clarity, ease of interaction, etc.

[21] Max Dalton of CEA told me he thinks CEA has some demand for such talent loans, especially for research.

[22] Some example books that turned up in a quick search include Getting Started in Consulting, Consulting Success, An Insider’s Guide to Building a Successful Consulting Practice, The Boutique, and Coaching and Consulting Made Easy.

[23] In this case, Open Phil funded Jason’s initial reports on moral weight, though we didn’t commission them — instead, some portion of an earlier grant included funding for projects chosen by Rethink Priorities, and they chose to use some of that funding to write reports on moral weight.
I’m happy to speak with anyone who wants to compete with Rethink Priorities! Feel free to send inquiries to peter@rethinkpriorities.org
(I work at RP, as well as at FHI and the EA Infrastructure Fund, but I’m writing in a personal capacity and describing activities I did in a personal capacity.)
On a similar note, there have been at least two times in the last few months when I think I provided quite useful advice to someone who recently started an organisation or plans to start one soon, basically just via me describing aspects of how RP thinks and works. And probably >10 times in the last few months when I provided quite useful advice to researchers or aspiring researchers simply by describing aspects of how RP generates ideas for research projects, prioritises among them, plans them, conducts them, disseminates findings, and assesses impact.*
I’ll also be delivering a 1-hour workshop that partly covers that latter batch of topics to participants of a research training program soon, and would potentially be open to delivering the same workshop to other groups as well. (You can see the slides and links to related resources here. Note that this workshop is something I’m doing in my personal time and expresses personal views only; it merely draws on things I’ve learned from RP.)
I say “quite useful” based on things like the people wanting the calls to run longer, asking for followup calls, writing up strategy docs afterwards and asking for my feedback on them, etc. I don’t yet have much evidence of actual good outcomes in the world from this.
This all increases my enthusiasm about the idea of more people trying to copy or draw on good bits of RP, including via:
- people reading public writeups of aspects of RP’s strategy (e.g. here and here)
- RP producing more such writeups (though as Luke’s post implies, there are many other projects competing for our staff time!)
- maybe RP people delivering some workshops on aspects of this, like I’m now dipping my toes into doing
- people having calls with RP staff to talk about these things
(I also of course think there’s a lot I and RP could usefully copy or draw on from elsewhere, and I’ve indeed already “imported” various things from e.g. CLR and FHI into RP or at least my own work.)
Basically, I’d be excited for lots of orgs and individual researchers to operate as anything on a spectrum from “good RP clones” to “very much their own thing, but remixing good aspects from RP and elsewhere”. I think there’s a lot of room for this.
I’m also now a guest fund manager at the EA Infrastructure Fund, and the version of me that wears that hat would likewise be excited about funding more people to do that sort of thing. (That of course doesn’t mean that I’d want to fund every application like this, but I’d want to fund some and would be excited to have more such applications coming our way.)
(Again, just writing in a personal capacity.)
*I also have my own in-my-view-useful thoughts on these topics, but even if I had deleted all of those from the conversations and just described RP thinking and processes, I think the conversations would’ve been quite useful.
The slides look good, Michael. I also think that there is a lot of value in delivering research training and improving skills in the community — being an EA is basically doing applied research on how to do good better! By the way, here is a quick prioritisation template that Alexander Saeri and I developed based on the SNS/INT framework. There are also other tools on the linked website around intervention prioritisation that might be useful — feel free to take and adapt the spreadsheets if you want to create tools.
Thanks so much Peter! READI could never compete with Rethink Priorities but we might be interested in some coopetition :) I will send you an email!
I’ve been thinking a bit about EA consultancy solutions for a while. A few thoughts:
1. I think many EA orgs are much more resistant to outsourcing large amounts of work than they should be. A few years back I had a surprising amount of trouble getting groups to pay even token amounts for Guesstimate, and I have seen other groups refrain from making payments. This seems due to multiple reasons: they often aren’t sure how their donors would view this (often somewhat expensive) spending, this sort of spending often needs approval from a few parties, and in many situations it just isn’t allowed (university rules).
2. Right now the market for large EA consulting contracts seems very concentrated on Open Phil. If that’s the case, I imagine the value proposition is precarious for the contractors. Often the main benefit of hiring a contractor over an employee is the ease of firing/ending contracts, but this is obviously quite undesirable for the contractor. When you have only one client, being an employee is generally a better deal than being a contractor (with the exception that contractors are sometimes paid significantly more to compensate). See the recent ridesharing contractor debate as an example.
3. As mentioned in (2), contractors generally cost a fair bit more (~1.3x to 2x) than an employee per hour worked. This is because they also need to cover their own work benefits, the time between jobs, and the costs of finding new work. As long as all parties are fine with this, it can work, but it’s something to be aware of. I think a lot of organizations balk when they see contractor prices for kinds of work they’re not used to buying.
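To make that arithmetic concrete, here’s a rough back-of-the-envelope sketch of why rates land in that range; every number and parameter choice below is a hypothetical assumption, not data about any real consultancy:

```python
# Illustrative sketch (hypothetical numbers): why contractor rates land
# around 1.3x-2x an employee's effective hourly cost.

def employee_hourly_cost(salary, benefits_load=0.30, hours_per_year=2000):
    """Fully loaded hourly cost of an employee (salary plus benefits)."""
    return salary * (1 + benefits_load) / hours_per_year

def contractor_hourly_rate(salary, benefits_load=0.30, hours_per_year=2000,
                           utilization=0.65):
    """A contractor must cover their own benefits, the unbilled time between
    engagements, and the cost of finding new work; dividing the loaded cost
    by billable utilization roughly captures all three."""
    return employee_hourly_cost(salary, benefits_load, hours_per_year) / utilization

emp = employee_hourly_cost(100_000)    # ~$65/hour fully loaded
con = contractor_hourly_rate(100_000)  # ~$100/hour at 65% utilization
print(f"employee ~${emp:.0f}/h, contractor ~${con:.0f}/h, "
      f"multiplier {con / emp:.2f}x")
```

A contractor billing between about 50% and 77% of their hours spans roughly the 1.3x-2x multiplier range, which is one way to see why the premium is structural rather than profiteering.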
If we’d like to move in the direction of an “Effective Altruist Economy/Market”, some things that might help kickstart this would be:
1. Setting expectations that contractors will cost money, but are often a good move, all things considered. I imagine it could eventually become common knowledge that contracting relationships are often worthwhile. This would prevent the awkwardness around funders seeing big contractor line-items.
2. Subsidizing contractor rates for small or medium-sized client organizations. For example, EA Funds pays out $0.40 for each $1 paid to a contractor by one of these organizations for research work. In theory there could be some sort of quadratic funding setup for group purchases.
3. Many contractors that organizations themselves come from those organizations. In general, having better systems to facilitate engagement with core Effective Altruists and other promising people will lead to better understandings of needs, which will enable more new consulting groups. I think that understanding the internal needs is really important, but also very difficult.
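The subsidy idea in (2) can be made concrete. A minimal sketch, using the $0.40-per-$1 rate from the comment and made-up payment figures; the quadratic-match variant follows the standard quadratic funding formula, where total funding scales with the square of the sum of the square roots of contributions:

```python
import math

def flat_subsidy(payments, rate=0.40):
    """Flat match: the fund adds `rate` dollars per dollar orgs pay contractors."""
    return rate * sum(payments)

def quadratic_match(payments, pool):
    """Quadratic-funding-style match: ideal total funding is
    (sum of sqrt(contributions))^2, so a service with many distinct
    client orgs gets a larger match than one with a single big client.
    The match is capped by the matching pool."""
    ideal = sum(math.sqrt(p) for p in payments) ** 2
    match = ideal - sum(payments)
    return min(match, pool)

# One org paying $10k vs. four orgs paying $2.5k each (same total spend):
print(round(flat_subsidy([10_000])))               # 4000 (flat match ignores client count)
print(round(quadratic_match([10_000], 50_000)))    # 0 (a single client earns no match)
print(round(quadratic_match([2_500] * 4, 50_000))) # 30000 (broad demand is rewarded)
```

Note the connection to the single-client worry in (2) above: a quadratic-style match pays nothing when only one client funds a contractor, and pays the most when demand is spread across many independent clients, which is exactly the situation you'd want to incentivize.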
Thanks Ozzie, these are good suggestions. To add some thoughts: I think we may benefit from someone building a directory of aspiring freelance consultants. This could help solve coincidence-of-wants issues (i.e., knowing who wants to hire/be hired) and help provide the scale and critical mass needed for multi-person consultancies to form and grow.
It would need to be low effort, because many potential consultants are currently doing very well in their work lives and don't really have time to engage with EA groups and organisations. Even something like this, which we have for the behaviour science community, would be a big help to start. The people behind https://www.eac-network.com/ might be worth contacting, as they could be good people to lead something.
Some advance market commitments (i.e., organisations publicly committing to pay for consulting services if they are offered) might also be helpful. Relatedly, some sort of EA-wide survey of what consulting services orgs need and will pay for might help to catalyse the development of a market. I wonder if RP could do something like that in the next survey round?
RP and others offering incubation support and grants might also help. The EA Infrastructure Fund probably helps too, but most people still don't know much about how to set up and run an organisation. I think Charity Entrepreneurship has a good model to learn from in that regard: you get in, you learn, and then if it goes well you will usually get funded.
To give some context: I am one of the people who set up READI in 2019 to potentially provide behavioural/social science research support and consulting services related to pressing social issues. Our most notable project so far is probably the SCRUB project, which has been funded (through BehaviourWorks Australia, where three of us work) by the government for over a year. We have also finished two literature reviews, on promoting philanthropy and on reducing animal product consumption, both of which are under review at journals.
My experience with that has been that it isn't easy to know what EA orgs want and would pay for without reaching out directly, which is a lot of effort for full-time professionals and also quite inefficient. It's also hard to know how to effectively structure and run such an org. Hence many of the ideas above.
Hi Peter,
Thanks for the ideas here.
My guess is that this is going to be a bit difficult. My impression is that the needs EA organizations know they have are fairly specific; they look like "really great research into key questions," or sometimes very tactical things like bookkeeping or simple website development. "Consultant" is a really broad class of thing and really needs to be narrowed down in conversation.
Generally, organizations don’t have that much time to experiment with non-obvious contractor arrangements. This includes time brainstorming ways they might be useful. If one is having a lot of trouble getting integrated (as a possible contractor), the best method I know of is to just work in one of these organizations for a while and develop a close understanding, or perhaps try to write blog posts on topics that are really useful to these groups and see if these pick up.
Around having things like a directory, I expect the ones that work will be narrower. There are a few smaller "contractor hubs" around, or "talent agencies," that assist with hiring contractors and charge some fee on top. I think this is a pretty good model for low-level work, and I'd like to see more of it. It does require people with either a really good understanding of EA needs (or the relationships), or a really good ability to solve some super-obviously useful problems (like accounting).
If anyone is interested in doing consulting, one easy way to indicate so would be by just posting in a comment in this thread, or there could be a new thread for such work.
My guess is that this would be a tough sell, but I appreciate the idea.
One (small) positive is that I think contractor setups can be some of the easiest to get started with. If you’re just doing contracting with yourself, and maybe one other person, you don’t even need to set up a formal business, you could just do it directly. The big challenges are in finding clients and providing value. You don’t need much scale at first. But those things are challenges.
I imagine it could be considered nice for organizations to hire more new contractors than would otherwise make sense, as that would be effectively subsidizing the industry.
Thanks Ozzie, that’s useful. I don’t have time to respond in full or say much more, but I will mention:
After reflection I still think that the catalyst needs to be something that solves the coincidence of wants issue (i.e., consultants don’t know who would hire them if they took the time to advertise and work in this space and orgs don’t know who they can hire or if it would go well).
I think the next steps that could help could be as simple as i) someone creating something like this for consultants in EA and posting about it on the forum once filled, and ii) the next time someone does a survey of EA leaders, asking them to benchmark how much more they would spend annually on consultants if the talent were available, and for what exactly, then sharing that on the forum also.
I think you can go narrower after the first two steps are done, as right now we don’t have much to work with. Though maybe Luke’s suggestions are evidence enough to form narrow directories in those areas, or to have them as ‘specialisations’ in the initial database?
I think that a talent agency is a great idea. I can imagine a ‘head-hunter/recruiter’ with contacts across both the organisations and the consulting networks would help to accelerate things.
Agree that working in existing orgs is a good idea for potential org founders. I am warming more to the RP incubator idea!
I agree that doing things as a sole contractor is probably easier but that’s also a lot more stressful for many people as you assume full responsibility for the work and need to be across all of the accounting and other aspects. It’s probably got some of the issues of being a sole founder in that it asks a lot of one person. It probably works well in a lot of cases though.
Hi Peter,
I already created a directory for EA aligned consultants.
Best,
Jona
Excellent! Sorry, I didn’t know about this. I will promote it to a few relevant people.
Actually, this makes me think, maybe it would be great if Charity Entrepreneurship’s next “round” was focused on EA consultancies, rather than on a particular cause area? Their usual process seems potentially well-suited to this; they can survey relevant stakeholders regarding what needs exist and what might be best for filling them, do some additional shallow investigation of various ideas like those listed in Luke’s post, then attract people and help them set these things up.
At first glance, it seems at least plausible that:
an EA funder would be happy to fund this whole process
this process would result in, say, ~3 orgs that will provide a fair amount of value at good cost-effectiveness for at least 2 years, and 1 org that might eventually grow up to be something kinda like RP.
Maybe I’ll contact CE to see what they think. I’d also be interested to hear if anyone thinks this would be a bad idea for some reason.
(I also think people applying to EA Funds, trying to learn from or get advice from RP, and/or trying to get funding and support in other ways would be good. But I agree that this won’t always be “enough”.)
Edit: Someone downvoted this, which seems reasonable if they mean to say “I do think that this would be a bad idea”, but then I’d be quite interested to hear why they think so.
I really like this idea, as you might have guessed. The best solution of all probably involves RP working in collaboration with CE, merging RP’s experience of consulting for EA orgs with CE’s ability to train up new people to set up organisations. I think that RP could also think about how to i) get more people in to learn about their processes and ii) support those people to take that knowledge and found new research organisations that focus on different regions, topics or methods but keep much of the prior learning.
Thanks for your thoughtful comment!
Re: reluctance. Can you say more about the concern about donor perceptions? E.g. maybe grantmakers like me should be more often nudging grantees with questions like “How could you get more done / move faster by outsourcing some work to consultants/contractors?” I’ve done that in a few cases but haven’t made a consistent effort to signal willingness to fund subcontracts.
What do you mean about approval from a few parties? Is it different than other expenditures?
Re: university rules. Yes, very annoying. BERI is trying to help with that, and there could be more BERIs.
Re: “isolated to Open Phil.” Agree that the consultancy model doesn’t help much if in practice there’s only one client, or just a few — hence my attempt (mostly in the footnotes) to get some sense of how much demand there is for these services outside Open Phil. Of course, with Open Phil being the largest funder in the EA space, many potential clients of EA consultancies are themselves in part funded by Open Phil, but that doesn’t seem too problematic so long as Open Phil isn’t institutionally opposed to subgranting/subcontracting.
(Even within Open Phil, a bit of robustness could come from multiple teams demanding a particular genre of services, e.g. at least 3 pretty independent teams at Open Phil have contracted Rethink Priorities for analysis work. But still much safer for contractors if there are several truly independent clients.)
Re: prices. Seems like an education issue. If you find you need additional validation for the fact that contractors have good reasons for costing ~1.3x to 2x as much as an employee per hour worked, feel free to point people to this comment. :)
Re: subsidizing. Yes, this would be interesting to think more about. There’s even a model like Founders Pledge and Longview where donors fund the service entirely and then the consultant provides the services for free to clients (in this case, donor services to founders and high-net-worth individuals).
I’m struggling to parse “Many contractors that organizations themselves come from those organizations.” Could you rephrase?
Definitely agree that understanding the internal needs of clients is difficult. Speaking as someone trying to communicate my needs/desires to various grantees and consultants, it also feels difficult on this end of things. This difficulty is often a major reason to do something in-house even if it would in theory be simpler and more efficient to outsource. E.g., it’s a major part of why Open Phil has built a “worldview investigations” team: it’s sort of weird to have a think tank within a grantmaker instead of just funding external think tanks, but it was too hard to communicate to external parties exactly what we needed to make our funding decisions, so the only way forward was to hire that talent internally so we could build up more shared context etc. with the people doing that work. That was very expensive in staff time, but ultimately the only way to get what we needed. In other cases, though, it has been possible for clients to communicate what they need to consultants. One person I spoke to recently suggested that programs like RSP could be a good complement to consultancy work, because they allow more people to hang out and gain context on how potential future clients (in that case FHI, but also sort of “veteran hardcore longtermists in general”) think about things and what they need.
(Personal views only)
I found this post and the comments very interesting, and I’d be excited to see more people doing the sort of things suggested in this post.
That said, there’s one point of confusion that remains for me, which is somewhat related to the point that “Right now the market for large EA consulting seems very isolated to OpenPhil”. In brief, the confusion is something like “I agree that there is sufficient demand for EA consultancies. But a large enough fraction of that demand is from Open Phil that it seems unclear why Open Phil wouldn’t instead or also do more in-house hiring.”
I think the resolution of this mystery is something like:
Really Open Phil should and plans to do both (a) more in-house hiring and (b) more encouragement and contracting of EA consultancies, but this post just emphasises one half of that
There are many reasons why Open Phil doesn’t want to just hire more people in-house, and “our needs change over time, so we can’t make a commitment that there’s much future work of a particular sort to be done within our organizations” is actually a smaller part of that than this post (to me) implies
Some other reasons are discussed in Reflections on Our 2018 Generalist Research Analyst Recruiting and somewhere in Holden Karnofsky (Open Philanthropy) | EA Global: Reconnect 2021 (I can’t remember the relevant time stamp, unfortunately)
Does that sound right to you?
---
The rest of this comment just explains my confusion a bit more, and may be worth skipping.
The post says:
But then elsewhere you (Luke) write things like:
And:
And:
In light of this and other things, I guess it seems to me like Open Phil is big enough, RP researchers are generalist enough (or are sufficiently interested and capable in multiple Open Phil focus areas), and demand will continue to remain high enough that it seems like it also could really make sense for Open Phil to hire more people who are roughly like RP researchers.
It seems one could’ve in the past predicted, or at least can now predict, that some RP researchers will continue to be in demand by someone at Open Phil, for some project, for at least few years, which implies that they or similar people could also be hired in-house.
(I’m not saying such people should be hired in-house by Open Phil. I think the current set up is also working well, hence me choosing to work at RP and being excited about RP trying to scale its longtermist work relatively rapidly. It’s just that this makes me think that “our needs change over time, so we can’t make a commitment that there’s much future work of a particular sort to be done within our organizations” isn’t really as large a cause of the rationale for EA consultancies as this post seems to me to imply?)
A couple quick replies:
Yes, there are several reasons why Open Phil is reluctant to hire in-house talent in many cases, hence the “e.g.” before “because our needs change over time, so we can’t make a commitment that there’s much future work of a particular sort to be done within our organizations.”
I actually think there is more widespread EA client demand (outside OP) for EA consulting of the types listed in this post than the post itself represents, because several people who gave me feedback on the post said something like “This is great, I think my org has lots of demand for several of these services if they can be provided to a sufficient quality level, but please don’t quote me on that because I haven’t thought hard enough about this and don’t want people to become over-enthusiastic about this on the basis of my off-the-top-of-my-head reaction.” Perhaps I should’ve mentioned this in the original post.
Contractors are known to be pricey and have a bit of a bad reputation in some circles. Research hires have traditionally been dirt cheap (though that is changing). I think if an org spends 10-30% of its budget on contractors, it would be treated with suspicion. It feels like a similar situation to how a lot of charities tried to have insanely low overheads (and many outside EA still do).
I think that grantmakers / influential figureheads making posts like yours above, and applying some pressure, could go a long way here. It should be obvious to the management of the nonprofit that the funders won’t view them poorly if they spend a fair bit on contractors, even if sometimes this results in failures. (Contract work can be risky for clients, though perhaps less risky than hiring.)
At many orgs, even regular expenditures can be fairly annoying. Contracting engagements can be more expensive and more unusual, so new arrangements sometimes have to be figured out. I’ve had some issues around hiring contractors myself in previous startups for a similar reason: the founders would occasionally get cold feet, sometimes after I had agreed to an arrangement with a contractor.
I agree. The main thing for contractors is the risk of loss of opportunities. So if there were multiple possible clients funded by one group, but each makes separate decisions, and that one group is unlikely to stop funding all of those subgroups at once, things should be fine.
Agreed
Sorry, this was vague. I meant cases where:
1) Person A is employed at Organization B.
2) Person A leaves employment.
3) Person A later (or immediately) joins Organization B as a contractor.
I’ve done this before. The big benefit is that person A has established a relationship with Organization B, so this relationship continues to do a lot of work (similar to what you describe).
Yep, this is what I was thinking about above in point (3) at the bottom. Having more methods to encourage interaction seems good. There’s been a bit of discussion of having more coworking between longtermists in the Bay Area, for example; the more we have things like that, the better I’d expect things to be. (Both because of the direct connections, and because it could make it much easier to integrate more people, particularly generalists.)
I think that depends hugely on the industry. In software, where I work, everyone I know who is a freelancer prefers to stay that way, even if they work for an extended period for just one customer, and German law (which puts up a number of rules about contractors working for a single customer) is seen as a nuisance by them (though it’s no doubt good for contractors who have less negotiating power).
The EA Infrastructure Fund seems like the go-to place to support such projects if anyone reading this is up to it (other than perhaps Open Phil, where lukeprog works; he may give more information if that’s relevant). They are actively encouraging and looking for people to apply, and you can apply at any time.
So if you think you may be a good fit for setting up a project or service along these lines, now would be a great time to do so!
I agree, the EA Infrastructure Fund seems like a great source of funding for launching potential new EA consultancies!
Could OpenPhil run such a consultancy? You could hire people you only expected to have enough work to partially sustain, and then rent out their services to other organizations for the remainder. This could be a good way of proving out the business model. If successful, you could then spin it out.
My impression is many consultancies have their ex-employer as their primary client so this might not be so unusual.
I don’t think that would play to Open Phil’s comparative advantages especially well. I think Open Phil should focus on figuring out how to move large amounts of money toward high-ROI work.
(all opinions my own, in this and other threads on this post).
Do people generally think there’s greater marginal value in starting EA consultancies than in established EA orgs?
If any of the readers a) believe this and b) work in an established EA org, one obvious way to get more consultancies going is to leave your existing org to start a competitor to RP.
Because you’re a “known quantity” (having already been vetted, etc.), you and whoever you hire are more trustworthy and will be given the benefit of the doubt for startup funding, initial projects, etc.
To the extent that it’s easier to scale consultancies than the structures of established EA orgs, this will also mean we can better leverage EA talent.
If anyone’s thinking seriously about doing as Linch suggests and would like to talk about the nuts and bolts of consulting, feel free to get in touch. I’ve been consulting independently for four years and am happy to share what I know/discuss potential collaborations.
I’d love to talk to you about this! Sent you a DM
The problem I’m trying to solve (at the top of the post) is that (non-consultancy) EA organizations like Open Phil, for a variety of reasons, can’t hire the talent we need to accomplish everything we’d like to accomplish. So when we do manage to hire someone into a specific role, I think their work in that role can be highly valuable, and if they’re performing well in that role after the first ~year then my hunch is they should stay in that role for at least a few years. That said, we’ve had staff leave and become a grantee/similar instead, and I could imagine some staff leaving to become an EA consultant at some point if they think they can accomplish more good that way and/or if they think that’s a better fit for them personally.
Hmm I think the main reason to start a consultancy is for scalability, since for whatever reasons existing orgs can’t hire fast while maintaining quality.
I do think value of time is unusually high at Open Phil compared to the majority of other EA orgs I’m aware of, which points against people leaving Open Phil specifically.
Another option is if you’re an established consultant/independent researcher within the EA sphere and you want to leverage this to start a consultancy group.
Posting as an individual who is a consultant, not on behalf of my employer
Thanks for the great post and the insightful comments! Building on your thoughts, some additional comments from a consultant’s perspective (I worked two years at BCG on 10+ projects in the public, private, and social sectors; founded the Effective Altruism and Consulting Network; and was Vice-President for EA Austria):
On the need for consulting services: Generally speaking, I agree that consultancy can, in specific circumstances (e.g., clearly defined objective, no expertise or free resources in-house, enough capacity on the client side to provide input and guidance, …), unlock value for the EA community, and we have already received several requests from different EA orgs via the Effective Altruism and Consulting Network (e.g., calculating a business case, researching some data, assessing the feasibility of planned projects). I also believe that investments in consultancies can be a waste of money if the project isn’t clearly defined. There is also variance in quality in the market (the same general principles as with every purchasing decision apply: you need to find the right fit for your needs, the more money you spend the higher the likelihood of high-quality output, …). Additional efficiency gains for the EA community (esp. for local groups) might lie in setting up a shared service centre for EA orgs. Shared service centres offer standardized services (some of them mentioned above) such as support with marketing/layouts, HR services, reviews/translations, etc. The difference from consultancy services is that a shared service centre covers much more of the routine tasks every org has to deal with and is meant to be long-term support (vs. the short-term/project engagements of consultants).
On the need for an EA-specific consultancy: While I generally believe there might be specific services where it is useful to have EA experience and to build expertise over the years (e.g., EA trainings, the great work RP is doing), I would generally argue that many of the tasks listed above can be done by non-EA consultancies, as:
many of them need little to no EA-specific input (e.g., the technical side of web development)
customization is needed for almost every client, and a lot of the required input can be provided by the EA org/client with little time investment (see also the comment by JeremyR; e.g., for the standard HR processes every org has)
you can benefit from the years of experience from different industries that normal consultants bring to the table
you as the client are in the driver’s seat in structuring the project, so you get out of it what you want (see also the next bullet)
On the value of consulting services and the role the client plays: In addition to the points above on the mixed quality of providers and projects, I would argue that most people underestimate the role clients play in delivering a successful project. This starts with assessing whether a consulting engagement is the right solution for the problem, deciding how much money to spend, and choosing the consultancy/team. Clients have immense power to tailor the team, project set-up, and deliverables to their needs. Examples of good practices I have seen include partnering client and consulting team members to make sure knowledge transfer takes place in both directions, as well as the client asking us for ways we could be wrong.
One way to drive this further would be to (1) assess and structure the different (future) needs, concerns, etc., (2) identify relevant segments for external support, and (3) define the best delivery model for the prioritized needs (in-house, shared service centre, EA consultancy, freelancer, normal consultancy).
+1 to all Jona writes here—with the caveat that consulting firms like McKinsey or BCG can also help you scope the project and prioritize what’s most important to work on. This of course requires some level of trust (like in all professional services where the client may not know their exact needs), which strengthens the case for using EA consultants at least for pilot projects until norms around using consultants are well-established.
Posting as an individual who is a consultant, not on behalf of my employer
To complement:
I think the discussion would benefit from a clearer distinction between research on the one hand and (strategy) consulting on the other.
Of course, research is often part of a consulting project, but a different skillset is required to a) perform diligent quantitative/qualitative research, or b) strategically steer decisions and projects.
From the discussion and RP’s positioning, I observe that there definitely is a need for a) research.
I would be interested in how much need there is for b) (strategy) consulting. Below is a list of my (bold) hypotheses about potential projects that might be interesting to EA-aligned organisations:
- Developing a marketing strategy for NGOs
- Developing a recruitment strategy for NGOs
- Optimizing cost structure of NGOs
- Project management
- Influencing senior decision makers / policy makers
- Facilitating workshops
I would highly appreciate if you could falsify / verify the need for these types of projects, or complement the list as you see fit—thank you!
Great post! Thanks for sharing this!
Nonlinear has actually been considering doing almost half of these ideas, particularly prizes, RFPs, training, recruiting, mental health, and doing market research about which services would be the most useful. We’ll definitely reach out to you privately about possible plans because we’d love to get your input on what would be most helpful for OpenPhil.
We are also looking for people or organizations that might be a good fit for these projects and we will be able to provide mentorship, funding, and introductions, so if any of these ideas excited you and seem like a potentially good fit, please reach out to me! t.ly/sBUB
Rethink Charity does this! https://www.rethinkprojects.org/fiscal-sponsorship
Thanks, I didn’t know this!
Thanks! I might use this in the future :)
Thanks for writing this.
I think your logic follows for research, policy etc. But I’m not sure about all the things you name.
Why is it reasonable to assume the best people to hire for web development, marketing or events management would be EAs rather than a standard web dev org?
If anything it seems to me you’d prefer aligned people took impactful roles rather than replicating non impactful roles in the EA community. The whole point of 80k is that there is more impact by choosing non-standard careers.
What’s more, it feels off. I think if I heard that the libertarians wanted to hire libertarian web devs and libertarian events managers, I’d think they were off track. It’s hard to pin down why, but maybe I’d feel it was becoming a cash grab.
In short, I agree for things that are EAs’ comparative advantage (policy, research, etc.), but am unconvinced in general. Happy to give a case-by-case rundown of the above if that’s useful. Feel free to break the above arguments.
Yeah, I originally had the same thought, and I considered e.g. web development, event management, legal services, and HR services as not benefiting enough from EA context etc. to be worth the opportunity cost of EA talent, but then several people at multiple organizations said “Actually we’ve struggled to get what we want from non-EA consultants doing those things. I really wish I could contract EA consultants to do that work instead.” So I added them to the list of possibilities for services that EA consultancies could provide.
I’m still not sure which conditions make it worth the opportunity cost of EA talent to provide these kinds of services, but I wanted to list them as possibilities given the feedback I got on earlier drafts of this post.
See also footnote 18.
Will ponder. Thanks again for going to the effort. I largely agree regardless.
A related confusion to me is why there is EA comparative advantage in policy/research, like naively you’d expect external policy groups, consultancies, and academia to do a fine job of it. Yet in practice I think many EA orgs have paid academics to investigate questions of interest to EAs, and while there’s a lot of interesting work, the hit rate is lower than what we might naively have expected (moderate confidence, lukeprog and others can correct me on whether this gestalt view is correct).
So maybe this is a useful reference class to consider.
I don’t think EAs have a comparative advantage in policy/research in general, but I do think some EAs have a comparative advantage in doing some specific kinds of policy/research for other EAs, since EAs care more than many (not all) clients about certain analytic features, e.g. scope-sensitivity, focus on counterfactual impact, probability calibration, reasoning transparency of a particular sort, a tolerance for certain kinds of weirdness, etc.
Your question:
Answer from the post:
I think the emphasis is on the relationship with the EA community. You do not need to be an EA-dedicated consultancy team, but you should have some group dedicated to serving EA interests.
I believe this is what all consultancy firms do. They take care of their customer organisations by becoming familiar with their expectations, aspirations, and goals. (And it is easier if the people carrying out the work share the aspirations of the organisations they serve, because they are likely to be more receptive.)
Here, the post is only asking new or existing consultancies to give some attention to the EA community.
You mention
several times. How critical do you think it is to have quality at our standards or higher? One reason I’m suspicious of this is that RP chose a particular standard and we happen to be an existing example of something that works, so naively it’d be quite surprising if we hit exactly the right level in the quality vs. quantity/scalability tradeoff, such that anything worse than us is ~useless.
Another reason I’m suss is that there are quality differences within RP’s work. For example, Jason’s work on invertebrate sentience is considerably higher quality than some of the (nonpublic) projects I did, which are (I hope) still quite useful to the relevant funders.
To decompose this a little, I think there are several dimensions along which quality can be sacrificed while the work remains useful to EA orgs, the most obvious of which is time. Several projects you mentioned were done on what appear to be very short timelines (both calendar time and clock time), which makes it hard for other people to replicate RP’s performance, either because they’re more junior/otherwise weaker researchers or because they have more external commitments.
For example, David and Jason’s report on charter cities was completed in 100 hours, a reasonable fraction of which was extra legwork for external writeup/following up with affected parties, after the original report was delivered to Open Phil. My impression is that the bulk of the work was done on a fairly short calendar time cycle too, in ways that may be hard for external parties to replicate. But naively the report would still be useful to Open Phil and cost-effective to fund if it took 200 hours to complete and 3x the calendar time.
Other dimensions on which I can imagine orgs accepting work at a lower bar than RP’s while still finding it useful: EA alignment, reasoning transparency, accuracy, thoroughness, formatting, etc. Of course, some of these dimensions are more important than others, and they are not uncorrelated.
Other Rethink Priorities clients (including at Open Phil) might disagree, but my hunch is that, if anything, higher quality and lower quantity is the way to go, because a client like me has less context on consultants doing some project than I do on someone I’ve directly managed (internally) on research projects for two years. So e.g. Holden vetted my Open Phil work pretty closely for two years and now feels less need to do so, because he has a sense of what my strengths and weaknesses are, where he can just defer to me, and where he should make sure to develop his own opinion. That’s part of the massive cost of hiring, training, and managing internal talent, but it eventually gets you to a place where you don’t need to be so nervous about major crippling flaws (of some kinds) in someone’s work. A major purpose of outsourcing analysis work, by contrast, is to get some information you need without first building up months or years of deep context with the analyst. So how can I trust the work of someone I have so little context with? I think “go kinda overboard on legibility / reasoning transparency” and “go kinda overboard on quality / thoroughness / vetting” are two major strategies, especially when the client is far more time-constrained than funding-constrained (as Open Phil is).
In this case, do you think RP should focus more on quality and less on quantity as we scale, by satisficing on quantity and focusing/optimizing on research quality (concretely, this may mean being very slow to add additional researchers and primarily using them as additional quality checks on existing work, over trying to have more output in novel work)? This is very much not the way we currently plan to scale, which is closer to focusing on maintaining research quality and trying to increase quantity/output.
(reiterating that all impressions here are my own).
I don’t feel strongly. You all have more context than I do on what seems feasible here. My hunch is in favor of RP maintaining current quality (or raising it only a tiny bit) and scaling quickly for a while — I mostly wanted to give some counterpoints to your suggestion that maybe RP should lower its quality to get more quantity.
Another reading of this is that maybe RP is leaving a bunch of gains on the table by not trying to be higher quality.
I think right now (as you know) while we’d like to have higher quality (and we expect to improve naturally somewhat by gaining more experience both as individual researchers and in research management), we’re prioritizing organizational resources more towards scalability/output than quality.
I’m also interested in whether this is mistaken.
Just to clarify, the 100 hours was actually just for the original report and doesn’t include any of the extra legwork for the public version, because I forgot to update that time-taken estimate in the public version. The extra work for the public version was an additional 10-15 hours from the two of us, but there was also work from others reviewing the report. This extra work took place over 5 weeks of calendar time.
Thanks for the clarification!
Quick note to anyone interested that I’ve been researching the idea of an EA tech-agency-cum-consultancy for a while now. I’m hoping to post a sequence on it within the next week or two. When the next draft is ready I’ll link to it here, but if anyone’s curious about the idea feel free to PM me in the meantime.
This is now ready for proofreading for anyone who’s interested. I’ve left a handful of questions as comments on the doc that I’d be particularly interested in opinions/support on. Part 1 is here—there are five parts, each part linking to the next:
https://docs.google.com/document/d/1WbMzIQBNKw_2RN0O5hNM_mLuRSnlWHYxcOMxqXNgiYI/edit
Do you, or anyone else, have some more insight into the consultancy work that’s needed around statistics and data science?
One angle on how this could go poorly is something I call ‘failure cascades’ (a la information cascade). I’m excited that this has been incorporated as a concept in the EA Ops channel, and I think it would be valuable for EA consultants to keep it in mind.
Roughly, a failure cascade could be:
> An EA consultancy conducts a search for a really good immigration law firm that they can use when helping EA orgs with immigration. They find a good law firm and proceed to help a dozen EA orgs with visas. Unfortunately it turns out this law firm misunderstood an important component of the H1B renewal process, and suddenly three years later a bunch of EAs across the ecosystem get kicked out of the USA. So a single judgment failure ‘cascades’ across the organizations, causing a relatively catastrophic situation for the community compared to a world where infrastructure/judgment was less centralized.
Hi all, Haydn and I figured this post was a good place to plug our startup, Pantask. While the services we provide are not as advanced as those listed here, Pantask can offer assistance to EA orgs that need help with day-to-day operations but can’t afford to hire full-time employees. We provide general virtual assistance services, such as organizing chaotic troves of data, managing schedules and emails, and helping with brain debugging. We also offer graphic design, copyediting, transcription, and writing services. Our assistants can also perform certain kinds of research (the kind you can do in <8 hours, generally speaking), such as finding service providers, information on grants, etc.
Essentially, if the task can be done by a reasonably competent person without a specialized skill set, our assistants can very likely do it for you. In addition to being EA owned, some of our assistants are also EAs and even more are familiar and interested in EA. We’ve served EA charities before. We charge 30 USD per hour. If you’re not used to delegating tasks, we can help you review the tasks you delegate to make sure they are clear, at no additional cost.
You can send tasks to ask@pantask.com, or email either of us at mati@pantask.com or haydn@pantask.com, or call us at (570) 509-3366. You can also schedule me on: https://calendar.google.com/calendar/u/0/appointments/schedules/AcZssZ0Dc0qvV3EbGsGR39_dhoeusVtX6rwnpfXpGVHwRHPGPuIjTd1GPiCRz9qMwTkIZKCPPVB0AQQm
Just a quick comment to say that SoGive would be well positioned to be another consultancy providing services like Rethink.
We have collaborated with Rethink before (see this research) and are in moderately frequent informal contact with them.
We have c. 10 analysts who are a mixture of volunteers and staff. Mostly volunteers, as the organisation is funded solely by me, and there is a limit to what I can afford.
I’m open to the idea of us doing more of this sort of work, although it would need a discussion before we commit to anything, as we already have a separate strategic focus in mind.
Wouldn’t this problem be solvable by creating a database/network of existing consultants, freelancers, etc. who have a background in effective altruism? Then, whenever needed, you could assemble a team from this network and just pay their regular employers.
Also, this might in some cases be accepted as (price reduced) pro-bono work. And you would get free advertisement for EA Orgs on top.
My initial hunch is that the amount of EA-specific work is not big enough to sustain an EA-dedicated consultancy, especially considering the large number of different fields that might benefit from specialization. However, I do not have a good picture of the demand.
In any case, this sounds interesting and I would be interested in hearing more about this.
Just adding to this: there is the EA Consulting Network, whose members are all, well, EA-aligned consultants, though I don’t know exactly what competencies most people have.
=> https://www.eac-network.com/
Interesting, thanks, I didn’t know about this. That group’s first newsletter says:
The EACN network consist of 200+ members by now
All major consulting firms represented
BCG & McKinsey launched their own internal EA slack channels—featuring 70+ consultants each
Those are some pretty compelling numbers, but I’d be a lot more optimistic if they were engaged enough to show up in the comments here. (Maybe — I could imagine they’re engaged with EA ideas in other ways, but now we’re into territory where I’d feel like I’d need to do more vetting.)
Posting as an individual who is a consultant, not on behalf of my employer
Hi, I’m one of the co-organizers of EACN, running the McKinsey EA community and currently co-authoring a forum post about having an impact as a management consultant (to add some nuance and insider perspectives to what 80k is writing on the topic: https://80000hours.org/articles/alternatives-to-consulting/).
First, let me voice a +1 to everything Jeremy has said here already—with the possible exception that I know several McKinsey partners are interfacing with the EA movement on particular causes like Animal Welfare, Governance of AI, pandemic preparedness, and climate change. However, I don’t know the exact scope of our client work in any of these fields and haven’t heard of projects for EA orgs. (I’ve worked on several of these topics for the McKinsey Global Institute; see e.g. this report: https://www.mckinsey.com/business-functions/sustainability/our-insights/climate-risk-and-response-physical-hazards-and-socioeconomic-impacts?cid=app)
Second, I’m happy to jump on a 30-60 minute call in July/August to discuss if the EACN or some of its members can be helpful in making something like this happen—you can reach me at jakob_graabak[at]mckinsey[dot]com. (Luke, Ozzie, any of the Peters, any others?)
One example of how we could help: for “Talent Loans” I can imagine that we could use the McKinsey EA Community to find the right people in a more efficient way than described above. I of course understand that most EA orgs likely won’t become regular McKinsey clients, but I can try to talk to some of our partners about how we could run 2-3 pilot projects with e.g. Open Phil in a mutually beneficial way. Perhaps that would also work as a proof of demand and would drive more people into this space.
Love the idea of having a call and a pilot project (if this is what is most useful). We might even explore the options for pro bono work in the EACN, as I know that some partners at BCG are looking for strong partnerships in their regions. I imagine that might also be the case for McKinsey, Accenture, Bain, … .
I also agree that almost all consultancies already do EA-aligned work (not to the extent we would like, of course) and have expertise in many relevant fields. E.g., my last project was an impact assessment (incl. counterfactual impact etc.) of a 300+M€ government grant that addressed an EA cause area. At Accenture, BCG, and Capgemini, members of the EACN are actively reaching out to partners to push EA-relevant topics even more. So we have a broad network of contact persons within the EACN and the different firms we could reach out to, depending on the needs.
Jakob and Jona, what do you think about crowdsourcing/creating something like this for EA-relevant consultants and posting about it on the forum when filled? See my response above to Ozzie for context. Jakob, I’ll send an email as well just to make sure you see it. I hear that people tend to be busy at McKinsey!
There is also an Airtable version of that directory that is more up to date, I’ll update the google sheet
Hi Peter,
Thanks for the link—I was not aware of this but have added my name to it.
To your question, I don’t know if it would be helpful. I haven’t tried to do consulting for EA orgs yet, and I know that some who have tried to do this have found it hard because of lack of demand. To the first point in your comment: Maybe a document like this and a forum post could unlock some demand, but I’m not sure. The best way to learn would be to simply test it!
Thanks. I agree! On the point of not knowing about the link, I’ll mention that I think it is all too often the case that useful EA resources remain relatively unknown.
I occasionally find out about a resource after it would have been very helpful for me. Even when I know resources are out there, I often can’t remember where I found them.
With that in mind, I think we could do better/more awareness-raising for good resources. I think EA Forum posts are good for that because the forum is well indexed and easily searchable, and posts can also be found via a Google search. Hence the suggestion to post about it on the forum. I’d also recommend mentioning it wherever relevant (e.g., as a comment on any new posts by consultants new to EA).
Already done.
Posting as an individual who is a consultant, not on behalf of my employer
Hi, one such consultant checking in! I had this post open from the moment I saw it in this week’s EA Forum digest, but… I (like many other consultants) work a silly number of hours during the work week so just reading the post in detail now.
I’m a member of, but don’t run, the EACN network and my take is it’s a group of consultants interested in EA with highly varied degrees of familiarity / interest: from “oh, I think I’ve heard of GiveWell?” to “I’m only working here because GiveWell rejected my job application.”
80,000 Hours’ old career survey pointed me toward management consulting ~7-8 years ago (affirming a path I was already planning to follow), and it’s the only full-time job I’ve had. I’d be surprised if any of us had ever had an EA client (the closest I’m aware of is the Bill & Melinda Gates Foundation), though I’ve unsuccessfully pitched my employer on doing pro bono work with a top GiveWell charity.
I agree with Niklas that it’d make sense for EA groups to start off by hiring existing consultants / consultancies to prove out the use case and demand before expecting a boutique firm to get off the ground, but… as a matter of practice, what I imagine would happen is as follows:
You’d be set up with the global health / social impact / non-profit side of the consultancy (while plenty of us, myself included, do commercial work—and so would never hear about the project)
The “expertise” would come from the more senior members of the consultancy (e.g., Partners), who might know a lot about, say, global health but are less likely to be familiar with EA (both because they’re older and because they’ve built a book of business with the sorts of companies that pay for consulting… which hasn’t been EA)
The “brawn” would come from generalists—which is where there are some folks who are EA-aligned—but who are usually not selected for projects based on their own content expertise
You’d need a ton of consistent demand with a single consultancy to be able to “develop” experts, much less keep up a large enough pool of brawn with EA knowledge to reliably execute this work [which I think cuts in favor of the boutique firm model]. As soon as one project finishes I’m expected to move to the next, so unless something is actively sold and in need of a person at my tenure the very next day I’ll be moved on to something else for 3-6 months and won’t be pulled off even if a great EA project sells 1 week later
All that said, I’d venture to say almost every major corporation and government relies on generalist consultancies to varying degrees, even for fairly technical / specialized work. I think that should at least raise questions on how important EA-familiarity is for the work described above—it may be a narrower slice of work that really demands it than the author of this post imagines. [To be clear, not trying to shill here—I’m too junior to sell work myself—just sharing an “insider” perspective / trying to help re-calibrate priors.]
I just want to flag that one sort of “regular” consulting I’d love to see in EA is really good management consulting. My read is that many of our management setups (leadership training, leadership practices, board membership administration) are fine but not world-class. As we grow, it’s increasingly important to do a great job here.
My impression is that the majority of “management consultants” wouldn’t be very exciting to us, but some who were somewhat aligned, or who think in similarly nerdy ways, could be highly valuable.
Thanks so much for the comment, and congrats on staying on the 80k-suggested job train for 8 years! In your experience as a consultant, how much do people in the field care about truth, as opposed to satisfying what customers think they want, solving principal-agent problems within a company, etc.?
Put another way, what percentage of the time did consultants in your firm provide results that >70% of senior management in a client company initially disagreed with?
I’ve heard a (perhaps flippant) claim that analysts at even top consulting companies believe that their job is more about justifying client beliefs than about uncovering the correct all-things-considered belief (and have recently observed evidence that is more consistent with this explanation than other nearby ones). So I would like to calibrate expectations here.
Posting as an individual who is a consultant, not on behalf of my employer
Let me start off by saying that’s an interesting question, and one I can’t give a highly confident answer to because I don’t know that I’ve ever had a conversation with a colleague about truth qua truth.
That said, my short answer would be: I think many of us care about truth, I think our work can be shaped by factors other than truth-seeking, and I think if the statement of work or client need is explicitly about truth / having the tough conversations, consultants wouldn’t find it especially hard to deliver on that. The only factor particular to consulting that I could see weighing against truth-seeking would be the desire to sell future work to the client… but to me that’s resolved by clients making clear that what the client values is truth, which would keep incentives well-aligned.
My longer answer...
I think most of my colleagues do care about truth, and are willing to take a firm stance on what they believe is right even if it’s a tough message for the client to hear. [Indeed I’ve explicitly heard firm leadership share examples of such behavior… which I think is an indicator that a) it does happen but b) it’s not a given which ties to...]
...I think there’s a recognition that at the end of the day, we have formal signed statements of work regarding what our clients expect us to deliver, and our foremost obligation is to deliver according to that contract (and secondarily, to their satisfaction) rather than to “truth”
If our contracts were structured in a more open-ended manner or explicitly framed around us delivering the truth, I see no reason (other than the aforementioned) why we would do anything other than provide that honest perspective
I wonder to what extent employees of EA organizations feel competing forces against truth (e.g., I need to keep my job, not rock the boat, avoid saying controversial things that could upset donors) - I think you could make a case that consultants are actually better poised to do some of that truth-seeking, e.g., if it’s a true one-off contract
To your 2nd question about >70%:
I don’t think this framing is really putting your original question another way (to sprinkle in some consulting-ese I think “the question behind your question” is something else)
That said, my “safe,” not-super-helpful, and please-don’t-selectively-quote-this-out-of-context answer is less than half the time...
...But that’s because most of the work I (and I’d venture to say, most of us) do isn’t about truth-seeking, so it’s not the sort of thing about which reasonable people of good will will have meaningful disagreement. Rather, the work is about further developing a client’s hypothesis, or helping them understand how best to pursue an objective, or helping them execute a process in which they lack expertise [all generally in the service of increasing client profitability]
Thanks for the detailed response!
Hmm, on reflection maybe the issue isn’t as particular to consulting. I think the issue here isn’t that people by default have overwhelming incentives against truth, but that actually seeking truth is such an unusual preference in the vast majority of contexts that the whole idea is almost alien to most people. They hear the same words but don’t know what it means / don’t internalize it at all.
I’m probably not phrasing this well, but to give a sense of my priors: my impression from interactions with approximately every entity that perceives itself as directly doing good outside of EA* is that they are not seeking truth, and this systematically corrupts them in important ways. Non-random examples that come to mind include public health (on covid, vaping, nutrition), bioethics, social psychology, developmental econ, climate change, vegan advocacy, religion, the US Democratic party, and diversity/inclusion. Moreover, these problems aren’t limited to particular institutions: they are instantiated in academia, activist groups, media, regulatory groups, and “mission-oriented” companies. My limited experience with “mission-oriented” consultancies is that they’re not an exception.
I think the situation is plausibly better outside of do-gooders. For example, I sort of believe that theoretical CS has much better publication norms than the listed academic fields, and that finance or poker people are too focused on making money to be doing much grandstanding.**
Similarly, I would be surprised, but not overwhelmingly so, if mission alignment is the issue here, and things would be okay/great if we took random McKinsey associates who are used to working in profit-seeking industries with higher standards.
This seems plausible yeah, though if it’s a one-off contract I also don’t see a positive incentive to seek truth (To the extent my hypothesis is correct, what you want is consultants who are only motivated by profit + high professional standards).
* The natural Gricean implicature of that claim is that I’m saying that EA orgs are an exception. I want to disavow that implication. For context, I think this is plausibly the second or third biggest limitation for my own work.
** Even that’s not necessarily true to be clear.
FYI in case anybody’s wondering
Was primarily referring to issues Neil and I discuss in this report. It is certainly plausible that I overupdated on n=1, of course.
I come in peace, but I want to flag that this claim will sound breathtakingly arrogant to many people not fully immersed in the EA bubble, and to me:
Do you mean:
a) They don’t make truth-seeking as high a priority as they should (relative to, say, hands-on work for change)?
b) They try to understand what’s true, but their feeble non-EA efforts go nowhere?
c) They make zero effort to seek the truth? (“Not seeking truth”)
d) They don’t care in the slightest what the truth is?
These are worth distinguishing, at least in communications that might plausibly be read by non-EAs. Someone could read what you wrote and conclude, or at least conclude you believe, that before EA the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible. That would be unfortunate.
Hmm, did you read the asterisk in the quoted comment?
(No worries if you haven’t, I’m maybe too longwinded and it’s probably unreasonable to expect people to carefully read everything on a forum post with 76 comments!)
If you’ve read it and still believe that I “sound breathtakingly arrogant”, I’d be interested in whether you can clarify whether “breathtakingly arrogant” means either a) what I say is untrue or b) what I say is true but insufficiently diplomatic.
More broadly, I mostly endorse the current level of care and effort and caveats I put on the forum. (though I want to be more concise, working on it!)
I can certainly make my writing more anodyne and less likely to provoke offense, e.g. by defensive writing and pre-empting all objections I can think of, by sprinkling the article heavily with caveats throughout, by spending 3x as much time on each sentence, or just by having much less public output (the last of which is empirically what most EAs tend to do).
I suspect this will make my public writing worse however.
I did read it, and I agree it improves the tone of your post (helpfully reduces the strength of its claim). My criticism is partly optical, but I do think you should write what you sincerely think: perhaps not every single thing you think (that’s a tall order alas in our society: “I say 80% of what I think, a hell of a lot more than any politician I know”—Gore Vidal), but sincerely on topics you do choose to opine on.
The main thrusts of my criticism are:
Because of the optical risk, and also just generally because criticizing others merits care, you should have clarified (and still can) which of the significantly different meanings I listed (or others) of “they are not seeking truth” you intended.
If you believe one of the stronger forms, eg “before EA the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible,” then I strongly disagree, and I think this is worth discussing further for both optical and substantive reasons. We would probably get lost in definition hairsplitting at some point, but I believe many, many people (activists, volunteers, missionaries, scientists, philanthropists, community leaders, …) for at least hundreds of years have both been trying hard to make the world a better place and trying hard to be guided by an accurate understanding of reality while doing so. We can certainly argue any one of them got a lot wrong: but that’s about execution, not intent.
This is, again, partly optical and partly substantive: but it’s worth realizing that to a lot of the world who predate EA or have read a lot about the world pre-EA, the quoted claim above is just laughable. I care about EA but I see it as a refinement, a sort of technical advance. Not an amazing invention.
I tried answering your question on the object level a few times but I notice myself either trying to be reconciliatory or defensive, and I don’t think I will endorse either response upon reflection.
All right. Well, I know you’re a good guy, just keep this stuff in mind.
Out of curiosity I ran the following question by our local EA NYC group’s Slack channel and got the following six responses. In hindsight I wish I’d given your wording, not mine, but oh well, maybe it’s better that way. Even if we just reasonably disagree at the object level, this response is worth considering in terms of optics. And this was an EA crowd, we can only guess how the public would react.
I can see Jacob’s perspective and how Linch’s statement is very strong. For example, in developmental econ, at just one or two top schools, the set of professors and their postdocs/staff might be larger and more impressive than the entire staff of Rethink Priorities and Open Phil combined. It’s very, very far from PlayPumps. So saying that they are not truth-seeking seems at least somewhat questionable.
At the same time, from another perspective I find reasonable, I can see how academic work can be swayed by incentives and trends, and can become arcane and wasteful. Separately and additionally, the phrasing Linch used originally reduces the aggressive/pejorative tone for me, certainly viewed through “LessWrong”-style culture/norms. I think I understand and have no trouble with this statement, especially since it seems to be a personal avowal:
Again, I think there are two different perspectives here, and a reasonable person could take up both or either.
I think a crux is the personal meaning of the statement being made.
Unfortunately, in his last response I’m replying to, it now comes off as if Jacob is sort of pursuing a point. This is less useful. For example, looking at his responses, it seems like people are just responding to “EA is much more truth-seeking than everyone else”, which is generating responses like “Sounds crazy hubristic…”.
Instead, I think Jacob could have ended the discussion at Linch’s comment here or maybe asked for models and examples to get “gears-level” sense for Linch’s beliefs (e.g. what’s wrong with development econ, can you explain?).
I don’t think impressing everyone into a rigid scout mentality is required, but it would have been useful here.
“Y” is a strictly stronger claim than “If X, then Y”, but many people get more emotional with “If X, then Y.”
Consider “Most people around 2000 years ago had a lot of superstitions and usually believed wrong things” vs “Before Jesus Christ, people had a lot of superstitions and usually believed wrong things.”
Oh, what an interesting coincidence. Well, my point wasn’t to prove you wrong. It was to see what people thought about a strong version of what you wrote: I couldn’t tell if that version was what you meant, which is why I asked for clarification. Larks seemed to think that version was plausible anyway.
I probably shouldn’t resurrect this thread. But I was reminded of it by yet another egregious example of bad reasoning in an EA-adjacent industry (maybe made by EAs. I’m not sure). So I’m going to have one last go.
To be clear, my issue with your phrasing isn’t that you used a stronger version of what I wrote, it’s that you used a weaker version of what I wrote, phrased in a misleading way that’s quite manipulative. Consider the following propositions:
I claim that A is a strictly stronger claim than B (in the sense that an ideal Bayesian reasoner will assign lower probability to A than to B), but unless it’s said in an epistemically healthy and socially safe context, B will get people much more angry in non-truth-seeking ways than A.
B is similar to using a phrasing like:
instead of a more neutral (A-like)
Note again that the less emotional phrasing is actually a strictly stronger claim than the more emotional one.
Similarly, your initial question:
was very clearly (unintentionally?) optimized to push me to answer “oh no, I just meant a)” (unwritten: since that’s the socially safest thing to answer). Maybe this was unintentional, but this is how it came across to me.
A better person than me would have been able to answer you accurately and directly despite that initial framing, but alas, I was/am not mature enough.
(I’m not optimistic that this will update you since I’m basically saying the same thing 3 times, but occasionally this has worked in the past. I do appreciate your attempts to defuse the situation at a personal level. Also I think it bears mentioning that I don’t think this argument is particularly important, and I don’t really think less of you or your work because of it; I like barely know you).
Seems pretty plausible to me this is true. Both categories are pretty small to start with, and their correlation isn’t super high. Indeed, the fact that you think it would be bad optics to say this seems like evidence that most people are indeed not ‘very concerned’ about what is true.
I do agree with you that client quality and incentives are a serious potential problem here, especially when we consider potential funders other than Open Phil. A potential solution here is for the rest of the EA movement to make it clear that “you are more likely to get future work if you write truthful things, even if they are critical of your direct client/more negative than your client wants or is incentivizing you to write/believe,” but maybe this message/nuance is hard to convey and/or may not initially seem believable to people more used to other field norms.
I have posted about this in the Facebook group to let them know. IMO they have done a great job setting that group up and have probably just been focusing on more practical work than keeping up with the EA Forum, which is a full-time job!
Good find—but it seems pretty sparsely populated, and most consultants at large firms would be tricky to grab one-at-a-time.
Yeah, my hunch is that “EAs doing consulting for non-EA companies” looks very different from “EAs doing consulting for EA orgs”, but I’d be happy to be wrong.
Thanks for the great discussion on this thread! I noticed that there hasn’t been much mention of development consulting firms that fall somewhere between pure management consulting and EA. IDInsight in particular is pretty EA-aligned. Others like Bridgespan and Dalberg also work with social-sector clients. There is still a lot of room for development consulting to be more “EA-first”, and I wonder if there’s an opportunity to orient existing firms toward EA principles.
Quick update: we launched an EA-aligned strategy consultancy, partly motivated by this post and the feedback we received from our pilot projects: https://forum.effectivealtruism.org/posts/a65aZvDAcPTkkjWHT/introducing-cfactual-a-new-ea-aligned-consultancy-1
Might be worth looking into 180 Degrees Consulting, the low cost consultancy for non-profits. Either to use them directly or learn from their model.
My understanding of their model is that they use volunteer university students to do most of the work, mentored by the big consultancy firms. Students get experience and career capital, consultancies get exposure to future grads, and non-profits get low-cost consulting.
I’m not sure what “low cost” means in practice, but looking at their 2020 report, they ran 550+ projects from 166 branches in 36 countries with $117,300 in total contributions from clients.
Replicating this model in some form for EA seems achievable: there are already lots of EA university groups, and plenty of would-be clients.
Agree. We already organized several events with 180° as part of the Effective Altruism and Consulting Network (and several 180° consultants have supported us in building the network). I believe there is room for more collaboration and synergies, as people were super excited about the EA mindset.