Research manager at ICFG.eu, board member at Langsikt.no, doing policy research to mitigate risks from biotechnology and AI. Ex-SecureBio manager, ex-McKinsey Global Institute fellow and founder of the McKinsey Effective Altruism community. Follow me on Twitter at @jgraabak
Jakob
+1 to all Jona writes here—with the caveat that consulting firms like McKinsey or BCG can also help you scope the project and prioritize what’s most important to work on. This of course requires some level of trust (like in all professional services where the client may not know their exact needs), which strengthens the case for using EA consultants at least for pilot projects until norms around using consultants are well-established.
Hi Peter,
Thanks for the link—I was not aware of this but have added my name to it.
To your question, I don’t know if it would be helpful. I haven’t tried to do consulting for EA orgs yet, and I know that some who have tried have found it hard because of a lack of demand. To the first point in your comment: maybe a document like this and a forum post could unlock some demand, but I’m not sure. The best way to learn would be to simply test it!
Hi Ryan, thanks for your comment!
1) “The title should clarify that it’s “national scale” rather than scale generally that’s overrated.”
We did not use “national scale” because we cover policymaking at the national, subnational, and multinational scales. However, we agree that “scale” is very useful as a parameter in cause prioritization frameworks. You’re right that our claim is narrower—only that scale is overrated in this specific setting.
2) “US and China are probably more likely to copy their own respective states & provinces than copy the Nordics, right?”
This is a valid point. For this reason, our logic can also be used to argue that EA should increase policy efforts in US states, or other sub-national policy entities. However, there are some policy domains that are mostly relevant on the national level (e.g. foreign policy), and there are examples where foreign examples work as better motivators (see e.g. this commercial which uses US patriotism to advocate for accelerated EV uptake in the US).
3) “Being unusually homogenous, stable, and trusting might mean that some policies work in the Nordics, even if they don’t work elsewhere.”
You’re right that some policies that work in the Nordics won’t work elsewhere! This is analogous to how some (small-scale) startups will pass a Series A funding round but not succeed at larger scale. Startups typically start with little funding and unlock increasing amounts of money. This way, if the startup fails, it fails in the cheapest possible way. Similarly, by testing new policies first in the most ideal governance environments and gradually scaling them to trickier environments with larger costs of failure, the policies that fail will do so in the least costly way.
4) “If we’re worried about whether govt pursues certain tech (like AI) safely over the coming 1-2 decades, then we should favour involvement in the executive over legislating, and the former can’t really transfer from the Nordics to the US. Diffusion may be rather slow.”
You’re right that if your main concern is linked to specific, urgent causes, you may prefer more direct routes to impact in the countries that matter most.
Thanks! I think we meant to refer to https://total-portfolio.org/
Perhaps also some of Ellen Quigley’s work on Universal Ownership https://www.cser.ac.uk/team/ellen-quigley/
Thanks Jona, agree! Also, many EA orgs seem to be experiencing growing pains at the moment, so I think the case for helping them scale (in ops/management roles) is stronger than ever. Some consulting firms also allow their employees to do temporary (paid or unpaid) secondments with selected non-profits, which could be one way of exploring whether this path is a fit.
A potential complementary strategy to this one could be research into putting out large-scale wildfires (though I’m not sure about the feasibility of this—is anyone aware of existing research on it?)
One potential niche could be betting markets around the outcomes of political events (e.g., betting on outcome metrics such as GDP growth, expected lifespan, GINI coefficient, or carbon emissions, linked to events such as a national election, new regulatory proposals, or the passing of government budgets). Depending on legal restrictions, this market could even ask policymakers or political parties to place bets, to help the public assess which policymakers have the best epistemics, to hold policymakers accountable, and to incentivize policymakers to invest in better epistemics. (Note: this also links to an idea presented in a different comment here: https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=zjvCCNuLEToCQyHdn)
Would this be another organization like Rethink Priorities, or is it different from what they are doing? (Note: I don’t think this space is crowded yet, so even if it is another organization doing the same things, it could still be very helpful!)
Have you spoken to Jona Glade about it? He’s also working on setting up a consultancy. I’m also happy to chat about this.
The 3rd wave of EA is coming—what does it mean for you?
Good point! Indeed, the key funding sources for EA (tech billionaires) have notoriously volatile fortunes, though I’m not sure how tight the link is between their wealth in a given year, and the flow of money to EA.
Also, others seem to predict that the number of major funders will grow over the next few years, which could increase both the average level of funding and its stability: https://forum.effectivealtruism.org/posts/Ze2Je5GCLBDj3nDzK/how-many-ea-billionaires-five-years-from-now
I think it is likely that increased attention will lead to increased funding, but the question is on what timescales, and by what magnitude. Relatively recent numbers showed that the clear majority of people, even among US college students, had not heard of EA, which means it’s very unlikely that the potential funder pool is already saturated https://forum.effectivealtruism.org/posts/qQMLGqe4z95i6kJPE/how-many-people-have-heard-of-effective-altruism
Is GiveWell underestimating the health value of lead eradication?
You can see their rationale in their public model: https://docs.google.com/spreadsheets/d/1tytvmV_32H8XGGRJlUzRDTKTHrdevPIYmb_uc6aLeas/edit#gid=1362437801
It’s the sum of 1.7% for “improving circumstances over time”, 0.9% for “compounding non-monetary benefits”, and 1.4% for “temporal uncertainty”. They apply 0.0% for “pure time preference”.
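As a quick sanity check of the arithmetic, the three non-zero components quoted above sum to a 4.0% total discount rate (a minimal sketch; the figures are simply the ones from GiveWell’s public model as quoted here):

```python
# Components of GiveWell's discount rate, as quoted from their public model.
components = {
    "improving circumstances over time": 0.017,
    "compounding non-monetary benefits": 0.009,
    "temporal uncertainty": 0.014,
    "pure time preference": 0.000,
}

total_rate = sum(components.values())
print(f"Total discount rate: {total_rate:.1%}")  # prints "Total discount rate: 4.0%"
```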
Hi Toby, thanks for the good insight and also relevant links—and apologies for the extremely delayed response! I thought I had already responded to this.
Agree that such a map would be valuable, though 1) I’m not sure if the data is rich enough to create a general map that works across all policy areas (due to substantial confounding factors throughout history), and 2) there may also be conceptual challenges (e.g., the strength of each arrow may differ by policy domain). Still, I think this is an important crux for the value of policy work in smaller countries, so agree that developing a better understanding would be valuable!
Thank you Max, and good point! While we did try to use the state-of-the-art evidence in this piece, I think I’ll defer to Will’s research team on that one—his take is probably closer to the current consensus among the relevant experts.
Thank you for writing this up—I’ve wanted to do the same for a while! I think the only thing I see missing is that prizes can raise the salience of some concept or nuance, and can therefore serve as a coordination mechanism in more ways than you list (e.g., say we want more assessments of long-term interventions using the significance–durability–contingency framework from WWOTF; a prize for those assessments would also help signal-boost the framework).
One interesting debate would be: what’s the optimal % of funding that should go to prizes? Which parameters would allow us to determine this? One can imagine that the % should be higher in communities that struggle to hire enough people, or where research agendas are unclear so more coordination is needed, but lower in communities where people have little savings, or where funders have the capacity to diversify risks.
One additional consideration is that the coordination benefits from prizes (in raising the salience of memes or the status of the winners) come at an attention cost, so a large number of prizes may cannibalize our “common knowledge budget” (if there is a limit to how much common knowledge we can generate).
Posting as an individual who is a consultant, not on behalf of my employer
Hi, I’m one of the co-organizers of EACN, running the McKinsey EA community and currently co-authoring a forum post about having an impact as a management consultant (to add some nuance and insider perspectives to what 80k is writing on the topic: https://80000hours.org/articles/alternatives-to-consulting/).
First, let me voice a +1 to everything Jeremy has said here already—with the possible exception that I know several McKinsey partners are interfacing with the EA movement on particular causes like animal welfare, AI governance, pandemic preparedness, and climate change. However, I don’t know the exact scope of our client work in any of these fields, and I haven’t heard of projects for EA orgs (I’ve worked on several of these topics at the McKinsey Global Institute; see e.g. this report: https://www.mckinsey.com/business-functions/sustainability/our-insights/climate-risk-and-response-physical-hazards-and-socioeconomic-impacts?cid=app)
Second, I’m happy to jump on a 30-60 minute call in July/August to discuss if the EACN or some of its members can be helpful in making something like this happen—you can reach me at jakob_graabak[at]mckinsey[dot]com. (Luke, Ozzie, any of the Peters, any others?)
One example of how we could help: for “Talent Loans” I can imagine that we could use the McKinsey EA Community to find the right people in a more efficient way than described above. I of course understand that most EA orgs likely won’t become regular McKinsey clients, but I can try to talk to some of our partners about how we could run 2-3 pilot projects with e.g. Open Phil in a mutually beneficial way. Perhaps that would also work as a proof of demand and would drive more people into this space.