Thanks, Ollie! I thought this was helpful.
Jona
Thanks for creating this post! +1 to the general notion, including the uncertainty around whether it is always the most impactful use of time. On a similar note: after working with 10+ EA organizations on theories of change, strategies and impact measurement, I was surprised that there is even more room for prioritizing the highest-leverage activities across an organization (e.g., based on the results of decision-relevant impact analysis). For example, at cFactual, I don’t think we have nailed how we allocate our time. We should probably deprioritize even more activities, double down even more aggressively on the most impactful ones, and spend more time exploring new impact growth areas which could outperform existing ones.
FWIW, I also think one key consideration is the likelihood of organizations providing updates and making sure the data means the same thing across organizations (see caveats in the report for more)
Registered. It also seems valuable to talk to impact-driven people who seriously considered quitting but then decided to finish their PhD as (a) it is not obvious to me that quitting is always the right choice and (b) it might be useful to know common reasons why people decided to continue working on their PhD.
Thanks for creating this post! Sharing some thoughts on the topic based on my experience creating and red-teaming theories of change (ToCs) with various EA orgs (partly echoing your observations and partly adding new points; two concrete project examples can be found here and here).
Neglectedness of ToC work (basically echoing your claim). Due to its non-pressing nature and the senior input required, ToC/strategy work seems to get deprioritized very often, e.g., I have been deprioritizing updating our ToC for three months due to more pressing work. I think the optimal time to spend thinking about your ToC/prioritization/strategy depends on the maturity of your project and is hard to get right, but based on my experience, most of us spend too little time on it (just as we tend to spend too little time exploring our career options: with a career spanning roughly 80,000 hours, it is worth investing 800 hours in career planning if we can increase its impact by 1% in expectation). Assuming your org has ten staff who work 200 days a year for 8 hours a day, that is 16k hours a year, so investing up to ~160 hours in figuring out how staff spend their time best is worthwhile if it is likely to result in a 1% impact increase
More than one ToC. I think most orgs should have a ToC at the org, team and individual level, as well as for each main program/activity. It seems suboptimal to work on something without having thought through for at least three minutes how it will change the world (and, if there are alternatives, how you could achieve the same with less work)
Different levels of granularity. Depending on the purpose and the context of your ToC, you can have a three-row ToC (e.g., for small projects you are exploring), a flow-chart (e.g., to communicate the ToC of your org clearly, see examples in this post) and/or an exhaustive document showing lots of reasoning transparency, alternatives you considered etc. (e.g., to lay out the ToC of your research agenda)
Developing a ToC. One simplified approach to developing a ToC at the org level, which worked well with some clients but always needs tailoring, looks very roughly like this: (1) Map out all potential sources of value today (and potentially in the future), (2) Prioritize them, (3) Create a flow chart for the most promising source of value (potentially include other sources of value or create several flow charts), (4) Think through the flow of impact/value end to end as a sanity check, (5) Collect data (e.g., talk to experts, run small experiments) to reduce uncertainties, (6) Iterate. See more here
Strategic implications and influencing decisions (previously mentioned, but I think this is a point many people found useful and one that is not stressed enough in typical ToC literature). Your ToC should inform key decisions and ultimately how you allocate resources (your staff’s time or money). In my experience, we were never certain about all the sources of value and the causal relationships between each step, and we never lacked a hypothesis on how to have even more impact with an adapted or new program. So the ToC work always had at least some implications for allocating time. One great example is the Fish Welfare Initiative, which included resolving uncertainties around their ToC in their yearly priorities (see slides 13 and 24)
Areas for improvement. (1) Trying to map every causal pathway instead of focusing on the most important ones, (2) Deferring too much to others on what they perceive as valuable (e.g., the target group) and not doing enough first-principles/independent thinking and data collection, (3) Not considering counterfactuals at all and/or not considering that there are likely several counterfactual worlds, and (4) Not laying out key assumptions and uncertainties (previously mentioned in the post, but it seems valuable to highlight that this also reflects my experience), among other things
Note that I likely have a significant sample bias, as organizations are unlikely to reach out to me if they have enough time to think through their ToC. Additionally, please read this as “random thoughts which came to Jona’s mind when reading the article” and not as “these are the X main things EA orgs get wrong based on a careful analysis”. I expect to update my views as I learn more
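A minimal sketch of the break-even arithmetic from the first point above (the staff numbers are the illustrative ones from the comment, not real data):

```python
# Fermi estimate: how much time can an org justify spending on
# ToC/prioritization work? (Illustrative numbers only.)
staff = 10
days_per_year = 200
hours_per_day = 8

total_hours = staff * days_per_year * hours_per_day  # 16,000 org-hours/year
impact_gain = 0.01  # assume better prioritization improves impact by 1%

# Break-even: up to this many hours a year of ToC work pays for itself
break_even_hours = total_hours * impact_gain
print(total_hours, break_even_hours)  # 16000 160.0
```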
Hmm. Obviously, career advice depends a lot on the individual and the specific context, but all else equal, I tentatively agree that there is some value in having seen a large “functioning” org. That said, many of these orgs also have dysfunctional aspects (e.g., I think most orgs struggle with sexual harassment and with concentration of formal and informal power), and working at normal orgs has quite high opportunity costs. I also think that many of my former employers were net negative for some skills which I think are highly relevant, e.g., high-quality decision-making
Thanks for clarifying! I think Training for Good looked into “scalable management training” but had a hard time identifying a common theme they could work on (this is my understanding based on a few informal chats; it might be outdated and I am sure they have a more nuanced take). Based on my experience, different managers have quite different struggles which change over time, and good coaching and peer support seemed to be the most time-effective interventions for managers (this is based on chatting occasionally to people, not on proper research or deep thinking about the topic)
What do you specifically mean by “maturing in management, generally”? I noticed that people tend to have very different things in mind when they are talking about “Improving management in EA” so could be worth clarifying
Some shameless self-promotion, as this might be relevant to some readers: I work at cFactual, a new EA strategy consultancy, where one of our three initial services is to optimize ToCs and KPIs together with organizations. Illustrative project experience includes evaluating the ToC and designing a KPI for GovAI’s fellowship program, building a quantitative impact and cost-effectiveness model for a global health NGO, internally benchmarking the impact potential of two competing programs of an EA meta organization against each other, coaching a co-founder of a successful longtermist org on Fermi estimates and prioritization of activities, and red-teaming the impact evaluation of a program of a large EA organization.
Thanks for highlighting this offer again and sharing your feelings, Catherine!
I like how you highlight that the forum is just one element of EA. Personally, I also distinguish quite strongly between EA as a question and set of evolving ideas and the EA community (which is obviously a part of EA).
Historically, I found it super valuable to talk with you through various sensitive community-building considerations and benefited a lot from your experience managing countless tricky situations I wasn’t even aware of. Thanks for doing that important and hard behind-the-scenes work!
Thanks for sharing, Catherine! I apply many of your tips and agree that they are super useful. Additional questions I ask myself quite often:
What is the goal I want to achieve? This is the question which helps me to structure my thinking and approach the most
Am I asking the right question? Besides regularly not thinking through all the options I have, I also often realize that I am not asking the question I really care about in the first place
Can I make a prediction about my decision? This helps me a lot to keep track of my decisions e.g., at cFactual we have a “prediction of the week” to calibrate ourselves on outcomes we expect to see, identify differences in reasoning about important topics among team members, …
Do I weigh all arguments/considerations equally or do I believe one argument is 10x more relevant than others?
Some tools for group decision-making we use:
Our Google Doc company template has default drop-downs to always indicate the status of the document, the time spent, the stage of the document (strawman, key arguments, or fleshed-out document) and a section for a few words on epistemic status (“braindump / 5min of desk research / I am an expert /...”). This has helped us a lot to remain focused and have higher-quality discussions, for a time investment of ~20 seconds
We try to quantify our preferences, e.g., instead of saying “I am in favour of option A”, we aim to write “I am 55% in favour of A”. This quite often helps us make a judgement call without long back-and-forth comment threads
If there are larger decisions I want to think through more rigorously, I quite often use this mental structure as a starting point (and then adapt it): Recommendation/conclusion incl. my certainty in the conclusion, alternative options, my arguments for the recommendation, my arguments against, key uncertainties, key assumptions, downside risks and predictions
Probably stating the obvious for many here: I think the CFAR handbook also has great prompts for people who are interested in the topic
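The quantified-preference tool above can be sketched in a few lines (the names, numbers and near-50% threshold are my own illustrative choices, not cFactual’s actual tooling):

```python
# Illustrative sketch: aggregating quantified preferences instead of
# binary "for/against" votes on a two-option decision.
votes = {
    "Alice": 0.55,  # "I am 55% in favour of option A"
    "Bob": 0.40,
    "Carol": 0.70,
}

avg = sum(votes.values()) / len(votes)

# Simple decision rule: pick A if the average leans above 50%;
# a near-50% average is a signal to discuss rather than just vote.
decision = "A" if avg > 0.5 else "B"
print(round(avg, 2), decision)  # 0.55 A
```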
Quick update: we launched an EA-aligned strategy consultancy, partly motivated by this post and the feedback we received from our pilot projects: https://forum.effectivealtruism.org/posts/a65aZvDAcPTkkjWHT/introducing-cfactual-a-new-ea-aligned-consultancy-1
Thanks! Yes, feel free to DM me, if relevant.
Thanks for the question, Merlin. Please note that we have a small sample size and are still refining our models of what skillsets are most relevant for more EA-aligned consulting.
Three things that have been useful: 1) Structuring problems, projects and meetings well; 2) Being able to switch between different levels of abstraction quickly and constantly: thinking carefully about a key assumption in an Excel model one moment and about how the results change the big picture for a CEO the next; 3) More vaguely: just having seen and worked with a lot of organizations, projects and leaders, which probably shaped our intuitions
Three things that were less useful than I thought: 1) Executional speed—on the margin, we care much more about what we work on and getting it right than about getting things done; 2) Qualitative data collection, e.g., interviewing—during the pilot projects we did a lot more independent thinking and then tested key uncertainties specifically with key stakeholders. In traditional consulting, we would have conducted interviews earlier to develop our hypotheses (at the cost of potentially becoming an echo chamber); 3) Project planning—during all pilot projects we adapted our activities quite a lot based on our updated models of what would be most impactful
Thanks for sharing and your great work during the last year. Having talked to you several times, I was and am impressed with your systematic approach to finding product-market fit/high expected impact opportunities, your ability to build MVPs to test ideas quickly, and your courage to discontinue programs that do not meet your bar.
I think the latter is hard, especially after investing weeks of work into a program, and it is easy to trick oneself into motivated reasoning about why it might be worth continuing. I admire you for having the courage to make tough judgment calls. Probably most of us should stop mediocre activities (earlier)
+1
Interesting read! Thanks for sharing! I imagine some points might even apply to
university groups e.g., targeting specific people who could fill talent gaps or doing A/B testing (even though I am very uncertain about this)
myself (and potentially other EAs) e.g., I should probably prioritize looking for open jobs and forwarding them to relevant people even higher. Personally, I keep a list of the top ~10 most promising people I know who could be open to new positions and try to keep them in mind if I stumble across opportunities. This post is a good nudge to think about a more systematic and time-effective approach to this low-effort/passive matchmaking.
For the EACN, we
have put some thought into tracking members’ metrics to make meaningful referrals. It is probably still not optimal and needs adaptation based on your group’s circumstances, but feel free to reach out if this is of interest to your group
did A/B/C/D testing with all the workplace groups at the different consulting firms (note: the EACN only serves as an umbrella org and includes several workplace groups as well as consultants without a workplace group). We want to spend the summer learning from each other’s different community-building approaches after a year of trying things. I hope we can share insights and scale what has been working to other groups (both within the EACN and potentially beyond, if applicable)
are currently developing a plan to target social impact student consultancies to address the top two (out of three) talent needs of EA orgs: management and ops. We are currently exploring a potential partnership with 180 Degrees Consulting, which has 150+ student chapters, and want to partner with EA university groups to do some outreach pilots. Let us know if you would be interested in setting up a pilot at your university with a local social impact student consultancy
Looking forward to learning more from other groups
Thanks for creating this post!
I think it could be worth clarifying how you operationalize EA epistemics. In this comment, I mostly focus on epistemics at EA-related organizations and focus on “improving decision-making at organizations” as a concrete outcome of good epistemics.
I think I can add some value by sharing anecdotal data points from my work on improving the epistemics of EA-related organizations. For context, I work at cFactual, supporting high-impact organizations and individuals during pivotal times. So far we have done 20+ projects, partnering with 10+ EA-adjacent organizations.
Note that there might be several sampling biases and selection effects, e.g., organizations who work with us are likely not representing all high-impact organizations.
So please read this for what it is: mixed-confidence thoughts based on weak anecdotal data, gathered while doing projects on important decisions for almost two years.
Overall, I agree with you that epistemics at EA orgs tend to be better than what I have seen while doing for-profit-driven consulting in the private, public and social sectors.
For example, following a simple decision-document structure (epistemic status, RAPID, current best guess, main alternatives considered, best arguments for the best guess, best arguments against it, key uncertainties and cruxes, most likely failure mode, and things we would do with more time) is something I have never seen in the non-EA world.
The services we list under “Regular management and leadership support for boards and executives” are gaps we see that often ultimately improve organizational decision-making.
Note that clients pay us; hence, we are not listing things which could be useful but don’t have a business model (like writing a report on improving risk management by considering base rates and how risks link and compound).
I think many of the gaps we are seeing are more about getting the basics right in the first place and don’t require sophisticated decision-making methods, e.g.,
spending more time developing goals incl. OKRs, plans, theories of change, impact measurement and risk management
Quite often it is hard for leaders to spend time on the important things instead of the urgent and important things, e.g., more sophisticated risk management still seems neglected at some organizations even after the FTX fall-out
improving executive- and organization-wide reflection, prioritization and planning rhythms
asking the right questions and doing the right, time-effective analysis at a level which is decision-relevant
getting an outside view on important decisions and CEO performance from well-run boards, advisors and coaches
improving the executive team structure and hiring the right people to spend more time on the topics above
Overall, I think the biggest driver of whether an organization has good epistemics is hiring the right people and having them spend more time on the prioritized, important topics. There are various tweaks to culture (e.g., rewarding someone for changing someone’s mind, introducing an obligation to dissent and Watch team backup), to processes (e.g., having a structured and regular retro and prioritization session, making forecasts when launching a new project) and to targeted upskilling (e.g., there are great existing calibration tools which could be included in the onboarding process), but the main thing seems to be something simple: having the right people, in the right roles, spending their time on the things that matter most.
I think simply creating a structured menu of things organizations currently do to improve epistemics (aka a Google Doc) could be a cost-effective MVP for improving epistemics at organizations
To provide more concrete, anecdotal data on improving the epistemics of key organizational decisions, the comments I leave most often when red-teaming Google Docs of high-impact orgs are roughly the following:
What are the goals?
Did you consider all alternatives? Are there some shades of grey between Option A and B? Did you also consider postponing the decisions?
What is the most likely failure mode?
What are the main cruxes and uncertainties which would influence the outcome of the decision and how can we get data on this quickly? What would you do if you had more time?
Part X doesn’t seem consistent with part Y
To be very clear,
I also think that I am making many of these prioritization and reasoning mistakes myself! Once a month, I imagine advising cFactual as an outsider, and every time I shake my head at the obvious mistakes I am making.
I also think there is room to use more sophisticated methods like forecasting for strategy, impact measurement and risk management or other tools mentioned here and here