Thanks for creating this post!
I think it could be worth clarifying how you operationalize EA epistemics. In this comment, I mostly focus on epistemics at EA-related organizations, treating "improving decision-making at organizations" as a concrete outcome of good epistemics.
I think I can potentially provide value by adding anecdotal data points from my work on improving the epistemics of EA-related organizations. For context, I work at cFactual, supporting high-impact organizations and individuals during pivotal times. So far we have done 20+ projects, partnering with 10+ EA-adjacent organizations.
Note that there might be several sampling biases and selection effects; e.g., the organizations that work with us are likely not representative of all high-impact organizations.
So please read this for what it is: mixed-confidence thoughts based on weak anecdotal data, drawn from almost two years of projects on important decisions.
Overall, I agree with you that epistemics at EA orgs tend to be better than what I have seen while doing for-profit-driven consulting in the private, public and social sectors.
For example, following a simple decision document structure (epistemic status, RAPID, current best guess, main alternatives considered, best arguments for the best guess, best arguments against the best guess, key uncertainties and cruxes, most likely failure mode, and things we would do if we had more time) is something I have never seen in the non-EA world.
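For readers who have not seen this format, a minimal skeleton of such a decision document might look like the following. The headings come from the list above; the layout itself is illustrative, not a cFactual template:

```
Epistemic status: ...
RAPID roles (Recommend / Agree / Perform / Input / Decide): ...
Current best guess: ...
Main alternatives considered: ...
Best arguments for the best guess: ...
Best arguments against the best guess: ...
Key uncertainties and cruxes: ...
Most likely failure mode: ...
Things we would do if we had more time: ...
```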
The services we list under "Regular management and leadership support for boards and executives" reflect gaps we see; addressing them often ultimately improves organizational decision-making.
Note that clients pay us; hence, we are not listing things that could be useful but lack a business model (like writing a report on improving risk management by considering base rates and how risks link and compound).
I think many of the gaps we are seeing are more about getting the basics right in the first place and don't require sophisticated decision-making methods, e.g.:
spending more time developing goals (incl. OKRs), plans, theories of change, impact measurement, and risk management
Quite often it is hard for leaders to spend time on the important-but-not-urgent things instead of only the urgent things; e.g., more sophisticated risk management still seems neglected at some organizations, even after the FTX fallout
improving executive- and organization-wide reflection, prioritization and planning rhythms
asking the right questions and doing the right, time-effective analysis at a level that is decision-relevant
getting an outside view on important decisions and CEO performance from well-run boards, advisors and coaches
improving the executive team structure and hiring the right people to spend more time on the topics above
Overall, I think the highest variance in whether an organization has good epistemics is explained by hiring the right people and those people simply spending more time on the prioritized, important topics. There are various possible tweaks to culture (e.g., rewarding someone for changing someone's mind, introducing an obligation to dissent, and Watch team backup), to processes (e.g., holding a structured, regular retro and prioritization session; making forecasts when launching a new project), and to targeted upskilling (e.g., great calibration tools already exist and could be included in the onboarding process). But the main thing seems to be something simple: having the right people, in the right roles, spending their time on the things that matter most.
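On "making forecasts when launching a new project": a minimal sketch of how a team could score such forecasts after the fact is the Brier score. The forecasts and outcomes below are invented for illustration, not real project data:

```python
# Illustrative sketch: scoring launch forecasts with the Brier score.
# A lower score is better: 0.0 is perfect; always guessing 50% scores 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (0..1)
    and binary outcomes (0 or 1)."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must have the same length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts made at project launch ("will we hit milestone X?")
forecasts = [0.9, 0.7, 0.3, 0.8]
outcomes = [1, 1, 0, 0]  # what actually happened

print(brier_score(forecasts, outcomes))
```

Reviewing these scores in a regular retro is one concrete way to turn forecasting into the kind of calibration practice mentioned above.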
I think simply creating a structured menu of things organizations currently do to improve epistemics (i.e., a Google Doc) could be a cost-effective MVP for improving epistemics at organizations.
To provide more concrete, anecdotal data on improving the epistemics of key organizational decisions, the comments I leave most often when red-teaming Google Docs of high-impact orgs are roughly the following:
What are the goals?
Did you consider all alternatives? Are there some shades of grey between Option A and Option B? Did you also consider postponing the decision?
What is the most likely failure mode?
What are the main cruxes and uncertainties that would influence the outcome of the decision, and how can we get data on them quickly? What would you do if you had more time?
Part X doesn't seem consistent with part Y.
To be very clear:
I also think that I am making many of these prioritization and reasoning mistakes myself! Once a month, I imagine providing advice to cFactual as an outsider, and every time I shake my head at the obvious mistakes I am making.
I also think there is room to use more sophisticated methods, like forecasting for strategy, impact measurement, and risk management, or the other tools mentioned here and here.