Context: I’m drawing from experience with a small research organization in a young field where it used to be very hard to do good research without thoroughly understanding the causal paths to impact.
Strongly stated, weakly held, and definitely tainted by personal idiosyncrasies:
I often found myself suspicious of (too many) internal strategy documents, because I think that in a well-functioning organization of the kind I described, the people who make prioritization decisions (researchers pursuing their interests autonomously, or executive directors/managers who define tasks and targets at the organization level) should be hired, among other things, for their prioritization abilities.
My sense is that being good at prioritization is more about the mindset than following some plan, and it involves thinking through the paths to impact for every decision, every day. So when I’m asked to help write up a theory of change, my intuitive reaction is “Who is this for? This feels like tediously writing down things that are already second nature to many people, and so much goes into it that it’s hard not to come away feeling that the resulting document is too simplistic to be of any use.”
So, I’m overall skeptical about the use of ToC documents for improving a small organization’s focus, especially if the organization operates in a field/paradigm where staff have already been selected for their ability to prioritize well.
To be clear, I’m not comparing this against not thinking about strategy at all. Instead, I favor leaner versions of strategy discussions. For instance, one person writes up their thoughts on what could be improved (this might sometimes look like an abbreviated version of a ToC document), then core staff use it as a basis for group discussion and try to identify the non-obvious questions that seem most crucial to the organization’s strategic direction. They then discuss these questions from various angles, switch to a solution-oriented mode, and define action points. The results of those discussions should be written down, but there’s no need to start at “our mission is to reduce future suffering.”
Of course, there might be other reasons why internal ToC documents could be useful. For instance, not everyone’s work involves making big-picture prioritization decisions, and it’s helpful and motivating for all staff to have a good sense of what the organization concretely aims to accomplish. Still, if the reason for writing a ToC document is updating staff instead of actually improving overall prioritization and focus, then that calls for different ways of writing the document. And perhaps doing a (recorded) strategy Q&A with researchers and the executive director might be more efficient than a drily written document with rectangles and arrows.
Another instance where ToC documents might be (more) useful is for establishing consensus about an organization’s aims. If it feels like the organization lacks a coherent framework for how to think about their mission, maybe the process of writing a ToC document could be helpful in getting staff to think along similar lines.
My sense is that being good at prioritization is more about the mindset than following some plan, and it involves thinking through the paths to impact for every decision, every day.
I don’t strongly disagree with this. (Though based on conversations we’ve had elsewhere I’d guess I’m still somewhat more positive about plans than you are.)
However, I actually think of day-to-day prioritization decisions on the one hand and the kind of strategic planning I’d like to see more of on the other as two quite different activities, drawing to some extent on different skill sets. I think both are important and complement each other well.
I think one major difference is which decisions you’re thinking about in the first place, and in particular how reactive versus proactive you are.
A lot of prioritization decisions respond to a specific stimulus, such as someone inviting you to give a talk or proposing a joint project. Others are at least very salient, e.g. because they concern the allocation of resources that are available by default, such as staff time. For these, and more broadly once you’ve identified a specific decision as important to make, I agree that following some explicit plan usually won’t add much.
However, there are important prioritization decisions that will never become salient by following day-to-day incentives, or will only become salient too late. Have you really considered the full option space of how to reach your goals? Have you thought about how to prevent low-probability risks to your organization? Have you developed metrics that tell you whether you’re on track well before you can recognize obvious problems or achievements? Can you get more data to help you make decisions? I basically think that a good strategic planning process is a glorified checklist that directs leadership attention to issues that might otherwise be overlooked.
For example, I’ve seen all of the following happening (not at CLR):
Some years into a program, management wants to do an impact evaluation but hasn’t done a baseline assessment. They can describe how their target audience is doing now, but when assessing change they have to rely on memory.
Some months into a program, management realizes that some aspects don’t work well for some of their target audience. They say “it seems like we’ve designed a program for people who are like us, but actually some members of the target audience are quite different”, and make changes.
Seeing what in my view are clear and basic mistakes like these alongside skepticism about strategic planning reinforces my view. These are paradigm cases of the mistakes that standard strategic planning processes were designed to avoid. For example, any material I can think of will emphasize baseline assessments (and, more broadly, planning from the very start for the ability to evaluate impact) and the need to understand your target audience, e.g. through interviews or other forms of data collection.
Interesting perspective, thanks for sharing.
It sounds like part of your thinking is that ToC diagrams won’t add much value when (1) the organisation already has consensus / “a coherent framework for how to think about their mission”, and (2) all of its researchers are already very good at prioritization. I’d guess that both of those conditions will be harder to maintain as an organisation scales up. Would you guess that ToC diagrams tend to become more useful as organisations scale up?
Also, when you say “prioritization abilities”, do you just mean ability to prioritise between research questions? Or also things like ability to generate new research questions, generate ideas of non-research activities to do (e.g., different ways of disseminating research findings to different audiences), and prioritise among those non-research activities?
I ask largely because one reason I suspect ToC diagrams may be helpful is to guide decisions about things like which forms of output to produce, who to share research findings with, and whether and how to disseminate particular findings broadly. It seems plausible to me that a researcher who’s excellent at prioritizing among research questions might not be good at thinking about those matters, and a ToC diagram (or the process of making one) might speed up or clarify their thoughts on those matters.
But you might find that researchers with that discrepancy of skills are uncommon or selected against, or that strategy discussions without ToC diagrams can cover that, or that staff members other than researchers can cover that.
I’d guess that both of those conditions will be harder to maintain as an organisation scales up. Would you guess that ToC diagrams tend to become more useful as organisations scale up?
I think so. I’m somewhat nervous about this because if the culture changes drastically, maybe that’s bad in general and ToC documents merely mitigate some of the badness without quite getting back the culture of a smaller organization. Whether scaling up substantially even makes sense might depend on the organization’s mission, or on the ability of the executive director (and hiring committee) to scale in a way that preserves the right culture.
Also, when you say “prioritization abilities”, do you just mean ability to prioritise between research questions?
Also the other things you list.
I ask largely because one reason I suspect ToC diagrams may be helpful is to guide decisions about things like which forms of output to produce, who to share research findings with, and whether and how to disseminate particular findings broadly. It seems plausible to me that a researcher who’s excellent at prioritizing among research questions might not be good at thinking about those matters, and a ToC diagram (or the process of making one) might speed up or clarify their thoughts on those matters.
That seems reasonable. My experience is that people often know the right answers in theory but need a lot of nudging to choose media or venues different from the ones they personally find most rewarding. I also think individual psychology imposes strong constraints that make things less flexible than one might expect. So, to preserve intrinsic motivation for research, it’s maybe not a good idea to push researchers too much. Still, I think it’s crucial to have a culture where researchers think actively about which medium to pick, why they’re picking it, and how the output will be shared. As long as this is diligently considered and discussed, I think it’s reasonable to defer to the judgment of individual researchers.
Thanks for linking to that CLR post; that was an interesting snapshot into the process/output of an org’s strategic thinking.
The post says:
In future docs inspired by this outline here, we are going to list the pros and cons for each of the above proposals in order to then assign rough weightings to them.
Do you know if those further docs ended up being written, and made public?
I did write something that builds on it, yeah. It was about defining various proxies to optimize for (e.g., money, societal influence, connections to other EA organizations, followers of the organization’s newsletter (with near-term EA as their main interest), value-aligned people with computer science expertise, etc.) and how well they do in futures where we decide different interventions are most important. I didn’t want to make it public because it felt unpolished, and I was worried that some of the proxies could give outsiders the impression of instrumentalizing people.
Someone even helped me with Excel to produce a heat map of the results, weighted by the probability we assign to various interventions mattering most, and at the time this helped me clarify objections I had to EAF’s 2015/2016 strategic direction (we interacted little with other EA orgs and tried to build up capacity with animal advocacy, while always promoting cause neutrality with the intent of maybe pivoting to other causes later). It didn’t lead to many important changes right away, but we made major changes in 2017 that strongly reflected the takeaways I had sketched in those documents.
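The weighting described above can be sketched in a few lines. Here’s a minimal, hypothetical Python version; all proxy names, scenario labels, and numbers are invented for illustration (the original analysis was done in Excel, and its actual proxies and probabilities aren’t public):

```python
# Hypothetical sketch of scoring proxies against futures in which
# different interventions turn out to matter most. All values invented.

# Proxies one might optimize for (rows) and scenarios (columns).
proxies = ["money", "societal influence", "EA-org connections"]
scenarios = ["intervention A matters most", "intervention B", "intervention C"]

# How useful each proxy would be in each scenario (0-1 scale, made up).
usefulness = [
    [0.9, 0.7, 0.8],  # money
    [0.4, 0.9, 0.3],  # societal influence
    [0.6, 0.5, 0.9],  # EA-org connections
]

# Probability assigned to each intervention mattering most (made up).
probabilities = [0.5, 0.3, 0.2]

# Probability-weighted score per proxy. A heat map would instead color
# each cell of the weighted usefulness matrix rather than summing rows.
for proxy, row in zip(proxies, usefulness):
    score = sum(u * p for u, p in zip(row, probabilities))
    print(f"{proxy}: {score:.2f}")
```

The summed scores collapse the matrix into a single ranking of proxies; keeping the per-cell weighted values (as a heat map does) additionally shows which scenario is driving each proxy’s score.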