Tactical models to improve institutional decision-making
~ EA Geneva
~ Max, Konrad, and Nora, represented as ‘we’
This post presents reflections on how to improve the work of governments and international organisations. It focuses in particular on the role of institutional decision-making, as this seems to be a concrete and feasible avenue for increasing policymakers’ impact. This post does not try to explain why one should (not) work on improving policy-making.
First of all, we propose that policy-making can be approached systematically in roughly four steps:
Understand policy-making dynamics
Define tactics to approach policy-making
Implement techniques (e.g. calibration training)
Evaluate impact and feed learnings back to 1-2-3
Jess Whittlestone’s post on improving institutional decision-making provides useful high-level approaches:
test and evaluate existing techniques
research alternative techniques
foster adoption of techniques
direct more funding to the above
… which fall under 3. and 4.
Our post complements Whittlestone’s by presenting three models that inform 2. and thus help calibrate an outside actor’s approach to improving institutional decision-making. These models come from the literature review we conducted for forthcoming publications which attempt to cover point 1. of understanding policy-making dynamics.
Whenever this post mentions ‘policymaker’, we refer to an individual involved in the formal process of articulating and matching multiple stakeholders’ goals and means.
Institutional decision-making refers to the set of individual and collective decisions that are made by policymakers in interaction with other actors.
Note that it is not generally accepted that institutional decision-making directly leads to the creation of policies. Rather, policies result from a mix of many small day-to-day decisions and executive ones.
Common positions on policy-making in the community
Based on our interactions with EA community members, recent 80,000 Hours publications and podcasts, and the thematic focus of several talks at EA Global conferences in 2017 and 2018, we observe a growing interest in policy-making as a way to make progress on global priorities.
We also found that, when assessing whether one should work on improving policy-making, many EAs tend to make one or more of the following five independent claims:
the EA community should simply become/hire lobbyists and advocate for global priorities;
policy-making can only be effectively improved from the inside (e.g. take a policymaker job and move up in the hierarchy);
it is risky to work on policy-making now (e.g. due to limited knowledge about policy or idea inoculation);
working on policy-making is intractable or too costly; and/or
policy-making is worth improving as an outside actor to tackle global priorities (even if intractable short-term), but the EA community has little idea how.
We agree with claims 3 and 5 to a large extent and are unsure about 4. Claims 1 and 2 describe relevant strategies, but we disagree that they are the only viable ones. We believe that different people end up with a combination of the five conclusions above because of two cruxes:
‘improving policy-making’ has high Kolmogorov complexity; and
the community has little knowledge and experience about policy-making.
Some of our work with EA Geneva has been about improving (b) to systematically approach (a).
Three basic models to inform approaches
For an external actor targeting policymakers to improve their collective decision-making, we found three models helpful for thinking about how to allocate one’s limited resources across different techniques to have the best shot at influencing institutional decision-making for the better. To illustrate the three models, suppose the following hypothetical case:
Suppose a small sub-unit within the UK’s Department for International Development (DFID) works on a programme to eradicate non-communicable diseases in West Africa. Eight individuals (two senior policymakers, one senior ops staff, one junior staff, two consultants and two country officers) work together on deciding which diseases to tackle with which interventions (“policy instruments”). It is a two-year programme with $2 million of funding and a strong recommendation from DFID’s directors to combine the interventions’ implementation with ex ante research, evaluations, and an ex post report. Knowing this, both senior policymakers requested help from one consultant to report on the state of evidence on non-communicable diseases in West Africa and from the other on the possible evaluation process. The country officers are meant to provide field expertise, attest (or not) to the programme’s feasibility, and implement the programme. Both senior policymakers write the plan together with the junior staff. The senior ops staff handles communications, optimises working processes and prepares presentations. The deadline to submit the programme plan is in six months. After this date, the sub-unit hopes to receive a green light from the unit director and approvals from country offices and West African States.
Consider also the following:
Both senior policymakers are also involved in other programmes and have very limited time.
Both senior policymakers will progress in their career if the programme is accepted and implemented.
Both consultants will use the same method (for the evidence collection and the evaluation process) as they did a few years ago for an HIV case in South America.
The funding comes from taxes paid by UK citizens.
For a few years now, DFID has wanted its programmes to tackle systemic root causes rather than symptoms.
How would one approach the actors’ “institutional decision-making” here?
This is a relatively simple case with clearly defined actors and roles, a well-defined cause, one source of funding, available evidence, and involving micro interventions in selected areas. Policy cases may take much more complicated shapes and involve many more actors of different kinds, e.g. the amendment of a national law in a controversial area by politicians, bureaucrats and the public.
Who to target?
Most policy networks seem to have a high degree of centrality or contain pivotal agents (figure 1, Dente 2014, chapter 2), meaning that a few organisations or individuals have a disproportionate influence on the decision-making process. These key agents are often also the hardest to engage with, and targeting them directly is difficult. One will likely still have to engage a large part of the policy network to effect change. But keeping in mind who the key agents are is crucial to ensure that efforts do not go to waste out of ignorance of their outsized influence.
Figure 1. Shapes of policy networks
The DFID case illustrates both the influence of central agents and that of pivotal agents. First, both senior policymakers initiate and direct the creation of the programme. They made the hiring decisions and will be the main points of contact for the programme. Due to their place in the hierarchy and their responsibilities, their decisions will influence the programme to a larger extent than those of the junior staff, the ops staff, or the consultants. This argument holds for the six months of programme design.
Second, after the six months, pivotal agents play a crucial role. Here, the unit director and country officers make the final decision through approval or refusal.
In this case, targeting senior policymakers, the director and country officers is probably the best strategy. In other words, a rule of thumb is “as many agents as possible among the few most influential ones”.
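The intuition behind this rule of thumb can be sketched computationally. Below is a minimal, purely illustrative example: we invent a plausible interaction network for the hypothetical DFID sub-unit (the agents and ties are our assumptions, not data) and compute degree centrality, the fraction of other agents each agent directly interacts with. A real analysis would map the network empirically and likely use richer centrality measures.

```python
# Illustrative sketch: ranking agents in a hypothetical policy network
# by degree centrality. All agents and ties below are invented for the
# DFID example; a real network would need empirical mapping.

from collections import defaultdict

# Undirected ties: who interacts with whom during programme design.
ties = [
    ("senior_pm_1", "senior_pm_2"),
    ("senior_pm_1", "junior_staff"),
    ("senior_pm_2", "junior_staff"),
    ("senior_pm_1", "consultant_evidence"),
    ("senior_pm_2", "consultant_evaluation"),
    ("senior_pm_1", "senior_ops"),
    ("senior_pm_1", "country_officer_1"),
    ("senior_pm_2", "country_officer_2"),
    ("senior_pm_1", "unit_director"),
]

def degree_centrality(edges):
    """Fraction of the other agents each agent is directly tied to."""
    degree = defaultdict(int)
    nodes = set()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
        nodes.update((a, b))
    n = len(nodes)
    return {v: degree[v] / (n - 1) for v in nodes}

centrality = degree_centrality(ties)
for agent, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(f"{agent:22s} {score:.2f}")
```

Even in this toy network, the two senior policymakers dominate the ranking, which matches the qualitative argument above: a handful of agents sit on most of the interaction paths, so limited outreach resources are best spent on them.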
What to improve?
Decision-making is likely to vary across contexts and take different forms. The Stacey diagram (figure 2 from Geyer and Rihani 2010) helps to map out these different forms as a function of levels of agreement and certainty.
Figure 2. A Stacey Diagram
Some issues are technical, backed by strong evidence and widely supported by stakeholders (‘rational decision-making’). Other issues attract less agreement (‘political decision-making’) or cannot be settled by further information (‘judgemental decision-making’).
When stakeholders refuse to interact or disagree, and there is no information to inform decision-making, decision-makers face chaotic situations in which decisions entail unpredictable outcomes (‘chaos’).
The literature suggests that most policy decisions happen somewhere between these four areas: decision-making under partial agreement and partial certainty (‘complex decision-making’).
This suggests the need for a combination of strategies to decide which techniques must be implemented (matrix 1).
Matrix 1: strategies to improve collective ‘complex’ decision-making
The DFID case is characterised by uncertainty that can be reduced through ex ante research, and by an unclear level of agreement among the sub-unit, the unit director, country officers and West African States. However, since DFID emphasises the need to tackle systemic causes, significant uncertainty will likely remain because of complex research questions and the methodological challenges of producing generalisable evidence on systemic causes. So the unit can benefit from support both to reduce uncertainty to the extent possible and to deal with the remaining uncertainty in an intelligible manner (e.g. learn how to state it explicitly and to factor it into expected impact calculations).
Here, the level of agreement probably depends on other variables. If West African States were strongly against any programme on non-communicable diseases on their territory, then country officers and States might strongly disagree with the sub-unit’s proposal. A higher level of agreement could potentially be achieved through a more direct involvement of West African States in the programme’s development.
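One way to make the Stacey diagram concrete is to treat it as a rough classifier from levels of agreement and certainty to a decision-making mode. The sketch below does exactly that; the 0-1 scales and thresholds are our invention for illustration (the diagram itself treats the boundaries as fuzzy), but the five modes are the ones described above.

```python
# Illustrative sketch of the Stacey diagram as a classifier: map an
# issue's levels of agreement and certainty (both on an invented 0-1
# scale) to a decision-making mode. Thresholds are arbitrary choices
# for illustration; the real diagram has fuzzy boundaries.

def stacey_mode(agreement: float, certainty: float) -> str:
    if agreement >= 0.8 and certainty >= 0.8:
        return "rational"      # strong evidence, wide support
    if certainty >= 0.8:
        return "political"     # well understood, but contested
    if agreement >= 0.8:
        return "judgemental"   # supported, but more data cannot settle it
    if agreement <= 0.2 and certainty <= 0.2:
        return "chaos"         # no agreement, no information
    return "complex"           # partial agreement, partial certainty

# The hypothetical DFID programme: moderate agreement among
# stakeholders, persistent uncertainty about systemic causes.
print(stacey_mode(agreement=0.6, certainty=0.4))  # prints "complex"
```

Framed this way, the practical question for an outside actor is which lever to pull: raise certainty (research, evidence synthesis), raise agreement (stakeholder involvement), or help decision-makers operate well in the complex middle.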
When to act?
Timing drastically changes how one can affect the decision-making process (figure 3). Understanding windows of opportunity (time periods during which a larger share of decisions can be affected) is crucial (see for example Birkland 1997).
Figure 3. Possible window of opportunity dynamics
Before a window of opportunity (a), one can possibly equip decision-makers with skills and tools to make better decisions once the window occurs.
During a window of opportunity (b), a network can become crowded very quickly, and unless one has built exceptional relationships, it is hard to effect change. Nonetheless, it is the time during which evidence can be provided in a timely manner, and political agendas play an important role.
After a window of opportunity (c), the crowdedness of a policy domain often recedes only slowly, due to the momentum built up in the lead-up to a decision. The period after a window of opportunity can allow for decision-making support in implementing decisions or in preparing for the next window.
Policy agendas are generally fairly stable, and drastic changes happen rarely (see for example Jones et al. 2009). The re-assessment of annual budgets or the periods when agendas are being set are possible windows of opportunity. For example, a window of opportunity opened when the Millennium Development Goals were re-discussed, and it closed when the Sustainable Development Goals agenda was decided. Another example is the forthcoming replacement of the European Commission’s Horizon 2020 strategy.
The DFID case suggests a window of opportunity of six months, i.e. the time period during which the programme can be created. These six months also become more crowded (the two consultants join). Therefore, in this case, an outside actor may support decision-making with additional (counter-)evidence or by advocating for specific non-communicable diseases and particular policy instruments.
If one has access to the two senior policymakers before this window of opportunity, then one could, for example, provide calibration training, sensitisation to Bayesian thinking, or other techniques. If one has access only after this window of opportunity, then one could support the evaluation procedure and ensure that learnings are reported and will influence the programme in the future.
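The phase-dependent reasoning above can be condensed into a simple lookup, shown below as a sketch. The phase labels and candidate activities paraphrase the three periods (a), (b) and (c) and the DFID examples; this is a heuristic summary, not a tested model.

```python
# Illustrative sketch: matching outside support to the phase of a
# window of opportunity. Activities paraphrase the three periods
# described above; this is a heuristic lookup, not a tested model.

SUPPORT_BY_PHASE = {
    "before": [
        "calibration training",
        "sensitisation to Bayesian / probabilistic thinking",
        "relationship building with key agents",
    ],
    "during": [
        "timely provision of (counter-)evidence",
        "advocacy aligned with the political agenda",
    ],
    "after": [
        "support for implementing decisions",
        "evaluation, and reporting of learnings",
        "preparation for the next window",
    ],
}

def suggest_support(phase: str) -> list:
    """Return candidate support activities for a given phase."""
    return SUPPORT_BY_PHASE[phase]

print(suggest_support("before"))
```

The point of the table is the asymmetry it encodes: skill-building pays off before the window, evidence pays off during it, and evaluation pays off after it.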
Limits of our current knowledge
Our epistemic status on the usefulness of the three models across contexts is low because:
We do not know of specific organisations that have used them explicitly.
They only serve as pointers and cannot be counterfactually tested.
They result from our review of (mainly) the public policy literature, which is theoretical or qualitative and draws on a limited set of case studies.
Each model comes with many assumptions and nuances that take time to analyse, and we have not had the time to check all of them.
However, we do find them useful because:
They help unpack an opaque concept: ‘improving institutional decision-making’.
They help one think strategically and cautiously, because they raise more questions than answers.
We received feedback from policymakers that these models match their understanding of their work.
There are further limits to our knowledge that we deem important to address in the future (illustrated by the hypothetical DFID case):
How to improve the work of consultants other than by becoming consultants ourselves?
DFID: they will replicate data collection and evaluation methods that may not be appropriate to the specific programme.
How to improve the decisions of actors that have mixed motives?
DFID: both senior policymakers will progress in their career if the programme is accepted and implemented, which may lead them to prefer uncontroversial decisions and to stick to what is widely accepted in policy networks.
How to influence the decisions of actors that have limited time?
DFID: both senior policymakers are involved in other programmes and have limited time for extra training.
Should decision-making support be general (methods-based) or cause-specific?
DFID: shall one train policymakers in rational thinking or provide training on how to eradicate non-communicable diseases most effectively? This is partly resolved through timing considerations—but what works best?
If decision-making support is general, what should be part of it?
Calibration training? Training in probabilistic thinking? Sensitisation to varying strengths of evidence?
What are policymakers’ ideal learning curves? How much time does one need to nourish such learning curves and how often can and should one push the right reminders so that policymakers do not forget?
We believe that the EA community can benefit a lot from progressing on these questions.
Improving institutional decision-making has many moving parts. We presented some preliminary tactical models to approach it strategically. We did not aim to be exhaustive.
We are unsure about their validity and usefulness beyond the important questions they raise. We would really appreciate feedback.
We believe that working on policy as an outside actor currently involves the reduction of uncertainties and risks through knowledge acquisition. We will publish a research agenda on improving policy-making here in March 2019.
We chose a hypothetical case over a real one because we make normative claims further in the blog post. We also chose an area that is not among current EA priorities, so that the discussion centres on the approach to improving the decision-making process rather than on the case itself. Finally, we chose a relatively simple case to illustrate the models, rather than a more complex one, to avoid oversimplifying the models’ applications or overcrowding this post with complications.