Tactical models to improve institutional decision-making

~ EA Geneva

~ Max, Konrad, and Nora, represented as ‘we’

This post presents reflections on how to improve the work of governments and international organisations. It focuses in particular on the role of institutional decision-making, as this seems to be a concrete and feasible avenue for increasing policymakers’ impact. This post does not try to explain why one should (not) work on improving policy-making.

First of all, we propose that policy-making can be approached systematically in roughly four steps:

  1. Understand policy-making dynamics

  2. Define tactics to approach policy-making

  3. Implement techniques (e.g. calibration training)

  4. Evaluate impact and feed learnings back into steps 1–3

Jess Whittlestone’s post on improving institutional decision-making provides useful high-level approaches:

  • test and evaluate existing techniques

  • research alternative techniques

  • foster adoption of techniques

  • direct more funding to the above

… which fall under steps 3 and 4.

Our post complements Whittlestone’s by presenting three models that inform step 2 and thus help calibrate an outside actor’s approach to improving institutional decision-making. These models come from the literature review we conducted for forthcoming publications, which attempt to cover step 1, understanding policy-making dynamics.

Preliminary definitions:

  • Whenever this post mentions ‘policymaker’, we refer to an individual involved in the formal process of articulating and matching multiple stakeholders’ goals and means.

  • Institutional decision-making refers to the set of individual and collective decisions that are made by policymakers in interaction with other actors.

Note that it is not generally accepted that institutional decision-making directly leads to the creation of policies. Rather, policies result from a mix of many small day-to-day decisions and executive ones.

Common positions on policy-making in the community

Based on our interactions with EA community members, recent 80,000 Hours publications and podcasts, and the thematic focus of several talks at EA Global conferences in 2017 and 2018, we observe a growing interest in policy-making as a way to make progress on global priorities.

We also found that many EAs tend to make one or more of the following five independent claims when assessing whether one should work on improving policy-making:

  1. the EA community should simply become/​hire lobbyists and advocate for global priorities;

  2. policy-making can only be effectively improved from the inside (e.g. take a policymaker job and move up in the hierarchy);

  3. it is risky to work on policy-making now (e.g. due to limited knowledge about policy or idea inoculation);

  4. working on policy-making is intractable or too costly; and/​or

  5. policy-making is worth improving as an outside actor to tackle global priorities (even if intractable short-term), but the EA community has little idea how.

We agree with 3 and 5 to a large extent and are unsure about 4. Claims 1 and 2 describe relevant strategies, but we disagree that they are the only ones. We believe that different people end up with a combination of the five conclusions above because of two cruxes:

  (a) ‘improving policy-making’ has high Kolmogorov complexity (i.e. it is hard to describe concisely); and

  (b) the community has little knowledge of and experience with policy-making.

Some of our work with EA Geneva has been about improving (b) in order to systematically approach (a).

Three basic models to inform approaches

For an external actor targeting policymakers to improve their collective decision-making, we found three models helpful for thinking about how to allocate one’s limited resources across different techniques to have the best shot at influencing institutional decision-making for the better. To illustrate the three models, suppose the following hypothetical case [1]:

Suppose a small sub-unit works within the UK Department for International Development’s (DFID) programme on eradicating non-communicable diseases in West Africa. Eight individuals (2 senior policymakers, 1 senior ops staff, 1 junior staff member, 2 consultants, and 2 country officers) work together on deciding which diseases to tackle with which interventions (“policy instruments”). It is a two-year programme with 2 million dollars of funding and a strong recommendation from DFID’s directors to combine the implementation of interventions with ex ante research, evaluations, and an ex post report. Knowing this, both senior policymakers requested help from one consultant to report on the state of evidence on non-communicable diseases in West Africa and from the other consultant on the possible evaluation process. The country officers are meant to provide field expertise, attest to (or dispute) the programme’s feasibility, and implement the programme. Both senior policymakers write the plan, together with the junior staff. The senior ops staff handles communications, optimises working processes, and prepares presentations. The deadline to submit the programme plan is in six months. After this date, the sub-unit hopes to receive a green light from the unit director and approvals from country offices and West African states.

Consider also the following:

  1. Both senior policymakers are also involved in other programmes and have very limited time.

  2. Both senior policymakers will progress in their careers if the programme is accepted and implemented.

  3. Both consultants will use the same method (for the evidence collection and the evaluation process) as they did a few years ago for an HIV case in South America.

  4. The funding comes from taxes paid by UK citizens.

  5. For a few years now, DFID has wanted its programmes to tackle systemic root causes rather than symptoms.

How would one approach the actors’ “institutional decision-making” here?

This is a relatively simple case with clearly defined actors and roles, a well-defined cause, one source of funding, available evidence, and micro interventions in selected areas. Policy cases may take much more complicated shapes and involve many more actors of different kinds, e.g. the amendment of a national law in a controversial area by politicians, bureaucrats, and the public.

Who to target?

Most policy networks seem to exhibit a high degree of centrality or to contain pivotal agents (figure 1; Dente 2014, chapter 2), meaning that a few organisations or individuals have a disproportionate influence on the decision-making process. These key agents are often also the hardest to engage with, and targeting them directly is difficult. One will likely still have to engage a large part of the policy network to effect change. But keeping in mind who the key agents are is crucial to ensure that efforts are not wasted through ignorance of their outsized influence.

Figure 1. Shapes of policy networks

The DFID case illustrates the influence of both central and pivotal agents. First, both senior policymakers initiate and direct the creation of the programme. They made the hiring decisions and will be the main points of contact for the programme. Due to their place in the hierarchy and their responsibilities, their decisions will influence the programme to a larger extent than those of the junior staff, the ops staff, or the two consultants. This argument holds for the six months of programme design.

Second, after the six months, pivotal agents play a crucial role. Here, the unit director and country officers make the final decision through approval/​refusal.

In this case, targeting the senior policymakers, the unit director, and the country officers is probably the best strategy. In other words, a rule of thumb is to reach “as many agents as possible among the few most influential ones”.
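To make the rule of thumb concrete, here is a minimal sketch that ranks agents in a toy policy network by degree centrality (the number of direct working relationships). The network, agent names, and ties below are our own illustrative assumptions loosely mirroring the hypothetical DFID sub-unit, not data from any real case:

```python
# Rank agents in a hypothetical policy network by degree centrality.
# Every name and tie below is invented for illustration.

edges = [
    ("senior_pm_1", "senior_pm_2"),
    ("senior_pm_1", "consultant_1"),
    ("senior_pm_1", "consultant_2"),
    ("senior_pm_1", "junior_staff"),
    ("senior_pm_1", "unit_director"),
    ("senior_pm_2", "junior_staff"),
    ("senior_pm_2", "country_officer_1"),
    ("senior_pm_2", "country_officer_2"),
    ("senior_ops", "senior_pm_1"),
    ("unit_director", "country_officer_1"),
]

def degree_centrality(edges):
    """Count each agent's direct ties; more ties = more central."""
    counts = {}
    for a, b in edges:
        counts[a] = counts.get(a, 0) + 1
        counts[b] = counts.get(b, 0) + 1
    return counts

# "As many agents as possible among the few most influential ones":
# sort by centrality and focus engagement on the top of the list.
ranking = sorted(degree_centrality(edges).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking[:2])  # → [('senior_pm_1', 6), ('senior_pm_2', 4)]
```

A real analysis would also need measures such as betweenness centrality to catch pivotal gatekeepers (like the unit director, who controls final approval), which simple degree counts can miss.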

What to improve?

Decision-making is likely to vary across contexts and take different forms. The Stacey diagram (figure 2, from Geyer and Rihani 2010) helps to map out these different forms as a function of levels of agreement and certainty.

Figure 2. A Stacey diagram

Some issues are technical, backed by strong evidence, and widely supported by stakeholders (‘rational decision-making’). Other issues may be less prone to agreement (‘political decision-making’) or may benefit little from further information (‘judgemental decision-making’).

When stakeholders refuse to interact or disagree, and there is no information to inform decision-making, decision-makers face chaotic situations in which decisions entail unpredictable outcomes (‘chaos’).

The literature suggests that most policy decisions happen somewhere between these four areas: decision-making under partial agreement and partial certainty (‘complex decision-making’).

This suggests the need for a combination of strategies to decide which techniques should be implemented (matrix 1).

Matrix 1: strategies to improve collective ‘complex’ decision-making
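As a rough illustration of the Stacey regions described above, one could classify a decision context from its levels of agreement and certainty. The numeric cut-offs below are our own assumptions; the diagram itself treats both axes as continua without sharp boundaries:

```python
def stacey_region(agreement, certainty):
    """Map levels of agreement and certainty (both on a 0.0-1.0 scale)
    to a decision-making regime from the Stacey diagram.

    The 0.3/0.7 thresholds are illustrative assumptions, not part of
    the original diagram.
    """
    high = lambda x: x >= 0.7
    low = lambda x: x <= 0.3
    if high(agreement) and high(certainty):
        return "rational"     # strong evidence, wide support
    if low(agreement) and high(certainty):
        return "political"    # evidence exists, stakeholders disagree
    if high(agreement) and low(certainty):
        return "judgemental"  # support exists, more information helps little
    if low(agreement) and low(certainty):
        return "chaos"        # no agreement, no usable information
    return "complex"          # partial agreement, partial certainty

print(stacey_region(0.5, 0.5))  # → complex
```

The point of the sketch is the middle zone: everything that is neither clearly a corner case nor clearly chaotic falls into ‘complex’, which is where the literature suggests most policy decisions sit.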

The DFID case is characterised by uncertainty that can be reduced through ex ante research, and by an unclear level of agreement among the sub-unit, the unit director, country officers, and West African states. However, since DFID emphasises the need to tackle systemic causes, significant uncertainty will likely remain because of complex research questions and the methodological challenges of producing generalisable evidence on systemic causes. So the unit can benefit from support to reduce uncertainty to a limited extent and to deal with the remaining uncertainty in an intelligible manner (e.g. learning how to state it explicitly and factor it into expected impact calculations).

Here, the level of agreement probably depends on other variables. If West African states were strongly against any programme on non-communicable diseases on their territory, country officers and states might strongly disagree with the sub-unit’s proposal. A higher level of agreement could potentially be achieved through a more direct involvement of West African states in the programme’s development.
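One simple way to state uncertainty explicitly and factor it into an expected impact calculation, as suggested above, is a probability-weighted estimate over scenarios. Every number below is an invented illustration, not an estimate for the DFID case:

```python
# Probability-weighted expected impact for one hypothetical intervention.
# All probabilities and impact figures are invented for illustration.

scenarios = [
    (0.5, 10_000),  # intervention works roughly as the evidence suggests
    (0.3, 4_000),   # partial effect, e.g. systemic confounders intervene
    (0.2, 0),       # no effect; the evidence fails to generalise
]

# Stating uncertainty explicitly: the scenario probabilities must sum to 1.
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9

expected_impact = sum(p * impact for p, impact in scenarios)
print(expected_impact)  # → 6200.0
```

In practice the scenario probabilities are themselves uncertain, which is where techniques such as calibration training for the policymakers producing them become relevant.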

When to act?

Timing drastically changes how one can affect the decision-making process (figure 3). Understanding windows of opportunity, i.e. time periods during which a larger share of decisions can be affected, is crucial (see for example Birkland 1997).

Figure 3. Possible window of opportunity dynamics

Before a window of opportunity (a), one can possibly equip decision-makers with skills and tools to make better decisions once the window occurs.

During a window of opportunity (b), a network can become crowded very quickly, and unless one has built exceptional relationships, it is hard to effect change. Nonetheless, it is the time during which evidence can be provided in a timely manner, and political agendas play an important role.

After a window of opportunity (c), the crowdedness of a policy domain often recedes only slowly, due to the momentum previously built up in the lead-up to a decision. The period after a window of opportunity can allow for decision-making support for the implementation of decisions or for the preparation for the next window.

Policy agendas are generally fairly stable, and drastic changes happen rarely (see for example Jones et al. 2009). The re-assessment of annual budgets or the periods when agendas are being set are possible windows of opportunity. For example, a window of opportunity opened when the Millennium Development Goals were re-discussed, and it closed when the Sustainable Development Goals agenda was decided. Another example is the forthcoming replacement of the European Commission’s Horizon 2020 strategy.

The DFID case suggests a window of opportunity of six months, i.e. the time period during which the programme can be created. These six months also become more crowded (+2 consultants). Therefore, in this case, an outside actor may support decision-making with additional (counter-)evidence or by advocating for specific non-communicable diseases and particular policy instruments.

If one has access to the two senior policymakers before this window of opportunity, one could, for example, provide calibration training, sensitisation to Bayesian thinking, or other techniques. If one has access only after this window of opportunity, one could support the evaluation procedure and ensure that learnings are reported and will influence the programme in the future.

Limits of our current knowledge

Our confidence in the usefulness of the three models across contexts is low because:

  • We do not know of specific organisations that have used them explicitly.

  • They only serve as pointers and cannot be counterfactually tested.

  • They result from our review of (mainly) the public policy literature, which is theoretical or qualitative and rests on a limited set of case studies.

  • Each model carries many assumptions and nuances that take time to analyse, and we have not had the time to check all of them.

However, we do find them useful because:

  • They help unpack an opaque concept: ‘improving institutional decision-making’.

  • They encourage strategic and cautious thinking because they raise more questions than answers.

  • We received feedback from policymakers that these models match their understanding of their work.

There are further limits to our knowledge that we deem important to address in the future (illustrated by the hypothetical DFID case):

  • How to improve the work of consultants other than by becoming consultants ourselves?

    • DFID: they will replicate data collection and evaluation methods that may not be appropriate for the specific programme.

  • How to improve the decisions of actors that have mixed motives?

    • DFID: both senior policymakers will progress in their careers if the programme is accepted and implemented, which may lead them to prefer uncontroversial decisions and stick to what is widely accepted in policy networks.

  • How to influence the decisions of actors that have limited time?

    • DFID: both senior policymakers are involved in other programmes and have limited time for extra training.

  • Should decision-making support be general (methods-based) or cause-specific?

    • DFID: should one train policymakers in rational thinking or provide training on how to eradicate non-communicable diseases most effectively? This is partly resolved through timing considerations, but what works best?

  • If decision-making support is general, what should be part of it?

    • Calibration training? Training in probabilistic thinking? Sensitisation to varying strengths of evidence?

  • What are policymakers’ ideal learning curves? How much time does one need to nourish such learning curves, and how often can and should one send the right reminders so that policymakers do not forget?

We believe that the EA community can benefit a lot from progressing on these questions.

Three conclusions

  1. Improving institutional decision-making has many moving parts. We presented some preliminary tactical models to approach it strategically. We did not aim to be exhaustive.

  2. We are unsure about the models’ validity and usefulness beyond the important questions they raise. We would really appreciate feedback.

  3. We believe that working on policy as an outside actor currently involves reducing uncertainties and risks through knowledge acquisition. We will publish a research agenda on improving policy-making here in March 2019.

[1] We chose a hypothetical case over a real one because we make normative claims later in the post. We also selected an area that is not among EA’s current priorities, so as to discuss the approach to improving the decision-making process rather than the case itself. We chose a relatively simple case to illustrate the models, instead of a more complex one, to avoid oversimplifying the models’ applications or overcrowding this post with complications.