Refining “improving institutional decision-making” as a cause area: results from a scoping survey
Summary
The improving institutional decision-making (IIDM) working group (now the Effective Institutions Project) ran a survey asking members of the EA community which topics they thought were in scope of “improving institutional decision-making” (IIDM) as a cause area. 74 individuals participated.
The survey results will help target the Effective Institutions Project’s priorities and work products going forward. For example, the list of in-scope topics will form the guardrails for developing a directory of introductory resources to IIDM.
Five out of the nine overarching fields we asked about were rated as ‘fully in scope’ by the median respondent. Leading amongst them was ‘whole-institution governance, leadership and culture’, with 77% of respondents rating it as fully in scope. Of the remaining fields, two were rated as ‘mostly in scope’ and two as ‘somewhat in scope’.
‘Determining goals and strategies to achieve them’ (e.g. ethics and global prioritisation research) had the smallest proportion of respondents rating it as at least somewhat in scope, although that proportion was still a majority (62%).
The respondents were demographically similar to the EA community as a whole but were more likely to work in government and policy and less likely to work in software engineering and machine learning / AI.
Why did we run a survey to scope out IIDM as a cause area?
As part of the IIDM working group (now the Effective Institutions Project), we ran a survey to help us refine the scope of improving institutional decision-making (IIDM) as a cause area (see the EA Forum post announcing the survey). The aim was to gauge the diversity of perspectives in the EA community on what “counts” as IIDM. This helps us understand what the community thinks is important and has the most potential for impact. We hope that the results will shape the rest of our work as a working group and provide a helpful starting point for others as well.
What did the survey involve?
We created a list of 67 topics based on input from a range of individuals involved in the IIDM community. The topics are mostly organised around academic disciplines and professional communities. Given that such a long list can be unwieldy, we then grouped these topics into overarching fields based on similarities such as the level of the decision-maker, the focus of the decision, etc. These fields would not be recognised as fields in academia, but they were useful in giving us broad-brush insight into the types of topics respondents may include as in scope.
In the survey, we first asked the respondent whether they thought each field was within the scope of IIDM (see the screenshot below).
Figure 1:
Respondents had the option to answer in more detail about each field. For each field for which they chose to do so, we asked whether they thought each of the topics listed under that field was within scope. As an example, one of the overarching fields was “Understanding and improving group and team dynamics in decision-making” and the topics within it were: improving group deliberation, improving voting (small scale e.g. in a committee), game theory, conflict resolution and procedural justice (see Figure 2).
Figure 2
We provided respondents with a short description of each topic. The list of topics and their descriptions can be found here. We also asked for feedback on our working definition of IIDM and on what activities the working group should pursue.
We recruited respondents to the survey by posting the survey on the EA Forum, IIDM Facebook group and IIDM Slack workspace (link to join here) and also contacting individuals who had previously shown an interest in the cause area or who we thought would have strong (including sceptical) opinions about IIDM. We are incredibly grateful to the 74 people who took the time to fill in the survey.
Results
Which overarching fields are considered in-scope for IIDM?
The nine fields were:
Understanding and improving whole-institution governance, leadership, and culture
Understanding and improving group and team dynamics in decision-making
Understanding how decisions (at all levels) are made and how they can be improved
Using evidence and data to improve decisions
Activities to support decision-making within government/international institutions
Monitoring and improving the ongoing performance of institutions and their employees/collaborators
Improving knowledge about the future to affect plans now
Improving individual-level judgement and decision-making
Determining goals and developing strategies to achieve them
As shown in Table 1 and Figure 3 below, five out of the nine fields were considered to be ‘fully in scope’ by the median respondent. Leading among them was ‘whole-institution governance, leadership and culture’, which was considered ‘fully in scope’ by 77% of respondents, and ‘mostly’ or ‘somewhat’ in scope by the others. The other fields rated as ‘fully in scope’ by the median respondent were considered to be at least somewhat in scope by the vast majority of other respondents.
Of the remaining fields, the median respondent rated two of the fields as ‘mostly in scope’ and the other two as ‘somewhat in scope’. Even among most of these lower-ranked fields, over 80% of respondents rated them as at least somewhat in scope. The one exception was ‘determining goals and strategies to achieve them’ (e.g. ethics and global prioritisation research), where a smaller majority of respondents (62%) rated the field as at least somewhat in scope. This was also the field that most polarised respondents, with the most common responses being ‘mostly out of scope’ (27%) followed by ‘fully in scope’ (23%) and ‘mostly in scope’ (23%). Differences between the two distinct elements of this field—goals and strategies—might partly account for this: in the topic-level ratings for this field, ‘strategic planning’ was rated as at least somewhat in scope by almost all respondents (96%), but none of the other topics, which focused more on goals, attracted this rating from more than 6 in 10 respondents.
Table 1: Fields by “in scope” rating
Field | % at least somewhat in scope | % fully in scope | Median response |
Whole-institution governance, leadership, and culture | 100% | 77% | Fully in scope |
Group dynamics in decision-making | 100% | 52% | Fully in scope |
How decisions (at all levels) are made | 96% | 68% | Fully in scope |
Using evidence and data to improve decisions | 95% | 60% | Fully in scope |
Decision-making support within government and international institutions | 90% | 67% | Fully in scope |
Monitoring and improving ongoing performance | 90% | 36% | Mostly in scope |
Improving knowledge about the future | 86% | 27% | Mostly in scope |
Individual-level judgement and decision-making | 84% | 25% | Somewhat in scope |
Determining goals and strategies to achieve them | 62% | 23% | Somewhat in scope |
Figure 3
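As an illustration of how a ‘median respondent’ rating and a ‘% at least somewhat in scope’ figure can be computed from ordinal responses like those in Table 1, here is a minimal sketch in Python. The scale labels mirror the survey’s response options, but the response counts are invented for illustration and are not the actual survey data.

```python
# Sketch: summarising ordinal survey responses. The scale labels mirror
# the survey's options; the example counts are illustrative only.

SCALE = [
    "Fully out of scope",
    "Mostly out of scope",
    "Somewhat in scope",
    "Mostly in scope",
    "Fully in scope",
]
RANK = {label: i for i, label in enumerate(SCALE)}

def median_rating(responses):
    """Map responses to ranks, take the lower median, map back to a label.
    ('Not sure' responses should be excluded before calling this.)"""
    ranks = sorted(RANK[r] for r in responses)
    return SCALE[ranks[(len(ranks) - 1) // 2]]

def pct_at_least_somewhat(responses):
    """Share of responses rated at least 'Somewhat in scope'."""
    threshold = RANK["Somewhat in scope"]
    return 100 * sum(RANK[r] >= threshold for r in responses) / len(responses)

example = (
    ["Fully in scope"] * 10
    + ["Mostly in scope"] * 5
    + ["Somewhat in scope"] * 3
    + ["Mostly out of scope"] * 2
)
print(median_rating(example))          # Mostly in scope
print(pct_at_least_somewhat(example))  # 90.0
```

Using the lower of the two middle ranks for an even-sized sample ensures the result is always an actual scale label rather than an average of two categories.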
Which specific topics were considered in scope for IIDM?
Topics generally considered to be more in scope
The topics in Table 2 below were considered to be fully in scope by the median respondent. These topics also tended to have near-unanimous agreement about being at least somewhat in scope (implementation science is somewhat of an exception, but only because it had a higher share of ‘not sure’ responses). A few other topics also had near-unanimous agreement about being at least somewhat in scope but were rated by the median respondent as ‘mostly’ rather than fully in scope. These were: systems design / systems thinking, behavioral insights / science, strategic foresight, and scenario planning.
Table 2: Topics that the median respondent considered to be fully in scope
Topic | % fully in scope | % at least somewhat in scope |
Institutional design / governance | 85% | 100% |
Improving group deliberation | 77% | 100% |
Evidence-based/informed decision-making | 76% | 97% |
Incentive design | 67% | 97% |
Decision analysis / research | 64% | 96% |
Organizational behavior | 61% | 97% |
Mechanism design | 56% | 100% |
Evidence synthesis and translation | 53% | 94% |
Organizational culture | 52% | 97% |
Implementation science | 50% | 88% |
Topics generally considered to be less in scope
The topics in the table below were considered to be only ‘somewhat’ in scope by the median respondent. None of the topics received a median rating of ‘fully out’ or ‘mostly out’ of scope.
Table 3: Topics that the median respondent considered to be somewhat in scope
Topic | % at least somewhat in scope | % fully in scope |
Progress studies | 50% | 23% |
Productivity | 53% | 10% |
Ethics / axiology | 54% | 19% |
Global prioritization research | 58% | 23% |
Merging organizations | 59% | 3% |
Design thinking / user-centered design | 62% | 27% |
Epistemology | 65% | 26% |
Building new organizations | 66% | 9% |
Innovations in financial instruments | 67% | 12% |
Science communication | 68% | 18% |
Meta-science / meta-research | 68% | 26% |
Innovations in contracting | 71% | 12% |
Innovation | 75% | 12% |
Intelligence operations and analysis | 75% | 25% |
Political science | 81% | 25% |
Data science | 85% | 18% |
Nowcasting | 88% | 21% |
Additional topics suggested by respondents
We also asked respondents to suggest additional topics that they thought were within the scope of IIDM but weren’t covered in the survey. The additional topics suggested were: social epistemology, practical long-termist decision-making, judicial decision-making and political theory (in particular the study of democracy). We did not then ask the respondents to indicate whether they thought the additional topics were in scope or not as we assumed that they would respond at least “somewhat in scope” given that they had suggested them. To proxy whether these suggested topics should be considered in scope, we’ve listed below the topics which we consider to be closest to these suggestions and the percentage of respondents who thought that those topics were at least somewhat in scope:
Suggested topic | Definition | Closest field or topic | % of respondents who indicated that the closest topic was at least somewhat in scope |
Social epistemology | The study of knowledge as a collective achievement. It deals with questions such as “how should I revise my beliefs given that my peers disagree with me?” and “Can we attribute beliefs to groups instead of individuals?” | Group deliberation | 100% |
Practical long-termist decision-making | How do we decide what to do when we have massive uncertainty about the flow through effects of our actions? | The field of “Improving knowledge about the future to affect plans now” | 86% |
Political theory | Political theory is the study of how we do and should think about the nature and organisation of political life and its limits. It deals with questions such as “How might a legitimate or just state be constituted?” and “What gives rulers the authority to rule, and do citizens have a duty to obey?” | Political science | 81% |
Judicial decision-making | Decisions by judges (in contrast to decisions in the executive or legislative branch). | Procedural justice / Anti-corruption activities (e.g. whistleblowing) / Individual-level judgement and decision-making | 73% / 81% / 84% |
Other notable findings
The ratings provided by respondents who considered themselves to be working in an IIDM-related field (N=31) were broadly similar to those provided by those who said they were not or were not sure (N=36). In particular, the top two-ranked fields and bottom-ranked fields (according to the % responding at least somewhat in scope) were identical for both groups.
Nevertheless, there were two key differences between the two groups. First, those who said they worked in an IIDM-related field rated many fields as being more in scope than those who did not say they worked in an IIDM-related field. For example, 77% of these respondents said ‘determining goals and strategies to achieve them’ was at least somewhat in scope, compared to 50% of those who did not say they worked in an IIDM-related field. While we can’t say for sure what accounts for these differences, two possible explanations are: i) those who consider IIDM to be a high-priority cause area self-select into IIDM-related jobs; ii) professional experience with IIDM-related work increases beliefs about many of the fields we asked about being in scope.
Second, the ranking of ‘decision-making support within government/international institutions’ was much higher among those who said they worked in an IIDM-related field (third ranked; 100% at least somewhat in scope) than among those who did not (second-last ranked; 81% at least somewhat in scope).
Who took the survey?
67 out of 74 respondents (91%) shared information about themselves. Of these:
Around 9 out of 10 respondents considered themselves to be a part of the EA community, with most being involved for 1-3 years (39%) or 4 or more years (42%). Of those who did not consider themselves to be a part of the EA community, 5 out of 6 considered themselves to be working in an IIDM-related field.
Close to 1 in 2 considered themselves to be working in an IIDM-related field, while 34% did not and 19% were unsure.
Around 9 out of 10 respondents were based in North America, Western Europe or Australia, with the remainder spread between Africa (1%), Eastern Europe (and former Soviet Bloc) (3%), Latin America and the Caribbean (3%), and the Middle East, Asia and the Pacific (not including Australia) (4%).
Around 2 out of 3 respondents were male, while 28% were female and 3% gave another answer.
Around 2 out of 3 were aged between 24 and 34. Smaller shares of respondents were aged 18-23 (17%), 35-49 (17%) or at least 50 (3%).
We also asked respondents about the areas in which they had at least 3 years of substantial work experience or graduate study with major focus, with the list of areas taken from the 2019 EA Survey. The most common areas of experience were government and policy (24% of respondents), management (22%) and economics (22%). Relative to the 2019 EA Survey, other overrepresented areas of experience included: movement building / public speaking / campaigning (16%), consulting (16%) and philosophy (13%), while software engineering was substantially underrepresented (9%, compared with 27% in the EA Survey). We didn’t have any respondents with a background in AI safety, machine learning, asset management, or personal assistance.
Limitations of the scoping survey
There were several limitations to the survey:
Some respondents felt that the way they should answer the survey wasn’t made clear enough
The breadth of the topics varied: some very broad topics may have been more likely to be considered at least somewhat in scope, because counting entire academic disciplines (e.g. political science) as out of scope may have felt incautious; conversely, broader topics may also have been less likely to be considered fully in scope
The survey design, which was intended to reduce the time burden on respondents, meant that different samples answered for different fields, so we need to be cautious when comparing the responses for topics across fields
Small sample size
Respondents were broadly representative of the EA community in terms of the region they live in and their gender, but less representative in terms of professional expertise and age. We don’t know to what extent the respondents were representative of people doing IIDM work more generally.
Taking these considerations into account, the most robust results, from the perspective of both sample size and inter-field comparability, are the results for the overarching fields as displayed in Figure 3 (rather than for the individual topics), although it is worth keeping in mind that these aren’t fields with established definitions outside of this survey. We have provided more detail on the limitations here.
How to define IIDM
Following the questions about whether the topics were in scope, we asked for feedback on our current working definition of IIDM:
“IIDM is about increasing the technical quality and effective altruism alignment of the most important decisions made by the world’s most important decision-making bodies. It emphasizes building sustained capacity to make high-quality and well-aligned decisions within powerful institutions themselves.”
A large majority of survey respondents who provided a response to the request for feedback on the definition (N=43) said that they disliked having “increasing … effective altruism alignment” within the definition. The most common concern expressed was that it may put off those outside of the EA community from engaging with IIDM: “outsiders may be a bit suspicious if we say that we try to make institutions aligned with our ideology”. Several respondents went further, arguing that IIDM-like initiatives should be explicitly aligned with institutions’ own preferences. (For organisations that originated within or work in close partnership with the EA community, those preferences would naturally align with the goals of effective altruism.) For others, it was more of a strategic consideration: “IIDM’s comparative advantage is not in the EA-alignment part”.
Focusing on “the most important decisions made by the world’s most important decision-making bodies” was also seen by some respondents to be at the cost of considerations of neglectedness and tractability in addition to the importance of the decision.
We intend to take this feedback into account alongside other feedback we have received over the last 6 months as we refine the definitions we use both for IIDM in general as well as its component parts.
Prioritisation of activities
How did respondents prioritise activities to develop IIDM as a cause area?
We also asked respondents to score a range of potential activities the Effective Institutions Project could undertake to develop IIDM as a cause area, where a score of zero was “low priority” and five was “high priority”. As shown in Table 4 below, the only activity to receive the top score from the median respondent was “synthesise the existing evidence on ‘what works’ to improve institutional decision-making across academic fields”. Most of the other activities received a priority score of 4 out of 5 from the median respondent and received a score of at least 3 out of 5 from at least three-quarters of respondents.
The two activities that received the lowest score from the median respondent were ‘compare IIDM to other cause areas’ and ‘help people start new organisations’ (2 out of 5). It is also notable that providing careers advice and funding recommendations (two common tools in an EA toolbox) also ranked lower down the priority list, with a median score of 3 out of 5. This may be because some of the higher-ranked activities are foundational prerequisites for these more practical outputs.
Table 4: IIDM activities by priority rating
Activity | % priority of at least 3/5 | % priority of 5/5 (‘high priority’) | Median score (0-5) |
Synthesise the existing evidence on “what works” in IIDM across academic fields | 94 | 53 | 5 |
Foster an IIDM community within the EA community | 89 | 27 | 4 |
Coordinate between actors within the EA community doing aligned work | 87 | 27 | 4 |
Evaluate interventions to improve institutional decision-making | 85 | 42 | 4 |
Build bridges between the EA community and people / organizations with aligned goals outside of the EA community | 85 | 25 | 4 |
Develop a theoretical framework for prioritising institutions or interventions within IIDM | 84 | 25 | 4 |
Develop a research agenda for IIDM | 84 | 35 | 4 |
Build bridges between the EA community and people working at / with important institutions | 81 | 35 | 4 |
Provide practical support for individuals and teams within institutions | 76 | 25 | 4 |
Map out the institutional landscape for a particular cause area to identify gaps | 75 | 20 | 4 |
Provide careers advice | 70 | 13 | 3 |
Provide funding recommendations | 66 | 9 | 3 |
Help people start new organisations | 47 | 4 | 2 |
Compare IIDM to other cause areas | 38 | 4 | 2 |
The prioritisation scores of people who said they worked in an IIDM-related field were similar to those who didn’t, particularly regarding the high priority of synthesising the existing evidence and the lower priority of the bottom-ranked activities. However, there were some differences between the two groups on other activities. In particular, “Develop a theoretical framework for prioritising institutions or interventions within IIDM” was more likely to be scored highly by those who worked in an IIDM-related field (92% responding at least 3 out of 5, vs. 82%) while “Develop a research agenda for IIDM” was less likely to be scored highly by this group (79% vs. 89%).
In our December blog post, we identified a number of key initiatives that we would focus on in 2021. We outline below what percent of survey respondents gave these initiatives scores of at least 3 out of 5, along with the median score:
“Synthesise the existing evidence on “what works” in IIDM” (94%; median = 5/5)
Community Engagement and Development: “Foster an IIDM community within the EA community” (89%; 4/5), “Coordinate between actors within the EA community doing aligned work” (87%; 4/5), “Build bridges between the EA community and people / organisations with aligned goals outside of EA” (85%; 4/5) and “Build bridges between the EA community and people working at or with important institutions” (81%; 4/5)
“Developing a research agenda for IIDM” (84%; 4/5)
“Developing a theoretical framework for prioritising institutions or interventions within IIDM” (84%; 4/5)
[Resource directory: no direct equivalent activity included in the survey as this was a discrete project already underway]
These key initiatives overlap almost exactly with those that received the highest scores from survey respondents. The only difference was the high ranking of “Evaluate interventions to improve institutional decision-making” (not a key initiative identified in the December blog post), which was scored as a three or above by 85% of respondents, coming ahead of the research agenda, the framework for prioritising institutions and some of the key community engagement activities.
Additional comments encouraged us to further prioritise: “Since this will be a relatively new cause area to be explored in the EA community I would be selective on what you can and want to achieve taking for instance capacity into consideration” and also to take into account what value we can add: “I think a lot of things are plausibly good ideas but it depends on the comparative advantage of the team.”
What’s next?
While this feedback from the community is not the only input into our prioritisation process, having a systematic way to consider which fields and topics are more in scope within IIDM helps us ensure we’re building something that reflects what the community believes would be most impactful for developing more effective institutions. Since a community of practice is defined partly by a common knowledge base, a list of more in-scope topics gives us an idea of what core knowledge is important for activities ranging from educational programming to career development. As an immediate next step, we’ll be using the results from the scoping survey to inform the extent to which we focus on different fields within the resource directory. For example, we’re likely to provide in-depth resources for topics considered more in scope, whilst providing only broad, high-level overviews of fields considered less in scope. The results of the survey may also shape how we prioritise our external engagement efforts, e.g. by focusing our attention on forging closer partnerships with organisations that concentrate on the more in-scope topics. Finally, it gives us a starting point for where to look for interventions that may help improve institutional decision-making as part of the evidence synthesis.
With respect to our activities, on the whole, the survey results mostly seem to affirm our current plans for the group. We are currently considering what events and spaces are needed to foster an IIDM community within EA. We are also creating a process for a collaborative synthesis of knowledge about IIDM, which could act as a starting point for an academic synthetic review but is less formal (and so allows us to synthesise more and different types of knowledge). Both the resource directory and theoretical framework for prioritising institutions are currently underway. Whilst these will be somewhat living documents, the bulk of the work will be completed shortly.
We have made one change in response to this feedback: given that none of the core team are academics and developing a formal research agenda scored poorly relative to the other key initiatives we’ve previously identified, we have parked this activity for the time being (although identifying unanswered key questions will likely be part of other work). On the other hand, even though it scored highly, we have decided not to prioritise evaluating interventions to improve institutional decision-making. Although there are undoubtedly many interventions that have not been formally evaluated in practical contexts, we believe that we get “greater bang for our buck” in synthesising the existing evidence and expect the synthesis to help us identify the gaps in knowledge and provide a roadmap for “field work” in the future.
Thank you!
Thank you so much to everyone who participated! If you’d like to chat to us about any of the above or about getting involved in any of our activities, please contact us at improvinginstitutions@gmail.com
About the authors
Dilhan is an advisor at the Behavioural Insights Team, focusing on designing evaluations and helping organisations make better use of data in decision-making. He analysed responses to most of the survey questions.
Ishita works at IDinsight. She brings experience working to support development sector leaders in using evidence in their decision-making. She developed the survey.
Vicky is a data science manager at the What Works Centre for Children’s Social Care, and uses her skills to improve the evidence base for social interventions. She proposed and coordinated the survey.
The Effective Institutions Project co-leaders Ian David Moss and Laura Green contributed to the list of topics and provided feedback on this post.
With thanks also to Aman Patel, Angela María Aristizábal, Marisa Jurczyk, Michael Noetel and Willem Roekens for contributing to the list of topics.
Appendix: Full topic-level results
Whole-institution governance, leadership and culture
Note: 100% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 89%.
Group dynamics in decision-making:
Note: 100% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 83%.
How decisions (at all levels) are made:
Note: 96% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 91%.
Using evidence and data to improve decisions:
Note: 95% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 84%.
Decision-making support within government and international institutions:
Note: 90% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 84%.
Monitoring and improving ongoing performance of institutions:
Note: 90% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 77%.
Understanding and improving interactions between institutions and across institutional ecosystems:
Note: Ratings were not collected for this field as a whole, from the overall sample, due to a mistake in survey implementation. The average proportion reporting at least somewhat in scope, across the topics we listed in this field, from those who provided topic-level ratings, was 80%.
Improving knowledge about the future to affect plans now:
Note: 86% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 90%.
Individual-level judgement and decision-making:
Note: 84% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 71%.
Determining goals and developing strategies to achieve them:
Note: 62% of the overall sample reported that this field was at least somewhat in scope. The average proportion across the topics we listed in this field, from those who provided topic-level ratings, was 64%.
Thanks for publishing the results of the survey! I found it very interesting and informative.
When addressing the scope of IIDM, I think that the survey might conflate a bit between which topics can be considered in scope based on the approaches and goals taken and which topics should be prioritized when we want to improve decision making. So for example, I wouldn’t take these results to mean that “Whole-institution governance, leadership, and culture” should be the most likely prioritized topic in IIDM.
Thanks EdoArad—hopefully I’ve answered your point when replying to Jonas below but let me know if not :)
Yea, mostly 😊 There might be a problem in the other direction, where people who took the survey had in mind “which topics are high priority”, and that may have caused either early elimination of potentially relevant topics or, more likely I think, a scope creep where somewhat related topics which seem important might find their way in.
But I’m not that concerned about that! I’m sure that you can handle the upcoming strategic considerations and prioritization. It was mostly important for me to add that comment for readers who might make the mistake of taking your results as a prioritization within IIDM
Ok great, thanks. :)
I’m excited that there’s now more work happening on Effective Institutions / IIDM!
Some questions and constructive criticism that’s hopefully useful:
It seems that you’re starting out with the assumption that IIDM is a useful category/area, and that figuring out its scope is helpful for determining what’s the most impactful. Was there a particular reason for taking the intermediate step via the scope/definition of IIDM? I personally would be curious to learn which kinds of activities people find most promising in this area, and why so. In comparison, the scope question might just track a ‘verbal dispute’ rather than opinions on ground truths. (Edit: Looks like EdoArad pointed out something similar above.)
Relatedly, the survey gives a picture of what some people interested in IIDM believe about some high-level abstract categories. I wonder if the survey also gave you any insight into the types of activities that people think we should work on. E.g., what specific things do people have in mind when they talk about “Institutional design / governance”, and why exactly do they think it’s important? Does their reasoning hold up on closer inspection? I personally would feel very excited to see more object-level discussion of that kind. Perhaps a small number of people who have thought about IIDM carefully and systematically could share their object-level arguments on which approaches seem the most promising to them.
Hi Jonas, I can share some personal reflections on this. Please note that the following are better described as hunches and impressions based on my experiences rather than strongly held opinions—I’m hopeful that some of the analysis and knowledge synthesis EIP is doing this year will help us and me take more confident positions in the future.
Re: institutional design/governance specifically, I would guess that this scored highly because of its holistic and highly leveraged nature. Many institutions are strongly shaped and highly constrained by rules and norms that are baked into the way they operate from the very beginning or close to it, which in turn can make other kinds of reforms much more difficult or less likely to succeed. The most common problem I see in this area is not so much bad design as lack of design, i.e., silos and practices that may have made sense at one particular moment for one particular set of stakeholders, but weren’t implemented with any larger vision in mind for how everything would need to function together. This is a common failure mode when organizations grow opportunistically rather than intentionally. My sense is that opportunities to make interventions into institutional design and governance are few and far between, but can be tremendously impactful when they do appear. It’s generally easiest to make changes to institutional design early in the life of an institution, but because the scale of operations is often smaller and the prospects for success unclear at that point, it’s not always obvious to the participants how much downstream impact their decisions during that period can have.
One of the biggest bottlenecks to improved decision-making in institutions is simply the level of priority and attention the issue receives. There tends to be much more focus in institutions on specific policies and strategies than on the process by which those priorities are determined. At the same time, institutional cultures tend to reflect their leaders’ priorities, especially if the leaders are in place for a while. Thus, I’m optimistic about interventions that target the selection and recruitment of leaders with an eye toward choosing people who understand the importance of decision-making processes and are committed to making high-quality decision-making a priority in the organizations they come into.
I think there’s a version of moral circle expansion that is very relevant to institutional contexts. Institutions tend to prioritize first and foremost their direct stakeholders, i.e. the interests of people close to the institution. If more of them took seriously the effects of their decisions on everyone, not just those who are their primary voting constituents or intended beneficiaries or paying customers, that would represent a dramatic cultural shift that would make lots of other improvements more feasible. I see this as more of a long-term strategy that will not be easy to pull off, but the potential benefits from making progress on this dimension are massive.
Thanks Jonas. We / I are also really interested in activities that people find promising within this area! The idea with the survey was partly to connect IIDM to categories which exist in other professional communities and academic literatures to help us understand what are considered promising approaches in those fields and allow us to build on existing knowledge.
I like the move from IIDM to Effective Institutions :)
Ian David Moss mentioned some overlap between IIDM and “improving science”, which is something I’ve been thinking about a bit lately. From this survey, I think that overlap exists in at least:
Funding mechanisms.
Designing mechanisms to align incentives more with social merit or scientific merit.
Perhaps anything else that makes academic institutions more efficient.
Thanks EdoArad! Yes, I think there are definitely parallels. I was a bit personally disappointed not to see meta-science considered more within the scope of IIDM, as it’s something I’m very interested in too. And glad you like the name! :)
I have the impression you asked people: is discussing dogs or cats within the scope of improving decisions in animal welfare? I would be very surprised if anybody disagreed.
It is a pity that you stop at presenting the results. I believe the interesting part of the story is the reason why some people disagreed. Would those reasons make the positive respondents change their minds? Are those negative answers a cause for concern for IIDM?
I believe this is a more interesting question, and it certainly feels like IIDM is working toward a transparent and fair decision-making approach.
However, the results are presented but not discussed. I would be curious to hear an analysis of whether the population of respondents could have had any impact on the results.
For example, the lowest-priority item, “Compare IIDM to other cause areas”, could rank low because respondents are already well aware of the topic, or because they imagine the findings might not tell a favourable story.
Thanks Vhanon. We did have some open text boxes so that we could pick up a bit more of the reasons why people gave the answers that they did. We’ve scattered those throughout the post, so it’s perhaps a bit less obvious where we’ve included that information. I don’t have answers to the questions you’re posing (e.g. what would make respondents change their minds?), but some extra snippets that I thought were interesting, though they didn’t make the final cut, concerned considering the decisions of non-human agents and where to place interventions to shift people’s values towards longtermism. The comments on activities tended to encourage us to prioritise based on the skillset of the team.