What are the major risks or downsides that may occur, accidentally or otherwise, from efforts to improve institutional decision-making?
How concerned are you about these risks (how likely do you think they are, and how bad would they be if they happened)?
As part of the working group’s activities this year, we’re developing a prioritization framework for selecting institutions to engage with. In setting up that framework, we realized that the traditional Importance/Tractability/Neglectedness schematic has no explicit consideration for downside risk, so we’ve added one in the context of what it would look like to engage with a given institution. With the caveat that this is still in development, here are some mechanisms we’ve come up with by which an intervention to improve decision-making could cause more harm than good (a rough sketch of how downside risk might enter the scoring follows the list):
1. The involvement of people from our community in a strategy to improve an institution’s decision-making reduces the chances of that strategy succeeding, or its positive impact if it does succeed
(This seems most likely to be a reputation/optics effect, e.g. for whatever reason we are not credible messengers for the strategy, or we bring controversy to the effort where it didn’t exist before. It will be most relevant where other stakeholders or players in the system already have the capacity to make a change, so that there is something to lose from our getting involved.)
2. The strategy selected leads to worse outcomes than the status quo due to poor implementation or an incomplete understanding of its full implications for the organization
(One way I’ve seen this go wrong is with reforms intended to increase the amount of information available to decision-makers at the cost of some ongoing investment of time. Often, too little attention is paid to ensuring the additional information is actually used, with the result that the benefits of the reform aren’t realized but the cost in time remains.)
3. A failed attempt to execute a particular strategy at the next available opportunity crowds out what would otherwise be a more successful strategy in the near future
(This one could go either way; sometimes it takes several attempts to get something done and previous pushes help to lay the groundwork for future efforts rather than crowding them out. However, there are definitely cases where a particularly bad execution of a strategy can poison critical relationships or feed into a damaging counter-narrative that then makes future efforts more difficult.)
4. The strategy succeeds in improving decision quality at that particular institution, but it doesn’t actually improve world outcomes because of insufficient altruistic intent on the part of the institution
(We do define this sort of value alignment as a component of decision quality. But since it’s only one element, it would theoretically be possible to engage in a way that focuses solely on the technical aspects of decision-making, only to see the improved capability directed toward actions that cause net global harm even if they are good for some of the institution’s stakeholders. I think there’s a lot our community can do in practice to mitigate this risk, but in some contexts it will loom large.)
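To make the downside-risk adjustment concrete, here is a toy sketch of how it could enter an ITN-style score. This is purely illustrative: the function, variable names, and scales below are hypothetical and not part of our actual framework, which is still in development.

```python
# Toy sketch only: an expected-value score for engaging with an
# institution, extending the Importance/Tractability/Neglectedness
# product with an explicit downside-risk term. All names, scales,
# and weights here are hypothetical, not our actual framework.

def engagement_score(importance: float,
                     tractability: float,
                     neglectedness: float,
                     p_backfire: float,
                     harm_if_backfire: float) -> float:
    """Return a rough expected value of engaging.

    The first term is the familiar ITN product; the second subtracts
    the expected harm from failure modes like those listed above
    (reputational damage, crowding out better strategies, etc.).
    """
    upside = importance * tractability * neglectedness
    downside = p_backfire * harm_if_backfire
    return upside - downside

# A high-upside opportunity can still score poorly if engagement
# carries a serious chance of backfiring.
print(engagement_score(8, 6, 7, p_backfire=0.3, harm_if_backfire=800))
# -> 336 - 240 = 96.0
```

The point of the subtraction is just that any of the four mechanisms above can flip the sign of an otherwise attractive opportunity; in practice we’d expect this assessment to be qualitative rather than a single number.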
I think all of these risks are very real but also ultimately manageable. The most important way to mitigate them is to approach engagement opportunities carefully and, where possible, in collaboration with people who have a strong understanding of the institutions and/or individual decision-makers within them.