Original paper written by Max Khan Hayward (thanks for engaging with EA, and for doing so much work to further public philosophy!).
Thanks to Sci-Hub for the article access.
Epistemic status: This is a basic summary and commentary that I didn’t spend much time on, and my analysis may be too simple // full of holes. I’d love to hear additional thoughts from anyone who finds this interesting!
Utility cascades occur when a utilitarian’s reduction of support for an intervention reduces the effectiveness of that intervention, leading the utilitarian to further reduce support, thereby further undermining effectiveness, and so on, in a negative spiral.
This paper illustrates the mechanisms by which utility cascades occur, and then draws out the theoretical and practical implications.
Theoretically, utility cascades provide an argument that the utilitarian agent should sometimes either ignore evidence about effectiveness or fail to apportion support to effectiveness. Practically, utility cascades call upon utilitarians to rethink their relationship with the social movement known as Effective Altruism, which insists on the importance of seeking and being guided by evidence concerning effectiveness.
This has particular implications for the ‘Institutional Critique’ of Effective Altruism, which holds that Effective Altruists undervalue political and systemic reforms. The problem of utility cascades undermines the Effective Altruist response to the Institutional Critique.
There are cases wherein an act-utilitarian should “ostrich” (that is, refuse to update their judgments in the light of new evidence) if they want the best outcome. This poses a challenge for how act-utilitarians ought to prioritize moral and epistemic normativity.
Sometimes, rationally updating can lead to a utility cascade. If an altruist discovers that a charity they funded now seems to be less effective than they had thought, and pulls away funding as a result, the charity may become even less effective due to the loss of resources — which could lead the altruist to pull away even more funding, leading to a further drop in effectiveness...
The altruist may be apportioning their support based on effectiveness, but if a charity’s effectiveness is not independent of their support, there is a risk of cascade. (This risk becomes higher if many people apportion their support based on the same information, since more resources will be pulled away and effectiveness drops more sharply.)
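The feedback loop described above can be made concrete with a toy simulation. This is my own illustrative model, not anything from the paper: I assume a charity whose effectiveness depends partly on its funding level, a one-time shock that lowers its baseline effectiveness, and a donor who mechanically apportions next year's support to observed effectiveness. All parameter names and values are made up for illustration.

```python
def effectiveness(funding, baseline=0.3, scale=0.5, capacity=100.0):
    """Effectiveness rises with funding, up to a capacity cap.
    The low baseline represents a shock (e.g. a failed experiment)."""
    return baseline + scale * min(funding / capacity, 1.0)

def next_funding(current_funding, observed_effectiveness, threshold=0.9):
    """The donor cuts support in proportion to how far observed
    effectiveness falls below their acceptability threshold."""
    if observed_effectiveness >= threshold:
        return current_funding
    return current_funding * (observed_effectiveness / threshold)

funding = 100.0
for year in range(6):
    eff = effectiveness(funding)
    print(f"year {year}: funding={funding:6.1f}, effectiveness={eff:.2f}")
    funding = next_funding(funding, eff)
```

Because each funding cut lowers effectiveness, which triggers a further cut, funding spirals downward (here from 100 to under 10 in six rounds) even though the donor is "rationally" updating at every step.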
Even if the altruist in question can find other charities that seem better than the original (downgraded) charity, a utility cascade can still lead to permanent losses. For example:
The original charity may end up shutting down due to a temporary lapse in effectiveness or a failed (but worthy) experiment.
Or, to riff on an example from the paper, the most risk-intolerant backer of a risky venture might withdraw support, making the project riskier, leading the next-most risk-intolerant backer to withdraw support… until the venture no longer exists at all, even though nearly all funders saw it as valuable at the beginning of the cascade.
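The unraveling dynamic in that example can also be sketched in code. Again this is my own hypothetical illustration, not the paper's model: each backer tolerates at most some risk level, the venture's risk grows with each withdrawal, and a small initial shock to the risk level starts the chain.

```python
def run_cascade(risk_tolerances, base_risk=0.2, risk_per_exit=0.1):
    """Return the backers who remain after the cascade settles.
    Risk increases by risk_per_exit for each backer who withdraws."""
    backers = sorted(risk_tolerances)  # least risk-tolerant first
    withdrawn = 0
    while backers:
        current_risk = base_risk + risk_per_exit * withdrawn
        if backers[0] >= current_risk:
            break  # everyone remaining can live with the current risk
        backers.pop(0)  # the least risk-tolerant backer withdraws
        withdrawn += 1
    return backers

# Ten backers, all comfortable at the initial risk level of 0.2:
tolerances = [0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7]
print(run_cascade(tolerances, base_risk=0.2))  # everyone stays
print(run_cascade(tolerances, base_risk=0.3))  # prints [] — total unraveling
```

At the original risk level every backer stays; a modest shock (risk rising from 0.2 to 0.3) pushes out the most risk-intolerant backer, and each exit pushes the next backer past their limit until the venture is empty.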
“By the utilitarian’s own lights, this is a problem. And it is not anomalous. The preconditions that permit of utility cascades are not rare.”
There must be a charity/initiative/policy that can receive different degrees of support
Its effectiveness must depend in part on its level of support
“Most collective attempts to make the world better [...] instantiate these features.”
Why this problem can be hard to coordinate around: “While [act-utilitarians] can share information and make plans together, they cannot undertake to perform actions that conflict with their principles.”
From the paper’s conclusion:
Probably the only way to address the root causes of world misery is through structural reforms – the interventions with the highest utility were they to work are systemic and political. Whether or not they do work is in part dependent on how many people pursue them. But, in a world increasingly influenced by Effective Altruists, the likelihood of people pursuing these reforms is reduced by arguments that this is an inefficient strategy. Perhaps the world would be better, in utilitarian terms, if Effective Altruists would keep quiet about the difficulty of political reform.
Short and easy to read!
This kind of thing can definitely be an issue for EA, and it’s nice to see a published paper that gives these cases a catchy name to replace the more general “coordination problem.”
Memorable examples that help me hold utility cascades in mind as a single “unit” of thought; it’s especially nice that there are two different examples which illustrate different instances of the problem at hand.
The author makes assumptions about EA’s approach to political reform which seem years out of date (if they were ever accurate at all).
Seems to associate EA a bit too closely with pure act-utilitarianism, whereas, in my experience, EA is more practical: If we notice that a predictable/rational behavior pattern seems likely to lead somewhere bad, we take steps to break that pattern. We research political campaigns and highlight those which are worth pursuing; we use forms of reasoning beyond pure effectiveness calculation.
If we did live in a world where some highly unlikely level of coordination were required to get anything done, we might run into utility cascades more frequently. Fortunately, there are plenty of good opportunities for systemic change that don’t require this much risk-taking (e.g. the Center for Election Science’s Fargo approval voting campaign, and the rest of their gradual, city-by-city strategy).
It’s very hard to tell when you’re about to hit a utility cascade vs. when you are simply making a wise choice not to invest in something that isn’t worthwhile. It seems to me as though the latter case is far more common than the former, because most uses of funding won’t be nearly as good as the best uses of funding, and a low effectiveness score provides at least some evidence that you are looking at a non-“best” use of funding.
No matter what critiques you launch at EA, in the end you have to find some way of choosing a cause to fund. The author doesn’t try to present a formal method, which is of course fine, but they seem to lean toward the heuristic of “fund the sorts of things which worked in the past,” which isn’t very specific and doesn’t seem reliable. (As often happens when I see a critique of EA, I want to ask the author what they’d fund, and why that thing, and why not various other things.)
Overall, the paper identifies a real risk that does come up in EA funding, but I think the author is too quick to dismiss EA’s chances of reducing that risk in ways other than “selectively ignoring new evidence about effectiveness.”