Utility Cascades

Link post

Original paper written by Max Khan Hayward (thanks for engaging with EA, and for doing so much work to further public philosophy!).

Thanks to Sci-Hub for the article access.

Epistemic status: This is a basic summary and commentary that I didn’t spend much time on, and my analysis may be too simple / full of holes. I’d love to hear additional thoughts from anyone who finds this interesting!

Abstract

Utility cascades occur when a utilitarian’s reduction of support for an intervention reduces the effectiveness of that intervention, leading the utilitarian to further reduce support, thereby further undermining effectiveness, and so on, in a negative spiral.

This paper illustrates the mechanisms by which utility cascades occur, and then draws out the theoretical and practical implications.

Theoretically, utility cascades provide an argument that the utilitarian agent should sometimes either ignore evidence about effectiveness or fail to apportion support to effectiveness. Practically, utility cascades call upon utilitarians to rethink their relationship with the social movement known as Effective Altruism, which insists on the importance of seeking and being guided by evidence concerning effectiveness.

This has particular implications for the ‘Institutional Critique’ of Effective Altruism, which holds that Effective Altruists undervalue political and systemic reforms. The problem of utility cascades undermines the Effective Altruist response to the Institutional Critique.

My notes

  • There are cases wherein an act-utilitarian should “ostrich” (that is, refuse to update their judgments in the light of new evidence) if they want the best outcome. This poses a challenge for how act-utilitarians ought to prioritize moral and epistemic normativity.

  • Sometimes, rationally updating can lead to a utility cascade. If an altruist discovers that a charity they funded now seems to be less effective than they had thought, and pulls away funding as a result, the charity may become even less effective due to the loss of resources — which could lead the altruist to pull away even more funding, leading to a further drop in effectiveness...

  • The altruist may be apportioning their support based on effectiveness, but if a charity’s effectiveness is not independent of their support, there is a risk of cascade. (This risk becomes higher if many people apportion their support based on the same information, since more resources will be pulled away and effectiveness drops more sharply.) See the toy simulation after this list.

  • While the altruist in question can find other charities which may seem better than the original (downgraded) charity, a utility cascade can still lead to permanent losses. For example:

    • The original charity may end up shutting down due to a temporary lapse in effectiveness or a failed (but worthy) experiment.

    • Or, to riff on an example from the paper, the most risk-intolerant backer of a risky venture might withdraw support, making the project more risky, leading the next-most risk-intolerant backer to withdraw support… until the venture no longer exists at all, even though nearly all funders saw it as valuable at the beginning of the cascade.

  • “By the utilitarian’s own lights, this is a problem. And it is not anomalous. The preconditions that permit of utility cascades are not rare.”

    • There must be a charity/initiative/policy that can receive different degrees of support

    • Its effectiveness must depend in part on its level of support

    • “Most collective attempts to make the world better [...] instantiate these features.”

  • Why this problem can be hard to coordinate around: “While [act-utilitarians] can share information and make plans together, they cannot undertake to perform actions that conflict with their principles.”
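
To make the feedback mechanism concrete, here is a minimal Python sketch of a utility cascade. It is my own toy model, not the paper’s: the functional form, the exponent, and the budget numbers are all illustrative assumptions. What it encodes are the two preconditions above — a charity that can receive different degrees of support, whose effectiveness depends in part on that support.

```python
# Toy model of a utility cascade. My own sketch: the functional form,
# exponent, and constants are illustrative assumptions, not taken from
# Hayward's paper.

BUDGET = 100.0         # the funder's maximum support for this charity
CRITICAL_MASS = 100.0  # funding level at which the charity is fully effective

def effectiveness(funding: float) -> float:
    """Impact per dollar, as a fraction of the charity's best case.

    Below critical mass, effectiveness degrades faster than funding falls
    (fixed costs dominate, staff leave) -- the superlinear exponent is the
    assumption that makes the feedback loop contract toward zero.
    """
    return min(1.0, (funding / CRITICAL_MASS) ** 1.5)

funding = 95.0  # a small initial shock: support starts just below critical mass
for step in range(10):
    measured = effectiveness(funding)
    funding = BUDGET * measured  # the funder apportions support to effectiveness
    print(f"round {step}: effectiveness={measured:.2f} -> next funding={funding:.1f}")

# Each round of perfectly rational updating lowers effectiveness further;
# funding spirals toward zero, even though at full support the charity
# would have been fully effective.
```

Running this, funding falls from 95 to roughly 5 within ten rounds of updating, which is the “negative spiral” of the abstract in miniature.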

From the paper’s conclusion:

Probably the only way to address the root causes of world misery is through structural reforms – the interventions with the highest utility were they to work are systemic and political. Whether or not they do work is in part dependent on how many people pursue them. But, in a world increasingly influenced by Effective Altruists, the likelihood of people pursuing these reforms is reduced by arguments that this is an inefficient strategy. Perhaps the world would be better, in utilitarian terms, if Effective Altruists would keep quiet about the difficulty of political reform.

The good

  • Short and easy to read!

  • This kind of thing can definitely be an issue for EA, and it’s nice to see a published summary that assigns a catchy term to replace the more general “coordination problem” for these cases.

  • Memorable examples that help me hold utility cascades in mind as a single “unit” of thought; it’s especially nice that there are two different examples which illustrate different instances of the problem at hand.

The bad

  • The author makes assumptions about EA’s approach to political reform which seem years out of date (if they were ever accurate at all).

  • Seems to associate EA a bit too closely with pure act-utilitarianism, whereas in my experience, EA is more practical: If we notice that a predictable/rational behavior pattern seems like it will lead somewhere bad, we take steps to break that pattern. We research political campaigns and highlight those which are worth pursuing; we use forms of reasoning beyond just effectiveness calculation.

    • If we did live in a world where some highly unlikely level of coordination were required to get anything done, we might run into utility cascades more frequently. Fortunately, there are plenty of good opportunities for systemic change that don’t require this much risk-taking (e.g. the Center for Election Science’s Fargo approval voting campaign, and the rest of their gradual, city-by-city strategy).

  • It’s very hard to tell when you’re about to hit a utility cascade vs. when you are simply making a wise choice not to invest in something that isn’t worthwhile. It seems to me as though the latter case is far more common than the former, because most uses of funding won’t be nearly as good as the best uses of funding, and a low effectiveness score provides at least some evidence that you are looking at a non-“best” use of funding. (A toy illustration of this ambiguity follows this list.)

    • No matter what critiques you launch at EA, in the end you have to find some way of choosing a cause to fund. The author doesn’t try to present a formal method, which is of course fine, but they seem to lean toward the heuristic of “fund the sorts of things which worked in the past,” which isn’t very specific and doesn’t seem reliable. (As often happens when I see a critique of EA, I want to ask the author what they’d fund, and why that thing, and why not various other things.)
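
To illustrate the identification problem mentioned above, here is another toy sketch (again my own, with assumed numbers): a single low effectiveness reading is consistent both with an intrinsically mediocre charity and with a good charity mid-cascade, so the reading alone can’t tell you whether divesting is wise or cascade-fueling.

```python
# Toy illustration (my own sketch, assumed numbers): the same low
# effectiveness reading is consistent with two very different situations.

def mediocre_charity(funding: float) -> float:
    """Intrinsically low impact per dollar, regardless of support."""
    return 0.5

def cascading_charity(funding: float) -> float:
    """Fully effective at a critical mass of 100, degraded below it."""
    return min(1.0, (funding / 100.0) ** 1.5)

# At a funding level of 63, both charities measure ~0.5 effectiveness:
print(mediocre_charity(63.0))             # 0.5
print(round(cascading_charity(63.0), 2))  # 0.5

# Divesting from the first is a wise reallocation; divesting from the
# second deepens a cascade. Distinguishing them requires evidence about
# how effectiveness responds to support, not just the current score.
```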

Overall, the paper identifies a real risk that does come up in EA funding, but I think the author is too quick to dismiss EA’s chances of reducing that risk in ways other than “selectively ignoring new evidence about effectiveness.”