“Just take the expected value” – a possible reply to concerns about cluelessness

This is the second in a series of posts exploring consequentialist cluelessness and its implications for effective altruism:

  • The first post describes cluelessness & its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.

  • This post considers a potential reply to concerns about cluelessness – maybe when we are uncertain about a decision, we should just choose the option with the highest expected value.

  • Following posts discuss how tractable cluelessness is, and what being clueless implies about doing good.

Consider reading the first post first.


A rationalist’s reply to concerns about cluelessness could be as follows:

  • Cluelessness is just a special case of empirical uncertainty.[1]

  • We have a framework for dealing with empirical uncertainty – expected value.

  • So for decisions where we are uncertain, we can determine the best course of action by multiplying our best-guess probability against our best-guess utility for each option, then choosing the option with the highest expected value (a toy sketch of this procedure follows below).
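
To make the procedure concrete, here is a minimal sketch in Python. The options, outcome probabilities, and utilities are all invented for illustration – they are not drawn from any real intervention or estimate.

```python
# Toy illustration of the expected-value framework described above.
# Every option, probability, and utility here is invented for the example.

options = {
    "intervention_A": [(0.6, 100), (0.4, -20)],  # (probability, utility) pairs
    "intervention_B": [(0.9, 40), (0.1, 0)],
}

def expected_value(outcomes):
    """Sum probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in outcomes)

evs = {name: expected_value(outcomes) for name, outcomes in options.items()}
print(evs)                    # {'intervention_A': 52.0, 'intervention_B': 36.0}
print(max(evs, key=evs.get))  # 'intervention_A', the highest-expected-value option
```

The difficulty, as discussed below, is not with this arithmetic but with where the numbers come from.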

While this approach makes sense in the abstract, it doesn’t work well in real-world cases. The difficulty is that it’s unclear what “best-guess” probabilities & utilities we should assign, and unclear to what extent we should believe those best guesses.

Consider this passage from Greaves 2016 (“credence function” can be read roughly as “probability”):

The alternative line I will explore here begins from the suggestion that in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’). Intuitively, the idea here is that when the evidence fails conclusively to recommend any particular credence function above certain others, agents are rationally required to remain neutral between the credence functions in question: to include all such equally-recommended credence functions in their representor.

To translate a little, Greaves is saying that real-world agents don’t assign a single precise probability to each outcome; instead, they entertain multiple possible probability functions (taken together, these functions make up the agent’s “representor”). Because an agent holds multiple probabilities for each outcome, and has no way to arbitrate between them, it cannot use a straightforward expected value calculation to determine the best course of action.
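
To see why, here is a small sketch (my own illustration with invented numbers, not Greaves’s formalism) of how a representor blocks a straightforward expected value calculation: each credence function in the set yields a different expected value for the same action, and the framework itself gives no rule for collapsing them into one number.

```python
# Illustrative only: a toy representor for a single action with two possible
# outcomes. All numbers are invented for the example.

utilities = {"good outcome": 100, "bad outcome": -50}

# A representor: several credence functions the evidence fails to
# distinguish between.
representor = [
    {"good outcome": 0.8, "bad outcome": 0.2},
    {"good outcome": 0.5, "bad outcome": 0.5},
    {"good outcome": 0.3, "bad outcome": 0.7},
]

def expected_value(credences):
    """Expected value of the action under one credence function."""
    return sum(credences[o] * utilities[o] for o in utilities)

evs = [expected_value(c) for c in representor]
print(evs)                 # [70.0, 25.0, -5.0]
print(min(evs), max(evs))  # the action's expected value lies anywhere in this range
```

Depending on which credence function one privileges, the same action looks strongly net positive or net negative – and by hypothesis, the evidence does not favor any one of these credence functions over the others.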

Intuitively, this makes sense. Probabilities can only be formally assigned when the sample space is fully mapped out, and for most real-world decisions we can’t map the full sample space (in part because the world is very complicated, and in part because we can’t predict the long-run consequences of an action).[2] We can make subjective probability estimates, but if a probability estimate does not flow out of a clearly articulated model of the world, its believability is suspect.[3]

Furthermore, because multiple probability estimates can seem sensible, agents can hold multiple estimates simultaneously (i.e. their representor). For decisions where the full sample space isn’t mapped out (i.e. most real-world decisions), the method by which human decision-makers convert their multi-value representor into a single-value, “best-guess” estimate is opaque.

The next time you encounter someone making a subjective probability estimate, ask “how did you arrive at that number?” The answer will frequently be along the lines of “it seems about right” or “I would be surprised if it were higher.” Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate.

So we have believability problems on two levels:

  1. Whenever we make a probability estimate that doesn’t flow from a clear world-model, the believability of that estimate is questionable.

  2. And if we attempt to reconcile multiple probability estimates into a single best guess, the believability of that best guess is questionable because our method of reconciling multiple estimates into a single value is opaque.[4]

By now it should be clear that simply choosing the option with the highest expected value is not a sufficient response to concerns about cluelessness. However, it’s possible that cluelessness can be addressed by other routes – perhaps by diligent investigation, we can grow clueful enough to make believable decisions about how to do good.

The next post will consider this further.

Thanks to Jesse Clifton and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to my personal blog.


Footnotes

[1]: This is separate from normative uncertainty – uncertainty about what criterion of moral betterness to use when comparing options. Empirical uncertainty is uncertainty about the overall impact of an action, given a criterion of betterness. In general, cluelessness is a subset of empirical uncertainty.

[2]: Leonard Savage, who worked out much of the foundations of Bayesian statistics, considered Bayesian decision theory to only apply in “small world” settings. See p. 16 & p. 82 of the second edition of his Foundations of Statistics for further discussion of this point.

[3]: Thanks to Jesse Clifton for making this point.

[4]: This problem persists even if each input estimate flows from a clear world-model.