A summary of Nicholas Beckstead’s writing on Bayesian Ethics


This write-up is intended as a summary of Chapter 2 (“How could we be so wrong”) of On the Overwhelming Importance of Shaping the Far Future by Nicholas Beckstead. The chapter originally spans ~12,000 words; here we summarize the main points in ~1,800 words.

This chapter of Beckstead’s thesis discusses Bayesian Ethics, evidence of error in moral intuition, and some biases specific to longtermism. I have omitted the discussion that is only relevant to the latter.

Outline of Chapter Two: How could we be so wrong

In the first section, the standard Bayesian framework for scientific inquiry is introduced.

In the second section, the framework is applied to moral philosophy to argue that when assigning credence to moral theories we should:

  • give less weight to fit with intuition in particular cases (eg “no number of headaches can be worse than a death”) and to cherry-picked counterexamples

  • give more weight to meta-ethical theories, to basic hunches about moral theories (eg “utilitarianism seems elegant”), and to basic epistemic standards

In the next three sections, three bodies of evidence are presented favoring the conclusion that human moral intuition about specific cases is prone to error:

  1. A historical record of accepting morally absurd social practices

  2. A review of the scientific literature on human bias in intuitive judgment

  3. A philosophical account showing deep inconsistencies in common moral convictions

In the sixth section, it is argued that there are specific biases that make humans underestimate the importance of the far future.

In the seventh and last section, Beckstead summarizes his arguments and conclusions.


In this summary I group together the first two sections, which explain the Bayesian approach to Ethics, as well as the three sections providing evidence against the reliability of moral intuition. I will not cover the sixth section; interested readers can consult the original text.

A Bayesian approach to moral philosophy

In the first two sections, Beckstead introduces Bayesian curve fitting and argues that it applies to moral philosophy as well as to scientific inquiry.

His main takeaway from that proposition is that, to the extent that we expect moral intuitions to be biased, we should rely less on fit to intuition and on counterexamples when assigning credence to moral theories.

Bayesian curve fitting

In the problem of curve fitting we have a collection of noisy data points (“observations”) and a set of curves (“models”), both relating an input X to an output Y. We want to find the model that best explains the observations we have collected and that will best extrapolate to the inputs we have not seen yet.

In Bayesian curve fitting we start by assigning a prior credence to each possible model based on background information, basic epistemic standards (eg simplicity), and basic hunches, and then update it based on the observations and on our hypothesis of how the observations might be noisy or biased. Beckstead summarizes the approach with a figure in the original thesis.

Bayesian curve fitting as a method of scientific inquiry is fairly standard and well argued for in other texts.
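To make this concrete, here is a minimal sketch of the idea in Python (my own toy example, not from the thesis; it scores each candidate polynomial by its best-fit likelihood, a crude stand-in for the full marginal likelihood a complete Bayesian treatment would use):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a true underlying line y = 2x + 1.
x = np.linspace(0, 1, 10)
y = 2 * x + 1 + rng.normal(0, 0.3, size=x.shape)

# Candidate models: polynomials of degree 0 to 3.
degrees = [0, 1, 2, 3]
# Prior credence per model: a basic epistemic standard (simplicity)
# makes us favor lower degrees before seeing the data.
prior = np.array([0.4, 0.3, 0.2, 0.1])

sigma = 0.3  # our hypothesis about how noisy the observations are

def log_likelihood(deg):
    # Gaussian log-likelihood of the data under the best-fitting
    # polynomial of this degree (a simplification: a full Bayesian
    # treatment would integrate over the coefficients).
    coeffs = np.polyfit(x, y, deg)
    residuals = y - np.polyval(coeffs, x)
    return -0.5 * np.sum((residuals / sigma) ** 2)

log_post = np.log(prior) + np.array([log_likelihood(d) for d in degrees])
post = np.exp(log_post - log_post.max())
post /= post.sum()  # posterior credence over the four models

for deg, p in zip(degrees, post):
    print(f"degree {deg}: posterior credence {p:.3f}")
```

With a simplicity-favoring prior, the slightly better fits of the higher-degree polynomials are largely offset by their lower prior credence.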

Beckstead draws two key implications out of it:

  • When you expect more error, rely on priors more. The higher the variance, the more plausible it is that any particular observation can be explained by noise.

  • When you expect your observations to be systematically biased, and you don’t know the sign and/or the magnitude of the bias, rely on priors more.
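A tiny numeric illustration of the first point (my own, assuming Gaussian noise): the same observation moves a 3:1 prior much less when we expect the observation process to be noisier.

```python
import numpy as np

# Two hypotheses about a quantity, with a 3:1 prior, and one
# observation that superficially favors the less plausible hypothesis.
candidates = np.array([0.0, 1.0])
prior = np.array([0.75, 0.25])
observation = 1.0

for sigma in [0.2, 1.0, 5.0]:
    likelihood = np.exp(-0.5 * ((observation - candidates) / sigma) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    print(f"sigma={sigma}: posterior={np.round(posterior, 3)}")

# With sigma=0.2 the observation dominates and the posterior flips;
# with sigma=5.0 the posterior barely moves from the 3:1 prior:
# the more error we expect, the more we rely on priors.
```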

Relevance for moral methodology

Beckstead claims that our moral intuitions are a kind of noisy data, and that our credences in moral theories should be updated in accordance with our best epistemic theory, ie Bayesian analysis.
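In symbols (my notation, not Beckstead’s), the proposal is ordinary Bayesian updating with moral theories as the models and intuitions as the data:

```latex
P(T \mid I) \;=\; \frac{P(I \mid T)\, P(T)}{\sum_{T'} P(I \mid T')\, P(T')}
```

where T ranges over moral theories, the prior P(T) is fixed by meta-ethical views, basic hunches, and basic epistemic standards, and the likelihood P(I | T) encodes how probable theory T makes our intuitions I, given our model of how those intuitions can be noisy or biased.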

Beckstead summarizes the translation of Bayesian curve fitting from scientific grounds to moral-philosophical grounds with a table in the thesis: roughly, observations correspond to moral intuitions, models to moral theories, prior credence to basic epistemic standards and hunches, and observation noise to error and bias in intuition.

Beckstead discusses a possible objection to the Bayesian approach to Ethics: moral philosophy is a priori and requires different methodological standards. He counters by arguing that Bayesian updating is a reasonable approximation of how people change their beliefs as they think. As an example of a priori reasoning working this way, he suggests a math student trying to figure out whether every differentiable function is continuous by trying examples.
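The conjecture is in fact a theorem, and a one-line sketch (my words, not Beckstead’s) shows why:

```latex
f(x+h) - f(x) \;=\; h \cdot \frac{f(x+h) - f(x)}{h} \;\longrightarrow\; 0 \cdot f'(x) = 0 \quad \text{as } h \to 0,
```

so differentiability at a point forces continuity there; the student’s example-checking amounts to gathering Bayesian evidence for this result before having a proof.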

Now, Beckstead claims that our moral intuitions are especially noisy and biased, and thus that we should rely more on our moral priors, as he discussed in the previous section. From this proposition Beckstead draws the two main conclusions stated in the outline above: give less weight to fit with intuitions in particular cases and to cherry-picked counterexamples, and give more weight to meta-ethical theories, basic hunches, and basic epistemic standards.

Evidence in favor of moral error

In sections 3-5, Beckstead brings evidence in favor of the claim that our moral intuitions are especially noisy. He talks about 1) historical moral mistakes, 2) the scientific literature on human bias, and 3) some impossibility results showing that some of our strongest moral intuitions are mutually inconsistent.

The historical record

  1. In the past, there was widespread, correlated, biased moral error, even on matters where people were very confident. As examples, Beckstead draws attention to the treatment of women, slaves, homosexuals, etc.

  2. By induction, we make similar errors today, even on matters where we are very confident.
    Beckstead argues that the biases that caused the errors of the past remain embedded in us and will cause similar classes of moral mistakes.

Beckstead preemptively addresses some arguments against his claims:

  1. Objection: Past errors were mostly due to limited and/or inaccurate non-normative information. Since we have much more information, we should expect much less moral error. Beckstead offers three replies:

    1. moral judgments are produced by emotional processes and verbally justified post hoc, so further information won’t drastically change our intuitions

    2. we might not have uncovered all error-relevant information

    3. we might not have internalized all available, error-relevant information

  2. Objection: On some meta-ethical views (eg idealized preferences), historical moral error was probably much less abundant than on others.

    1. Beckstead argues that although some historical error can be explained by changing standards and preferences, error can happen independently, and meta-ethical theories that cannot account for this should be penalized.

  3. Objection: We have reason to believe that philosophers would not be subject to whatever these historical error processes were.

    1. Beckstead presents several historical counterexamples, such as Aristotle’s endorsement of slavery.

The scientific record: biases

  1. Scientific findings on prudential, epistemic, and moral heuristics and biases strongly suggest that our moral judgments are subject to error processes which are widespread and biased.

  2. Because of this, we should expect philosophers’ intuitions to be subject to error processes which are widespread and biased.

Beckstead cites as evidence Kahneman and Tversky’s seminal work on heuristics and biases. He then explains three types of biases; see the original text for the details.

Beckstead appeals to intuition to argue that we should expect that there are many unknown moral biases. He also cites some experiments on bias to argue that philosophers are no less prone to moral bias.

The philosophical record

  1. A number of impossibility results show that certain moral judgments about which philosophers are very confident are inconsistent with each other. As examples Beckstead explains Parfit’s Mere Addition Paradox (a toy rendering appears below) and Temkin’s Spectrum Paradoxes.

  2. Therefore, we should expect that there are some error processes underlying these judgments that bias us toward overconfidence, and we should not expect to find a theory of eg population ethics which accords with all of our most confident moral judgments.

  3. A relatively limited amount of resources (the careers of a few very insightful philosophers) generated most of these impossibility results.

  4. This search process is unlikely to have uncovered a significant proportion of the important impossibility results.

  5. Therefore, we should expect that there are many more such impossibility results.

  6. Therefore, we should expect that analogous error processes are operating in many cases where impossibility results have not yet been discovered, and that it will be impossible to find theories that accord with all of our most confident moral judgments in these cases as well.

Beckstead argues that these impossibility results should prompt us to 1) doubt our moral intuitions and 2) accept that no moral theory will be able to explain in a satisfactory way many of our moral intuitions.
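To make the Mere Addition Paradox concrete, here is a toy rendering with made-up welfare numbers (my illustration, not Beckstead’s or Parfit’s):

```python
# Toy rendering of the Mere Addition Paradox; the welfare numbers are
# illustrative assumptions only.
A      = [100] * 10              # 10 people with very high welfare
A_plus = [100] * 10 + [50] * 10  # the same 10, plus 10 extra lives worth living
B      = [75] * 20               # 20 people, all equally well off

for name, pop in [("A", A), ("A+", A_plus), ("B", B)]:
    total, avg = sum(pop), sum(pop) / len(pop)
    print(f"{name}: size={len(pop)}, total welfare={total}, average={avg:.0f}")

# Common intuitions: A+ is no worse than A (merely adding lives worth
# living cannot make things worse); B is better than A+ (equal total,
# higher equality, no one badly off); yet B seems worse than A.
# Iterating the first two judgments leads toward the Repugnant
# Conclusion, so the three judgments cannot all be retained.
```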

Conclusion

Verbatim from the thesis:

Learning about moral errors through history, biased heuristics generating our moral judgments, and a collection of impossibility results should tell us that our moral judgments are subject to errors that are hard to detect and hard to correct. In light of this, we should trust intuition less and rely on our priors more. We should not expect to find a theory that fits all of our most confident moral judgments, and we should largely be engaged in an exercise in damage control, especially in population ethics. Finally, we should expect these error processes to lead us to significantly underestimate the importance of shaping the far future.

This summary was written by Jaime Sevilla, summer fellow at the Future of Humanity Institute. The source material is due to Nicholas Beckstead, and I have directly reused many sentences from his work. This representation of Beckstead’s work is only as correct as my understanding of it; if critiquing the original work, please consult the source rather than presume my characterisation of it is correct. I do not necessarily endorse the conclusions reached by Beckstead.

I want to thank Max Daniel and Alex Hill for their insightful comments and thoughtful discussion of the draft of this summary.
