Descriptive Population Ethics and Its Relevance for Cause Prioritization

Summary

Descriptive ethics is the empirical study of people’s values and ethical views, e.g. via a survey or questionnaire. This overview focuses on beliefs about population ethics and exchange rates between goods (e.g. happiness) and bads (e.g. suffering). Two variables seem particularly important and action-guiding in this context, especially when trying to make informed choices about how to best shape the long-term future: 1) one’s normative bads-to-goods ratio (N-ratio) and 2) one’s expected goods-to-bads ratio (E-ratio). I elaborate on how a framework consisting of these two variables could inform our decision-making with respect to shaping the long-term future, as well as facilitate cooperation among differing value systems and further moral reflection. I then present concrete ideas for further research in this area and investigate associated challenges. The last section lists resources that discuss further methodological and theoretical issues which were beyond the scope of the present text.

Descriptive ethics and long-term future prioritization

Recently, some debate has emerged on whether reducing extinction risk is the ideal course of action for shaping the long-term future. For instance, in the Global Priorities Institute (GPI) research agenda, Greaves & MacAskill (2017, p. 13) ask “[...] whether it might be more important to ensure that future civilisation is good, assuming we don’t go extinct, than to ensure that future civilisation happens at all.” We could further ask to what extent we should focus our efforts on reducing risks of astronomical suffering (s-risks). Again, Greaves & MacAskill: “Should we be more concerned about avoiding the worst possible outcomes for the future than we are for ensuring the very best outcomes occur [...]?” Given the enormous stakes, these are arguably some of the most important questions facing those who prioritize shaping the long-term future.1

Some interventions increase both the quality of future civilization and its probability. Promoting international cooperation, for instance, likely reduces extinction risks as well as s-risks. However, it seems implausible that a single intervention would be optimally cost-effective at accomplishing both types of objectives at the same time. To the extent that there is a tradeoff between different goals relating to shaping the long-term future, we should make a well-considered choice about how to prioritize among them.

Normative and expected ratios (aka exchange rates and future optimism)

I suggest that this choice can be informed by two important variables: one’s normative bads-to-goods ratio2 (N-ratio) and one’s empirically expected goods-to-bads ratio (E-ratio). Taken together, these variables can serve as a framework for choosing between different options to shape the long-term future.

(For utilitarians, N- and E-ratios amount to their normative / expected suffering-to-happiness ratios. But for most humans, there are bads besides suffering, e.g. injustice, and goods other than happiness, e.g. love, knowledge, or art. More on this below.)

I will elaborate in greater detail below on how to best interpret and measure these two ratios. For now, a few examples should suffice to illustrate the general concept. Someone with a high N-ratio of, say, 100:1 believes that reducing bads is one hundred times as important as increasing goods, whereas someone with an N-ratio of 1:1 thinks that increasing goods and reducing bads are of equal importance.3 Similarly, someone with an E-ratio of, say, 1000:1 thinks that there will be one thousand times as much good as bad in the future in expectation, whereas someone with a lower E-ratio is more pessimistic about the future.4

Note that I don’t assume an objective way to measure goods and bads, so a statement like “reducing suffering is x times more important than promoting happiness” is imprecise unless one further specifies what precisely is being compared. (See also the section “The measurability of happiness and suffering”.)

In short, the more one’s E-ratio exceeds one’s N-ratio, the higher one’s expected value of the future, and the more one favors interventions that primarily reduce extinction risks.5 In contrast, the more one’s N-ratio exceeds one’s E-ratio, the more appealing become interventions that primarily reduce s-risks or otherwise improve the quality of the future without affecting its probability. The graphic below summarizes the discussion so far.
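To make the decision rule concrete, here is a minimal sketch in Python (the linear aggregation and the example numbers are my illustrative assumptions, not part of the framework itself):

```python
# Minimal toy model: weigh each unit of bad by the N-ratio and compare this
# against the E-ratio (the expected units of good per unit of bad).

def expected_value_per_unit_of_bad(n_ratio: float, e_ratio: float) -> float:
    """Each unit of good counts 1; each unit of bad counts n_ratio.
    The future contains e_ratio units of good per unit of bad in expectation."""
    return e_ratio - n_ratio

def suggested_focus(n_ratio: float, e_ratio: float) -> str:
    """Ceteris paribus suggestion, ignoring tractability, neglectedness, etc."""
    if expected_value_per_unit_of_bad(n_ratio, e_ratio) > 0:
        return "primarily reduce extinction risks"
    return "primarily reduce s-risks / improve the quality of the future"

print(suggested_focus(n_ratio=1, e_ratio=1000))   # optimist: reduce extinction risks
print(suggested_focus(n_ratio=100, e_ratio=10))   # high N-ratio: reduce s-risks
```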

Of course, this reasoning is rather simplistic. In practice, considerations from comparative advantages, tractability, neglectedness, option value, moral trade, et cetera need to be factored in.6 See also Cause prioritization for downside-focused value systems for a more in-depth analysis.7

Interpreting and measuring N-ratios

The rest of this section elaborates on the meaning of N-ratios and explains one approach to measuring, or at least approximating, them. In short, I propose to approximate an individual’s N-ratio by measuring their response tendencies to various ethical thought experiments (e.g. as part of a questionnaire or survey) and comparing them to those of other individuals. These questions could be of (roughly) the following kind:

Imagine you could create a new world inhabited by X humans living in a utopian civilization free of involuntary suffering, and where everyone is extremely kind, intelligent, and compassionate. In this world, however, there also exist 100 humans who experience extreme suffering.

What’s the smallest value of X for which you would want to create this world?

In short, people who respond with higher equivalence numbers X to such thought experiments should have higher N-ratios, on average.
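To illustrate how such answers could be converted into (very rough) N-ratios, here is a sketch under a crude linearity assumption of my own (each utopian inhabitant contributes one unit of good, each suffering human one unit of bad; this is an illustrative simplification, not part of the proposal):

```python
def implied_n_ratio(equivalence_number_x: float, n_suffering: int = 100) -> float:
    """Crude linear conversion (illustrative assumption): a respondent who is
    indifferent at X utopian inhabitants versus n_suffering extremely suffering
    humans implicitly weighs one unit of bad as X / n_suffering units of good."""
    return equivalence_number_x / n_suffering

print(implied_n_ratio(10_000))  # 100.0, i.e. an implied N-ratio of roughly 100:1
```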

Some words of caution are in order here. First, the final formulations of such questions should obviously contain more detailed information and, for example, specify how the inhabitants of the utopian society live, what form of suffering the humans experience precisely, et cetera. (See also the document “Preliminary Formulations of Ethical Thought Experiments” which contains much longer formulations.)

Second, an individual’s equivalence number X will depend on what form of ethical dilemma is used and its precise wording. For example, asking people to make intrapersonal instead of interpersonal trade-offs, or writing “preserving” instead of “creating”, will likely influence the responses.

Third, subjects’ equivalence numbers will depend on which type of bad or good is depicted. Hedonistic utilitarians, for instance, regard pleasure as the single most important good and would place great value on, say, computer programs experiencing extremely blissful states. Many other value systems would consider such programs to be of no positive value whatsoever. Fortunately, many if not most value systems regard suffering8 as one of the most important bads and also place substantial positive value on flourishing societies inhabited by humans experiencing eudaimonia – i.e. “human flourishing” or happiness plus various other goods, such as virtue and friendship.9 In conclusion, although N-ratios (as well as E-ratios) are generally agent-relative, well-chosen “suffering-to-eudaimonia ratios” will likely allow for more meaningful and robust interindividual comparisons while still being sufficiently natural and informative. (See also the section “N-ratios and E-ratios are agent-relative” of the appendix for a further discussion of this issue.)

However, even if we limit our discussion to various forms of suffering and eudaimonia, judgments might diverge substantially. For example, Anna might only be willing to trade one minute of physical torture in exchange for many years of eudaimonia, while she would trade one week of depression for just one hour of eudaimonia. Others might make different or even opposite choices. If we had asked Anna only the first question, we could have concluded that her N-ratio is high, but her stance on the second question suggests that the picture is more complicated.

Consequently, one might say that even different forms of suffering and happiness/eudaimonia comprise “axiologically distinct” categories and that, instead of generic “suffering-to-eudaimonia ratios” – let alone “bads-to-goods ratios” – we need more fine-grained ratios, e.g. “suffering_typeY-to-eudaimonia_typeZ ratios”.10

See also “Towards a Systematic Framework for Descriptive (Population) Ethics” for a more extensive overview of the relevant dimensions along which ethical thought experiments can and should vary. “Descriptive Ethics – Methodology and Literature Review” provides an in-depth discussion of various methodological and theoretical questions, such as how to prevent anchoring or framing effects, control for scope insensitivity, increase internal consistency, and so on.

The need for a survey (of effective altruists)

Do these considerations suggest that research in descriptive ethics is simply not feasible? This seems unlikely to me, but it’s at least worth investigating further.

For illustration, imagine that a few hundred effective altruists completed a survey consisting of thirty different ethical thought experiments that vary along a certain number of dimensions, such as the form and intensity of suffering or happiness, its duration, or the number of beings involved.

We could now assign a percentile rank to every participant for each ethical thought experiment. If the concept of a general N-ratio is viable, we should observe that the percentile ranks of a given participant correlate across different dilemmas. That is, if someone gave very high equivalence numbers to the first, say, fifteen dilemmas, it should be more likely that this person also gave high equivalence numbers to the remaining dilemmas. Investigating whether there is such a correlation, how strong it is, and how much it depends on the type or wording of each ethical thought experiment could itself lead to interesting insights.
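The following sketch shows what this consistency check could look like in Python; the sample sizes and the simulated responses are placeholders (a real analysis would load actual survey data instead):

```python
import numpy as np
from scipy import stats

# responses[i, j] = equivalence number of participant i on dilemma j
# (simulated heavy-tailed placeholder data)
rng = np.random.default_rng(0)
n_participants, n_dilemmas = 300, 30
responses = rng.lognormal(mean=5.0, sigma=2.0, size=(n_participants, n_dilemmas))

# Percentile rank of each participant within each dilemma (column-wise).
percentile_ranks = stats.rankdata(responses, axis=0) / n_participants

# Pearson correlation of percentile ranks across dilemmas is equivalent to
# Spearman correlation of the raw answers. A high average off-diagonal value
# would support the notion of a single, general N-ratio; a low value would
# suggest that the construct fragments into dilemma-specific ratios.
corr = np.corrcoef(percentile_ranks, rowvar=False)  # n_dilemmas x n_dilemmas
off_diag = corr[np.triu_indices(n_dilemmas, k=1)]
print(f"mean inter-dilemma rank correlation: {off_diag.mean():.2f}")
```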

What could we learn from such a survey?

Important and action-guiding conclusions could be inferred from such a survey, both on an individual and on a group level.

First, consider the individual level. Imagine a participant answered with “infinite” in twenty dilemmas. Further assume that the average equivalence number of this participant in the remaining ten dilemmas was also extremely high, say, one trillion. Unless this person has an unreasonably high E-ratio (i.e. is unreasonably optimistic about the future), this person should, ceteris paribus, prioritize interventions that reduce s-risks over, say, interventions that primarily reduce risks of extinction but which might also increase s-risks (such as, perhaps, building disaster shelters11); especially so if they learn that most respondents with lower average equivalence numbers do the same.12

Second, let’s turn to the group level. It could be very useful to know how equivalence numbers among effective altruists are distributed. For example, central tendencies such as the median or average equivalence number could inform allocation decisions within the effective altruism movement as a whole. They could also serve as a starting point for finding compromise solutions or moral trades between varying groups within the EA movement – e.g. between groups with more upside-focused value systems and those with more downside-focused value systems. Lastly, engaging with the actual thought experiments of the survey, as well as its results and potential implications, could increase the moral reflection and sophistication of the participants, allowing them to make decisions more in line with their idealized preferences.
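One practical wrinkle when computing such group-level summaries: because some participants answer with “infinite” and the distribution of equivalence numbers is likely heavy-tailed, the median is a far more robust central tendency than the arithmetic mean. A minimal illustration (the numbers are invented):

```python
import numpy as np

# Hypothetical equivalence numbers, including one "infinite" response.
answers = np.array([3.0, 10.0, 50.0, 1e12, np.inf])

print(np.median(answers))  # 50.0 -- well-defined despite the infinite answer
print(np.mean(answers))    # inf  -- a single "infinite" response dominates
```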

Descriptive ethics and its importance for multiverse-wide superrationality

Readers unfamiliar with the idea of multiverse-wide superrationality (MSR) are strongly encouraged to first read the paper “Multiverse-wide Cooperation via Correlated Decision Making” (Oesterheld, 2017) or the post “Multiverse-wide cooperation in a nutshell”. Readers unconvinced by or uninterested in MSR are welcome to skip this section.

To briefly summarize, MSR is the idea that by taking into account the values of superrationalists located elsewhere in the multiverse, we make it more likely that they do the same for us. In order for MSR to work, it is essential to have at least some knowledge about how the values of superrationalists elsewhere in the multiverse are distributed. Surveying the values of (superrational) humans13 is one promising way of gaining such knowledge.14

Obtaining a better estimate of the average N-ratio of superrationalists in the multiverse seems especially action-guiding. For illustration, imagine we knew that most superrationalists in the multiverse have a very high N-ratio. All else equal and ignoring considerations from neglectedness, tractability, etc., this implies that superrationalists elsewhere in the multiverse would probably want us to prioritize the reduction of s-risks over the reduction of extinction risks.15 In contrast, if we knew that the average N-ratio among superrationalists in the multiverse is very low, reducing extinction risks would become more promising.

Another important question is to what extent and in what respects superrationalists discriminate between their native species and species located elsewhere in the multiverse.16

The problem of biased, unreliable, and unstable judgments

Another challenge facing research in descriptive ethics is that at least some answers are likely to be driven by more or less superficial System 1 heuristics generating a variety of biases – e.g. empathy gap, duration neglect, scope insensitivity, and framing effects, to name just a few. While there are ways to facilitate the engagement of more controlled cognitive processes17 that make reflective judgments more likely, not every possible bias or confounder can be eliminated.

All in all, the skeptic has a point when she distrusts the results of such surveys because she assumes that most subjects merely pulled their equivalence numbers out of thin air. Ultimately, however, I think that reflecting on various ethical thought experiments in a systematic fashion, pulling equivalence numbers out of thin air, and then using these numbers to make more informed decisions about how to best shape the long-term future is often better – in the sense of dragging in fewer biases and distorting intuitions – than pulling one’s entire decision out of thin air.18

A further problem is that the N-ratios of many subjects will likely fluctuate over the course of years or even weeks.19 Nonetheless, knowing one’s N-ratios will be informative and potentially action-guiding for some subjects – e.g. for those who have already engaged in substantial amounts of moral reflection (and are thus likely to have more stable N-ratios), or for subjects who have particularly high N-ratios such that their priorities would only shift if their N-ratios changed dramatically. Studying the stability of N-ratios is also an interesting research project in itself. (See also the section “moral uncertainty” of another document for more notes on this topic.)

Further resources

The Google Docs listed below discuss further methodological, practical, and theoretical questions which were beyond the scope of the present text. As I might deprioritize the project for several months, I decided to publish my thinking at its current stage to enable others to access it in the meantime.

1) Descriptive Ethics – Methodology and Literature Review.
This document is motivated by the question of what we can learn from the existing literature – particularly in health economics and experimental philosophy – on how to best elicit normative ratios. It also contains a lengthy critique of the two most relevant academic studies about population ethical views and examines how to best measure and control for various biases (such as scope insensitivity, framing effects, and so on).

2) Towards a Systematic Framework for Descriptive (Population) Ethics.
This document develops a systematic framework for descriptive ethics and provides a classification of dimensions along which ethical thought experiments can (and should) vary.

3) Preliminary Formulations of Ethical Thought Experiments.
This document contains preliminary formulations of ethical thought experiments. Note that the formulations are designed such that they can be presented to the general population and might be suboptimal for effective altruists.

4) Descriptive Ethics – Ordinal Questions (incl. MSR) & Psychological Measures.
This document discusses the usefulness of existing psychological instruments (such as the Moral Foundations Questionnaire, the Cognitive Reflection Test, etc.). The document also includes tentative suggestions for how to assess other constructs such as moral reflection, happiness, and so on.

Opportunity to give feedback or collaborate

If you’re interested in collaborating on the survey, feel free to email me directly at david.althaus[at]foundational-research.org. Please note that the above documents, as well as the project as a whole, are very much a work in progress, so I ask you to understand that much of the material hasn’t been polished and, in some cases, does not even accurately reflect my most recent thinking. This also means that there is a significant opportunity for collaborators to contribute their own ideas rather than just execute an already settled plan. In any case, comments in the Google documents or under this text are highly appreciated, whether you’re interested in becoming more involved in the project or not.

Acknowledgments

I want to thank Max Daniel, Caspar Oesterheld, Johannes Treutlein, Tobias Pulver, Jonas Vollmer, Tobias Baumann, Lucius Caviola, and Lukas Gloor for their extremely valuable inputs and comments. Thanks also to Nate Liu, Simon Knutsson, Brian Tomasik, Adrian Rorheim, Jan Brauner, Ewelina Tur, Jennifer Waldmann, and Ruairi Donnelly for their comments.

Appendix

N-ratios and E-ratios are agent-relative

Assuming moral anti-realism is true, there are no universal or “objective” goods and bads. Consequently, if we want to avoid confusion, E-ratios and N-ratios should ultimately refer to the values of a specific agent, or, to be more precise, a specific set of goods and bads.

For illustration, consider two hypothetical agents: Agent_1 has an N-ratio of 1:1 and an E-ratio of 1000:1, while agent_2 has an N-ratio of 1:1 and an E-ratio of 1:10. Do these agents share similar values but have radically different conceptions about how the future will likely unfold? Not necessarily. Agent_1 might be a total hedonistic utilitarian and agent_2 an AI that wants to maximize paperclips and minimize spam emails. Both might agree that the future will, in expectation, contain 1000 times as much pleasure as suffering but 10 times as many spam emails as paperclips.

Of course, the sets of bads and goods of humans will often overlap, at least to some extent. Consequently, if we learn that human_1 has a much lower E-ratio than human_2, this tells us that human_1 is probably more pessimistic than human_2 and that the two likely disagree about how the future is going to unfold.

In this context, it also seems worth noting that there might be more overlap with regards to bads than with regards to goods. For illustration, consider the number of macroscopically distinct futures whose net value is extremely negative according to at least 99.5% of all humans. It seems plausible that this number is (much) greater than the number of macroscopically distinct futures whose net value is extremely positive according to at least 99.5% of all humans. In fact, those of us who are more pessimistic about the prospect of wide agreement on values might worry that the latter number is (close to) zero, especially if one doesn’t allow for long periods of moral reflection.

The measurability of happiness and suffering

In my view, there are no “objective” units of happiness or suffering. Thus, it can be misleading to talk about the absolute magnitude of N-ratios without specifying the concrete instantiations of bads and goods that were traded against each other.

For more details on the measurability of happiness and suffering (or lack thereof), I highly recommend the essays “Measuring Happiness and Suffering” and “What Is the Difference Between Weak Negative and Non-Negative Ethical Views?” by Simon Knutsson, especially this section and the description of the views of Brian Tomasik, whose approach I share.


Footnotes

[1] For more considerations along these lines, I especially recommend the section “The value of the future” of the GPI research agenda (pp. 12–14).

[2] The term “exchange rate” is more common.

[3] Views with N-ratios greater than 1:1 have also been referred to as “negative-leaning”. Prominent examples include negative consequentialism and negative(-leaning) utilitarianism. However, the distinction between negative and “traditional” consequentialism is non-obvious; see e.g. What Is the Difference Between Weak Negative and Non-Negative Ethical Views? (Knutsson, 2016).

[4] Of course, if one person has an E-ratio that is unusually low or unusually high, this presents grounds for concern, as it could indicate a bias or a lack of updating towards other people’s judgment. Diverging N-ratios could also be subject to the same consideration, but because N-ratios concern normative disagreements, it is less clear to what extent updating towards other people’s moral intuitions is demanded by epistemic rationality.

[5] Note that, even for a person with a very high E-ratio and a low N-ratio, interventions that primarily reduce extinction risks are neither necessarily optimal nor as cost-effective as, for instance, interventions that primarily increase the probability of the very best futures (such as certain forms of advocacy).

[6] I’m also ignoring interventions which don’t primarily affect extinction risks or s-risks but, e.g., increase the probability of the very best futures.

[7] Particularly the sections “Downside-focused views prioritize s-risk reduction over utopia creation” and “Extinction risk reduction: Unlikely to be positive according to downside-focused views”.

[8] Particularly the involuntary, gratuitous suffering of innocent humans.

[9] See also section 4 of the Stanford Encyclopedia of Philosophy entry on “Well-Being”.

[10] In this context, see also the appendix for further discussion.

[11] It could be argued that many interventions in this area don’t actually increase s-risks because they will only affect how recovery will happen rather than whether it will happen.

[12] However, it does not necessarily follow that this person should actually pursue interventions which primarily reduce s-risks. For example, depending on the specifics of her values and other considerations such as neglectedness, tractability, et cetera, interventions that increase the quality of the long-term future while not primarily affecting s-risks might be even more promising.

[13] Cf. Oesterheld (2017, p. 66): “Because the sample size [of superrationalists] is so small, we may also look at humans in general, under the assumption that the values of superrationalists resemble the values of their native civilization. It may be that the values of superrationalists differ from those of other agents in systematic and predictable ways. General human values may thus yield some useful insights about the values of superrationalists.”

[14] We should not draw overly strong conclusions from surveying current superrationalists because they might be atypical in various ways and thus not representative (cf. Oesterheld, 2017, p. 71).

[15] One might retort that superrationalists should always focus on staying around so they can help to actualize the values of superrationalists elsewhere in the multiverse. But assuming the average N-ratio of superrationalists is sufficiently high, the possible upside from ensuring good futures in which superrationalists can actualize goods valued by superrationalists elsewhere in the multiverse is likely smaller than the possible downside from failing to prevent bad futures full of suffering or other forms of disvalue. Of course, one’s ultimate decision also has to be informed by one’s E-ratio and by other considerations such as tractability, neglectedness, or one’s comparative advantage.

[16] For example, many people dislike the civilization of “ems” depicted in Hanson’s Age of Em although most ems are happy and presumably more similar to humans than the average alien. Generally, it seems that many humans wish that the eventual descendants of humanity retain a lot of their idiosyncratic values and customs. And given the rather unenthusiastic reactions of many readers to the utopian civilization of the “super happy people” (a race of extraterrestrials depicted in Eliezer Yudkowsky’s short story Three Worlds Collide), it seems not too implausible to conclude that many humans just don’t care much for the existence of alien civilizations elsewhere in the multiverse – however flourishing and utopian from the perspective of their inhabitants. If one further assumes that superrationalists (here and elsewhere in the multiverse) share these sentiments to some degree but also care at least somewhat about the prevention of suffering (even if experienced by aliens), this suggests that alien superrationalists would want us to prioritize avoiding the worst possible futures over ensuring the existence of a utopian (post-)human civilization. In contrast, the less superrationalists discriminate between the well-being of humans and aliens, the less substantive this line of argumentation becomes. Of course, this whole line of reasoning is very speculative to begin with, and should be taken with a (big) grain of salt.

[17] See e.g. the sections “How to test and control for various biases” and “Increasing validity and internal consistency” of the document “Descriptive Ethics – Methodology and Literature Review” for further relevant methodological considerations.

[18] Just as pulling other numerical values (probabilities, cost estimates, etc.) out of thin air and then using these numbers to inform one’s decision is often better than pulling the decision out of thin air (in this context, see e.g. How to Measure Anything by D. Hubbard).

[19] For example, due to random fluctuations in mood or because subjects further reflected on their values.