Conditional interests, asymmetries and EA priorities

The aim of this post is to introduce the concept of conditional interests. What I focus on is the following claim, its justification and its implications, including for EA priorities:

Only Actual Interests: We accomplish no good by creating and then satisfying an interest, all else equal, because interests give us reasons for their satisfaction, not for their existence or satisfaction over their nonexistence.

This could be described as an asymmetric “interest-affecting view”, and the procreation asymmetry follows, because individuals who wouldn’t otherwise exist have no interests to satisfy. I think such a view accords best with our intuitions about personal tradeoffs.

It therefore (in theory) allows individuals to make personal tradeoffs between experiences of pleasure and suffering as normally understood, unlike strong negative hedonistic utilitarianism, but it also gives individuals who have no interest in wireheading, Nozick’s experience machine or psychoactive drugs no reason to subject themselves to these. If you don’t have an interest in (further) pleasure for its own sake at a given moment, you are not mistaken about this, despite the claims of classical utilitarians. As such, while strong negative utilitarianism may, to some, be counterintuitive if it seems to override interests (see responses to this objection by Brian Tomasik and Simon Knutsson), classical utilitarianism effectively does the same, because it can prioritize the creation and satisfaction of new interests (in the same individual or in others) over the satisfaction of actual interests. So, in my view, if negative hedonistic utilitarianism (in any form) is wrong because it overrides individual interests, so is classical utilitarianism. However, I think we can do better than both strong negative hedonistic utilitarianism and classical utilitarianism, which is the point of this post.

Furthermore, by seeing value in the creation of new interest holders just to satisfy their interests, and sometimes prioritizing this over interests that would exist anyway (in a narrow or wide sense), classical utilitarianism (and any other theory prescribing this) treats interest holders as mere receptacles or vessels for value, in a way that Only Actual Interests prevents. There have been different statements of this objection, but I think this is the clearest one.

The claim Only Actual Interests is basically from Johann Frick’s paper and thesis, which defend the procreation asymmetry. I recently wrote a post on the forum that referred to his work, but this post considers his approach itself, its justifications and its implications. Christoph Fehige’s antifrustrationism, developed earlier, is also essentially the same view, but concerned specifically with preferences.

In the first section, I give definitions, and in the second, I state some claims. In the third section, I list a few basic implications. In the fourth section, I describe the relationship to Buddhist axiology or tranquilism. In the fifth section, I defend the claim, primarily through examples with our common sense understanding of interests. In the sixth section, I consider some other, more abstract theoretical implications. In the seventh section, I describe implications for EA priorities, starting from 80,000 Hours’ cause analyses; the main conclusion is that existential risks should receive less priority. In the last section, I include some thoughts about the possibility of prioritizing (an individual’s) current interests over (their) future ones.


Definitions

Outcome: The entire actual history of all that is ontological (universe, multiverse, possibly things beyond the physical), past, present and actual future.

Interest, interest holder: An interest is a value held by some holder, the interest holder, that can be more or less satisfied (according to some total order), so that for the interest holder, it is better that it be more satisfied than less, all else equal.

Note: a value could a priori be an interest with itself or the universe as its holder. We might say the value of pleasure itself has an interest in further pleasure, or that the universe has an interest in further pleasure, although I think this is wrong; see the discussion following Only Actually Conscious Interests in the Claims section.

Actual interest: An interest is actual in a given outcome if it is held in that outcome.

Conscious interest: An interest is conscious if its satisfaction or unsatisfaction can be experienced consciously by the holder in some outcome.

Actually conscious interest: An interest is actually conscious in a given outcome if its satisfaction or unsatisfaction is experienced consciously by the holder in that outcome.

Actually conscious interests are both actual interests and conscious interests.

Experiential interest: An interest is experiential if its degree of satisfaction is determined solely by the conscious experiences of its holder.

Pleasure: A conscious experience is pleasurable if the experience comes with a conscious interest in itself over its absence for its own sake, and this experience is experienced by the holder of this interest. Pleasure is the conscious experience together with the conscious interest of the holder.

Suffering: A conscious experience involves suffering if the experience comes with a conscious interest in its absence over the experience itself, and this experience is experienced by the holder of this interest. Suffering is the conscious experience together with the conscious interest of the holder.


Claims

This is just a set of claims of interest for this post; I am not actually making all of these claims here.

Experientialism: The only interests that matter are the holders’ interests in their own conscious experiences.

Only Conscious Interests: The only interests that matter are conscious interests.

Hedonism: Experientialism and Only Conscious Interests are true, and specifically, pleasure and suffering are the only kinds of interests that matter.

Hedonism is one of the main claims of hedonistic utilitarianism, including classical utilitarianism.

Negative Hedonism: Experientialism (or Hedonism) and Only Conscious Interests are true, and specifically, suffering is the only kind of interest that matters.

Negative Hedonism is one of the main claims of (strong) negative hedonistic utilitarianism.

Now, the main claim of this post, restated:

Only Actual Interests: Interests provide reasons for their further satisfaction, but not for their existence or satisfaction over their nonexistence.

In particular, an interest is neither satisfied nor unsatisfied in an outcome in which it does not occur, and this outcome is not worse than one in which the interest occurs, all else equal. (I don’t say that only actual interests matter, since that’s either confusing or inaccurate.)

You could call this an “interest-affecting view”, and this could be interpreted in a narrow or a wide way. Under a narrow view, we wouldn’t compare the degrees of satisfaction of different interests in different outcomes, only the degrees of satisfaction of the same interests common to different outcomes. I’m not sure if such a view can be made both transitive and independent of irrelevant alternatives, although we might also reject these requirements in the first place.

Under a wide view, we might say that it’s better for interest A to exist and be satisfied to degree x than for interest B to exist and be satisfied to degree y, especially if A and B are interests of the same kind (e.g. both are interests in pleasure, both are interests in not suffering, both are interests in gaining knowledge) and x > y, so that A would be satisfied to a greater degree than B.

See the nonidentity problem for some discussion of person-affecting views, in which the interests at stake are the wellbeing of two different people, exactly one of whom will be born. Should we prefer for a person with a better life to be born rather than a person with a worse life, even though they would be different people? If yes, we should reject a purely narrow person-affecting view.

We might also want to restrict further to interests that are held presently or in the actual future, excluding past interests.
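To make the narrow/wide distinction concrete, here is a minimal Python sketch. The representation is my own toy one (outcomes as maps from interest holders to (kind, degree) pairs), not anything from Frick or the literature:

```python
# Toy model: an outcome maps interest holders to (kind, degree of satisfaction).
# Narrow view: compare only interests common to both outcomes.
# Wide view: additionally match same-kind interests across outcomes.

def narrow_better(o1, o2):
    """o1 beats o2 on the narrow view: sum satisfaction differences
    over the interests the two outcomes share."""
    shared = o1.keys() & o2.keys()
    return sum(o1[i][1] - o2[i][1] for i in shared) > 0

def wide_better(o1, o2):
    """o1 beats o2 on a wide view: shared interests are compared directly,
    and leftover interests of the same kind are paired up as if they were
    the 'same' interest. Unpaired interests contribute nothing, since they
    provide no reason for their own existence (Only Actual Interests)."""
    shared = o1.keys() & o2.keys()
    diff = sum(o1[i][1] - o2[i][1] for i in shared)

    def leftover(o):
        out = {}
        for i, (kind, deg) in o.items():
            if i not in shared:
                out.setdefault(kind, []).append(deg)
        return {k: sorted(v, reverse=True) for k, v in out.items()}

    l1, l2 = leftover(o1), leftover(o2)
    for kind in l1.keys() | l2.keys():
        for x, y in zip(l1.get(kind, []), l2.get(kind, [])):
            diff += x - y
    return diff > 0

# Nonidentity-style case: two different people, exactly one of whom is born,
# with interests of the same kind satisfied to different degrees.
o1 = {"alice": ("wellbeing", 9)}
o2 = {"bob": ("wellbeing", 3)}
```

On this sketch, `narrow_better(o1, o2)` is False (no shared interests to compare), while `wide_better(o1, o2)` is True, matching the intuition that a purely narrow view cannot handle nonidentity cases.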

Only Actually Conscious Interests: Interests provide reasons for their further satisfaction when they are consciously experienced, and through their conscious experience by their holders, but they don’t provide reasons for their existence or satisfaction over their nonexistence.

In particular, an interest is neither satisfied nor unsatisfied in an outcome if the interest (or its satisfaction/unsatisfaction) is not experienced, and this outcome is not worse than one in which the interest (or its satisfaction/unsatisfaction) is experienced, all else equal. As such, the holders of the interests may as well be the conscious experiences themselves. Or, the universe as a whole may be the holder of conscious interests, but the interests are (in practice, not a priori) localized: in locations where there are no conscious experiences, there are no conscious interests to be satisfied. To illustrate, what you normally understand to be my conscious interests in my own conscious states are interests directed at conscious states in the parts of space that will be occupied by what can vaguely be defined as my body. If “I” didn’t experience this interest, neither would the universe as a whole.

Some basic implications

In this section, I list a few simple implications of Only Actual Interests.

1. That I could induce a craving in someone and then satisfy it is not a reason for me to actually do so.

2. That someone would have their interests satisfied or be happy or have a good life is not a reason to bring them into existence, although there may be other reasons. That they will almost certainly have some unsatisfied interests is a reason not to bring them into existence, but the reasons to do so could be stronger in practice. This is the procreation asymmetry. In particular, you would never be required to sacrifice your own wellbeing (even into the hedonically negative) to bring new individuals into existence, all else equal, although, again, all else is rarely equal.

3. If someone has no interest in psychoactive substances (or, specifically, in the resulting experiences), the fact that they might enjoy them is not on its own a reason to try to convince them to take them.

Buddhist axiology or tranquilism

If we combine Only Actually Conscious Interests with Experientialism or even Hedonism, we don’t actually get Negative Hedonism (one of the main claims of negative hedonistic utilitarianism). Pleasure and suffering can both matter. I am not claiming that a conscious interest in further pleasure is necessarily an instance of suffering as I’ve defined these terms, but that the absence of this conscious interest is never in itself bad, while its unsatisfaction is. So, if someone has an unsatisfied interest in further pleasure, this is worse than not having this interest at all, even if they are happy overall, and it’s also worse than having their pleasure increased to satisfy this interest.

This does lead to an asymmetry between pleasure and suffering, but not one that refuses to count pleasure at all:

Asymmetry between pleasure and suffering: In the absence of an interest in further pleasure, there’s no reason to increase pleasure, but suffering by its very definition implies an interest in its absence, so there is a reason to prevent it.

This is effectively Buddhist axiology or tranquilism, framed slightly differently.

In particular, if you’re a utilitarian who also accepts Only Actually Conscious Interests and Experientialism (or Hedonism), you’re basically a negative preference utilitarian who cares only about the conscious satisfaction/unsatisfaction of preferences about conscious experiences. This can include the conscious preference for more pleasure.

Negative preference utilitarians see the unsatisfaction of a preference as worse than its nonexistence, and the complete satisfaction of a preference as no better than its nonexistence. If preferences are interests, then with Only Actual Interests, they could potentially provide reasons for their satisfaction, but not for their existence.
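As a toy formalization of this view (my own; the representation of interests is made up), the resulting value function might look like this: an absent conscious interest and a fully satisfied one both contribute nothing, and only conscious unsatisfaction counts against an outcome.

```python
# Sketch of a negative-preference-style value function under Only Actually
# Conscious Interests. Each interest is a (consciously_experienced, satisfaction)
# pair, with satisfaction in [0, 1].

def outcome_value(interests):
    """Value is 0 minus total conscious unsatisfaction. An outcome can never
    be better than the empty outcome (value 0): creating and fully
    satisfying an interest adds nothing."""
    return -sum(1.0 - sat for conscious, sat in interests if conscious)

empty = outcome_value([])                 # no interests at all
satisfied = outcome_value([(True, 1.0)])  # interest created, then fully satisfied
frustrated = outcome_value([(True, 0.4)]) # interest created, partly unsatisfied
```

Here `empty` and `satisfied` are both 0, while `frustrated` is negative: the unsatisfied interest in further pleasure is worse than its nonexistence, but its full satisfaction is no better than its nonexistence.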

Why Only Actual Interests?

Hedonistic consequentialists defend something like, or as strong as, Only Conscious Interests. If we can convince them of Only Actual Interests, then they should accept Only Actually Conscious Interests. Or, we can convince them directly of Only Actually Conscious Interests.

We might try to shift the burden of proof: if, according to Only Conscious Interests, an interest matters only if it can be experienced consciously, why should it matter (i.e. detract from an outcome) if it is not actually experienced at all?

However, rather than just shifting the burden of proof, we can defend Only Actual Interests by analogy and in general (not just to those who accept Hedonism), based on the examples Frick gives (the first two), and one more of my own:

1. That you’ve made a promise to someone is a reason to keep the promise, but the fact that you could keep a promise is not in itself a reason to make it in the first place. Promises provide reasons to be kept, but not reasons to be made.

2. That you have the gear and other means necessary to climb Mount Everest successfully doesn’t give you a reason to actually do it; you must already have (or at least expect to have) an interest in doing it.

3. That I could induce someone to want and then buy a product (e.g. through marketing) is not a reason for me to actually do so.

I find it pretty intuitive that this is how interests should work. The claims Frick defends are slightly more general than mine in this post, using “normative standard” instead of “interest” and “bearers” instead of “holders”. The rejection of the Transfer Thesis for each interest F is basically equivalent to a claim similar to Only Actual Interests:

No Transfer: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest’s holder over its nonexistence.

Transfer Thesis: If there is reason to increase the extent to which F is instantiated amongst existing potential bearers, there is also reason to increase the extent to which F is instantiated by creating new bearers of F.

People value a lot of things, but it doesn’t seem like these things justify the existence of people themselves. Is a world worse for not having value X if no one is around to miss it? If not, why would adding people just to achieve X do any good? Take X to be any value from this list: “Abundance, achievement, adventure, affiliation, altruism, apatheia, art, asceticism, austerity, autarky, authority, autonomy, beauty, benevolence, bodily integrity, challenge, collective property, commemoration, communism, community, compassion, competence, competition, competitiveness, complexity, comradery, conscientiousness, consciousness, contentment, cooperation, courage …”

Other theoretical implications

1. The point of a hedonium shockwave, if any, would be to eliminate otherwise unsatisfied interests, not to create happiness. The prevention of future interests by destruction could be a good thing, generally. However, both are wildly speculative, and there are good consequentialist and nonconsequentialist reasons not to pursue either; for consequentialist ones, moral cooperation and trade, given how much opposition there would be to both. Value may also be more complex than Hedonism allows.

2. It avoids the Repugnant Conclusion, which is a consequence of classical utilitarianism. The Repugnant Conclusion is basically that, for populations A, A+, B− and B in which everyone has a life worth living, the following relations hold: A+ is not worse than A, B− is not worse than A+, and B is equally as good as B−, and so by transitivity and the independence of irrelevant alternatives, B is not worse than A.

The first step, from A to A+ (Mere Addition), supposedly follows because the extra lives are worth living, so adding them can’t make the situation worse; the step from A+ to B− (Non-Anti-Egalitarianism) supposedly follows if we ensure the average welfare is high enough and we aren’t so anti-egalitarian that we think a total loss to the best-off individuals can only be made up for by a much larger total gain for the worst-off individuals; and the last step actually doesn’t do anything, since B− and B are identical populations (the division is only illustrative).

Then, doing the same, but adding a very large population of lives barely worth living in A+ instead, the welfare levels in B− and B would be very flat and close to 0. So, iterating the argument, a very large population of lives barely worth living would be better than a small population of very good lives.

For some defenses of the Repugnant Conclusion, see “In defense of repugnance” by Michael Huemer.

The Repugnant Conclusion is avoided in two possibly different ways:

a. It denies the premise that there could be positive existences.

b. Assuming there are positive existences (using a different measure of value than one based on conditional interest satisfaction), adding a population of lives barely worth living is in fact bad, so Mere Addition is false.

To further defend this last point, under a wide view, the step from A to A+ of adding a population of lives barely worth living is equivalent to also making everyone in A swap places with the same number of extra individuals from A+. That is, starting from A, we make everyone in A as badly off as the extra individuals in A+ would be, and add extra individuals with the same wellbeing as the originals in A, and even more at the lower level. From the point of view of the original individuals in A, this could make A+ worse, and adding the extra individuals would not compensate, because they have no interests in being brought into existence.
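To illustrate with made-up numbers, here is a toy Python comparison of a total hedonistic measure against a conditional-interest measure on the Mere Addition step: the former accepts the addition, the latter rejects it. (The life summaries and numbers are hypothetical.)

```python
# Each life is summarized as (wellbeing, unsatisfied_interest), where
# unsatisfied_interest > 0 for almost any actual life.

def total_value(pop):
    """Total hedonistic value: just sum wellbeing."""
    return sum(w for w, _ in pop)

def interest_value(pop):
    """Only Actual Interests: created-and-satisfied interests add nothing;
    unsatisfied interests count against the outcome."""
    return -sum(u for _, u in pop)

A      = [(10, 0.5)] * 10             # small population, very good lives
A_plus = A + [(1, 0.5)] * 1000        # mere addition of lives barely worth living

total_value(A_plus) > total_value(A)        # True: Mere Addition accepted
interest_value(A_plus) < interest_value(A)  # True: Mere Addition rejected
```

On the total measure, adding 1000 lives barely worth living improves the outcome; on the interest measure, it only adds unsatisfied interests, so A+ is worse than A, blocking the first step of the argument.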

3. By denying the possibility of positive existences, it avoids Arrhenius’s major impossibility theorems like this one (I think this is probably the strongest statement, since it only assumes ordinal, but interpersonally comparable, welfare). Alternatively, if we do interpret positive existence in hedonistic terms or in terms of preferences for continued life, then it violates Non-Sadism and implies the Sadistic Conclusion: it can be worse to add a population of positive existences than one of negative existences (usually with a much larger population of positive existences). In response, the Sadistic Conclusion might not be so bad, at least compared to the Repugnant Conclusion, and even more plausibly so since we’ve already accepted the procreation asymmetry and rejected Mere Addition. (Aside: Arrhenius was a negative utilitarian of some kind in the past; I don’t know if he still is.)

4. Whether death is good or bad in itself (ignoring effects on others, which we should not ignore) depends on the nature of the interests we count and how we count them. If we accept Only Actually Conscious Interests, then death would be good in itself (again, ignoring effects on others). If we prioritize an individual’s current interests over their future ones, then their interests in continuing to live would be given greater weight; see the last section for some thoughts on this.

Implications for EA priorities

In this section, I describe how we might rerank the different cause areas in 80,000 Hours’ list here.

1. We should reject Bostrom’s astronomical waste argument and give less priority to preventing extinction. That does not mean we have no reasons to care about the (far) future or to prevent extinction, but the fact that future humans who would not otherwise exist would be happy (rather than not exist, or fewer of them exist) is not a reason for intervention. This significantly reduces the value of working to prevent existential risks, although they may still be very important, if we think our continued existence would be sufficiently helpful in expectation to, say, wild animals (if they would also continue to exist after our extinction), or aliens. If you don’t think extinction is much worse than almost everyone dying, you can see how 80,000 Hours’ tool reranks the cause areas. Assuming you answer the previous questions in a way that does not cause reranking (although you may very well disagree with the underlying assumptions), answering “(C) Not more than twice as bad” to question 4 reranks the list as follows:

Re-ranked list:
1. Global priorities research − 26
2. Promoting effective altruism − 25 ⇩ (-1 point)
3. Risks posed by artificial intelligence − 23.5 ⇩ (-3.5 points)
4. Factory farming − 23
5. Health in poor countries − 21
6. Reducing tobacco use in the developing world − 20
7. Nuclear security − 20 ⇩ (-3 points)
8. Land use reform − 20
9. Biosecurity − 20 ⇩ (-3 points)
10. Climate change (extreme risks) − 18 ⇩ (-2 points)

Question 4 is:

Question 4: Here are two scenarios:
A nuclear war kills 90% of the human population, but we rebuild and civilization eventually recovers.
A nuclear war kills 100% of the human population and no people live in the future.
How much worse is the second scenario?

If you want to avoid reranking before question 4, you should answer 1. (A), 2. (A) and 3. (B).

Note that AI risk remains above both Factory farming and Health in poor countries. The far future can still indeed be overwhelmingly important, and we may expect AI to shape it even if we don’t go extinct. Furthermore, if we do go extinct, that is a lot of early deaths. However, they didn’t provide options which go further than “(C) Not more than twice as bad”, and your answers to the other questions can influence the rankings. Even if we ignore future generations, it might be better for everyone to go extinct than for 90% of the population to die, because the surviving 10% may have very bad lives in such an outcome.

Furthermore, if the probability of extinction is around 1% or less (80,000 Hours’ best guess seems to be 1–15% in the next 50 years, according to question 3, with answer (B)), then the non-existential-risk causes should go up in priority, since there’s a greater chance that the work we do for those causes isn’t wasted. E.g. ending factory farming and then going extinct immediately after isn’t much better for factory farmed animals than just going extinct, because factory farming will end anyway if we do go extinct (although we’re likely to achieve considerable progress and prevent a lot of suffering up until extinction if we do work on factory farming).
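As a rough illustration of this discount (the model and numbers are made up, not 80,000 Hours’), suppose work in a cause area produces a steady stream of value that extinction would cut short:

```python
# Crude model: with probability p, extinction occurs at a uniformly random
# time in the period, so the payoff stream is halved on average; otherwise
# it runs in full.

def expected_value(annual_value, years, p_extinction_over_period):
    full = annual_value * years
    return (1 - p_extinction_over_period) * full \
        + p_extinction_over_period * full / 2

low_risk  = expected_value(1.0, 50, 0.01)  # ~1% extinction risk over 50 years
high_risk = expected_value(1.0, 50, 0.15)  # ~15% risk
```

With ~1% risk, almost none of the expected value of non-existential-risk work is lost to extinction (49.75 of 50 units here), while at ~15% it is somewhat more discounted (46.25), so lower extinction probabilities favor those causes at the margin.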

2. There’s also the question of the degree to which death is bad. If death seems less bad, then this could further reduce the priority of existential risks. It might also have an effect on the value of some, but not all, global health and poverty interventions. If death is bad, I think its badness is unlikely to be roughly proportional to the number of years of life lost, since existing interests are likely to change for many people as they age, but GiveWell doesn’t explicitly use such a measure anymore, anyway (see here and here), and I don’t know to what degree analysts rely on such an intuition. With Experientialism or Hedonism, death in itself is not bad, but the process of dying and the impacts on loved ones are of course often very bad, perhaps especially for an unexpected early death (but if early and later deaths are equally bad, then postponing death doesn’t look very good in hedonistic terms). Overall, I don’t think global health and poverty as a cause area would necessarily look worse, since many of the best interventions do not derive most of their value from life extension. Existential risk cause areas would probably look worse if we previously thought that the badness of extinction came primarily from early deaths (and astronomical waste).

3. Global health and poverty interventions might decrease the rate of population growth, and this might be in itself good. Family planning interventions and education in developing countries might look better than otherwise, specifically, for this reason.

4. We should reject the logic of the larder. That is, if animals bred and used for human purposes would have good lives, this is not a reason to breed and use them in the first place, and the fact that they will almost certainly have unsatisfied interests is a reason not to do so. There could be other reasons for their breeding and use, but they would need to be even stronger.

Prioritizing current interests over future ones?

I’ve been wondering lately if there’s a plausible consequentialist theory (or more generally, theory of value) which assigns more value to the immediate satisfaction of an individual’s current interests than to the satisfaction of their future ones, in such a way as to be compatible with common sense notions of non-paternalism and consent. In this way, could violating an individual’s interests now to better satisfy their own future interests usually be bad in itself? I think this would still be compatible with our understanding of impartiality. We could just use some kind of discounting of interests within individuals, but I’m not sure this quite does it.

However, if we don’t think people’s personal identities persist over time (which seems likely to me), this wouldn’t mean much. If we don’t think they persist, nothing can be paternalistic, since there would be no personal tradeoffs, only interpersonal tradeoffs.

If we also give greater weight to current interests than to future interests generally, not just within individuals, our theory could also look much more deontological in practice, but it would be harder to call this impartial, since it gives less weight to the interests of future people. It might also be difficult to ground from an impartial perspective, because of the relativity of simultaneity. There’s no such physical obstacle for personal tradeoffs, because we can use the individual’s own frame of reference.

If we’re not careful, there might be issues with dynamic consistency: the decisions that look best now could systematically look worse in the future, and you would regret them, even with perfect certainty ahead of time.