Response to recent criticisms of EA “longtermist” thinking

This is a response to some recent criticisms of “longtermist” EA thinking. I have organized it in the form of an FAQ responding to concerns.

Does the Bostromian paradigm rely on transhumanism and an impersonal, totalist utilitarianism?

Some object that the long-term paradigm stems from a couple of axiological positions: utilitarianism and transhumanism.

Bostrom’s views do not rely on utilitarianism. They do require that the future generally be considered potentially extremely valuable relative to the present, based on quality and/or quantity of life. So some sort of value aggregation is required. However, both intrinsic discounting and a variety of nonconsequentialist views about present-day duties (against lying, killing, and so on) are fully compatible with Bostrom’s paradigm.

Bostrom’s paradigm doesn’t quite require transhumanism. If humanity reaches a stable state of Earthly affairs, we theoretically might continue for hundreds of millions of years, being born and dying in happy 100-year cycles, which is sufficient for an extremely valuable long-run future. Existential risks may be a big problem over this timeframe, however. Conscious simulations or human space colonization would be required for a reliably super-valuable far future.

Conscious simulations might technically not be considered transhumanism. The idea that we can upload our current brains onto computers is generally considered transhumanism, but that is not the only way of having conscious simulations/computations. Of course, conscious intelligent simulations are still a pretty “out there” sci-fi scenario.

Space travel may require major human changes in order to be successful. We could, in theory, focus 100% on terraforming and travel with Earthlike space arks; this would theoretically enable major space travel with no transhumanism, but it would be hard, and our descendants will undoubtedly choose a different route. If we made minor genetic changes to make humans more resilient against radiation and low-gravity environments, that could greatly reduce the difficulty of space travel, though it’s unclear if this should be considered transhumanism. Proper transhumanism to make us smarter, longer-lived and more cooperative would broadly help, however. Another option is to have space travel and terraforming be done by automated systems, and the first humans could be very similar to us, except for being conceived, born and raised de novo by robots. Again, I don’t know if this is technically transhumanism, although it is certainly ‘out there.’

Finally, you could believe transhumanism will only be done for key things like space travel. Just because we can train astronauts does not mean we all want to become astronauts. Transhumanism could be like astronaut training: something clunky and unpleasant that is authorized for a few, but not done by ordinary people on Earth or on post-terraformation worlds.

In summary, while there are some ideas shared with utilitarianism and transhumanism, neither utilitarian moral theory nor the aspiration to broadly re-engineer humanity is really required for a long-term view.

If someone has an objection to axiological utilitarianism or axiological transhumanism, it’s best for them to think carefully about what their particular objections are, and then see whether they do or don’t pose a problem for the longtermist view.

Are long-term priorities distracting?

One worry is that long-term priorities can distract us from short-term problems. This is easily identified as a spurious complaint. Every cause area distracts us from some other cause areas. Short-term priorities distract us from long-term priorities. That is the very nature of Effective Altruism and, indeed, of the modern resource-limited world. It is not a serious criticism.

Do long-term priorities imply short-term sacrifices?

Another worry is that long-term views imply that we might tolerate doing bad things in the short term if it helps the long term. For instance, if starting a war could reduce existential risk, it could be justified.

This seems like a basically moral complaint: “longtermists will achieve their goal of maximizing human well-being, but the process may involve things I cannot tolerate, given my moral views.”

Again, this objection applies to any kind of priority. If you are very concerned with a short-term problem like global disease and poverty, you might similarly decide that some actions that harm people in the long-run future are justified to advance your own cause. Furthermore, you might also decide that actions that harm some people in the short run are justified to save others in the short run. This is just the regular trolley problem. An act-consequentialist view can compel you to make such tradeoffs regardless of whether you prioritize the short run or the long run. Meanwhile, if you reject the idea of harming a few to save the many, you will not accept the idea of harming people in the short run to help people in the long run, even if you generally prioritize the long run. So in theory, this is not about short-term versus long-term priorities; it is just about consequentialism versus nonconsequentialism.

You might say that some people have a more nuanced take between the hard consequentialist and the hard nonconsequentialist view. Suppose that someone does not believe in killing 1 to save 5, but they do believe in killing 1 to save 10,000. This person might see ways that small short-term harms could be offset by major long-term benefits, without seeing ways that small short-term harms could be offset by other, more modest short-term benefits. But of course this is a contingent fact. If they ever do encounter a situation where they could kill 1 to save 10,000 in the short run, they will be obliged to take that opportunity. So there is still the same moral reductio ad absurdum (assuming that you do in fact think it’s absurd to make such sacrifices, which is dubious).

One could make a practical argument instead of a moral one: that longtermist priorities are so compelling that they make it too easy for politicians and others to justify bad aggressive actions against their enemies. On this argument, long-term priorities are a perfectly good idea for us to believe and to share with each other, but not something to promote in more public political and military contexts.

Speculating about how policymakers will act on the basis of a philosophy is a very dubious approach. I have my own speculations – I think they will act well, or at least much better than the likely alternatives. But a better methodology is to look at what people’s military and political views actually are when they subscribe to Bostrom’s long-term priorities. See the views of the Candidate Scoring System under “long run issues”, or see what other EAs have written about politics and international relations. They are quite conventional.

Moreover, Bostrom’s long-term priorities are a very marginal view in the political sphere, and it will be a long time before they become the dominant paradigm, if ever.

In summary, the moral argument does not work. Pragmatically speaking, it may be good to think hard about how long-term views should be packaged and sold to governments, but that’s no reason to reject the idea, especially not at this early stage.

Do long-term views place a perverse priority on saving people in wealthy countries?

Another objection to long-term views is that they could be interpreted as putting a higher priority on saving the lives of people in wealthy rather than poor countries, because such people contribute more to long-run progress. This is not unique to Bostrom’s priorities; it is shared by many other views. Common parochial views in the West – giving to one’s own university or hometown – similarly put a higher priority on local people. Nationalism puts a higher priority on one’s own country. Animal-focused views can also come to this conclusion, not for lifesaving but for increasing people’s wealth, based on differing rates of meat consumption. A regular short-term human-focused utilitarian view could also come to the same conclusion, based on international differences in life expectancy and average happiness. In fact, the same basic argument that people in the West contribute more to the global economy can be used to argue for differing priorities even on a short-run worldview.

Just because so many views are vulnerable to this objection doesn’t mean the objection is wrong. But it’s still not clear what this objection even is. Assuming that saving people in wealthier countries is the best thing for global welfare, why should anyone object to it?

One could worry that sharing such an ideology will cause people to become white or Asian supremacists. On this worry, whenever you give people a reason to prefer saving lives in advanced countries (USA, France, Japan, South Korea, etc.) over saving lives in poor countries, that risks turning them into a white or Asian supremacist, because the richer countries happen, on average, to have people of different races than poorer countries. But hundreds of millions of people believe in one of these various ideologies which place a higher priority on saving people in their own countries, yet only a tiny minority become racial supremacists. Therefore, even if these ideologies do cause racial supremacism, the effect size is extremely small, not enough to pose a meaningful argument here. I also suspect that if you actually look at the process by which racial supremacists become radicalized, the real causes will be something other than rational arguments about the long-term collective progress of humanity.

One might say that it’s still useful for Effective Altruists to insert language in relevant papers disavowing racial supremacism, because there is still a tiny risk of radicalizing someone, and isn’t it very cheap and easy to insert such language and make sure that no one gets the wrong idea? But any reasonable reader will already know that Effective Altruists are not racial supremacists and don’t like the ideology one bit. And far-right people generally believe that a strong liberal bias afflicts Effective Altruism, the mainstream media and academia, so even if Effective Altruists disavowed racial supremacism, far-right people would view it as a meaningless and predictable political line. As for the reader who is centrist or conservative but not far-right, such a statement may seem ridiculous, suggesting that the author is paranoid or possessed of a very ‘woke’ ideology, and this would harm the reputation of the author and of Effective Altruists more generally. As for anyone who isn’t already thinking about these issues, the insertion of a statement against racial supremacism may seem jarring, like a signal that the author is in fact associated with racial supremacism and is trying to deny it. If someone denies alleged connections to racial supremacism, their denial can be quoted and treated as evidence that the allegations against them really are not spurious. Finally, such statements take up space and make the document take longer to read. When asked, you should definitely respond directly, “I oppose white supremacism,” but preemptively putting disclaimers in front of every reader seems like a bad policy.

So much for the racial supremacism worries. Still, one could say that it’s morally wrong to give money to save the lives of wealthier people, even if it’s actually the most effective and beneficial thing to do. But this argument only makes sense if you have an egalitarian moral framework, like that of Rawls, and you don’t believe that broadly improving humanity’s progress will help some extremely badly-off people in the future.

In that case, you will have a valid moral disagreement with the longtermist rich-country-productivity argument. However, this is superfluous, because your egalitarian view simply rejects the long-term priorities in the first place. It already implies that we should give money to save the worst-off people now, not happy people in the far future, and not even people in 2040 or 2080 who will be harmed by climate change. (Also note that Rawls’ strict egalitarianism is wrong anyway, as his “original position” argument should ultimately be interpreted to support utilitarianism.)

Do long-term views prioritize people in the future over people today?

They do, in the same sense that they prioritize the people of Russia over the people of Finland: there are more Russians than Finns. There is nothing wrong with this.

On an individual basis, the prioritization will be roughly similar, except that future people may live longer and be happier (making them a higher priority to save) and they may be difficult to understand and reliably help (making them a lower priority to save).

Again, there is nothing wrong with this.

Will long-term EAs ignore short-term harms?

No, for three reasons. First, short-term harms are generally slight probabilistic long-term harms as well. If someone dies today, that makes humanity grow more slowly and makes the world a more volatile place. Therefore, sacrificing many people immediately in order to obtain speculative long-run benefits does not make sense in the real world, even under a fanatical long-term view.

Second, EAs recognize some of the issues with long-term planning, and, given general uncertainty about our ability to predict and change the future, they will incorporate some caution about incurring short-run costs.

Third, in the real world, these are all speculative philosophical trolley problems. We live in a lawful, ordered society where causing short-term harms results in legal and social punishments, which makes it irrational for people with long-term priorities to try to take harmful actions.

Following on from the previous discussion of racial supremacism, one might wonder whether being associated with white supremacism is good or bad for public relations in the West these days. Well, the evidence clearly shows that white supremacism is bad for PR.

A 2017 Reuters poll asked people if they favored white nationalism; 8% supported it and 65% opposed it. When asked about the alt-right, 6% supported it and 52% opposed it. When asked about neo-Nazism, 4% supported it and 77% opposed it. These results show a clear majority opposing white supremacism, and even those few who support it could be dismissed per the Lizardman Constant.

These proportions shift further when you look at elites in government, academia and wealthy corporate circles. There, white supremacism is essentially nonexistent. Many of those who oppose it do not merely disagree with it, but actively abhor it.

Abhorrence of white supremacism extends to many concrete actions to suppress it and related views in intellectual circles. For examples, see the “Academia” section in the Candidate Scoring System, and this essay about infringements upon free speech in academia. And consider Freddie DeBoer’s observation that “for every one of these controversies that goes public, there are vastly more situations where someone self-censors, or is quietly bullied into acquiescing. For every odd example that goes viral, there is no doubt dozens more that occur behind closed doors.”

White supremacism is also generally banned on social media, including Reddit and Twitter. And deplatforming works.

For the record, I think that deplatforming white supremacists – people like Richard Spencer – is often a good thing. But I am under no illusions about the way things work.

One could retort that being wrongly accused of white supremacism can earn one public sympathy from certain influential heterodox people, like Peter Thiel and Sam Harris. These kinds of heterodox figures are often inclined to defend some people who are accused of white supremacism, like Charles Murray, Noah Carl and others. However, this defense only happens as a partial pushback against broader ‘cancellation’ conducted by others. The defense usually focuses on academic freedom and behavior rather than on whether the actual ideas are correct. It can gain ground with some of the broader public, but elite corporate and academic circles remain opposed.

And even among the broader public and political spheres, the Very Online IDW type who pays attention to these re-platformed people is actually pretty rare. Most people in the real world are rather politically disengaged, have no love for ‘political correctness’ nor for those regarded as white supremacists, and don’t pay much attention to online drama. And far-right people are often excluded even from right-wing politics. For instance, the right-wing Heritage Foundation think tank made someone resign following controversy over his argument for giving priority in immigration law to white people based on IQ.

All in all, it’s clear that being associated with white supremacism is bad for PR.

Summary: what are the good reasons to disagree with longtermism?

Reason 1: You don’t believe that very large numbers of people in the far future add up to a very big moral priority. For instance, you may reject aggregation. Alternatively, you may take a Rawlsian moral view combined with the assumption that the worst-off people who we can help are alive today.

Reason 2: You predict that interstellar travel and conscious simulations will not be adopted and that humanity will not expand.

Honorable mention 1: If you believe that future technologies like transhumanism will create a bad future, then you will still focus on the long run, but with a more pessimistic viewpoint that worries less about existential risk.

Honorable mention 2: If you don’t believe in making trolley-problem-type sacrifices, you will have a mildly different theoretical understanding of longtermism than some EA thinkers who have characterized it from a more consequentialist angle. In practice, it’s unclear if there will be any difference.

Honorable mention 3: If you are extremely worried about the social consequences of giving people a strong motivation to fight for the general progress of humanity, you will want to keep longtermism a secret, private point of view.

Honorable mention 4: If you are extremely worried about the social consequences of giving people in wealthy countries a strong motivation to give aid to their neighbors and compatriots, you will want to keep longtermism a secret, private point of view.

There are other reasons to disagree with long-term priorities (mainly, uncertainty in predicting and changing the far future), but these are just the takeaways from the ideas I’ve discussed here.

A broad plea: let’s keep Effective Altruism grounded

Many people came into Effective Altruism from moral philosophy, or at least think about it in very rigorous philosophical terms. This is great for giving us rigorous, clear views on a variety of issues. However, there is a downside. The urge to systematize everything to its logical theoretical conclusions inevitably leads to cases where the consequences are counterintuitive. Moral philosophy has tried for thousands of years to come up with a single moral theory, and it has failed, largely because any consistent moral theory will have illogical or absurd conclusions in edge cases. Why would Effective Altruism want to be like a moral theory, burdened by these edge cases that don’t matter in the real world? And if you are a critic of Effective Altruism, why would you want to insert yourself into the kind of debate where your own views can be exposed as having similar problems? Effective Altruism can instead be a more grounded point of view, a practical philosophy of living like Stoicism. Stoics don’t worry about what they would do if they had to destroy an innocent country in order to save Stoic philosophy, or other nonsense like that. And the critics of Stoicism don’t make those kinds of objections. Instead, everything revolves around a simple question whose answers are inevitably acceptable: how can I realistically live the good life? (Or something like that. I don’t actually know much about Stoicism.)

Effective Altruism certainly should not give up formal rigor in answering our main questions. However, we should be careful about which questions we seek to answer. And we should be careful about which questions we use as the basis for criticizing other Effective Altruists. We should focus on the questions that really matter for deciding practical things like where we will work, where we will donate and who we will vote for. If you have in mind some unrealistic, fantastical scenario about how utility could be maximized in a moral dilemma, (a) don’t talk about it, and (b) don’t complain about what other Effective Altruists say or might have to say about it. It’s pointless and needless on both sides.