Common ground for longtermists

(Cross-posted from the Center for Reducing Suffering.)

Many in the longtermist effective altruism community focus on achieving a flourishing future for humanity by reducing the risk of early extinction or civilisational collapse (existential risks). Others, inspired by suffering-focused ethics, prioritise the prevention of worst-case outcomes with astronomical amounts of suffering. This has sometimes led to tensions between effective altruists, especially over the question of how valuable pure extinction risk reduction is.[1]

Despite these differences, longtermist EAs still have a lot in common. This common ground lies in our shared interest in improving the long-term future (assuming civilisation does not go extinct).[2] In this post, I’ll argue that we should focus (more) on this common ground rather than emphasising our differences.

This is to counteract the common tendency to discuss only what one disagrees on, and thereby lose sight of possible win-wins. Infighting is a common failure mode of movements, as it is an all too human tendency to allow differences to divide us and throw us into tribal dynamics.

Of course, others have already made the case for cooperation with other value systems (see e.g. 1, 2) and discussed the idea of focusing on improving the long-term future conditional on non-extinction (see e.g. 1, 2). The contribution of this post is to give a (non-exhaustive) overview of priority areas that (almost) all longtermists can agree on.

Improving values

Long-term outcomes are in large part determined by the values that future actors will hold. Therefore, better values prima facie translate into better futures.[3] While there isn’t full agreement on what counts as “better”, and I can’t speak for everyone, I still think that longtermist EAs can largely agree on the following key points:

  • We should strive to be impartially altruistic.

  • The well-being of all sentient beings matters. This includes non-human animals and possibly even non-biological beings (although there is disagreement about whether such entities will be sentient).

  • We should consider how our actions impact not just those existing now, but also those existing in the future. In particular, many would endorse the moral view that future individuals matter just as much as present ones.

While one can take these points for granted when immersed in the EA bubble, this is a lot of common ground, considering how uncommon these views are in wider society. Efforts to convince more people to broadly share this outlook seem valuable for all longtermists. (I’m bracketing discussions of the tractability of moral advocacy and other questions around it; see e.g. here for more details.)

Compromise rather than conflict

The emergence of destructive large-scale conflicts, such as (but not limited to) great power wars, is a serious danger from any plausible longtermist perspective. Conflict is a key risk factor for s-risks, but also increases the risk of extinction or civilisational collapse, and would generally lead to worse long-term outcomes.

Longtermists therefore have a shared interest in avoiding severe conflicts, and more broadly in improving our ability to solve coordination problems. We would like to move towards a future that fosters cooperation or compromise between competing actors (whether on the level of individuals, nations, or other entities). If this is successful, it will be possible to achieve win-wins, especially with advanced future technology; for instance, cultured meat would allow us to avoid animal suffering without having to change dietary habits.

Foresight and prudence

Another shared goal of longtermist EAs is that we want careful moral reflection to guide the future to the greatest extent possible. That is, we would like to collectively deliberate (cf. differential intellectual progress) on what human civilisation should do, rather than letting blind economic forces or Darwinian competition rule the day.

In particular, we would like to carefully examine the risks associated with powerful future technologies and to take precautionary measures against such risks, rather than rushing to develop any feasible technology as fast as possible. A prime example is work on the safety and governance of transformative artificial intelligence. Another example may be technologies that enable (imprudent) space colonisation, which, according to some, could increase extinction risks and s-risks.

To be able to influence the future for the better, we also need to better understand which scenarios are plausible (especially in terms of AI scenarios) and how we can have a lasting and positive impact on the trajectory of human civilisation (see e.g. 1, 2). Longtermists therefore have another common goal in research on cause prioritisation and futurism.

Improving our political system

Another example of common ground is ensuring that our political system works as well as possible, and avoiding harmful political and social dynamics. This is clearly valuable from both a suffering-focused and an “upside-focused” perspective, although it is not clear how tractable such efforts are. (For more details on possible interventions, see here.)

For instance, a plausible worry is that harmful individuals and ideologies will become dominant, resulting in a permanent lock-in of a totalitarian power structure. Historical examples of totalitarian regimes were temporary and localised, but a stable global dictatorship may become possible in the future.

This is particularly worrisome in combination with malevolent personality traits in leaders (although those can also cause significant harm in non-totalitarian contexts). Efforts to reduce malevolence or prevent a lock-in of a totalitarian regime therefore also seem valuable from many perspectives.


There are significant differences between those who primarily want to reduce suffering and those who primarily want a flourishing future for humanity. Nevertheless, I think there is a lot of common ground in terms of the shared goal of improving the long-term future. While I do not want to discourage thoughtful discussion of the remaining points of disagreement, I think we should be aware of this common ground, and focus on working towards a future that is good from many moral perspectives.

  1. Actual efforts to avert extinction (e.g., preventing nuclear war or improving biosecurity) may have effects beyond preventing extinction (e.g., they might improve global political stability), which are plausibly also valuable from a suffering-focused perspective. Reducing extinction risk can also be positive even from a purely suffering-focused perspective if we think space would counterfactually be colonised by an alien civilisation with worse values than humans. ↩︎

  2. However, preventing extinction is also a shared interest of many value systems; it is just not necessarily shared by (all) suffering-focused views, which are the subject of this post. So I do not mean to imply that efforts to avert extinction are in any way “uncooperative”. (One may also hold a pluralistic or non-consequentialist view that values preserving humanity while still giving foremost priority to suffering reduction.) ↩︎

  3. Of course, values are not the only relevant factor. For instance, the degree of rationality or intelligence of actors, as well as technological, physical, and economic constraints, also matter. ↩︎