Effective Altruism is an Ideology, not (just) a Question

Introduction

In a widely-cited article on the EA forum, Helen Toner argues that effective altruism is a question, not an ideology. Here is her core argument:

What is the definition of Effective Altruism? What claims does it make? What do you have to believe or do, to be an Effective Altruist?
I don’t think that any of these questions make sense.
It’s not surprising that we ask them: if you asked those questions about feminism or secularism, Islamism or libertarianism, the answers you would get would be relevant and illuminating. Different proponents of the same movement might give you slightly different answers, but synthesising the answers of several people would give you a pretty good feeling for the core of the movement.
But each of these movements is answering a question. Should men and women be equal? (Yes.) What role should the church play in governance? (None.) What kind of government should we have? (One based on Islamic law.) How big a role should government play in people’s private lives? (A small one.)
Effective Altruism isn’t like this. Effective Altruism is asking a question, something like:
“How can I do the most good, with the resources available to me?”

In this essay I will argue that her view of effective altruism being a question and not an ideology is incorrect. In particular, I will argue that effective altruism is an ideology, meaning that it has a particular (if somewhat vaguely defined) set of core principles and beliefs, and associated ways of viewing the world and interpreting evidence. After first explaining what I mean by ideology, I proceed to discuss the ways in which effective altruists typically express their ideology, including by privileging certain questions over others, applying particular theoretical frameworks to answer these questions, and privileging particular answers and viewpoints over others. I should emphasise at the outset that my purpose in this article is not to disparage effective altruism, but to try to strengthen the movement by helping EAs to better understand its actual intellectual underpinnings.

What is an ideology?

The first point I want to explain is what I mean when I talk about an ‘ideology’. Basically, an ideology is a constellation of beliefs and perspectives that shape the way adherents of that ideology view the world. To flesh this out a bit, I will present two examples of ideologies: feminism and libertarianism. Obviously these will be simplified since there is considerable heterogeneity within any ideology, and there are always disputes about who counts as a ‘true’ adherent of any ideology. Nevertheless, I think these quick sketches are broadly accurate and helpful for illustrating what I am talking about when I use the word ‘ideology’.

First consider feminism. Feminists typically begin with the premise that the social world is structured in such a manner that men as a group systematically oppress women as a group. There is a richly structured theory about how this works and how it interacts with different social institutions, including the family, the economy, the justice system, education, health care, and so on. In investigating any area, feminists typically focus on gendered power structures and how they shape social outcomes. When something happens, feminists ask ‘what effect does this have on the status and place of women in society?’ Given these perspectives, feminists typically are uninterested in, and highly sceptical of, any accounts of social differences between men and women based on biological differences, or attempts to rationalise those differences on the basis of social stability or cohesion. This way of looking at things, this focus on particular issues at the expense of others, and this set of underlying assumptions together constitute the ideology of feminism.

Second consider libertarianism. Libertarians typically begin with the idea that individuals are fundamentally free and equal, but that governments throughout the world systematically step beyond their legitimate role of protecting individual freedoms by restricting those freedoms and violating individual rights. In analysing any situation, libertarians focus on how the actions of governments limit the free choices of individuals. Libertarians have extensive accounts of how this occurs through taxation, government welfare programs, monetary and fiscal policy, the criminal justice system, state-sponsored education, the military-industrial complex, and so on. When something happens, libertarians ask ‘what effect does this have on individual rights and freedoms?’ Given these perspectives, libertarians typically are uninterested in, and highly sceptical of, any attempts to justify state intervention on the basis of increasing efficiency, increasing equality, or improving social cohesion. This way of looking at things, this focus on particular issues at the expense of others, and this set of underlying assumptions together constitute the ideology of libertarianism.

Given the foregoing, here I summarise some of the key aspects of an ideology:

  1. Some questions are privileged over others.

  2. There are particular theoretical frameworks for answering questions and analysing situations.

  3. As a result of 1 and 2, certain viewpoints and answers to questions are privileged, while others are neglected as being uninteresting or implausible.

With this framework of what an ideology is in mind, I now want to apply it to the case of effective altruism. In doing so, I will consider each of these three aspects of an ideology in turn, and see how they relate to effective altruism.

Some questions are privileged over others

Effective altruism, according to Toner (and many others), asks a question something like ‘How can I do the most good, with the resources available to me?’. I agree that EA does indeed ask this question. However, it doesn’t follow that EA isn’t an ideology, since as we have just seen, ideologies privilege some questions over others. In this case we can ask: what other similar questions could effective altruism ask? Here are a few that come to mind:

  • What moral duties do we have towards people in absolute poverty, animals in factory farms, or future generations?

  • What would a virtuous person do to help those in absolute poverty, animals in factory farms, or future generations?

  • What oppressive social systems are responsible for the most suffering in the world, and what can be done to dismantle them?

  • How should our social and political institutions be structured so as to properly represent the interests of all persons, or all sentient creatures?

I’ve written each with a different ethical theory in mind. In order, these are: deontology, virtue ethics, Marxist/postcolonial/other critical theories, and contractarian ethics. While some readers may phrase these questions somewhat differently, my point is simply to emphasise that the question you ask depends upon your ideology.

Some EAs may be tempted to respond that all my examples are just different ways, or more specific ways, of asking the EA question ‘how can we do the most good’, but I think this is simply wrong. The EA question is the sort of question that a utilitarian would ask, and presupposes certain assumptions that are not shared by other ethical perspectives. These assumptions include: that there is (in principle) some way of comparing the value of different causes, that it is of central importance to consider maximising the positive consequences of our actions, and that historical connections between us and those we might try to help are not of critical moral relevance in determining how to act. EAs asking this question need not explicitly believe all these assumptions, but I argue that in asking the EA question instead of other questions they could ask, they are implicitly relying upon tacit acceptance of these assumptions. To assert that these are beliefs shared by all other ideological frameworks is simply to ignore the differences between different ethical theories and the worldviews associated with them.

Particular theoretical frameworks are applied

In addition to the questions they ask, effective altruists tend to have a very particular approach to answering these questions. In particular, they tend to rely almost exclusively on experimental evidence, mathematical modelling, or highly abstract philosophical arguments. Other theoretical frameworks are generally not taken very seriously or are simply ignored. Theoretical approaches that EAs tend to ignore include:

  • Sociological theory: potentially relevant to understanding the causes of global poverty, how group dynamics operate, and how social change occurs.

  • Ethnography: potentially highly useful in understanding causes of poverty, the efficacy of interventions, how people make dietary choices regarding meat eating, the development of cultural norms in government or research organisations surrounding the safety of new technologies, and other such questions, yet I have never heard of an EA organisation conducting this sort of analysis.

  • Phenomenology and existentialism: potentially relevant to determining the value of different types of life and what sort of society we should focus on creating.

  • Historical case studies: there is some use of these in the study of existential risk, mostly relating to nuclear war, but otherwise this method is largely ignored as a potential source of information about social movements, improving society, and assessing the risk of various catastrophes.

  • Regression analysis: potentially highly useful for analysing effective causes in global development, methods of political reform, or even the ability to influence AI or nuclear policy formation, but largely neglected in favour of either experiments or abstract theorising.

If readers disagree with my analysis, I would invite them to investigate the work published on EA websites, particularly research organisations like the Future of Humanity Institute and the Global Priorities Institute (among many others), and see what sorts of methodologies they utilise. Regression analysis and historical case studies are relatively rare, and the other three techniques I mention are virtually unheard of. This represents a very particular set of methodological choices about how best to go about answering the core EA question of how to do the most good.

Note that I am not taking a position on whether it is correct to privilege the types of evidence or methodologies that EA typically does. Rather, my point is simply that effective altruists seem to have very strong norms about what sorts of analysis are worth doing, despite the fact that relatively little time is spent in the community discussing these issues. GiveWell does have a short discussion of their principles for assessing evidence, and there is a short section in the appendix of the GPI research agenda about harnessing and combining evidence, but overall the amount of time spent discussing these issues in the EA community is very small. I therefore contend that these methodological choices are primarily the result of ideological preconceptions about how to go about answering questions, and not of an extensive analysis of the pros and cons of different techniques.

Certain viewpoints and answers are privileged

Ostensibly, effective altruism seeks to answer the question ‘how to do the most good’ in a rigorous but open-minded way, without ruling out any possibilities at the outset or making assumptions about what is effective without proper investigation. It seems to me, however, that this is simply not an accurate description of how the movement actually investigates causes. In practice, the movement seems heavily focused on the development and impacts of emerging technologies. Though not so pertinent in the case of global poverty, this is somewhat applicable in the case of animal welfare, given the increasing focus on the development of in vitro meat and plant-based meat substitutes. This technological focus is most evident in the focus on far future causes, since all of the main far future cause areas focused on by 80,000 hours and other key organisations (nuclear weapons, artificial intelligence, biosecurity, and nanotechnology) relate to new and emerging technologies. EA discussions also commonly feature discussion and speculation about the effects that anti-aging treatments, artificial intelligence, space travel, nanotechnology, and other speculative technologies are likely to have on human society in the long-term future.

By itself the fact that EAs are highly focused on new technologies doesn’t prove that they privilege certain viewpoints and answers over others – maybe a wide range of potential cause areas have been considered, and many of the most promising causes just happen to relate to emerging technologies. However, from my perspective this does not appear to be the case. As evidence for this view, I will present as an illustration the common EA argument for focusing on AI safety, and then show that much the same argument could also be used to justify work on several other cause areas that have attracted essentially no attention from the EA community.

We can summarise the EA case for working on AI safety as follows, based on articles such as those from 80,000 hours and CEA (note this is an argument sketch and not a fully-fledged syllogism):

  • Most AI experts believe that AI with superhuman intelligence is certainly possible, and has a nontrivial probability of arriving within the next few decades.

  • Many experts who have considered the problem have advanced plausible arguments for thinking that superhuman AI has the potential for highly negative outcomes (potentially even human extinction), but there are current actions we can take to reduce these risks.

  • Work on reducing the risks associated with superhuman AI is highly neglected.

  • Therefore, the expected impact of working on reducing AI risks is very high.

The three key aspects of this argument are expert belief in the plausibility of the problem, the very large impact of the problem if it does occur, and the problem being substantively neglected. My contention is that this argument can be adapted into parallel arguments for other cause areas. I shall present three: overthrowing global capitalism, philosophy of religion, and resource depletion.
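To make the structure of this reasoning explicit, the underlying expected-value logic can be sketched roughly as follows (this is my own schematic formalisation rather than a formula used by any of the organisations cited, and the symbols are illustrative only):

$$\mathbb{E}[\text{value of marginal work on cause } C] \;\approx\; P_C \times S_C \times T_C$$

where $P_C$ is the probability experts assign to the problem being real, $S_C$ is the scale of the harm averted if it is, and $T_C$ is the tractability of additional work, which is plausibly higher the more neglected the cause is. The parallel arguments below simply plug different causes into the same schema.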

Overthrowing global capitalism

  • Many experts on politics and sociology believe that the institutions of global capitalism are responsible for extremely large amounts of suffering, oppression, and exploitation throughout the world.

  • Although there is much work criticising capitalism, work on devising and implementing practical alternatives to global capitalism is highly neglected.

  • Therefore, the expected impact of working on devising and implementing alternatives to global capitalism is very high.

Philosophy of religion

  • A sizeable minority of philosophers believe in the existence of God, and at least some very intelligent and educated philosophers are adherents of a wide range of different religions.

  • According to many religions, humans who do not adopt the correct beliefs and/or practices will be destined to an eternity (or at least a very long period) of suffering in this life or the next.

  • Although religious institutions have extensive resources, the amount of time and money dedicated to systematically analysing the evidence and arguments for and against different religious traditions is extremely small.

  • Therefore, the expected impact of working on investigating the evidence and arguments for the various religions is very high.

Resource depletion

  • Many scientists have expressed serious concern about the likely disastrous effects of population growth, ecological degradation, and resource depletion on the wellbeing of future generations and even the sustainability of human civilization as a whole.

  • Very little work has been conducted to determine how best to respond to resource depletion or degradation of the ecosystem so as to ensure that Earth remains inhabitable and human civilization is sustainable over the very long term.

  • Therefore, the expected impact of working on investigating long-term responses to resource depletion and ecological collapse is very high.

Readers may dispute the precise way I have formulated each of these arguments, or exactly how closely they all parallel the case for AI safety; however, I hope they will see the basic point I am trying to drive at. Specifically, if effective altruists are focused on AI safety essentially because of expert belief in its plausibility, the large scope of the problem, and the neglectedness of the issue, a similar case can be made for working on overthrowing global capitalism, conducting research to determine which religious belief (if any) is most likely to be correct, and developing and implementing responses to resource depletion and ecological collapse.

One response that I foresee is that none of these causes are really neglected, because there are plenty of people focused on overthrowing capitalism, researching religion, and working on environmentalist causes, while very few people work on AI safety. But remember, outsiders would likely say that AI safety is not really neglected either, because billions of dollars are invested in AI research by academics and tech companies around the world. The point is that there is a difference between working in a general area and working on the specific subset of that area that is highest impact and most neglected. Just as AI safety research is neglected even if AI research more generally is not, so too in the parallel cases: I argue that serious evidence-based research into the specific questions I identify is highly neglected, even if the broader areas are not.

Potential alternative causes are neglected

I suspect that at this point many of my readers will be mentally marshalling additional arguments as to why AI safety research is in fact a more worthy cause than the other three I have mentioned. Doubtless there are many such arguments one could present, and probably I could devise counterarguments to at least some of them – and so the debate would progress. My point is not that the candidate causes I have presented actually are good causes for EAs to work on, or that there aren’t any good reasons why AI safety (along with other emerging technologies) is a better cause. My point is rather that these reasons are not generally discussed by EAs. That is, the arguments generally presented for focusing on AI safety as a cause area do not uniquely pick out AI safety (and other emerging technologies like nanotechnology or bioengineered pathogens), but EAs making the case for AI safety essentially never notice this, because their ideological preconceptions bias them towards focusing on new technologies and away from the sorts of causes I mention here. Of course EAs do go into much more detail about the risks of new technologies than I have here, but the core argument for focusing on AI safety in the first place is not applied to other potential cause areas to see whether (as I think it does) it also applies to those causes.

Furthermore, it is not as if effective altruists have carefully considered these possible cause areas and come to the reasoned conclusion that they are not the highest priorities. Rather, they have simply not been considered. They have not even been on the radar, or at best barely on the radar. For example, I searched for ‘resource depletion’ on the EA forums and found nothing. I searched for ‘religion’ and found only the EA demographics survey and an article about whether EA and religious organisations can cooperate. A search for ‘socialism’ yielded one article discussing what is meant by ‘systemic change’, and one article (with no comments and only three upvotes) explicitly outlining an effective altruist plan for socialism.

This lack of interest in other cause areas can also be found in the major EA organisations. For example, the stated objective of the Global Priorities Institute is:

To conduct foundational research that informs the decision-making of individuals and institutions seeking to do as much good as possible. We prioritise topics which are important, neglected, and tractable, and use the tools of multiple disciplines, especially philosophy and economics, to explore the issues at stake.

On the face of it this aim is consistent with all three of the suggested alternative cause areas I outlined in the previous section. Yet the GPI research agenda focuses almost entirely on technical issues in philosophy and economics pertaining to the long-termism paradigm. While AI safety is not discussed extensively, it is mentioned a number of times, and much of the research agenda appears to be developed around related questions in philosophy and economics that the long-termism paradigm gives rise to. Religion and socialism are not mentioned at all in this document, while resource depletion is mentioned only indirectly, by two references in the appendix under ‘indices involving environmental capital’.

Similarly, the Future of Humanity Institute focuses on AI safety, AI governance, and biotechnology. Strangely, it also pursues some work on highly obscure topics such as the aestivation solution to the Fermi paradox and the probability of Earth being destroyed by microscopic black holes or metastable vacuum states. At the same time, there is nothing on any of the potential new problem areas I have mentioned.

Under their problem profiles, 80,000 hours does not mention having investigated anything relating to religion or overthrowing global capitalism (or even substantially reforming global economic institutions). They do link to an article by Robert Wiblin discussing why EAs do not work on resource scarcity; however, this is not a careful analysis or investigation, just his general views on the topic. Although I agree with some of the arguments he makes, the depth of analysis is very shallow relative to the potential risks and the concerns raised about this issue by many scientists and writers over the decades. Indeed, I would argue that there is about as much substance in this article, as a rebuttal of resource depletion as a cause area, as one finds in the typical article dismissing AI fears as exaggerated and hysterical.

In yet another example, the Foundational Research Institute states that:

Our mission is to identify cooperative and effective strategies to reduce involuntary suffering. We believe that in a complex world where the long-run consequences of our actions are highly uncertain, such an undertaking requires foundational research. Currently, our research focuses on reducing risks of dystopian futures in the context of emerging technologies. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible.

Hence, even though it seems that in principle socialists, Buddhists, and ecological activists (among others) are highly concerned about reducing the suffering of humans and animals, FRI ignores the topics that these groups would tend to focus on, and instead focuses its attention on the risks of emerging technologies. As in the case of FHI, it also seems to find room for some topics of highly dubious relevance to any of EA’s goals, such as this paper about the potential for correlated actions with civilizations located elsewhere in the multiverse.

Outside of the main organisations, there has been some discussion about socialism as an EA cause, for example on r/EffectiveAltruism and by Jeff Kaufman. I was able to find little else about either of the other two potential cause areas I outline.

Overall, on the basis of the foregoing examples I conclude that the amount of time and energy spent by the EA community investigating the three potential new cause areas that I have discussed is negligible compared to the time and energy spent investigating emerging technologies. This is despite the fact that most of these groups were not ostensibly established with the express purpose of reducing the harms of emerging technologies, but have simply chosen this cause area over other possibilities that would also potentially fulfill their broad objectives. I have not found any evidence that this choice is the result of early investigations demonstrating that emerging technologies are far superior to the cause areas I mention. Instead, it appears to be mostly the result of a lack of interest in the sorts of topics I identify, and a much greater ex ante interest in emerging technologies over other causes. I present this as evidence that the primary reason effective altruism focuses so extensively on emerging technologies over other speculative but potentially high-impact causes is the privileging of certain viewpoints and answers over others. This, in turn, is the result of the underlying ideological commitments of many effective altruists.

What is EA ideology?

If many effective altruists share a common ideology, then what is the content of this ideology? As with any social movement, this is difficult to specify with any precision and will obviously differ somewhat from person to person and from one organisation to another. That said, on the basis of my research and experiences in the movement, I would suggest the following core tenets of EA ideology:

  1. The natural world is all that exists, or at least all that should be of concern to us when deciding how to act. In particular, most EAs are highly dismissive of religious or other non-naturalistic worldviews, and tend to simply assume without further discussion that views like dualism, reincarnation, or theism cannot be true. For example, the map of EA concepts lists under ‘important general features of the world’ pages on ‘possibility of an infinite universe’ and ‘the simulation argument’, yet makes no mention of the possibility that anything could exist beyond the natural world. It requires a very particular ideological framework to regard the simulation argument as more important or pressing than non-naturalism.

  2. The correct way to think about moral/ethical questions is through a utilitarian lens, in which the focus is on maximising desired outcomes and minimising undesirable ones. We should focus on the effect of our actions on the margin, relative to the most likely counterfactual. There is some discussion of moral uncertainty, but outside of this, deontological, virtue ethics, contractarian, and other approaches are rarely applied in philosophical discussions of EA issues. This marginalist, counterfactual, optimisation-based way of thinking is largely borrowed from neoclassical economics, and is not widely employed by many other disciplines or ideological perspectives (e.g. communitarianism).

  3. Rational behaviour is best understood through a Bayesian framework, incorporating key results from game theory, decision theory, and other formal approaches. Many of these concepts appear in the idealised decision making section of the map of EA concepts, and are widely applied in other EA writings.

  4. The best way to approach a problem is to think very abstractly about that problem, construct computational or mathematical models of the relevant problem area, and ultimately (if possible) test these models using experiments. This model appears to be based on how research is approached in physics, with some influence from analytic philosophy. The methodologies of other disciplines are largely ignored.

  5. The development and introduction of disruptive new technologies is a more fundamental and important driver of long-term change than socio-political reform or institutional change. This is clear from the overwhelming focus on technological change of top EA organisations, including 80,000 hours, the Centre for Effective Altruism, the Future of Humanity Institute, the Global Priorities Project, the Future of Life Institute, the Centre for the Study of Existential Risk, and the Machine Intelligence Research Institute.

I’m sure others could devise different ways of describing EA ideology that potentially look quite different to mine, but this is my best guess based on what I have observed. I believe these tenets are generally held by EAs, particularly those working at the major EA organisations, but are generally not widely discussed or critiqued. That this set of assumptions is fairly specific to EA should be evident if one reads various criticisms of effective altruism from those outside the movement. Although they do not always express their concerns using the same language that I have, it is often clear that the fundamental reason for their disagreement is the rejection of one or more of the five points mentioned above.

Conclusion

My purpose in this article has not been to contend that effective altruists shouldn’t have an ideology, or that the current dominant EA ideology (as I have outlined it) is mistaken. In fact, my view is that we can’t really get anywhere in rational investigation without certain starting assumptions, and these starting assumptions constitute our ideology. It doesn’t follow from this that any ideology is equally justified, but how we adjudicate between different ideological frameworks is beyond the scope of this article.

Instead, all I have tried to do is argue that effective altruists do in fact have an ideology. This ideology leads them to privilege certain questions over others, to apply particular theoretical frameworks to the exclusion of others, and to focus on certain viewpoints and answers while largely ignoring others. I have attempted to substantiate my claims by showing how different ideological frameworks would ask different questions, use different theoretical frameworks, and arrive at different conclusions to those generally found within EA, especially the major EA organisations. In particular, I argued that the typical case for focusing on AI safety can be modified to serve as an argument for a number of other cause areas, all of which have been largely ignored by most EAs.

My view is that effective altruists should acknowledge that the movement as a whole does have an ideology. We should critically analyse this ideology, understand its strengths and weaknesses, and then to the extent to which we think this set of ideological beliefs is correct, defend it against rebuttals and competing ideological perspectives. This is essentially what all other ideologies do – it is how the exchange of ideas works. Effective altruists should engage critically in this ideological discussion, and not pretend they are aloof from it by resorting to the refrain that ‘EA is a question, not an ideology’.