Effective Altruism and Free Riding

I’d like to thank Parker Whitfill, Andrew Kao, Stefan Schubert, and Phil Trammell for very helpful comments. Errors are my own.

Many people have argued that those involved in effective altruism should “be nice”, meaning that they should cooperate when facing prisoner’s-dilemma-type situations ([1] [2] [3]). While I find some of these arguments convincing, it seems underappreciated just how often someone attempting to do good will face prisoner’s dilemmas. Previous authors have mostly highlighted zero-sum conflict between opposing value systems [3] [4] or common-sense social norms like not lying [1]. However, the problem faced by a group of people trying to do good is effectively a public goods problem [10]; this means that, except in rare cases (such as when people agree 100% on moral values), someone looking to do good will be playing a prisoner’s dilemma against others looking to do good.

In this post I first give some simple examples to illustrate how collective action problems almost surely arise among a group of people looking to do good. I then argue that the standard cause-prioritization methodology used within EA recommends defecting (“free-riding”) in these prisoner’s dilemma settings. Finally, I discuss some potential implications, including that popularizing EA thinking may cause harm and that there may be large gains from improving cooperation.

Main Points:

1. A group of people trying to do good are playing a form of a public goods game. Except in rare circumstances, this will lead to inefficiencies due to free-riding (defecting), and thus gains from cooperation.

2. Free-riding comes from individuals putting resources toward causes which they personally view as neglected (being under-valued by other people’s value systems) at the expense of causes for which there is more consensus.

3. Standard EA cause prioritization recommends that people free-ride on others’ efforts to do good (at least when interacting with people not in the EA community).

4. If existing societal norms are to cooperate when trying to do good, EA may cause harm by encouraging people to free-ride.

5. There may be large gains from improving cooperation.

Collective Action Problems Among People Trying to do Good

Note that the main argument in this section is not original to me. Others within EA have written about this, some in more general settings than what I look at here [10].

The standard collective action problem arises in a setting where people are selfish (each individual cares about their own consumption) but there’s some public good, say clean air, that they all value. The main issue is that when deciding whether or not to pollute the air, an individual doesn’t consider the negative impacts that pollution will have on everyone else. This creates a prisoner’s dilemma: they would all be better off if no one polluted, but any individual is better off polluting (defecting). These problems are often solved through governments or through informal norms of cooperation.
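
The dilemma can be sketched in a few lines of code. The payoff numbers here are my own illustrative choices, not from any source: polluting saves the polluter 3, but total pollution costs every player 2 per polluter.

```python
from itertools import product

# Toy pollution game with hypothetical payoffs: polluting saves the polluter 3
# (cheaper production) but imposes a cost of 2 on *every* player (dirty air).

def payoff(my_choice, choices):
    """Payoff to one player given everyone's choices (1 = pollute, 0 = abate)."""
    private_gain = 3 * my_choice        # savings from polluting
    pollution_cost = 2 * sum(choices)   # everyone suffers from total pollution
    return private_gain - pollution_cost

for choices in product([0, 1], repeat=2):
    print(choices, [payoff(c, choices) for c in choices])

# Polluting is a dominant strategy: whatever the other player does, switching to
# pollute adds 3 - 2 = +1 to my own payoff. Yet mutual pollution gives (-1, -1),
# worse for both than mutual abatement's (0, 0) -- the prisoner's dilemma.
```

The same structure drives every example below: each player's individually best move makes the group worse off.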

Here I argue that this collective action problem is almost surely present among a group of people trying to do good, even if every member of the group is completely unselfish. All that is needed is that people’s value systems place some weight on how good the world is (they are not simply warm-glow givers) and that they have some disagreement about what counts as good (there’s some difference in values). The key intuition is that in an uncooperative setting each altruist will donate to causes based on their own value system without considering how much other altruists value those causes. This leads to underinvestment in causes which many different value systems place positive weight on (causes with positive externalities for other value systems) and overinvestment in causes which many value systems view negatively (causes with negative externalities). Except in a few unlikely circumstances, an allocation can be found which is preferred by every value system (a Pareto improvement) over the non-cooperative equilibrium, just like with any other public goods game.

For most readers, I expect that the examples below will get the main point across. For anyone especially interested, here is a more general model of altruistic coordination that I used to check the intuition.


A. Two funders, positive externalities

Take a situation with two funders: a total utilitarian and an environmentalist (taken to mean someone who intrinsically values environmental preservation). Each has a total of $1000 to donate. The total utilitarian thinks that climate change mitigation is a very important cause, but they would prefer that funding instead go toward AI safety research, which they think is about 50% more important than climate change. The environmentalist also thinks climate change mitigation is important, but they would prefer to spend money on near-term conservation efforts, which they view as being 50% more important than climate change. The environmentalist places almost no value on AI safety research, and the total utilitarian places almost no value on near-term conservation efforts. If they don’t cooperate, the unique Nash equilibrium has them both spending their money on their own preferred causes: $1000 goes to AI safety, $1000 to conservation, and $0 to climate change. If they could cooperatively allocate donations, they would give all of the money ($2000) to climate change, which gives each of them a payoff 33% higher than in the non-cooperative case.
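
The arithmetic can be checked with a short script. The per-dollar weights (1.5 for a funder’s favorite cause, 1.0 for climate, 0 for the other’s favorite) are just the linear values stated in the example:

```python
# Per-dollar values each funder assigns to each cause, as stated in example A.
values = {
    "utilitarian":      {"ai_safety": 1.5, "climate": 1.0, "conservation": 0.0},
    "environmentalist": {"ai_safety": 0.0, "climate": 1.0, "conservation": 1.5},
}

def payoff(funder, allocation):
    """A funder's payoff from the *total* allocation: causes are public goods."""
    return sum(values[funder][cause] * amount
               for cause, amount in allocation.items())

nash = {"ai_safety": 1000, "conservation": 1000, "climate": 0}  # each funds own cause
coop = {"ai_safety": 0, "conservation": 0, "climate": 2000}     # all to climate

for funder in values:
    n, c = payoff(funder, nash), payoff(funder, coop)
    print(f"{funder}: non-cooperative {n}, cooperative {c}, gain {c / n - 1:.0%}")
# Each funder gets 1500 alone but 2000 under cooperation: a 33% gain for both.
```
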

B. Two funders, negative externalities

The gains from cooperation would be even larger if each funder placed negative value on the other funder’s preferred cause. For example, if one funder’s preferred cause were pro-choice advocacy and the other’s were pro-life advocacy, then their payoffs in the non-cooperative setting may be nearly zero (their donations cancel each other out), which means the cooperative setting will have nearly infinitely higher payoffs in percentage terms. This idea has been noted before in writings on moral trade [4].

Importantly, even if funders’ preferences for direct work lead to no negative externalities, there could be negative externalities in their preferences for advocacy. For example, in the situation in example A, neither funder places negative value on the other funder’s preferred cause. However, if we allow the utilitarian to fund advocacy which persuades people to donate to AI safety rather than climate change or conservation, this advocacy would be negatively valued by the environmentalist. Thus, even small differences in preferences for direct work can lead to zero-sum conflict on the advocacy front (for further discussion see [3] and [12]).

C. Multiple funders, positive externalities

Now notice that we could add a third funder to example A who is in a symmetric situation (say they value anti-aging research, which the other two funders hardly value at all, 50% more than climate change, but place no value on AI safety or conservation). In this case the gains from cooperating (putting all $3000 into climate change) increase to 100% for each person: each funder still gets a payoff of 1500 from funding their own cause alone, but now gets 3000 when all the money goes to climate change. In general, adding funders who each have their own “weird” cause increases the gains from cooperating on causes for which there is more consensus.

D. No externalities

One case where cooperation does not lead to any gains is where people’s value systems are perfectly orthogonal to each other, so that there are no externalities. The most famous example of this is an economy of selfish individuals (everyone only cares about their own consumption and places no value, positive or negative, on the consumption of others). The non-cooperative equilibrium in this setting will be efficient, meaning that there can be no gains from cooperation (footnote: this is similar to the first welfare theorem). This could also occur (although I think it’s very unlikely) in a setting with altruistic individuals. In the setting from example A, if we change preferences so that both the environmentalist and the utilitarian place no value on climate change, then the non-cooperative equilibrium of the game cannot be improved upon. However, as noted above, the possibility of advocacy can create negative externalities between funders, and thus significant opportunities for cooperation. Also, I think in reality we see significant overlap in values, leading to large positive externalities from donations to certain causes.

E. Identical Value Systems

Another case in which the non-cooperative equilibrium is efficient is when there is no value disagreement among funders. Imagine two total utilitarians in the setting from example A. They would both choose to fund AI safety research in the non-cooperative setting, which is also the cooperative choice.

However, notice that this conclusion depends on the assumption that people are perfectly moral. If we add that they are partially selfish, but still agree on what is morally right, then in the non-cooperative setting they will overinvest in their own personal consumption. This leads to gains from cooperating by spending more on the public good (AI safety), as in the classical collective action problem.

Perhaps the current EA community is close to having identical moral value systems (and is mostly unselfish), to the point where the gains from cooperation are low. I expect that this isn’t true. There seems to be a lot of heterogeneity in value systems within EA, and even small value differences can lead to a lot of inefficiency through the advocacy channel mentioned above [12]. Also, even if people’s moral values are identical, there seems to be a lot of disagreement within EA about difficult-to-answer empirical questions (such as whether we are living in the most important century [13]). These disagreements, as long as they persist, also lead to collective action problems.

EA Cause-Prioritization and Free-Riding

Having established that people attempting to do the most good are typically playing a prisoner’s dilemma, I now want to look at what EA organizations (mainly 80,000 Hours) have suggested people do. Here I would like to distinguish between cooperation with people involved in EA and cooperation with people outside of it. Within EA it seems commonly accepted that people should cooperate with those who have different values [2]. People often speak of maximizing “our” impact rather than my impact. And, importantly, people seem to disapprove of choices which benefit your own value system at the expense of others’ values.

With prisoner’s dilemmas against people outside of EA, the standard advice seems to be to defect. In 80,000 Hours’ cause prioritization framework, the goal is to estimate the marginal benefit (measured by your value system, presumably) of an extra unit of resources invested in a cause area [5]. No mention is made of how others value a cause, except to say that cause areas which you value a lot relative to others are likely to have the highest returns. This is exactly the logic of free-riding which led to coordination failures in the above examples: every individual makes decisions irrespective of the benefits or harms to other value systems, which leads to underinvestment in causes which many people value positively and overinvestment in causes which many value negatively.
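
This logic can be made concrete with a toy marginal-returns calculation. The square-root impact function and the funding levels below are my own illustrative assumptions, not 80,000 Hours’: with diminishing returns, a crowded consensus cause looks worse at the margin than a neglected idiosyncratic one, even if you value the consensus cause almost as much.

```python
import math

# Assume (hypothetically) that a cause's impact scales as weight * sqrt(funding),
# so the marginal impact of one more dollar is weight * 0.5 / sqrt(funding).

def marginal_impact(weight, existing_funding):
    """Marginal per-dollar return on a cause at its current funding level."""
    return weight * 0.5 / math.sqrt(existing_funding)

consensus = marginal_impact(1.0, 1_000_000)  # e.g. climate: heavily funded by others
pet_cause = marginal_impact(1.5, 10_000)     # your idiosyncratic, neglected cause

print(consensus, pet_cause)
# The neglected cause has a 15x higher marginal return, so the individually
# optimal move is to fund it -- i.e., to free-ride on others' climate funding.
```

The crowdedness of the consensus cause is itself the product of everyone else’s values, which is why this individually optimal rule reproduces the defection in the earlier examples.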

The cause areas in example A were chosen because I think climate change is one area where EA is probably free-riding off other people’s efforts to do good. Given its wide range of negative consequences (harm to GDP, the global poor, animals, the environment, and extinction risk), a variety of moral systems place positive weight on mitigating climate change. Perhaps for this reason, governments and other groups are putting a large amount of resources toward the problem. This large amount of resources, along with the assumption of diminishing returns, has led many EAs to not put resources toward climate change (because it is not neglected) and to instead focus on other cause areas. In effect, this is a decision to free-ride on the climate change mitigation work being done by those with different value systems. I expect this is also the case for many other causes which EAs regard as “important but not neglected”.

What Should We Do About This?

Although I believe that the EA community frequently defects in prisoner’s dilemmas, I am much less certain about whether this is a bad thing. If everyone else is defecting, and it’s very costly to improve cooperation, then the best we can do is to defect ourselves. However, if there is currently some cooperation going on, following EA advice could reduce that cooperation, and thus be sub-optimal. Furthermore, even if there isn’t much cooperation currently, working to improve cooperation could be more valuable than simply not cooperating, depending on how costly it is to do so.

Working to Not Destroy Cooperation

There are a few reasons why I think it’s possible that there’s currently some cooperation between people with different value systems. First, a large literature in behavioral economics finds that people frequently cooperate when playing prisoner’s dilemmas, at least when they expect their opponent to also cooperate [6]. A number of people have also argued that studying economics causes people to defect more often in prisoner’s dilemmas (although the empirical research on this topic is inconclusive) [14]. Hopefully learning about effective altruism doesn’t lead to a similar behavior change among moral actors. However, it should be noted that in behavioral research the outcomes are typically monetary payoffs to participants. I’m not aware of any research showing that people tend to cooperate when the outcomes of the game are moral objectives (as in the examples listed above). For all I know, people don’t cooperate much in such situations, in which case it would not be possible for EA to cause more defection.

Next, some criticisms of effective altruism seem to be in line with the concern that it will reduce cooperation among those who wish to do good. Daron Acemoglu’s 2015 criticism of effective altruism is one example [7] (note that Acemoglu is one of the most influential economists in the world). Although much of his critique is about earning to give, I think its substance applies more generally. He claims that effective altruism often advocates doing good in ways that have negative externalities for others (like earning to give through high-frequency trading), and thus it may be harmful if it became normal to view earning to give as an ethical life. He thinks many existing norms are more beneficial, such as the view that activities like civil service or community activism are ethical.

More generally, there is a lot of criticism of private philanthropy for being “undemocratic” [8]. Free-riding issues among those looking to do good are one basis for this criticism. The government is the main institution we have for cooperating to solve collective action problems, including collective action problems between those looking to do good. Although any individual could do more good by donating their time and money to private philanthropy (defecting), we all may be better off if we all worked through the government or through some other cooperative channel. The large amount of criticism of private philanthropy may be evidence that cooperative norms around doing good are somewhat common in society.

If the above stories are true, and there actually is a degree of cooperative behavior happening, then spreading the methodology currently used within EA could be harmful, as it could lead to a decrease in cooperation. One may think we can still use this methodology without advocating that others do so, which may avoid any negative consequences. This is basically the idea of defecting in secret. As Brian Tomasik discusses [1], this seems unlikely to succeed; if EA has any major successes, then even without any advocacy other people are likely to notice and imitate our methodology.

Another implication is that further investments in EA cause prioritization could be harmful. One of the main differences between the cause prioritization work done by EA organizations and work more commonly done in economics is that EA cause prioritization takes the perspective of a benevolent individual rather than a government. Perhaps, as EA cause prioritization continues to improve, more people will choose to use its advice and act unilaterally rather than cooperatively.

I should also note that even if the above stories are true, the other benefits of EA (mainly, encouraging people to do good effectively) may outweigh any negative effects from reducing cooperation.

Working to Improve Cooperation

Even if there isn’t much cooperation currently happening, there could be large gains from working to build such cooperation. For example, if cooperative norms aren’t widespread, we could work to build those norms. If the government is currently very dysfunctional and non-cooperative, we can work to improve it. A number of EA initiatives already involve increasing cooperation, including:

1. Work on improving institutional decision-making [9] and international cooperation

2. Work on mechanism design for altruistic coordination [10]

3. CLR’s research initiative on cooperation [11]

The arguments given here only strengthen the case for working on those causes. There are also a number of academic literatures that could be valuable, including those on the private provision of public goods and on group conflict.

There are some other important considerations here. One is that methods for building cooperation within a like-minded group of people may not work for building cooperation among more diverse groups. For example, increasing the warm glow from fighting for a common cause may help solve collective action problems within a political party, but it may make it more difficult to get party members to support compromise with an opposing party (because compromise prevents them from getting warm glow from fighting).

Also, there may be reasons to prioritize building mechanisms for cooperation within effective altruism before expanding to a more value-diverse group of people. Let’s assume that people with value systems significantly different from the average EA’s tend to be mostly inefficient in their efforts to do good. If they are introduced to EA, they will be able to achieve their goals more effectively, which may actually have negative externalities for those currently involved in EA (through the advocacy channels mentioned above, for example). Thus, it may be better to first develop good mechanisms for cooperation, so that once these other people are introduced to EA ideas it will be rational for them to cooperate as well.

Finally, and more speculatively, I expect that many ways to improve cooperation involve increasing returns to scale, at least in a narrow sense. For example, improving institutions at the national or international level may only succeed if a very large number of people participate, which may be very difficult to achieve if the current norm is that altruists don’t cooperate much (you have to convince everyone to coordinate on another equilibrium). More appealing would be to pursue methods of cooperating which provide benefits even if smaller numbers of people participate. This could include reforming local governments, one at a time, then taking the reforms to state and national governments. Or it could include building a mechanism for cooperating within effective altruism and then adding more people into that mechanism incrementally.


There is no general reason to believe that good outcomes will arise when every individual aims to do the most good with respect to their own value system. In fact, in standard settings (like a group of people independently choosing where to donate money), the outcome when individuals aim to maximize their own impact will almost surely be inefficient. This means that there can be large gains from cooperation between altruistic individuals. It also means that the effective altruism movement, which encourages individuals to maximize their impact, could have negative consequences.


[1] https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/

[2] https://80000hours.org/articles/coordination/

[3] https://rationalaltruist.com/2013/06/13/against-moral-advocacy/

[4] https://www.fhi.ox.ac.uk/wp-content/uploads/moral-trade-1.pdf

[5] https://80000hours.org/articles/problem-framework/

[6] https://www.sciencedirect.com/science/article/pii/S1574071406010086

[7] http://bostonreview.net/forum/logic-effective-altruism/daron-acemoglu-response-effective-altruism

[8] https://www.vox.com/future-perfect/2019/5/27/18635923/philanthropy-change-the-world-charity-phil-buchanan

[9] https://80000hours.org/problem-profiles/improving-institutional-decision-making/

[10] https://drive.google.com/file/d/1_Tob-zKBVBrnuQ0kWEBFFuuo_4A6WIRj/view

[11] https://longtermrisk.org/topic/cooperation/

[12] https://www.philiptrammell.com/blog/43

[13] https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1

[14] https://journals.sagepub.com/doi/full/10.1177/0569434519829433?casa_token=0PVchSAHhlgAAAAA%3AYmfImpyiZaCm2zc_ccK5GGYM6I2cJh4pnPEkx6f8onL3ZU8RIf0hDE7-kp0fSYle2kiR5N7FkJmy