Why I left EA

I don’t intend to convince you to leave EA, and I don’t expect you to convince me to stay. But typical insider “steel-manned” arguments against EA lack imagination about other people’s perspectives: for example, they assume that the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still under-value these perspectives). So I provide a unique perspective: a former “insider” who had a change of heart about the principles of EA.

Like many EAs, I’m a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don’t know what compels me to “bite bullets” and accept these conclusions. Moral particularism is closest to my current beliefs.

Some specific issues with EA ethics:

  • Absurd expected value calculations / Pascal’s mugging

  • Hypothetically causing harm to individuals for the good of the group. Some utilitarians come up with ways around this (e.g. the reputation cost would outweigh the benefits). But this raises the possibility that in some cases the costs won’t outweigh the benefits, and we’ll be compelled to do harm to individuals.

  • Under-valuing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child’s arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework. The unique costs of violence are also apparent through people’s extreme actions to avoid violence. Large migrations of people are most associated with war. Economic downturns cause increases in migration to a lesser degree, and disease outbreaks to a far lesser degree. This prioritization doesn’t line up with how bad EAs think these problems are.

Once I rejected utilitarianism, much of the rest of EA fell apart for me:

  • Valuing existential risk and pursuing high-risk, high-reward careers both rely on expected value calculations

  • Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me). I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their large numbers. (I still endorse veganism; I just don’t donate to animal charities.)

  • GiveWell’s recommendations are overly focused on disease-associated mortality and short-term economic indicators, from my perspective. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think that they undervalue how much reproductive freedom benefits women.)

The remaining principles of EA, such as donating significant amounts of one’s money and ensuring that a charity is effective in achieving its goals, weren’t unique enough to convince me to stay in the community.