My Coming of Age as an EA: 12 Problems with Effective Altruism

This text has many, many hyperlinks; it is useful to at least glance at the front page of the linked material to get it. It is an expression of me thinking, so it has many community jargon terms. Thanks to Oliver Habryka, Daniel Kokotajlo, and James Norris for comments. No, really, check the front page of the hyperlinks.
  • Why I Grew Skeptical of Transhumanism

  • Why I Grew Skeptical of Immortalism

  • Why I Grew Skeptical of Effective Altruism

  • Only Game in Town

Wonderland’s rabbit said it best: “The hurrier I go, the behinder I get.”

We approach 2016, and the more I see light, the more I see brilliance popping up everywhere, the Effective Altruism movement growing, TEDs and Elons spreading the word, the more we switch our heroes in the right direction, the behinder I get. “But why?” you say.

Clarity, precision, I am tempted to reply. I have left the intellectual suburbs of Brazil and gone straight into the strongest hub of production of things that matter, the Bay Area, via Oxford’s FHI office; I now split my time between UC Berkeley and the CFAR/MIRI office. In the process, I have navigated an ocean of information: read hundreds of books and papers, watched thousands of classes, and become proficient in a handful of languages and a handful of intellectual disciplines. I’ve visited Olympus and met our living demigods in person as well.

Against the overwhelming force of an extremely upbeat personality surfing a hyper base-level happiness, three forces have brought me to my current state of pessimism: approaching the center, learning voraciously, and meeting the so-called heroes.

I was a transhumanist, an immortalist, and an effective altruist.

Why I Grew Skeptical of Transhumanism

The transhumanist in me is skeptical that technological development will be fast enough for working on improving the human condition to be worth it now; he sees most technologies as fancy toys that don’t get us there. Our technologies can’t, and won’t for a while, lead our minds to peaks anywhere near the peaks we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to produce from an engineering perspective. We can induce a rainbow, but we don’t even have the concept of force yet. Our knowledge of the brain, relative to our goals for the brain, is at the level of the physics of someone who has just discovered that spraying water on a sunny day produces a rainbow. It’s not even physics yet.

Believe me, I have read thousands of pages of papers on the most advanced topics in cognitive neuroscience; my advisor spent his entire career, from Harvard to tenure, doing neuroscience, and was the first person to implant non-human neurons that actually healed a brain to the point of recovering functionality. As Marvin Minsky, who all but invented AI and the multi-agent computational theory of mind, told me: I don’t recommend entering a field where every four years all knowledge is obsolete; they just don’t know it yet.

Why I Grew Skeptical of Immortalism

The immortalist in me is skeptical because he understands the complexity of biology from conversations with the centimillionaires and with the chief scientists of anti-aging research facilities worldwide; he has met the bio-startup founders and gets that the structure of incentives does not look good for bio-startups anyway. So although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder than what the man-hours left to be invested in it can surmount, at least during my lifetime, or before the Intelligence Explosion.

Believe me, I was the first cryonicist among the 200 million people striding across my country, won a prize for anti-ageing research at the bright young age of 17, and hang out on a regular basis with all the people in this world who want to beat death and still share in our privilege of living, just in case some new insight comes along that changes the tide; but none has come in the last ten years, as our friend Aubrey will be keen to tell you in detail.

Why I Grew Skeptical of Effective Altruism

The effective altruist in me is skeptical too, although less so: I’m still founding an EA research institute, keeping a loving eye on the one I left behind, living with EAs, working at EA offices, and mostly broadcasting ideas and researching with EAs. Here are some problems with EA which make me skeptical after being shaken around by the three forces:

  1. The Status Games: Signalling, countersignalling, going one more meta-level up, outsmarting your opponent, seeing others as opponents, my cause is the only true cause, zero-sum mating scarcity, pretending that poly eliminates mating scarcity, founders versus joiners, researchers versus executives, our institutions versus their institutions, cheap individuals versus expensive institutional salaries; it’s gore all the way up and down.

  2. Reasoning by Analogy: Few EAs are able to do, and are doing, their due intellectual diligence. I don’t blame them: the space of Crucial Considerations is not only very large but extremely uncomfortable to look at. Who wants to know that our species has not even found the stepping stones to make sure that what matters is preserved and guaranteed at the end of the day? It is a hefty ordeal. Nevertheless, it is problematic that fewer than 20 EAs (one in 300?) are actually reasoning from first principles, thinking all things through from the very beginning. Most of us are looking away from at least some philosophical assumption or technological prediction. Most of us are cooks and not yet chefs. Some of us have not even woken up yet.

  3. Babies with a Detonator: Most EAs still carry their transitional objects around, clinging desperately to an idea or a person they think is more guaranteed to be true, be it hardcore patternism about philosophy of mind, global aggregative utilitarianism, veganism, or the expectation of immortality.

  4. The Size of the Problem: Whether you are fighting suffering, Nature, Chronos (death), Azathoth (evolutionary forces), or Moloch (deranged emergent structures of incentives), the size of the problem is just tremendous. One completely ordinary reason not to want to face the problem, or to be in denial about it, is the problem’s enormity.

  5. The Complexity of the Solution: Let me spell this out: the nature of the solution is not simple in the least. It’s possible that we luck out and it turns out that the Orthogonality Thesis, the Doomsday Argument, and Mind Crime are just philosophical curiosities with no practical bearing on our earthly engineering efforts, that the AGI or emulation will by default fall into an attractor basin which implements some form of MaxiPok with details it only grasps after CEV or the Crypto, and we will be okay. That is possible, and it is more likely than the scenario in which our efforts end up being the decisive factor. We need to focus our actions on the branches where they matter, though.

  6. The Nature of the Solution: So let’s sit down side by side and stare at the void together for a bit. The nature of the solution is getting a group of apes from everywhere around the world who just invented the internet, and getting them to coordinate an effort that fills in the entire box of Crucial Considerations yet unknown (this is the goal of Convergence Analysis, by the way), finding every single last one of them to the point where the box is filled. Then, once we have all the Crucial Considerations available, we must develop, faster than anyone else trying, a translation scheme that translates our values to a machine or emulation, in a physically sound and technically robust way (that’s if we don’t find a Crucial Consideration which, say, steers our course towards Mars instead). Then we need to develop the engineering prerequisites to implement a thinking being smarter than all our scientists together, one that can reflect philosophically better than the last two thousand years of effort while becoming the most powerful entity in the universe’s history, and that will fall into the right attractor basin within mindspace. That’s if superintelligences are even technically possible. Add to that that we, or it, have to guess correctly all the philosophical problems that are (a) relevant and (b) unsolvable within physics (if any) or by computers. And all of this has to happen while the most powerful corporations, states, armies, and individuals attempt to seize control of the smart systems themselves, without being curtailed by the counter-incentive of not destroying the world, either because they don’t realize the danger, or because the first-mover advantage seems worth the risk, or because they are about to die anyway so there’s not much to lose.

  7. How Large an Uncertainty: Our uncertainties loom large. We have some technical but not much philosophical understanding of suffering, and our technical understanding is insufficient to confidently assign moral status to other entities, especially if they diverge in more dimensions than brain size and architecture. We’ve barely scratched the surface of the technical understanding of how to increase happiness, and the philosophical understanding is also taking its first steps.

  8. Macrostrategy Is Hard: A chess grandmaster usually takes many years to acquire sufficient strategic skill to command the title. It takes a deep and profound understanding of unfolding structures to grasp how to beam a message or a change into the future. We are attempting to beam a complete value lock-in into the right basin, which is proportionally harder.

  9. Probabilistic Reasoning = Reasoning by Analogy: We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, both because those other people were cooks, not chefs, and because sometimes you actually need to try a one-in-ten-thousand chance (see the sketch after this list). But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.

  10. Excessive Trust in Institutions: Very often people go through a simplifying set of assumptions that collapses a brilliant idea into an awful donation, when they reason:
    I have concluded that cause X is the most relevant.
    Institution A is an EA organization fighting for cause X.
    Therefore I donate to institution A to fight for cause X.
    To begin with, this is very expensive compared to donating to any of the three P’s: projects, people, or prizes. Furthermore, the crucial points at which to fund institutions are when they are about to die, when they are just starting, or when they are building a kind of momentum within a narrow window of opportunity where the derivative gains are particularly large, or when you have private information about their current value. That an institution agrees with you about a cause being important is far from sufficient for assessing the expected value of your donation.

  11. Delusional Optimism: Everyone who, like past-me, moves in with delusional optimism will always have a blind spot around the feature of reality about which they are in denial. It is not a problem to have some individuals with a blind spot, as long as the rate doesn’t surpass some group sanity threshold; yet, on an individual level, it is often the case that those who can gaze into the void a little longer than the rest end up being the ones who accomplish things. Staring into the void makes people show up.

  12. Convergence of Opinions May Strengthen Separation within EA: Thus far, the longer someone has been an EA, the more likely they are to transition from whichever box they are in at the time to an opinion in one of the subsequent boxes in this flowchart. There are still people in all the opinion boxes, but the trend has been to move with that flow. Institutions, however, have a harder time escaping being locked into a specific opinion. As FHI moves deeper into AI, GWWC into poverty, 80k into career selection, etc., they become more congealed. People’s opinions are still changing, and some of the money follows, but institutions are crystallizing into particular opinions, and in the future they might prevent transitions between opinion clusters and the free mobility of individuals, as national frontiers already do. Once institutions, which in theory are commanded by people who agree with the institutional values, notice that their rate of loss to the broader EA movement is higher than their rate of gain, they will have incentives to prevent the flow of talent, ideas, and resources that has so far been a hallmark of Effective Altruism, and a reason many of us find it impressive: its being an intensional movement. Any part that congeals or becomes extensional will drift off behind, and this may create insurmountable separation between groups that want to claim ‘EA’ for themselves.
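
To make the one-in-ten-thousand point in item 9 concrete, here is a minimal illustrative sketch of the expected-value arithmetic (the numbers and the helper function are hypothetical, not from the original post): a long shot is worth trying whenever its probability of success times the value of success exceeds the cost of trying.

    # Illustrative only: hypothetical numbers for when a 1-in-10,000 chance
    # is worth taking on expected-value grounds.

    def worth_trying(p_success: float, value_if_success: float, cost_of_trying: float) -> bool:
        """Return True if the expected value of trying exceeds its cost."""
        return p_success * value_if_success > cost_of_trying

    # A 1-in-10,000 chance of securing something valued at 10 million "units"
    # (pick your currency), at a cost of 100 units: expected value 1,000 > 100.
    print(worth_trying(1 / 10_000, 10_000_000, 100))    # True
    # The same long shot at a cost of 2,000 units fails the test: 1,000 < 2,000.
    print(worth_trying(1 / 10_000, 10_000_000, 2_000))  # False

Base rates from other people's attempts can shift the probability estimate, but they do not change the structure of this comparison.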

Only Game in Town

The reasons above have transformed a pathological optimist into a wary skeptic about our future and the value of our plans for getting there. And yet, I see no option other than to continue the battle. I wake up in the morning and consider my alternatives. Hedonism: well, that is fun for a while, and I could try a quantitative approach to guarantee maximal happiness over the course of the 300,000 hours I have left. But all things considered, anyone reading this is already too close to the epicenter of something that could become extremely important and change the world to have the affordance to wander off indeterminately. I look at my high base happiness and don’t feel justified in maximizing it up to the point of no marginal return; there clearly is value elsewhere than here (points inwards), and the self of which I am made clearly has strong altruistic urges anyway, so at least above a threshold of happiness it has reason to purchase the extremely good deals, in expected value of others’ happiness, that seem to be on the market. Other alternatives? Existentialism? Well, yes, we always have a fundamental choice, and I feel the thrownness into this world as much as any Kierkegaard does. Power? When we read Nietzsche it gives that fantasy impression that power is really interesting and worth fighting for, but at the end of the day we still live in a universe where the wealthy are often reduced to having to spend their power in pathetic signalling games and zero-sum disputes, or coercing minds to act against their will. Nihilism and moral fictionalism, like existentialism, all collapse into having a choice, and if I have a choice, my choice is always going to be the choice to, most of the time, care, try, and do.

Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals and pragmatically continue to be only an EA.

It is the only game in town.


Most comments are on LessWrong.