Why are you here? An origin stories thread.

tl;dr I think origin stories are useful. Please share yours if you like. Here’s mine.

I generally find “origin stories”—personal accounts of how people first became involved with EA—to be quite illuminating, and I think that the bulk of their value comes from their aggregation. We have a lot of data on this at a high level, but I would expect a much smaller number of stories at a much finer level of granularity to be about as useful. You might notice particular communities that many EA folk were involved in before EA. You might see particular articles, individuals, ideas or events that were common formative moments for people. Or you might spot common features of the paths of people from underrepresented groups in EA[1]. All of these things can help inform more effective strategies for community-building.

I often ask about origin stories in person. (Pro tip: “Why are you here?” is great for clickbait but a bit confrontational for a first meeting; try “How did you get involved in EA?”, “What was it about EA that first appealed to you?”, “What brings you here today?” or even simply “What’s your story? Tell me more about you.”) But it’s very difficult for me or others to spot trends in a collection of anecdotes stored in my brain, so I invite you all to share your story here. If you’d like me to share it anonymously on your behalf, you can share it here. Readers, don’t forget the sampling bias we have: e.g. if “I used to be part of this other online forum” comes up a lot in these stories, no, that does not mean that EA folk basically all love online forums.

Share as much as you wish to. One line is fine. If longer, don’t feel that you need to explain any gaps or vagueness (but of course, please don’t be deliberately misleading). Don’t censor yourself too much with the thought that “Oh, that’s probably just me—that bit won’t be useful for spotting trends”. And consider including any “sub” origin stories that are applicable, e.g.: How did you first join an offline or online community? What’s the story behind your first significant donation or altruistically-motivated career change? When have you made a significant shift in cause prioritisation, and why?

I’ll go first.

Why I’m here

Thanks to an obsession with record-keeping and a good memory, I can see some very early roots. As a child I had a standard magic wish: “Happiness for everyone forever.” I (told myself I) killed spiders because that’s one quick death for a spider against many flies being slowly eaten alive. My favourite games were “schools”, i.e. “teach my sister to read despite her protests”, and “boat crash”/“plane crash”, i.e. “all of my toys are on the verge of death and we manage to save them all”. A consequentialist hero syndrome if ever there was one.

Early teens

In my early teens, teachers and friends prompted me to think more about ethics and rationality within the context of religion. I ended a school essay on abortion with “if there’s no pain, there’s no problem” (referring to the foetus). I went to church for two years for reasoning similar to Pascal’s Wager, despite this choice generally leading to a lot of misery for me and everyone around me. At this particular church, I was made fun of for considering Islam and for Googling for more questions about Christianity rather than for answers, and the judgements of my romantic life really stung. But my poor fellow churchgoers did try so hard with me, and must have thought me mad for attending for so long as a very anguished agnostic.

At some point in this period I also became a moral non-cognitivist (I thought that “right”, “wrong”, “good”, “bad” etc. didn’t refer to anything fundamental in the universe). I’d gained more appreciation for the influence of society on our so-called “moral” beliefs, and I couldn’t conceive of what it would mean for anything to really matter. I wrote in my diary: “The meaning of life is that there is no meaning”.

Mid teens

Still, my interest in the fundamental questions about life persisted, and at the age of 16 I discovered philosophy. Yes! This was it! I devoured the subject, spending nearly every spare moment of college in the library, filling ringbinders with notes and discussing philosophy with anyone who’d humour me. As with religion, I struggled to understand how other people could go about their lives without giving these questions serious thought. “Sure,” I thought, “philosophy is notorious for its lack of progress, but that’s in part because its successes split off into new disciplines, and in any case, don’t you all at least want to try??” I discovered Peter Singer’s work and loved his rational approach to ethics. At some point I was walking home, and I still remember the place where I stopped as a thought hit me: “Happiness is intrinsically good”. There’s little more I can say about that, but suffice it to say that from that moment on I have been a moral cognitivist. Perhaps nothing actually matters, but I think that contemplation of a particular feature of direct experience has allowed me to at least conceive of something really, ultimately “mattering”.

Thanks to my new-found moral cognitivism built around what felt like an insight—happiness is intrinsically good—I focused my philosophy A-level on ethics where I could, and contributed to my sixth form’s Activists’ Society.

Beyond this “insight” and basic principles of rationality, I hadn’t yet come across anything that seemed relevant to what was ultimately right or wrong (I wasn’t thinking about useful day-to-day moral habits, intuitions or heuristics yet, reasoning that that could only come after I had some fundamental principles in place to draw from). The one exception was perhaps my own rather extreme risk-aversion when it comes to my personal safety, which made me wonder whether our moral obligations always lie with the worst-off. For a while, my top candidate ethical theories were classical utilitarianism and what I called “bar consequentialism” (until I found existing terms for similar ideas): basically the idea that we should always focus entirely on trying to increase the happiness of whoever is suffering the most in the world. Utilitarianism didn’t seem to put enough weight on avoiding suffering, and “bar consequentialism” seemed ridiculous for attributing no value to any increases in happiness or alleviation of suffering unless the subject was the worst-off person in existence, but any way of combining the two seemed arbitrary. Prioritarianism (whereby the more you’re suffering, the more important—but not exclusively important—it becomes to help you) showed some promise, but then I concluded that it was actually just the same as classical utilitarianism, and that I could just be a classical utilitarian who puts relatively large numbers on extreme suffering when it comes to judgements about exactly which experiences count as exactly which degrees of happiness/suffering. Nice. Utilitarianism it is. Job done.

Not quite. I was still far from certain, and although I knew that a lifetime would not be long enough to find answers, I was going to do my best. I drew up a life plan that involved studying ethics through philosophy (and, to some extent, theology) as a career at the best university I could get into, publishing whatever I learnt and donating what I could.

Alongside all my philosophising at sixth form I’d been stepping up my altruistic behaviour. I thought most farm animals probably had happy lives, even though some of those lives were awful, so it was fine to eat meat because otherwise those animals wouldn’t exist at all. Then I realised one day that this was the wrong way to think about it—that the awful lives were really awful, and since we didn’t really know where our meat came from, we shouldn’t take the chance—and declared myself a vegetarian in the same breath. I donated to and volunteered for several charities. And I took over the Activists’ Society when no one else would (a habit picked up from maths class, where I felt despised for putting my hand up and getting the answer right but nevertheless wanted the lesson to progress...also a behaviour I’d repeat a few times in the years to come). I learnt about perceived self-righteousness quickly, and we rebranded as the self-deprecating Save The World Club. I don’t know why my altruistic motivation steadily increased over this period, but it did.

Late teens to present

I moved to Oxford for university and soon heard about this society that had just launched called Giving What We Can. What an awesome project! I attended a talk by the founder, Toby Ord, on vegetarianism and remember thinking, “It’s another Peter Singer”.

My mental health declined at Oxford (as I thought it probably would) and I sent emails to Peter Singer and Toby Ord asking for advice about whether to drop out, saying “When other people are giving me advice they never factor in the ‘ethics’ part anywhere near as much as I do so its not always that helpful.” They made time for me and told me what I expected to hear, but I really valued the reassurance of hearing someone else say it. I dropped out.

But I maintained my connection to Oxford. At the first Giving What We Can talk I attended, I audibly gasped at the differences in cost-effectiveness estimates from the DCP2 report. I met one of the Felicifia admins at the talk and became an enthusiastic user; for some reason I’d never thought to Google “utilitarianism forum”. It was there that I came across the wonderfully concise line: “Utilitarianism and Nihilism are the only ethical systems that make any sense. If nihilism is true, it doesn’t matter what I do, so I might as well assume it’s false.” (I’m nowhere near that confident, of course, but it’s a nice summary of why I don’t bother thinking about the possibility of nihilism any more.) There was some overlap between the Giving What We Can crew and the transhumanists/rationalists, and one of the people in this overlap told me the astronomical waste argument at a pub meet-up. I thought it was ridiculous. Then I wrote my undergraduate dissertation on why it was sensible. I kept up my co-leadership of the Oxford anti-genocide society for a while (so chosen because I figured the most effective ways to maximise happiness might still be to focus on the worst-off, and I couldn’t think of anything worse), but eventually my co-leadership of the new Giving What We Can: Oxford society took over. I also launched another charitable student society at one point, but eventually that too was handed on so that I could focus more on the emerging “Effective Altruism” movement. I couldn’t get enough of it. In those early years I learnt so much, never failed to be excited at first meetings with like-minded souls, and attended several retreats where I experienced a very strong sense of community, of a beautiful, meaningful shared purpose and heartfelt mutual support to help each other get there.

By the time we decided on the name Centre for Effective Altruism I had one of the fancy “director” titles (we students love our fancy titles) and was working on community support for Giving What We Can. When The Life You Can Save needed new leadership, I stepped up, and when my mental health deteriorated, I stepped down. I then spent three years in jobs in which my priorities were (i) my mental health and (ii) exploration of a large variety of industries and people. Then last summer I was able to take some months off to reevaluate and, feeling more mentally healthy and realising that I’d learnt relatively little of value in my three years “off”, I decided to try EA community-building full-time again.

This is one version of my story. It’s already a bit more exposure than I really feel comfortable with at the moment, so I’ve left out a lot of the embarrassing mistakes I made (nearly all around being too confident and/or emotional in my judgements) and a lot of the things I found difficult. But I hope that you find something useful in it, that you enjoy reflecting on your own story, and that you remember that we each have a story riddled with personal mistakes and challenges but united in one belief: Tomorrow can be brighter than today[2].

Final notes

Readers, please keep in mind that these stories are not who we are. They are some of the places we have been and/or snapshots of where we happen to be today. And no doubt they contain many honest inaccuracies.

For anyone interested in more on this topic, see The Life You Can Save’s Supporters Stories, Tom Ash’s A taxonomy of EA origin stories, and some more from Origin Stories Month in January 2015. [Edit: Also, on the related question of how people found one of the top sources of EA folk, the LessWrong survey (2014) lists referrals as follows: a link (464, 31%), Harry Potter and the Methods Of Rationality (385, 26%), Overcoming Bias (210, 14%), friend (199, 13%), search engine (114, 8%), other fiction (17, 1%).]

[1] By “underrepresented groups” I mean “the collection of people who currently possess to a relatively high degree the kinds of skills, experiences, motivations, resources, mindsets, habits or other characteristics that you would like to see more of in the EA community”. Maybe for you that includes demographics severely underrepresented in EA compared to the global population. Maybe it’s “very high altruistic dedication”, etc. Of course, qualitative origin stories are not the only way to collect relevant data on this.

[2] This post is not just about data collection; the timing is no coincidence. My hope is that this might also serve as a kind of “EA gratitude journalling”—that reflecting on your early days in EA and what you loved or grew to love about it will help generate positive feelings of nostalgia, appreciation and camaraderie. At the time of writing, I sense that tensions are particularly high in our community. I of course have my own thoughts on what mistakes particular people/organisations have made or are making, and on whose judgment or honesty I most trust on which matters, and I think it is often extremely important to discuss them. And often emotion is in the driving seat when I’m discussing my latest thoughts, despite my self-deception to the contrary; we’re all human. But I suspect that an extra dose of empathy and mutual appreciation would be useful for the disagreements being aired right now, and I hope that taking part in this exercise, even privately, will help. Almost no one is evil. Almost everything is broken.