Joan Gass: Opening talk

Joan Gass, Managing Director at the Centre for Effective Altruism, gives the opening talk for the EA Global 2020 Virtual Conference. She discusses what she values in the EA community, shares the story of how she became involved in it, and reflects on where the movement has been in the past decade — and could be headed next.

Below is a transcript of Joan’s talk, which we’ve lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.

The Talk

Hello. I know for a lot of folks, this is a really difficult time. So I’m glad that we could come together for the first virtual EA Global conference.

Before I get started, I want to spend a few minutes talking about the EA community and our reaction to coronavirus. One thing that I think is notable is that before the virus was even talked about in the mainstream, there were EAs discussing it a lot online. I saw 100-comment Facebook threads where people were talking about the best ways to prepare and try to keep society safe.

I think these early conversations are notable, but [I was also struck by] the scientific mindset that people brought to them. I saw EAs making models with any available data, thinking about things like triggers for social action, trying to imagine the best ways to protect themselves, and — even more importantly — to protect others who might be more at risk.

In addition to conversations within the EA community, I think the values we have were upheld in the reaction of the EA Global team, particularly Amy [Labenz]. If you can’t tell, Amy lives for EA Global. She spent months preparing for the conference. And so when it came time to decide whether or not to cancel it, she had a really tough decision to make — and she knew that she couldn’t be objective, so she decided to bring on an advisory board. She identified that she might be biased and put other safeguards in place. She did that throughout the decision-making process. I wasn’t part of the advisory board, but I did get to watch the conversation [unfold].

One thing that impressed me is how Amy asked every person, once they’d made a recommendation, “What would make you change your mind about that recommendation?” She also collected people’s ideas and advice through an anonymous Google document, so that arguments could be evaluated on their own merits, independent of people’s positions. I think this upholds a value that’s really important within the EA community: trying to identify our personal biases and take steps to minimize them.

Of course, another thing on my mind is how many people within the EA community are thinking about taking direct actions to try and reduce the harms of COVID. I’ve seen EAs write up documents and articles that have gone viral. I know several EAs who are working with governments at a national level. And of course, there are people who have dedicated their entire careers to biosecurity, or government more broadly, to try and decrease the risk of dangerous situations.

So I’m starting today feeling incredibly grateful to be part of a community that uses a scientific mindset to think about how to protect and help society. I’m grateful to be part of a community that tries to identify and reduce its own biases. And I’m grateful to be part of a community alongside people who are thinking about how to use their careers to solve pressing global problems.

In my talk today, I want to speak about a few things that I think make the EA community special by telling you a bit of my personal story. And then I want to take some time at the end to reflect on how far we’ve come in the last 10 years, and what might be in store for the future.

How the EA community has helped me

For me, the EA community has been incredibly powerful in a couple of different ways. It has helped me move from paralysis to action. It has helped me explore different causes. It has helped me make big changes in my career, and it has helped me think about how to live out my values in a way that’s sustainable.

My story starts when I was 19. The first time I read The Life You Can Save, over 10 years ago, I was really compelled by the book, but I also felt paralyzed by it. I didn’t know how to make decisions like… buying a Starbucks coffee [because I would think] about how far that money could have gone if I had donated it abroad instead.

A few years later, I moved to Uganda. I met a friend there, who I’m going to call David, and who gave me permission to share our stories. David had started the Giving What We Can chapter at his university, and we talked a lot about The Life You Can Save. He told me that he thought deciding to donate 10% of his income meant that he could make a meaningful difference in other people’s lives without a significant sacrifice for himself. He talked about how he liked the commitment because he wasn’t paralyzed by everyday financial decisions, and could always increase the amount he gave in the future. This really resonated with me. Through my conversations with him, I was able to move from paralysis to action.

My conversations with David were also significant as I thought about which causes to explore. I grew up in a moderate, religious community in Texas. And so, coming out as bisexual was, at times, a pretty rough experience. I felt a lot of solidarity with people who’d had similar experiences. That solidarity, in part, motivated me to move to Uganda and work with Ugandan LGBT activists. I learned a lot from the activists I worked with about grassroots organizing and community building. It was important work.

I also had questions about how — and whom — I could help the most. David and I talked about this. I remember one night when we were walking over the hills of Naguru and he told me he had had similar questions. He shared a story about how his grandfather was a survivor of the Holocaust, and how David got into social impact work because he cared about helping the Jewish community. And then, he reflected on that decision. He realized that being able to relate to someone didn’t mean that he wanted to help them more. He decided he wanted to help people as much as possible, no matter who they were, rather than just helping people who shared his own religious background.

Our conversation struck a deep chord in me. It made me realize that I didn’t think that the suffering of LGBT people mattered more than the suffering of other people. It clearly matters, but other people’s suffering matters too. And I didn’t want to prioritize only the suffering that was most like mine. I really want a world in which there is no homophobia or transphobia or gender-based discrimination. And I also really want a world in which mothers don’t have to bury their children because of completely preventable diseases, or have to worry about how they’re going to feed their families.

My conversation with David prompted me to think about how I could have the greatest impact on the most people. I realized that I had to make a lot of choices because I had limited time and resources.

Like David, I decided that I wanted to consider the ways I might do the most good, in general, including for people who were unlike me. Our conversations prompted me to move from paralysis to action and consider exploring new causes. They also helped introduce me to the wider effective altruism community, which helped me think about big career decisions.

By the time I was applying to graduate school, I felt like my identity was pretty set for the future. The application process for public policy and MBA programs required me to pitch myself. I had to constantly talk about the nonprofit and education sectors that I started working in when I was in Uganda, and how I wanted to go to graduate school to grow that work, then transition it into a social enterprise.

When I moved to Boston, I was really excited about pursuing that vision. I also became more involved in the in-person EA Boston community. That’s when I realized I had several questions about my fundamental assumptions. I started thinking about whether I should be paying more attention to animal welfare or working on projects related to future generations. I also did a few calculations about social enterprises and how often they succeed, and at what scale. And I started to think that maybe working in government was going to be more impactful because of the amount of resources, on average, that people had the chance to influence.

This was all pretty overwhelming. But my housemate, Scott, was an effective altruist, and it was really interesting watching his journey. He was in the middle of a public health master’s degree. He had started that degree because he wanted to create the next GiveWell charity. About halfway through his degree, he started thinking about whether he should focus more on animal welfare. He eventually pivoted, even though animal welfare had no relationship to his graduate school degree, because he thought the issue was more neglected, and therefore decided his career could have a bigger impact [in that area] on the margin.

I found this really inspiring and quite different from the other communities I was a part of. In my business school communities, career pursuits that would allow you to make a lot of money or join a really cool new tech company were rewarded. And I think society, in general, rewards status and prestige.

Something that I think is really special about the EA community is that people are rewarded for doing what they think will have the most impact. And the process of figuring that out might look like changing projects or changing cause areas. I think this is incredibly important and really special. I work with people who have really weird resumes, and I think that’s pretty awesome.

That brings me to the fourth thing that I think is really valuable about the EA community. Not only did it help me move from paralysis to action, explore different causes, and be open to different career options, but it also helped me think about how to pursue EA in a way that was sustainable for me.

I remember that at the end of graduate school, when I was thinking about careers in government related to emerging technology, I had two big concerns. First, I was worried about the culture of national security organizations and how I would fit in. And second, I was worried about going from a career in which I felt I could see the consistent impact that I was making to one that was much more speculative. I felt worried about my motivation and whether I might burn out — and then I felt guilty about having those feelings in the first place.

Talking to EAs who were in the roles that I was considering was incredibly helpful. It was so helpful to be honest about my worries. It turned out that some of my assumptions were completely off-base and some were pretty accurate. Many EAs who were older than me helped me develop ways to evaluate a few different pathways pretty quickly and pressure-test the things that I was most unsure about, but thought could be particularly impactful. And then, most importantly, they would digest the conversations with me, letting me think through which things I thought would work for me and which ones wouldn’t, helping me gain a better sense of my personal sustainability and fit.

There have been other ways that the community has been really helpful for me in this regard. I still think about the trade-off between spending money on myself and saving it or donating it. And I’m still figuring out how much I want to work and what that looks like in terms of being able to do it in a way that sets me up for the long haul. And it is so valuable to have friends in the community who align with and respect my values, and can help me come up with guideposts in a way that I think will work.

I think that this vulnerability — this ability to talk about the things that we’re worried about in terms of personal fit and what we might be struggling with — is so important, because if we believe that most of our impact might come decades from now, in our careers or with our donations, then it feels really important to figure out how to pursue doing good sustainably.

Current challenges for the EA community

So I’ve told you about some of the things I really value about the EA community. Now, I want to tell you about one thing that I think is important for us to be on the lookout for if we want to safeguard this community in the future. That is the pressure to conform.

I think we have a community that rewards the pursuit of doing good, but I don’t think we’re immune from issues related to status within our community. And I think these can sometimes prevent us from having genuine debates. I’ve had people tell me about situations where they want to express a view, but they think they’ll be dismissed or looked down upon because it’s one that’s unpopular.

One really extreme example of this is a friend of mine who was having lunch at an EA organization, and they expressed a well-thought-out, inside view of why AI [artificial intelligence] timelines were significantly longer than the dominant view in that organization. And then someone they were having lunch with [jokingly] called them a heretic.

I think this is really concerning. We definitely don’t have everything right, and we’re going to need a lot of heretical ideas, a whole marketplace of ideas, in order for us to pressure-test our assumptions, call out our biases, and correct ourselves when we’re wrong.

So how do we counter the natural human tendency to inflate certainty? How do we try to reduce influence based on status? I have two suggestions.

The first is that if you have a position of status, or if you hold a view that’s popular, take special care to express uncertainty. I think there are two ways we could do this. One could be to mention what would make us change our mind about an issue. The other could be to mention the level of confidence we have in the correctness of our position. I am constantly surprised when I talk to leaders of EA-aligned organizations about something that they’re doing, or an approach they’re taking; they almost always express a level of confidence in it that’s lower than what I expect.

The second thing that I think we can do to counteract this pressure to conform is to make space for minority views. I think we need to do this proactively. We need to reward people who raise critiques. We need to be charitable to people whose views are different from our own. We need to apply the “steel man” tactic, sometimes, when critiquing our own opinions, because new ideas can help us discover new things. They can reveal our biases and our blind spots.

Most importantly, I think we need to think about [how we reward ideas]. We shouldn’t just take shortcuts by rewarding people who share our opinions. We should reward effort. We should praise people who have good methodologies and well-thought-out ideas, even if, and maybe especially if, they arrive at conclusions that are different from our own.

Where the EA community has been — and might go next

I’ve told you about some things I really admire about the EA community. And I’ve also told you about some ways that I think we can preserve this community for the future. I want to wrap up by reflecting on how far we’ve come in the last 10 years — and where we might go in the future.

It seems appropriate to travel back 10 years to 2010, because this is the first conference of 2020. [Although] that might be kind of hard — both because video conferencing was much worse in 2010, [and] because the EA community barely existed. You could take all of the members of Giving What We Can and put them into a single house. And then, on the front porch of that house, you could fit all of the AI safety researchers in the world. And in the backyard, there might be people cooking up old-school veggie burgers, but the Impossible Burger certainly didn’t exist. It was a different time.

The world has changed a lot in the last decade, and the EA community is still small in absolute terms, but I’ve found the progress that we’ve made inspiring. Today, you could not fit all of Giving What We Can in a house. The community has grown to 100 times the size it was 10 years ago. That’s enough to reenact the largest battle of Game of Thrones nine times over. And AI safety researchers have way more than a front porch. Oxford and Cambridge have their own existential risk centers. That’s just one example. And we have so many Impossible Burgers now. We have Beyond Burgers. They’re in fast-food chains across the United States. Good Dot sells burgers across India. And the Good Food Institute, an EA-aligned organization, has dozens of people working to create an ecosystem of alternative proteins and make plant-based meats mainstream.

I’m particularly excited about some of the progress we’ve made in terms of policy work in EA. And I’m totally biased because I went to policy school. One example of this is the Alpenglow Group, a nonprofit that was launched this year, whose goal is to put future generations at the heart of policymaking in the U.K. They’ve been working on issues related to biosecurity, emerging technology, and civil service reform.

So all of this makes me curious about what the EA community will look like 10 years from now. And one thing that’s certain is that there’s a lot that we don’t know. In 2030, it could be the case that some of the EA ideas and predictions that we cling to strongly now haven’t happened at all. And some of them might be even more accurate than we were expecting.

I wouldn’t be surprised if the first prime minister or president inspired by EA principles has already been born, and she’ll need a lot of people to help her navigate electoral politics and policy. Or it could be the case by 2030 that a social historian has become one of the most impactful people in EA, helping us think about new ways that we can have large-scale impact. Or it could be that there’s an artist who has found ways to deeply and movingly think about expanding moral circles, pushing people to think about communities beyond their location, or how to cross the human-animal divide, or relate to and empathize with future generations.

It’s probably the case that, in 2030, people will have jobs that don’t exist today. Whatever is in store for us, I think it’s important that we [continue to support the core principles] of EA: altruism and collaborative truth-seeking. This will help us to pursue new knowledge and approaches as opportunities arise — and to let go of old ideas, even if they are beloved, so that we can discover better ways to help others.

To sum up: I think it’s been a pretty incredible decade for EA. I think we have saved lives, made intellectual progress, and taken steps toward a safer and kinder world. And I also think the next decade can be even better. EA has a lot of room to grow and learn, but I think that if we invest in our community and continue to promote good norms, we will be well-positioned to [fulfill much of our] potential.

As I think back on my own story, I remember Scott and David. They’ve had really impactful, EA-inspired careers in their own right already, but through a few conversations with me, they pretty significantly changed my career trajectory. So as you go into this conference, I encourage you to think about how you can be a Scott and a David to others.

It is pretty incredible that we get to do this together as a community. I hope that you keep an open mind as you explore our marketplace of ideas within this conference. And I hope you have a wonderful next two days.