Joan Gass, Managing Director at the Centre for Effective Altruism, gives the opening talk for the EA Global 2020 Virtual Conference. She discusses what she values in the EA community, shares the story of how she became involved in it, and reflects on where the movement has been in the past decade — and could be headed next.
Below is a transcript of Joan’s talk, which we’ve lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.
The Talk
Hello. I know for a lot of folks, this is a really difficult time. So I’m glad that we could come together for the first virtual EA Global conference.
Before I get started, I want to spend a few minutes talking about the EA community and our reaction to coronavirus. One thing that I think is notable is that before the virus was even talked about in the mainstream, there were EAs discussing it a lot online. I saw 100-comment Facebook threads where people were talking about the best ways to prepare and try to keep society safe.
These early conversations were notable in themselves, but [I was also struck by] the scientific mindset that people brought to them. I saw EAs making models with whatever data was available, thinking about things like triggers for social action, and trying to imagine the best ways to protect themselves and, even more importantly, to protect others who might be more at risk.
In addition to conversations within the EA community, I think our values were upheld in the response of the EA Global team, particularly Amy [Labenz]. If you can’t tell, Amy lives for EA Global. She spent months preparing for the conference. So when it came time to decide whether or not to cancel it, she had a really tough decision to make. She knew that she couldn’t be objective, so she brought on an advisory board and put other safeguards in place throughout the decision-making process. I wasn’t part of the advisory board, but I did get to watch the conversation [unfold].
One thing that impressed me is how Amy asked every person, once they’d made a recommendation, “What would make you change your mind about that recommendation?” She also collected people’s ideas and advice through an anonymous Google document, so that arguments could be evaluated on their own merits, independent of people’s positions. I think this upholds a value that’s really important within the EA community: trying to identify our personal biases and take steps to minimize them.
Of course, another thing on my mind is how many people within the EA community are thinking about taking direct action to try to reduce the harms of COVID. I’ve seen EAs write up documents and articles that have gone viral. I know several EAs who are working with governments at a national level. And of course, there are people who have dedicated their entire careers to biosecurity, or to government more broadly, to try to decrease the risk of dangerous situations.
So I’m starting today feeling incredibly grateful to be part of a community that uses a scientific mindset to think about how to protect and help society. I’m grateful to be part of a community that tries to identify and reduce its own biases. And I’m grateful to be part of a community alongside people who are thinking about how to use their careers to solve pressing global problems.
In my talk today, I want to speak about a few things that I think make the EA community special by telling you a bit of my personal story. And then I want to take some time at the end to reflect on how far we’ve come in the last 10 years, and what might be in store for the future.
How the EA community has helped me
For me, the EA community has been incredibly powerful in several ways. It has helped me move from paralysis to action. It has helped me explore different causes. It has helped me make big changes in my career. And it has helped me think about how to live out my values in a way that’s sustainable.
My story starts when I was 19. The first time I read The Life You Can Save, over 10 years ago, I was really compelled by the book, but I also felt paralyzed by it. I didn’t know how to make decisions as small as buying a Starbucks coffee, [because I would think] about how far that money could have gone if I had donated it abroad instead.
A few years later, I moved to Uganda. I met a friend there, who I’m going to call David, and who gave me permission to share our stories. David had started the Giving What We Can chapter at his university, and we talked a lot about The Life You Can Save. He told me that he thought deciding to donate 10% of his income meant that he could make a meaningful difference in other people’s lives without a significant sacrifice for himself. He talked about how he liked the commitment because he wasn’t paralyzed by everyday financial decisions, and could always increase the amount he gave in the future. This really resonated with me. Through my conversations with him, I was able to move from paralysis to action.
My conversations with David were also significant as I thought about which causes to explore. I grew up in a moderate religious community in Texas, and coming out as bisexual was, at times, a pretty rough experience. I felt a lot of solidarity with people who’d had similar experiences. That solidarity, in part, motivated me to move to Uganda and work with Ugandan LGBT activists. I learned a lot from the activists I worked with about grassroots organizing and community building. It was important work.
I also had questions about how — and whom — I could help the most. David and I talked about this. I remember one night when we were walking over the hills of Naguru and he told me he had had similar questions. He shared a story about how his grandfather was a survivor of the Holocaust, and how David got into social impact work because he cared about helping the Jewish community. And then, he reflected on that decision. He realized that being able to relate to someone didn’t mean that he wanted to help them more. He decided he wanted to help people as much as possible, no matter who they were, rather than just helping people who shared his own religious background.
Our conversation struck a deep chord in me. It made me realize that I didn’t think that the suffering of LGBT people mattered more than the suffering of other people. It clearly matters, but other people’s suffering matters too. And I didn’t want to prioritize only the suffering that was most like mine. I really want a world in which there is no homophobia or transphobia or gender-based discrimination. And I also really want a world in which mothers don’t have to bury their children because of completely preventable diseases, or have to worry about how they’re going to feed their families.
My conversation with David prompted me to think about how I could have the greatest impact on the most people. I realized that I had to make a lot of choices, because my time and resources were limited.
Like David, I decided that I wanted to consider how I might do the most good in general, including for people who were unlike me. Our conversations prompted me to move from paralysis to action and to explore new causes. They also introduced me to the wider effective altruism community, which helped me think through big career decisions.
By the time I was applying to graduate school, I felt like my identity was pretty set for the future. The application process for public policy and MBA programs required me to pitch myself. I constantly had to talk about the education nonprofit work I had started in Uganda, and how I wanted to go to graduate school to grow that work and then transition it into a social enterprise.
When I moved to Boston, I was really excited about pursuing that vision. I also became more involved in the in-person EA Boston community. That’s when I realized I had several questions about my fundamental assumptions. I started thinking about whether I should be paying more attention to animal welfare or working on projects related to future generations. I also did a few calculations about social enterprises and how often they succeed, and at what scale. And I started to think that maybe working in government was going to be more impactful because of the amount of resources, on average, that people had the chance to influence.
This was all pretty overwhelming. But my housemate, Scott, was an effective altruist, and it was really interesting watching his journey. He was in the middle of a public health master’s degree, which he had started because he wanted to create the next GiveWell charity. About halfway through, he began thinking about whether he should focus more on animal welfare. He eventually pivoted, even though animal welfare had no relationship to his degree, because he thought the issue was more neglected and that his career could therefore have a bigger impact [in that area] on the margin.
I found this really inspiring, and quite different from the other communities I was a part of. My business school communities rewarded career moves that let you make a lot of money or join a really cool new tech company. And I think society, in general, rewards status and prestige.
Something that I think is really special about the EA community is that people are rewarded for doing what they think will have the most impact. And the process of figuring that out might look like changing projects or changing cause areas. I think this is incredibly important and really special. I work with people who have really weird resumes, and I think that’s pretty awesome.
That brings me to the fourth thing that I think is really valuable about the EA community. Not only did it help me move from paralysis to action, explore different causes, and be open to different career options, but it also helped me think about how to pursue EA in a way that was sustainable for me.
I remember that at the end of graduate school, when I was thinking about careers in government related to emerging technology, I had two big concerns. First, I was worried about the culture of national security organizations and how I would fit in. Second, I was worried about moving from a career in which I felt I could see the consistent impact I was making to one that was much more speculative. I worried about my motivation and whether I might burn out, and then I felt guilty about having those feelings in the first place.
Talking to EAs who were in the roles I was considering was incredibly helpful. It was so helpful to be able to be honest about my worries. It turned out that some of my assumptions were completely off-base and some were pretty accurate. EAs who were older than me helped me develop ways to quickly evaluate a few different pathways and to pressure-test the things I was most unsure about but thought could be particularly impactful. Then, most importantly, they would digest those conversations with me, letting me think through which options would and wouldn’t work for me, and helping me gain a better sense of my personal sustainability and fit.
The community has helped me in other ways in this regard. I still think about the trade-off between spending money on myself versus saving and donating it. I’m still figuring out how much I want to work, and how to work in a way that sets me up for the long haul. It is so valuable to have friends in the community who share and respect my values, and who can help me come up with guideposts that I think will actually work.
I think this vulnerability, the ability to talk about our worries around personal fit and the things we might be struggling with, is so important. If we believe that most of our impact may come decades from now, through our careers or our donations, then it is essential to figure out how to pursue doing good sustainably.
Current challenges for the EA community
So I’ve told you about some of the things I really value about the EA community. Now, I want to tell you about one thing that I think is important for us to be on the lookout for if we want to safeguard this community in the future. That is the pressure to conform.
I think we have a community that rewards the pursuit of doing good, but I don’t think we’re immune from issues related to status within our community. And I think these can sometimes prevent us from having genuine debates. I’ve had people tell me about situations where they want to express a view, but they think they’ll be dismissed or looked down upon because it’s one that’s unpopular.
One really extreme example: a friend of mine was having lunch at an EA organization and expressed a well-thought-out inside view of why AI [artificial intelligence] timelines were significantly longer than the dominant view at that organization. Someone they were having lunch with [jokingly] called them a heretic.
I think this is really concerning. We definitely don’t have everything right, and we’re going to need a lot of heretical ideas, a whole marketplace of ideas, in order for us to pressure-test our assumptions, call out our biases, and correct ourselves when we’re wrong.
So how do we counter the natural human tendency to inflate certainty? How do we try to reduce influence based on status? I have two suggestions.
The first is that if you have a position of status, or if you hold a view that’s popular, take special care to express uncertainty. There are two ways we could do this: mention what would make us change our mind about an issue, or mention how confident we are that our position is correct. Whenever I talk with leaders of EA-aligned organizations about something they’re doing or an approach they’re taking, I’m constantly surprised: they almost always express a level of confidence that’s lower than I expect.
The second thing we can do to counteract the pressure to conform is to make space for minority views, and I think we need to do this proactively. We need to reward people who raise critiques. We need to be charitable to people whose views differ from our own. We sometimes need to apply the “steel man” tactic to views that challenge our own opinions, because new ideas can help us discover new things. They can reveal our biases and our blind spots.
Most importantly, I think we need to think about [how we reward ideas]. We shouldn’t just take shortcuts by rewarding people who share our opinions. We should reward effort. We should praise people who have good methodologies and well-thought-out ideas, even if, and maybe especially if, they arrive at conclusions that are different from our own.
Where the EA community has been — and might go next
I’ve told you about some things I really admire about the EA community. And I’ve also told you about some ways that I think we can preserve this community for the future. I want to wrap up by reflecting on where we’ve come in the last 10 years — and where we might go in the future.
It seems appropriate to travel back 10 years to 2010, because this is the first conference of 2020. [Although] that might be kind of hard, both because video conferencing was much worse in 2010 [and] because the EA community barely existed. You could take all of the members of Giving What We Can and put them into a single house. On the front porch of that house, you could fit all of the AI safety researchers in the world. And in the backyard, there might be people cooking up old-school veggie burgers, but the Impossible Burger certainly didn’t exist. It was a different time.
The world has changed a lot in the last decade, and the EA community is still small in absolute terms, but I’ve found the progress that we’ve made inspiring. Today, you could not fit all of Giving What We Can in a house. The community has grown to 100 times the size it was 10 years ago. That’s enough to reenact the largest battle of Game of Thrones nine times over. And AI safety researchers have way more than a front porch. Oxford and Cambridge have their own existential risk centers. That’s just one example. And we have so many Impossible Burgers now. We have Beyond Burgers. They’re in fast-food chains across the United States. Good Dot sells burgers across India. And the Good Food Institute, an EA-aligned organization, has dozens of people working to create an ecosystem of alternative proteins and make plant-based meats mainstream.
I’m particularly excited about some of the progress we’ve made on policy work in EA. And I’m totally biased, because I went to policy school. One example is the Alpenglow Group, a nonprofit launched this year whose goal is to put future generations at the heart of policymaking in the U.K. They’ve been working on issues related to biosecurity, emerging technology, and civil service reform.
All of this makes me curious about what the EA community will look like 10 years from now. One thing that’s certain is that there’s a lot we don’t know. By 2030, some of the EA ideas and predictions we now hold strongly may not have panned out at all, and some may prove even more accurate than we expected.
I wouldn’t be surprised if the first prime minister or president inspired by EA principles has already been born, and she’ll need a lot of people to help her navigate electoral politics and policy. Or it could be that by 2030, a social historian has become one of the most impactful people in EA, helping us think about new ways to have large-scale impact. Or it could be that an artist has found ways to think deeply and movingly about expanding moral circles, pushing people to care about communities beyond their own location, to cross the human-animal divide, or to relate to and empathize with future generations.
It’s probably the case that, in 2030, people will have jobs that don’t exist today. Whatever is in store for us, I think it’s important that we [continue to support the core principles] of EA: altruism and collaborative truth-seeking. This will help us to pursue new knowledge and approaches as opportunities arise — and to let go of old ideas, even if they are beloved, so that we can discover better ways to help others.
To sum up: I think it’s been a pretty incredible decade for EA. We have saved lives, made intellectual progress, and taken steps toward a safer and kinder world. And I think the next decade can be even better. EA has a lot of room to grow and learn, but if we invest in our community and continue to promote good norms, we will be well-positioned to [fulfill much of our] potential.
As I think back on my own story, I remember Scott and David. They’ve had really impactful, EA-inspired careers in their own right already, but through a few conversations with me, they pretty significantly changed my career trajectory. So as you go into this conference, I encourage you to think about how you can be a Scott and a David to others.
It is pretty incredible that we get to do this together as a community. I hope you bring an open mind as you explore the marketplace of ideas at this conference. And I hope you have a wonderful next two days.