Reflections on EA Global from a first-time attendee
Hello everyone—long-time lurker, first-time poster here. It was a pleasure to meet a number of you at EA Global last month. Given all the discussion at the conference about how to expand the effective altruism community responsibly, I thought I might share some feedback and observations from the perspective of someone who may be part of the target audience.
First, a bit about me. I imagine that I’m basically a decade-older version of many of you. I’ve been donating to GiveWell-recommended charities since 2010, and have had an interest in using evidence to improve resource allocation in the social sector for nearly a decade. I’m currently VP of strategy and analytics for a nonprofit in the United States, and am also the founder of a grassroots think tank that has drawn a lot of inspiration from GiveWell’s approach and style of thinking.
I’ve been sort of floating on the periphery of the effective altruist movement for a couple of years now. I attended a couple of events at my local chapter in DC back when Ben Hoffman first set it up, and even hosted one myself, but leading up to this year’s summit I wasn’t yet at the point of going around telling people that I’m an effective altruist. Two questions had really been holding me back from deeper engagement: first, how seriously do I take these people? And second, am I really one of them? I want to address both of those in this post, because I suspect these are questions that a lot of people newly encountering the movement ask themselves.
How seriously do I take these people?
I first became aware of GiveWell when Holden attended a philanthropy conference at my business school that I had helped to organize, and I have maintained connections in the institutional philanthropy world since. In advance of EA Global this year, I wrote a whole bunch of those contacts to ask if I would be seeing them at the conference. To my surprise, all of them said they weren’t going. I was able to set up meetings with several of them before the conference started, and brought up effective altruism each time to get their thoughts. I heard a lot of skepticism expressed around the movement, mostly familiar criticisms like overconfidence, naivete about the real-world circumstances in which people make decisions, and excessive dismissiveness toward non-favored causes. What surprised me was not the critiques themselves, but the fact that I was hearing them from prominent people in the West Coast smart-philanthropy set who advocate evidence-based giving every day. One of them had even been to EA Global in 2015; when I asked him why none of his peers seemed to be attending this year, he told me, “I think we all decided one year was enough.”
Because these are people that I do take seriously, that experience lowered my expectations of how seriously to take EA Global. In truth, before that week, I’d had no idea that there was so little overlap between the EA movement and institutional philanthropy, affirming quotes from Bill Gates aside. I was expecting the ratio of foundation evaluation and strategy officers to philosophy grad students at the conference to be a hell of a lot higher than it was.
EA Global, however, convinced me that institutional philanthropy is making a mistake not to pay more attention to EA – and I wrote my contacts to tell them so. While I agree to some extent with all of the critiques they raised, I think they overlook some important points about the movement.
First, I definitely got the sense that EA leadership has heard a lot of the criticism that resulted from some of the overzealous rhetoric of the 2015 summit, and is striving to make changes such as emphasizing programming intended to fight cognitive bias (hi CFAR!). I do still perceive a lot of naivete in the community, it’s true, especially around understanding large-scale social dynamics and how hard it can be to make change within them. But here’s the thing: the people in this movement are insanely smart, and to the extent all that smartness is currently tempered by youth and inexperience, well, the youth and inexperience part of that equation is going to change really fast.
I enjoyed the programming at EA Global, but what really sold me on the community was the break times. I’ve been to a lot of conferences, and usually people are looking for the first excuse to play hooky and hit the beach. At EA, everyone’s milling around looking for the next person to play mental gymnastics with. There is an inexhaustible appetite for abstract thinking and debate, which I found to be amazingly absent of the smug pretentiousness that usually accompanies such an appetite. I don’t think I’ve ever been so intellectually engaged in a group setting as I was in Berkeley that weekend.
So that definitely increased my affinity for EA. But still, the question remained:
Am I really one of them?
Although I’ve been a GiveWell donor since before effective altruism had a name, I didn’t learn about effective altruism from GiveWell. I learned about it from an op-ed Peter Singer wrote in the New York Times slamming the field that I work in as “bad charity.”
Right. So there’s one important detail I haven’t told you yet about who I am. I work in the arts. I majored in music in college, had a brief career as a semi-professional composer, ensemble leader, and singer, and have worked in arts administration ever since. That think tank I mentioned in the second paragraph? It’s called Createquity, and it’s all about making the world a better place through the arts. I learned about effective altruism because that Peter Singer op-ed sparked a storm of controversy in my professional community, and it seemed like everyone I knew was composing a rebuttal with equal parts indignation and terror for the future of their jobs.
Createquity responded too, but unlike some of our peers, we tried to learn about who we were talking to first. At the time, we had a series that we called “Uncomfortable Thoughts,” in which we brought up topics that we didn’t see other people in our field considering. We ended up framing our EA reflection as an Uncomfortable Thoughts piece, posing the basic question, “what if they’re right?” You can read it here; among the approving commenters was a certain William MacAskill.
That experience explains a lot about why I’ve felt torn about committing to EA. On the one hand, I love the focus on problem-solving through science that I see from people in the movement, and I think those principles should be applied everywhere. On the other, it can be hard to identify as an effective altruist when you’re working in a cause that’s not just out of favor, but actively being used as a scapegoat by prominent members of the community.
You don’t have to convince me that there is a lot of room for improvement in the way we use resources in the arts. And it’s for exactly that reason that I believe the notion of applying EA principles within the arts is compatible with the goals of the EA movement more generally. Realistically, I don’t think the EA movement is going to expand to the scale it wants to—i.e., where it’s having the maximum amount of positive impact that it could—without softening its stance on cause neutrality. To be clear, I think cause neutrality is probably EA’s greatest innovation, and I am not at all suggesting it be abandoned. But EA has tended to treat cause specificity as the enemy of cause neutrality in a battle to the death, whereas I see a future in which they coexist peacefully and indeed advance each other’s goals. In part, this is because so much of the rest of the world is structured around cause specificity, and as I mentioned earlier, it is much harder than many people realize to change large-scale social dynamics like that.
Coming into EA Global, it was a big question in my mind how others would see that proposition. To be honest, I got mixed signals on this while I was there. On the one hand, every time I would introduce myself as someone interested in applying EA principles within the arts, the response was almost universally enthusiastic. Often it would dominate conversation for the next five to ten minutes because people were so interested. One woman even told me it was deeply refreshing for her to talk about the arts after so many successive conversations on AI risk and the like. There was no question that I belonged in those conversations.
On the other hand, when I would bring up the idea of applying EA principles within the arts as part of a general principle that it was appropriate to consider non-favored cause areas as relevant to the effective altruism movement, I encountered a lot more skepticism, especially from people who had been involved with EA for a while. I hope to share a more fleshed-out case for this idea in a future post. But it was clear to me that it was making people uncomfortable. Throughout the conference, the enthusiasm among speakers and EA leaders for the growth of the EA movement was tempered with occasional warnings about the danger of welcoming people who were not aligned into the community too soon. And I understood that they might be talking about people with views like mine. I think we have to be honest about the fact that the goal of including new people with diverse perspectives and ways of thinking in EA is in some tension with the goal of protecting the perspectives and ways of thinking that have made EA what it is to date.
Summing up
Since coming back from EA Global, I’ve worn my Effective Altruism T-shirt out and about several times, made a bunch of new connections on Facebook, chatted a bunch with CEA staff members, started reading this forum more regularly and catching up on its archives, and spoken approvingly of the EA community to people in my circle. But I still don’t call myself an effective altruist—yet. Am I one of you? I suppose that’s for you to tell me. Thanks for reading.
One of the most upvoted articles on this forum is about how we shouldn’t think of effective altruism as a fixed identity, lest we lose sight of the question of how to do the most good. Despite the apparent consensus here, the identity of ‘effective altruist’ keeps being a thing. Part of the difficulty is the ambiguity of words. ‘Effective altruism’ is and has been a convenient name, but one that will always carry some negative connotations. Nobody has thought of a phrase that gets across the core ideas without sounding presumptuous or being unwieldy.
A first question people ask is, “if they’re effective altruists, are they implying that everyone else is ineffective?” The answer is, of course, no, because there are lots of people doing lots of great work who don’t identify with effective altruism at all. The term is meant to imply an intent to be effectively altruistic, not a monopoly on what are or aren’t effective ways of doing good.
I believe another factor is that people tend to unconsciously build a sense of identity for themselves and their communities, even when they’re consciously aware this can cause problems like groupthink, and even when they have an urge to stay impartial among all considerations. Building a social identity is natural human behaviour. So, it seems keeping one’s identity small is not something to be done once, but a process of constant vigilance and maintenance.
All that stated, we’ll probably keep calling each other ‘effective altruists’ (often abbreviated as EAs) for the indefinite future. Nobody can specifically tell you if you’re an effective altruist or not. It’s socially constructed. It’s an idea and explicit identity that didn’t exist even five years ago. First of all, I don’t think you can be an effective altruist unless you think of yourself as one. There are lots of people who do good exactly the way effective altruism (EA) would have in a similar position, utterly independent of the community. When EA reaches out to them, some decline to consider themselves part of EA because of what they consider shortcomings or flaws in our ideas. I think time will tell if they’re right or wrong, and while I don’t respect all differences people have with effective altruism, I respect efforts to engage its ideas in good faith.
Of course, whether one is part of the worldwide EA community will depend on what members of the community itself think. There isn’t any clean process for generating this sort of consensus, or any hard qualifying criteria. If you get and stay involved with EA, people may start to think of you as an ‘effective altruist’. The label isn’t super special, though. As “effective altruism” grows as a brand, becoming more high-status, it can confer benefits to its adherents. Like, a charity brandishing the “effective altruism” label may attract more donors. So, the most important function of the phrase may be what it (currently) rules out. This means denying the franchise of “effective altruism” or “effective altruist” to those who aren’t actually effective and/or altruistic in their work. This hasn’t come up much, but when it does, it’s carried out through informal community policing. There isn’t yet any clear process for how this is or should be done, either.
Thanks, Evan. I did see that article, but I think its thesis makes more sense in theory than in practice. The reality is that people coming into these spaces from the outside will think of “effective altruists” as an identity whether we/you want them to or not, because that’s a frame that is familiar to them from other contexts. Communities are sometimes defined as much by people outside of them as by people on the inside.
I think there are also unconsidered benefits of community solidarity. Community solidarity and identification might not hold up as an absolute net positive once their negative impacts are counted. Yet if communities like EA will be pigeonholed and defined as an identity by outsiders anyway, we might as well steer into the skid and realize the benefits of a common identity.
This is a shame, and I hope we can work to overcome this situation, but it’s maybe not as surprising as it first seems. The existing “strategic philanthropy” community has very little overlap with EA, both in terms of its membership and priorities (e.g. strategic philanthropists often think cause selection is a bad idea).
If you were a big shot in this community controlling hundreds of millions of dollars of funding, it would take a lot of humility to come to an EA event where you’re an outsider, everyone’s speaking about different topics, and a bunch of kids are the high-status ones rather than yourself.
It seems similar to lots of other cases of industries getting disrupted by outsiders. The established group don’t recognise them until the outsider group gets so large that they have no choice.
I think there is some truth to the second part of this, although I would encourage folks not to see it as a reason to be dismissive towards people in the strategic philanthropy community. A lot of them have spent decades banging their heads against the wall trying to motivate the kinds of changes that EA advocates, and they have valuable lessons to share from that experience.
Definitely.
Do you think we can get one of these people to write up their thoughts in a formal EA critique?
Thank you for posting this, Ian; I very much approve of what you’ve written here.
In general, people’s ape-y human needs are important, and the EA movement could become more pleasant (and more effective!) by recognizing this. Your involvement with EA is commendable, and your involvement with the arts doesn’t diminish this.
Ideally, I wouldn’t have to justify the statement that people’s human needs are important on utilitarian grounds, but maybe I should: I’d estimate that I’ve lost a minimum of $1k worth of productivity over the last 6 months that could have trivially been recouped if several less-nice-than-average EAs had shown an average level of kindness to me.
I would be more comfortable with you calling yourself an effective altruist than I would be with you not doing so; if you’re interested in calling yourself an EA, but hesitate because of your interests and past work, that means that we’re the ones doing something wrong.
I appreciate the vote of confidence! But I should also clarify that my wavering on self-identification with effective altruism has mostly not been due to lack of kindness from other EAs. I’ve sometimes been asked tough and direct questions, but I fully expected that and didn’t consider it any kind of harassment (with one exception where the guy later apologized). It sounds like you’ve experienced much worse and I’m sorry for that.
The world would be a better place if Effective Altruism principles made their way into other fields like art. However, that doesn’t mean that we have to instigate this ourselves. As EA grows, we will naturally influence other fields, including art. If we softened cause neutrality, then this would indeed speed the spread of other EA principles into other fields.
However, we also have to consider the cost that this would have. The best option in one cause area could easily be a hundred or a thousand times better than the best option in another cause area. For example, if softening our position on art had even a 1% chance of being counter-productive for global poverty (say, fewer people switching from one cause to the other), then the trade-off doesn’t seem to be worth it.
Additionally, if we softened cause neutrality with regard to broad focus areas, it would be hard to keep neutrality within those areas too. For example, I can imagine a person who likes water charities arguing that if art can be considered EA, why can’t water, which tends to rate much higher from an EA perspective? At that point, EA would have collapsed into something much less substantial.
I agree—my position is roughly: if you want to spend time on art and also be an EA, that’s totally fine, but probably don’t classify the time you spend on art among your ‘EA activities’. In the same way you can be an artist at a math conference, you can be an artist at an EA conference. There’s some overlap, but most art isn’t topical at a math conference, and the same is true at an EA conference.
Some people see problems with the EA movement and say “I want to join the EA movement so I can fix these problems”. Others see problems with the EA movement and say “I won’t join the EA movement because it has these problems”. Is there a way to predict which category an individual will fall into, or to nudge an individual into one category or another?
(I don’t necessarily think we should try to nudge everyone into the first category. If the problem someone sees is that the EA movement is not giving money to their nonprofit to teach homeless children about nautical flag signaling, I wouldn’t be enthusiastic about that person joining EA in a deliberate attempt to gain influence and drive donations for their nonprofit.)
Do you think your contacts in the “West Coast smart-philanthropy set” might have been selected based on your own career in arts administration?